| | langfuse | nebuly |
|---|---|---|
| Mentions | 11 | 105 |
| Stars | 3,815 | 8,363 |
| Growth | 30.4% | 0.1% |
| Activity | 9.9 | 8.4 |
| Latest commit | about 3 hours ago | 7 months ago |
| Language | TypeScript | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
langfuse
- Top Open Source Prompt Engineering Guides & Tools🔧🏗️🚀
Langfuse is an open-source LLM engineering platform that helps teams collaboratively debug, analyze, and iterate on their LLM applications.
- Roast My Docs
- Show HN: Open-Source LLM Observability and Export to Grafana, Datadog etc.
Congrats on the Show! How’s this different from https://github.com/langfuse/langfuse? The exports seem really interesting.
- RAG observability in 2 lines of code with Llama Index & Langfuse
Thus, we started working on Langfuse.com (GitHub) to establish an open source LLM engineering platform with tightly integrated features for tracing, prompt management, and evaluation. In the beginning we just solved our own and our friends’ problems. Today we are at over 1000 projects which rely on Langfuse, and 2.3k stars on GitHub. You can either self-host Langfuse or use the cloud instance maintained by us.
- langfuse VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- Ask HN: Who is hiring? (November 2023)
- We want to build a tool that is recommended here on HN: you can build a tool you would want to use yourself.
Please see more details here: https://langfuse.com/careers or reach out directly to me: [email protected]
[1] https://github.com/langfuse/langfuse
[2] https://create.t3.gg/
- How are generative AI companies monitoring their systems in production?
We struggled with this ourselves while building LLM-based products and then open-sourced our observability/monitoring tool [1]. Many use it to track RAG and agents in production, run custom evals on the production traces (focused on hallucination), and track how metrics are different across releases or customers. Feel free to dm if there is something specific you are looking to solve, happy to help.
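The kind of production tracing described above (recording each LLM or retrieval step with timing and metadata, scoring traces with custom evals, and tagging them by release) can be sketched in plain Python. This is a conceptual illustration only, not Langfuse's actual API; all names here are invented for the sketch:

```python
import time
import uuid
from dataclasses import dataclass, field


@dataclass
class Span:
    """One recorded step (LLM call, retrieval, tool use) inside a trace."""
    name: str
    duration_s: float = 0.0
    metadata: dict = field(default_factory=dict)


@dataclass
class Trace:
    """One end-to-end request, e.g. a RAG query or an agent run."""
    name: str
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    release: str = "unversioned"
    spans: list = field(default_factory=list)
    scores: dict = field(default_factory=dict)  # eval results, e.g. hallucination

    def record(self, name, fn, **metadata):
        """Run fn, timing it and storing metadata (model, token counts, ...)."""
        start = time.perf_counter()
        result = fn()
        self.spans.append(Span(name, time.perf_counter() - start, metadata))
        return result


# Usage: wrap each step of a stubbed RAG pipeline.
trace = Trace(name="rag-query", release="v0.3.1")
docs = trace.record("retrieve", lambda: ["doc-1", "doc-2"], top_k=2)
answer = trace.record("generate", lambda: "stubbed answer",
                      model="some-model", prompt_tokens=120, completion_tokens=30)
trace.scores["hallucination"] = 0.0  # a real custom eval would compute this
```

A real observability backend would additionally persist these traces and aggregate the per-span metadata and scores across releases and customers, which is the comparison this commenter describes.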
[1] https://github.com/langfuse/langfuse
- LLM Analytics 101 - How to Improve your LLM app
Visit us on Discord and GitHub to engage with our project.
- Ask HN: Any tools or frameworks to monitor the usage of OpenAI API keys?
Maybe try https://github.com/langfuse/langfuse
It was recently shared on HN
- Show HN: Langfuse – Open-source observability and analytics for LLM apps
nebuly
- Nebuly – The LLM Analytics Platform
- Ask HN: Any tools or frameworks to monitor the usage of OpenAI API keys?
- What are you building with LLMs? I'm writing an article about what people are building with LLMs
Hi everyone. I’m the creator of ChatLLaMA https://github.com/nebuly-ai/nebullvm/tree/main/apps/accelerate/chatllama, an open-source framework to train LLMs with limited resources. There’s been amazing usage of LLMs these days, from chatbots that retrieve information about a company’s products, to cooking assistants for traditional dishes, and much more. And you? What are you building, or would you love to build, with LLMs? Let me know and I’ll share the article about your stories soon. https://qpvirevo4tz.typeform.com/to/T3PruEuE Cheers
- Show HN: ChatLLaMA – A ChatGPT style chatbot for Facebook's LLaMA
How does it differentiate from the original ChatLLaMA? https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...
- 🤖🌟 Unlock the Power of Personal AI: Introducing ChatLLaMA, Your Custom Personal Assistant! 🚀💬
Was this made with the ChatLLaMA library? https://github.com/nebuly-ai/nebullvm/tree/main/apps/accelerate/chatllama
- Meta LLM LLaMA leaked, all over the internet as we speak
- Meta LLM LLAMA leaked, it's all over the internet as we speak.
- Meta LLM LLAMMA leaked, it's all over the internet as we speak.
- Plug and play modules to optimize the performance of your AI systems
Some of the available modules include:
Speedster: Automatically apply the best set of SOTA optimization techniques to achieve the maximum inference speed-up on your hardware. https://github.com/nebuly-ai/nebullvm/blob/main/apps/acceler...
Nos: Automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas. https://github.com/nebuly-ai/nos
ChatLLaMA: Build a faster and cheaper ChatGPT-like training process based on LLaMA architectures. https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...
OpenAlphaTensor: Increase the computational performance of an AI model with a custom-generated matrix multiplication algorithm fine-tuned for your specific hardware. https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...
Forward-Forward: The Forward-Forward algorithm is a method for training deep neural networks that replaces backpropagation's forward and backward passes with two forward passes. https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...
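To illustrate the Forward-Forward idea mentioned above, here is a minimal single-layer sketch in NumPy. This is my own conceptual sketch, not code from the nebuly repository, and all names and hyperparameters are invented: the layer is trained locally to produce high "goodness" (sum of squared activations) on positive data and low goodness on negative data, so one forward pass per data type replaces backpropagation's backward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FFLayer:
    """One layer trained with a purely local rule: no gradients flow between layers."""
    def __init__(self, n_in, n_out, lr=0.03, theta=2.0):
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))
        self.b = np.zeros(n_out)
        self.lr, self.theta = lr, theta  # theta is the goodness threshold

    def goodness(self, x):
        # "goodness" = sum of squared ReLU activations
        return float(np.sum(relu(self.W @ x + self.b) ** 2))

    def train_step(self, x, positive):
        z = self.W @ x + self.b
        h = relu(z)
        g = np.sum(h ** 2)
        # Logistic loss pushes goodness above theta for positive data,
        # below theta for negative data; coeff is the (signed) loss gradient in g.
        sign = 1.0 if positive else -1.0
        coeff = sign * sigmoid(sign * (self.theta - g))
        dg_dz = 2.0 * h * (z > 0)          # dg/dz with the ReLU derivative folded in
        self.W += self.lr * coeff * np.outer(dg_dz, x)
        self.b += self.lr * coeff * dg_dz

# Toy demo: positive samples cluster around +1, negative samples around -1.
layer = FFLayer(n_in=4, n_out=8)
pos = rng.normal(loc=1.0, size=(200, 4))
neg = rng.normal(loc=-1.0, size=(200, 4))
for xp, xn in zip(pos, neg):
    layer.train_step(xp, positive=True)   # first forward pass: raise goodness
    layer.train_step(xn, positive=False)  # second forward pass: lower goodness
```

After training, the layer's goodness on positive-like inputs should exceed its goodness on negative-like inputs; a deep version simply stacks such layers, each trained with its own local objective.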
- Open source implementation for LLaMA-based ChatGPT
What are some alternatives?
trulens - Evaluation and Tracking for LLM Experiments
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
llama_index - LlamaIndex is a data framework for your LLM applications
AITemplate - AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
langchain - 🦜🔗 Build context-aware reasoning applications
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
agenta - The all-in-one LLM developer platform: prompt management, evaluation, human feedback, and deployment all in one place.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
opentelemetry-instrument-openai-py - OpenTelemetry instrumentation for the OpenAI Python library
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
examples - Your one-stop-shop to try Xata out. From packages to apps, whatever you need to get started.
deepsparse - Sparsity-aware deep learning inference runtime for CPUs