langroid vs async-profiler

| | langroid | async-profiler |
|---|---|---|
| Mentions | 15 | 10 |
| Stars | 1,698 | 7,175 |
| Growth | 21.4% | 1.9% |
| Activity | 9.8 | 8.7 |
| Latest commit | 1 day ago | 9 days ago |
| Language | Python | C++ |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
langroid
-
OpenAI: Streaming is now available in the Assistants API
This was indeed true in the beginning, and I don't know if this has changed. Inserting messages with the assistant role is crucial for many reasons: for example, if you want to implement caching, or otherwise edit/compress a previous assistant response for cost or other reasons.
At the time I implemented a work-around in Langroid[1]: since you can only insert a "user" role message, prepend the content with ASSISTANT: whenever you want it to be treated as an assistant-role message. This actually works as expected, and I was able to implement caching with it. I explained it in this forum thread:
https://community.openai.com/t/add-custom-roles-to-messages-...
[1] the Langroid code that adds a message with a given role, using this above “assistant spoofing trick”:
https://github.com/langroid/langroid/blob/main/langroid/agen...
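The work-around above can be sketched as a small helper. This is a hedged illustration of the "assistant spoofing trick", not Langroid's actual API; `spoof_role` is a made-up name:

```python
def spoof_role(role: str, content: str) -> dict:
    """Build a message for an API that only accepts role="user" inserts.

    Non-user roles are encoded by prefixing the content, e.g. an
    assistant message becomes a user message starting with "ASSISTANT:".
    """
    if role == "user":
        return {"role": "user", "content": content}
    return {"role": "user", "content": f"{role.upper()}: {content}"}

# A cached assistant response can now be re-inserted into a thread:
msg = spoof_role("assistant", "Paris is the capital of France.")
# msg == {"role": "user", "content": "ASSISTANT: Paris is the capital of France."}
```

The prefix convention works because the model reads the thread as text: a "user" message beginning with ASSISTANT: is treated, in practice, as prior assistant output.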
- FLaNK Stack 29 Jan 2024
-
Ollama Python and JavaScript Libraries
Same question here. Ollama is fantastic as it makes it very easy to run models locally, but if you already have a lot of code that processes OpenAI API responses (with retry, streaming, async, caching, etc.), it would be nice to simply switch the API client to Ollama, without maintaining a whole other branch of code that handles Ollama API responses. One easy way to switch is to use the litellm library as a go-between, but it's not ideal (and I also recently found issues with their chat formatting for Mistral models).
For an OpenAI-compatible API, my current favorite method is to spin up models using oobabooga TGW. Your OpenAI API code then works seamlessly by simply switching the api_base to the ooba endpoint. Regarding chat formatting, even ooba's Mistral formatting has issues[1], so I am doing my own in Langroid using HuggingFace's tokenizer.apply_chat_template [2]
[1] https://github.com/oobabooga/text-generation-webui/issues/53...
[2] https://github.com/langroid/langroid/blob/main/langroid/lang...
Related question: I assume Ollama auto-detects and applies the right chat-formatting template for a model?
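For reference, hand-rolling the Mistral instruct format looks roughly like the sketch below. In practice one would use tokenizer.apply_chat_template as in [2]; exact spacing and BOS/EOS handling vary between template versions, so treat this as an approximation of the general shape:

```python
def format_mistral_chat(messages: list[dict]) -> str:
    """Approximate Mistral instruct formatting:
    <s>[INST] user [/INST] assistant</s>[INST] user [/INST]
    """
    out = "<s>"
    for m in messages:
        if m["role"] == "user":
            out += f"[INST] {m['content']} [/INST]"
        elif m["role"] == "assistant":
            # Assistant turns are appended after the closing [/INST],
            # terminated by the end-of-sequence token.
            out += f" {m['content']}</s>"
    return out

history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"},
    {"role": "user", "content": "How are you?"},
]
print(format_mistral_chat(history))
```

Getting details like this wrong (spacing, token placement) is exactly the kind of subtle bug the formatting issues above are about, which is why delegating to the model's own chat template is safer.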
-
Pushing ChatGPT's Structured Data Support to Its Limits
We (like simpleaichat from OP) leverage Pydantic to specify the desired structured output; under the hood, Langroid translates it either into the OpenAI function-calling params or (for LLMs that don't natively support function-calling) auto-inserts appropriate instructions into the system prompt. We call this mechanism a ToolMessage:
https://github.com/langroid/langroid/blob/main/langroid/agen...
We take this idea much further: you can define a method in a ChatAgent to "handle" the tool and attach the tool to the agent. For stateless tools, you can define a "handle" method on the tool itself, and it gets patched into the ChatAgent as the handler for that tool.
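A minimal sketch of the idea using plain Pydantic (illustrative names and structure only; this is not Langroid's actual ToolMessage API):

```python
from pydantic import BaseModel

class SquareTool(BaseModel):
    """Tool spec: the LLM fills in `number`; the agent routes it here."""
    number: float

    def handle(self) -> str:
        # Stateless tool: the handler can live on the tool itself
        # and be patched into the agent as this tool's handler.
        return str(self.number ** 2)

# The same Pydantic model can be exported as an OpenAI function-calling
# schema, or rendered as system-prompt instructions for models that
# lack native function-calling:
schema = SquareTool.model_json_schema()  # pydantic v2; .schema() in v1
instructions = f"To use the tool, reply with JSON matching: {schema}"

print(SquareTool(number=4).handle())
```

The payoff is a single tool definition serving both delivery mechanisms: structured function-call params for models that support them, prompt text for those that don't.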
-
Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Many services/platforms are careless or disingenuous when they claim they "train" on your documents, when what they actually do is RAG.
An under-appreciated benefit of RAG is the ability to have the LLM cite sources for its answers (which are, in principle, automatically or manually verifiable). You lose this citation ability when you fine-tune on your documents.
In Langroid (the Multi-Agent framework from ex-CMU/UW-Madison researchers) https://github.com/langroid/langroid
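The citation ability typically comes from numbering the retrieved passages in the prompt so the model can reference them; a minimal sketch of the general technique (prompt wording and function name are illustrative, not Langroid's actual code):

```python
def build_cited_prompt(question: str, passages: list[str]) -> str:
    """Assemble a RAG prompt that asks the model to cite numbered
    sources, so each claim in the answer can be checked against
    the extract it came from."""
    numbered = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, 1))
    return (
        "Answer using ONLY the numbered extracts below, citing them "
        "like [1] or [2] after each claim. Say you don't know if the "
        "extracts are insufficient.\n\n"
        f"Extracts:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )

print(build_cited_prompt(
    "Who wrote SICP?",
    ["Abelson and Sussman wrote SICP."],
))
```

A fine-tuned model has no such passage-to-claim mapping: the documents are baked into its weights, so there is nothing concrete to cite or verify.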
-
Build a search engine, not a vector DB
This resonates with the approach we’ve taken in Langroid (the Multi-Agent framework from ex-CMU/UW-Madison researchers): our DocChatAgent uses a combination of lexical and semantic retrieval, reranking and relevance extraction to improve precision and recall:
https://github.com/langroid/langroid/blob/main/langroid/agen...
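One common way to combine a lexical ranking and a semantic ranking is reciprocal rank fusion (RRF); the sketch below shows the general technique, not Langroid's actual implementation:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of doc ids (e.g. a BM25/keyword ranking
    and an embedding-similarity ranking) into one ranking: each doc
    scores 1/(k + rank) per list, and high combined scores win."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, 1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["d3", "d1", "d2"]   # e.g. keyword/BM25 order
semantic = ["d1", "d4", "d3"]  # e.g. embedding-similarity order
print(reciprocal_rank_fusion([lexical, semantic]))
```

Documents that rank well in both lists ("d1", "d3" here) float to the top, which is why hybrid retrieval tends to beat either signal alone; reranking and relevance extraction then refine that fused candidate set.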
-
HuggingChat – ChatGPT alternative with open source models
In the Langroid library (a multi-agent framework from ex-CMU/UW-Madison researchers) we have these and more. For example here’s a script that combines web search and RAG:
https://github.com/langroid/langroid/blob/main/examples/docq...
-
SuperDuperDB - how to use it to talk to your documents locally using llama 7B or Mistral 7B?
Thanks, also found Langroid: https://github.com/langroid/langroid/blob/main/README.md
- memory in ConversationalRetrievalChain removed
- [D] github repositories for ai web search agents
async-profiler
-
JVM Profiling in Action
We'll use async-profiler and flame graphs for profiling. To simplify the process, we'll run the code using JBang.
-
The Return of the Frame Pointers
JIT'ed code is sadly poorly supported, but LLVM has had great hooks for noting each method that is produced and its address. So you can build a simple mixed-mode unwinder pretty easily, but mostly in-process.
I think Intel's DNN things dump their info out to some common file that perf can read instead, but because the *kernels* themselves reuse rbp throughout oneDNN, it's totally useless.
Finally, can any JVM folks explain this claim about DWARF info from the article:
> Doesn't exist for JIT'd runtimes like the Java JVM
That just sounds surprising to me. Is it off by default, or literally not available? (Google searches have mostly pointed to people wanting to include the JNI/C side of a JVM stack, like https://github.com/async-profiler/async-profiler/issues/215).
- FLaNK Stack 29 Jan 2024
-
Tracking Java Native Memory with JDK Flight Recorder
Debugging native calls is itself painful. I have switched to using async-profiler (https://github.com/async-profiler/async-profiler) instead of JFR for most of my use cases:
A. It tracks native calls by default
-
Show HN: Javaflame – Simple Flamegraph for your Java application
https://github.com/async-profiler/async-profiler#flame-graph...
OK, Windows is not supported, but IntelliJ made a fork that works on Windows.
-
Lettuce (Redis) + Mybatis (MySQL) take up most of the CPU in production - Is it normal? Did you observe that in your environment? Any ways to optimize it?
Hi, today I used async-profiler to check the CPU usage of my Spring Boot app (just a normal backend) in production. Surprisingly, Lettuce (Redis) + Mybatis (MySQL) take up most of the CPU time. I am not talking about wall time here, but CPU time, since I know database requests need to wait for milliseconds and thus wall time will be very long. Therefore, I wonder:
-
A question about Http4s new major version
You can use async-profiler to see what is happening under the hood.
- Reducing code size in (Rust) librsvg by removing an unnecessary generic struct
-
What is your favorite programming trick/tool that not many people know about?
I have used VisualVM quite a bit. https://github.com/async-profiler/async-profiler is also amazing... Throw the binary on the system and fire it up. It also profiles down into native code, if you do that kind of thing.
What are some alternatives?
simpleaichat - Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
jmh - https://openjdk.org/projects/code-tools/jmh
modelfusion - The TypeScript library for building AI applications.
container-jfr - Secure JDK Flight Recorder management for containerized JVMs
autogen - A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
jfr-libraries - A list of libraries that generate JFR events
vectordb - A minimal Python package for storing and retrieving text using chunking, embeddings, and vector search.
Arthas - Alibaba Java Diagnostic Tool Arthas
Adala - Adala: Autonomous DAta (Labeling) Agent framework
opentelemetry-java-instrumentation - OpenTelemetry auto-instrumentation and instrumentation libraries for Java
chidori - A reactive runtime for building durable AI agents
junit-jfr - A JUnit 5 extension that generates JFR events