| | LangChain_PDFChat_Oobabooga | text-generation-inference |
|---|---|---|
| Mentions | 4 | 30 |
| Stars | 66 | 8,193 |
| Stars growth | - | 3.8% |
| Activity | 3.0 | 9.6 |
| Last commit | about 1 year ago | 6 days ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LangChain_PDFChat_Oobabooga
-
Ask PDF functionality?
sebaxzero/LangChain_PDFChat_Oobabooga: oobabooga text-generation-webui implementation of wafflecomposite - langchain-ask-pdf-local (github.com)
-
LangChain and a self-hosted LLaMA API
Here you can find a way to use the Oobabooga API with LangChain: https://github.com/sebaxzero/LangChain_PDFChat_Oobabooga
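As a rough illustration of what that wiring looks like, here is a minimal sketch (not the repo's actual code) of exposing an oobabooga text-generation-webui instance to LangChain as a custom LLM. The endpoint path and payload shape follow the webui's legacy blocking API and are assumptions; adjust them to match your installation.

```python
# A minimal sketch (not the repo's actual code) of wrapping a locally
# hosted oobabooga text-generation-webui instance as a LangChain LLM.
# The endpoint path and payload shape are assumptions based on the
# webui's legacy blocking API.
from typing import List, Optional

import requests
from langchain.llms.base import LLM


class OobaboogaLLM(LLM):
    # Assumed default port of the webui's legacy API extension.
    endpoint: str = "http://localhost:5000/api/v1/generate"

    @property
    def _llm_type(self) -> str:
        return "oobabooga"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs) -> str:
        payload = {
            "prompt": prompt,
            "max_new_tokens": 256,
            "stopping_strings": stop or [],
        }
        response = requests.post(self.endpoint, json=payload, timeout=120)
        response.raise_for_status()
        # The legacy API returns {"results": [{"text": "..."}]}.
        return response.json()["results"][0]["text"]
```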
-
Using a local LLM for large-scale text analysis
Maybe turn your data into a PDF or text file, then feed it to https://github.com/imartinez/privateGPT, or use Oobabooga with https://github.com/sebaxzero/LangChain_PDFChat_Oobabooga
-
langchain all run locally with gpu using oobabooga
I was doing some testing and managed to get a LangChain PDF chat bot working with the oobabooga API, all running locally on my GPU, using the main code from langchain-ask-pdf-local with the webui class from oobaboogas-webui-langchain_agent. This is the result (100% not my code, I just copied and pasted it): PDFChat_Oobabooga.
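For readers wondering what such a PDF chat bot does under the hood, the pattern is roughly: extract text from the PDF, split it into chunks, embed the chunks into a vector store, then answer questions with a QA chain over the top-k retrieved chunks. A minimal sketch, assuming a local sentence-transformers embedding model and the hypothetical OobaboogaLLM wrapper from the sketch above; the model choices are illustrative, not the repo's:

```python
# Minimal sketch of the ask-your-PDF pattern described above.
from pypdf import PdfReader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain

# 1. Extract raw text from the PDF.
reader = PdfReader("document.pdf")
text = "".join(page.extract_text() or "" for page in reader.pages)

# 2. Split into overlapping chunks so each fits the model's context.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(text)

# 3. Embed chunks locally and index them in a FAISS vector store.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"  # assumed model
)
store = FAISS.from_texts(chunks, embeddings)

# 4. Retrieve the most relevant chunks and let the local LLM answer.
question = "What is this document about?"
docs = store.similarity_search(question, k=4)
chain = load_qa_chain(OobaboogaLLM(), chain_type="stuff")
print(chain.run(input_documents=docs, question=question))
```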
text-generation-inference
-
Best LLM Inference Engines and Servers to Deploy LLMs in Production
GitHub repository: https://github.com/huggingface/text-generation-inference
- FLaNK AI - April 22, 2024
-
Zephyr 141B, a Mixtral 8x22B fine-tune, is now available in Hugging Chat
I wanted to write that the TGI inference engine is no longer open source, but they have reverted the license back to Apache 2.0 for the new version, TGI v2.0: https://github.com/huggingface/text-generation-inference/rel...
Good news!
- Hugging Face reverts the license back to Apache 2.0
- HuggingFace text-generation-inference is reverting to Apache 2.0 License
- FLaNK Stack 05 Feb 2024
- Is there any open source app to load a model and expose API like OpenAI?
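On that last question: recent TGI releases (v1.4 and later) expose an OpenAI-compatible Messages API, so a locally served model can be called with the standard openai client. A minimal sketch, assuming a TGI server is already running on localhost:8080:

```python
# Minimal sketch: calling a local TGI server through its
# OpenAI-compatible Messages API (available since TGI v1.4).
from openai import OpenAI

# TGI does not check the API key; "tgi" is the placeholder model
# name the server accepts for whatever model it is serving.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="-")

resp = client.chat.completions.create(
    model="tgi",
    messages=[{"role": "user", "content": "Summarize continuous batching in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```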
-
AI Code assistant for about 50-70 users
Setting up a server for multiple users is very different from setting up an LLM for yourself. A safe bet would be to just use TGI, which supports continuous batching and is very easy to run via Docker on your server: https://github.com/huggingface/text-generation-inference
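For concreteness, TGI's documented quick start boils down to a single docker run plus an HTTP call against its /generate endpoint; the port mapping and model id below are illustrative assumptions, not details from the thread:

```python
# Launch TGI in Docker (shell command shown as a comment), then query
# its /generate endpoint. Continuous batching happens server-side, so
# concurrent users' requests are interleaved automatically.
#
#   docker run --gpus all --shm-size 1g -p 8080:80 \
#       ghcr.io/huggingface/text-generation-inference:latest \
#       --model-id mistralai/Mistral-7B-Instruct-v0.2
#
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": "def fibonacci(n):", "parameters": {"max_new_tokens": 128}},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```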
-
LocalPilot: Open-source GitHub Copilot on your MacBook
Okay, I actually got local co-pilot set up. You will need these 4 things.
1) CodeLlama 13B or another FIM model https://huggingface.co/codellama/CodeLlama-13b-hf. You want "Fill in Middle" (FIM) models because you're looking at context on both sides of your cursor (see the sketch after this list).
2) HuggingFace llm-ls https://github.com/huggingface/llm-ls A large language model Language Server (is this making sense yet?)
3) HuggingFace inference framework. https://github.com/huggingface/text-generation-inference At least when I tested it, you couldn't use something like llama.cpp or exllama with llm-ls, so you need to break out the heavy-duty badboy HuggingFace inference server. Just configure and run it, then configure and run llm-ls.
4) Okay, I mean you need an editor. I just tried nvim, and this was a few weeks ago, so there may be better support now. My experience was that it was full, honest-to-god Copilot. The CodeLlama models are known to be quite good for their size. The FIM part is great. Boilerplate is so much easier with the surrounding context. I'd like to see more models released that can work this way.
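As item 1 hints, "fill in the middle" means the model completes a gap using both the prefix and the suffix around the cursor. A minimal sketch following the CodeLlama model card's infilling convention, where <FILL_ME> marks the gap; the 13B weights need a suitably large GPU, and the example prompt is illustrative:

```python
# Fill-in-the-middle with CodeLlama: <FILL_ME> marks the gap between
# prefix and suffix, mirroring the model card's infilling example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = '''def remove_non_ascii(s: str) -> str:
    """<FILL_ME>
    return result
'''
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
generated = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens, i.e. the infilled middle.
filling = tokenizer.batch_decode(
    generated[:, input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(prompt.replace("<FILL_ME>", filling))
```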
-
Mistral 7B Paper on ArXiv
A simple microservice would be https://github.com/huggingface/text-generation-inference.
Works flawlessly in Docker on my Windows machine, which is extremely shocking.