ctransformers vs LangChain_PDFChat_Oobabooga

| | ctransformers | LangChain_PDFChat_Oobabooga |
|---|---|---|
| Mentions | 4 | 4 |
| Stars | 1,718 | 66 |
| Growth | - | - |
| Activity | 8.6 | 3.0 |
| Latest commit | 4 months ago | about 1 year ago |
| Language | C | Python |
| License | MIT License | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 means a project is among the top 10% of the most actively developed projects being tracked.
Mentions of ctransformers
Refact LLM: New 1.6B code model reaches 32% HumanEval and is SOTA for the size
Does ctransformers (https://github.com/marella/ctransformers#supported-models) support running Refact?
I see the model type "gpt_refact" in https://huggingface.co/smallcloudai/Refact-1_6B-fim/blob/mai...
How do I utilize these quantized models being uploaded?
You can also use ctransformers with the GGML models if you want to use Python rather than C++.
Langchain and self hosted LLaMA hosted API
For GGML, https://github.com/marella/ctransformers/ and https://github.com/abetlen/llama-cpp-python have decent servers. https://github.com/go-skynet/LocalAI is very active too.
Mentions of LangChain_PDFChat_Oobabooga
Ask PDF functionality?
sebaxzero/LangChain_PDFChat_Oobabooga: an oobabooga/text-generation-webui implementation of wafflecomposite's langchain-ask-pdf-local (github.com)
Langchain and self hosted LLaMA hosted API
Here you can find a way to use the Oobabooga API with LangChain: https://github.com/sebaxzero/LangChain_PDFChat_Oobabooga
Using a local LLM for large-scale text analysis
Maybe turn your data into a PDF or text file and feed it to https://github.com/imartinez/privateGPT, or use Oobabooga with https://github.com/sebaxzero/LangChain_PDFChat_Oobabooga
LangChain all running locally on GPU using Oobabooga
I was doing some testing and managed to get a LangChain PDF chatbot working with the oobabooga API, all running locally on my GPU. It uses the main code from langchain-ask-pdf-local with the WebUI class from oobaboogas-webui-langchain_agent; this is the result (100% not my code, I just copied and pasted it): PDFChat_Oobabooga.
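These ask-your-PDF apps all follow the same retrieval pattern: extract the PDF's text, split it into overlapping chunks, embed the chunks, then retrieve the most relevant ones for each question. A minimal stdlib sketch of the splitting step (the sizes are illustrative, not the projects' actual settings):

```python
# Illustrative stand-in for the text-splitting step an "ask your PDF"
# pipeline performs before embedding; chunk_size/overlap values are
# made up for the example, not taken from the projects above.
def split_text(text: str, chunk_size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into chunks of at most chunk_size characters, where
    consecutive chunks share `overlap` characters of context."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

document = "Lorem ipsum dolor sit amet. " * 50  # stands in for extracted PDF text
chunks = split_text(document)
print(len(chunks), max(len(c) for c in chunks))  # prints: 4 400
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk, which is why most text splitters (including LangChain's) expose both a chunk size and an overlap parameter.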
What are some alternatives?
llama-cpp-python - Python bindings for llama.cpp
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
text-generation-inference - Large Language Model Text Generation Inference
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
artificial-nose - Instructions, source code, and misc. resources needed for building a Tiny ML-powered artificial nose.
langchain-ask-pdf-local - An AI-app that allows you to upload a PDF and ask questions about it. It uses StableVicuna 13B and runs locally.
kendryte-standalone-sdk - Standalone SDK for kendryte K210
oobaboogas-webui-langchain_agent - Creates an Langchain Agent which uses the WebUI's API and Wikipedia to work
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures. It can generate text, audio, video, and images, and includes voice-cloning capabilities.
lamp - deep learning and scientific computing framework with native CPU and GPU backend for the Scala programming language