-
h2ogpt
Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
-
lit-gpt
Discontinued. Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed. [Moved to: https://github.com/Lightning-AI/litgpt]
Thank you, OP. Your examples are truly insightful and align perfectly with what I was hoping to glean from this thread. I've been grappling with the decision of whether to first learn a library like LlamaIndex, or to start with fine-tuning an LLM.
If you like embeddings and vector DBs, you should look into this: https://github.com/HKUNLP/instructor-embedding
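Whatever model produces the embeddings (instructor-embedding conditions them on a task instruction), retrieval against a vector store usually comes down to cosine similarity between the query vector and each document vector. A minimal sketch of that ranking step, using toy hand-written 3-d vectors in place of real model output (the actual encode call is not shown):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings" standing in for real model output.
query_vec = [0.1, 0.9, 0.2]
doc_vecs = {
    "doc_a": [0.1, 0.8, 0.3],
    "doc_b": [0.9, 0.1, 0.0],
}

# Rank documents by similarity to the query.
best = max(doc_vecs, key=lambda k: cosine_similarity(query_vec, doc_vecs[k]))
```

A real vector DB does the same comparison, just with approximate nearest-neighbor indexing instead of a linear scan.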
Yes, there is a lot of potential. You can check this project for agents: https://github.com/Nuggt-dev/Nuggt/ . Currently I only have "simple" projects: mostly zero-shot LLM calls to get responses. Agents are not yet mature enough to be integrated into production environments.
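"Zero-shot" here just means prompting the model with a task description and an input, with no worked examples in the prompt. A minimal sketch of building such a prompt (the prompt wording and the model call that would consume it are assumptions, not from any particular project):

```python
def build_zero_shot_prompt(instruction, user_input):
    # Zero-shot: only the task description and the input, no examples.
    return (
        f"{instruction}\n\n"
        f"Input: {user_input}\n"
        "Answer:"
    )

prompt = build_zero_shot_prompt(
    "Classify the sentiment of the following review as positive or negative.",
    "The battery lasts forever and the screen is gorgeous.",
)
```

The resulting string would then be sent to whichever LLM endpoint the project uses.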
Yep, same. This works decently well: https://github.com/freedmand/semantra
I'm also working on fine-tuning models for Q&A, and I've fine-tuned llama-7b, falcon-40b, and oasst-pythia-12b using Hugging Face's SFT, H2OGPT's finetuning script, and lit-gpt.
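A common preprocessing step shared by these SFT pipelines is flattening (question, answer) pairs into single training strings under a prompt template. A sketch of that step, with a hypothetical template (h2ogpt and lit-gpt each define their own):

```python
import json

# Hypothetical template; real finetuning scripts each ship their own format.
TEMPLATE = "### Question:\n{question}\n\n### Answer:\n{answer}"

def to_sft_records(pairs):
    # Turn (question, answer) pairs into single-text records,
    # the shape most supervised fine-tuning trainers consume.
    return [{"text": TEMPLATE.format(question=q, answer=a)} for q, a in pairs]

records = to_sft_records([
    ("What license does h2ogpt use?", "Apache 2.0."),
])
print(json.dumps(records[0]))
```

From here the records would typically be written out as JSONL and passed to the trainer's dataset loader.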