chatdocs vs ue5-llama-lora

| | chatdocs | ue5-llama-lora |
|---|---|---|
| Mentions | 11 | 16 |
| Stars | 650 | 450 |
| Growth | - | - |
| Activity | 6.9 | 2.9 |
| Latest commit | 8 months ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
chatdocs
- Local LLMs GPUs
  https://github.com/marella/chatdocs, this one, right? Takes close to a minute to answer.
- Struggling with Local LLMs
  https://github.com/marella/chatdocs, this one.
- Best commercially viable method to ask questions against a set of 30~ PDFs?
  See here: https://github.com/marella/chatdocs#configuration (chatdocs.yml file, context_length). A sketch of such a config appears after this list.
- Document digest & oobabooga
  What about chatdocs? I asked about it here and the author seems to be open to the idea.
- What is the best way to create a knowledge-base specific LLM chatbot?
  https://github.com/marella/chatdocs is a fork of privateGPT with many added features: GPU support and a chat UI. There is a Reddit thread about it: https://www.reddit.com/r/LocalLLaMA/comments/14174f4/chatdocs_privategpt_web_ui_gpu_support_more/
- Need help finding local LLM
  and: https://github.com/marella/chatdocs
- Chatdocs with mixed language source documents
  I played around a bit with chatdocs (https://github.com/marella/chatdocs), but unfortunately my source documents are in mixed languages, German and English to be specific. The result heavily depends on the language the question is asked in, which totally makes sense to me.
- Creating an Org Knowledge Management System
  https://github.com/PromtEngineer/localGPT or https://github.com/marella/chatdocs
- FLaNK Stack Weekly for 12 June 2023
- You can now chat with your documents privately!
  Sounds like a great project, and I really like the YouTube tutorials. I haven't been able to get it to work inside WSL. I tried another project with a UI and it works: https://github.com/marella/chatdocs
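The configuration referenced in the posts above lives in a chatdocs.yml file in the directory chatdocs runs from. A minimal sketch, assuming the ctransformers backend shown in the project README; the model names are placeholders and context_length is just an example value, so check the linked configuration docs for the exact keys:

```yaml
# chatdocs.yml - illustrative sketch; model names and values are placeholders
embeddings:
  model: hkunlp/instructor-large   # embedding model used to index documents

llm: ctransformers                 # backend that generates the answers

ctransformers:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-GGML
  model_file: Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin
  model_type: llama
  config:
    context_length: 1024           # the setting mentioned in the post above
```

Per the README's quickstart, the usual flow is chatdocs download to fetch the models, chatdocs add <directory> to index documents, and chatdocs ui for the web interface. A larger context_length lets the model see more retrieved text per question, at the cost of memory and speed.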
ue5-llama-lora
- [Request] A tracker for all 'useful' llama applications updated every week
  https://github.com/bublint/ue5-llama-lora https://www.reddit.com/r/LocalLLaMA/comments/157vzq6/unleashing_the_power_of_language_learning_models/
- Training Pygmalion on a specific subject. Is it possible?
  You can follow this guy's steps: https://github.com/bublint/ue5-llama-lora He basically just throws a txt file at the model and lets it read it. Pygmalion 7B and 13B are both LLaMA-based, so you can follow his steps to fine-tune whichever Pygmalion you want. You will need to prepare the character's script (chat, reactions, background) in a file, then hope the model learns it correctly; it may take multiple fine-tuning runs if your data is too small or the model learns it wrong somehow. A rough sketch of this kind of LoRA fine-tune appears after this list.
- What is the best way to create a knowledge-base specific LLM chatbot?
  ue5-llama-lora might be a good starting point. It doesn't use any advanced features at all.
- Is there a way to fine-tune llama on extremely small dataset?
- Simplifying documentation navigation: here is UE5_documentalist, a personal project that provides a natural language query system
  I got the idea from a Reddit user who started a project that uses LLMs (forgot his name, but here's a link to his repo), and I based my implementation on this TDS article.
- Proof-of-concept with fine-tuning on local data?
  Someone did a LoRA fine-tuning example using the UE5 documentation, which I replicated to make sure, and you do end up producing word patterns from the document, but it doesn't get incorporated as concepts very well.
- Please explain to a 5 years old Lora concept and how to fine tune
  GitHub - bublint/ue5-llama-lora: A proof-of-concept project that showcases the potential for using small, locally trainable LLMs to create next-generation documentation tools.
- Is what I need possible currently?
  This would be an interesting experiment. People are already doing similar stuff, such as expanding the model's knowledge domain into something specific. Here's an example of how someone created a LoRA for UE5 documentation: https://github.com/bublint/ue5-llama-lora
- Can I train gpt4-x-alpaca with my own data?
  I basically copied the steps from this Unreal Engine 5 LoRA repo and adjusted as needed.
- [P] Finetuning a commercially viable open source LLM (Flan-UL2) using Alpaca, Dolly15K and LoRA
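For the posts above that follow the repo's raw-text recipe, here is a minimal, hedged sketch of a LoRA fine-tune on a documentation dump using the Hugging Face peft library (listed under the alternatives below). The base model name, the ue5_docs.txt path, and the hyperparameters are illustrative assumptions, not the repo's exact settings; as I understand it, the repo itself drives training through the oobabooga web UI rather than a script like this:

```python
# Minimal LoRA fine-tune on a raw text file with transformers + peft.
# All names, paths, and hyperparameters below are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "openlm-research/open_llama_7b"  # any LLaMA-family base model
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Attach a LoRA adapter: only the small adapter matrices are trained.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, common for LLaMA LoRAs
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Load the raw documentation dump (e.g. scraped UE5 docs), drop blank lines,
# and tokenize it into fixed-size chunks.
dataset = load_dataset("text", data_files={"train": "ue5_docs.txt"})["train"]
dataset = dataset.filter(lambda row: row["text"].strip())

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="ue5-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("ue5-lora")  # saves only the adapter weights
```

This squares with the observation quoted above: a raw-text LoRA mostly teaches the model the corpus's surface patterns, which is why retrieval tools like chatdocs tend to work better for factual lookup.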
What are some alternatives?
localGPT - Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
Documize - Modern Confluence alternative designed for internal & external docs, built with Go + EmberJS
h2o-llmstudio - H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
Olive - Olive is an easy-to-use hardware-aware model optimization tool that composes industry-leading techniques across model compression, optimization, and compilation.
llama_farm - Use local llama LLM or openai to chat, discuss/summarize your documents, youtube videos, and so on.
roop - one-click face swap
ue5_documentalist - Turning the UE5 documentation to a searchable database
lance - Modern columnar data format for ML and LLMs implemented in Rust. Convert from Parquet in 2 lines of code for 100x faster random access, vector index, and data versioning. Compatible with Pandas, DuckDB, Polars, and PyArrow, with more integrations coming.
dolly - Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform
documenso - The Open Source DocuSign Alternative.
peft - 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.