Simple-llm-finetuner Alternatives
Similar projects and alternatives to simple-llm-finetuner
-
text-generation-webui
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
-
petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
-
xTuring
Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
-
sidekick
Discontinued. Universal APIs for unstructured data. Sync documents from SaaS tools to a SQL or vector database, where they can be easily queried by AI applications [Moved to: https://github.com/psychic-api/psychic] (by ai-sidekick)
simple-llm-finetuner reviews and mentions
-
Ask HN: Resource to learn how to train and use ML Models
Just the appropriate reddit groups and follow folks on twitter, plus use a search engine.
1. Learn to run a model: check out llama.cpp. Tons of free models on huggingface.com.
2. Learn to finetune a model - https://github.com/lxe/simple-llm-finetuner
3. Learn to train one. PyTorch, TensorFlow, HuggingFace libraries, etc.
Good luck.
- How can I train my custom dataset on top of Vicuna?
-
[D] The best way to train an LLM on company data
So as far as setup goes, you just need to:

git clone https://github.com/lxe/simple-llama-finetuner
cd simple-llama-finetuner
pip install -r requirements.txt
python app.py

If you're on a remote machine (Paperspace is my go-to), you may need to edit the last line of app.py to set 'share=True' in the launch args.
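The "share=True" edit mentioned above refers to Gradio's launch() call, which serves the UI locally by default and only creates a temporary public share link when asked. The class below is a stand-in sketch of that behavior (FakeUI is hypothetical, not the repo's or Gradio's actual code; only the launch(share=...) signature mirrors Gradio's documented API):

```python
class FakeUI:
    """Stand-in for a Gradio Blocks/Interface object (illustrative only)."""

    def launch(self, share=False, server_name="127.0.0.1"):
        # Gradio's real launch() starts a local web server; with share=True
        # it additionally opens a temporary public *.gradio.live tunnel,
        # which is what makes the UI reachable from a remote machine.
        return "public share link" if share else f"local only on {server_name}"

ui = FakeUI()
print(ui.launch())            # default: served only on localhost
print(ui.launch(share=True))  # the edited last line of app.py does this
```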
-
Show HN: Document Q&A with GPT: web, .pdf, .docx, etc.
oobabooga's textgen webui has a tab for fine tuning now. You only need a single consumer GPU to fine tune up to 33B parameter models at a rate of about 200 epochs per hour, per GPU.
There are also one-click finetuning projects which run on free Google Colab GPUs like https://github.com/lxe/simple-llama-finetuner
It's straightforward to set up.
-
How do I fine tune 4 bit or 8 bit models?
for a single 4090, easiest way to get started and simple to use: https://github.com/lxe/simple-llama-finetuner
- Are there publicly available datasets other than Alpaca that we can use to fine-tune LLaMA?
- Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
- [Project] Finetune LLaMA-7B on commodity GPUs (and Colab) using your own text
Stats
lxe/simple-llm-finetuner is an open-source project licensed under the MIT License, an OSI-approved license.
The primary programming language of simple-llm-finetuner is Jupyter Notebook.
Popular Comparisons
- simple-llm-finetuner VS alpaca-lora
- simple-llm-finetuner VS paper-qa
- simple-llm-finetuner VS peft
- simple-llm-finetuner VS Made-With-ML
- simple-llm-finetuner VS minimal-llama
- simple-llm-finetuner VS OpenChatKit
- simple-llm-finetuner VS text-generation-webui
- simple-llm-finetuner VS content-chatbot
- simple-llm-finetuner VS WebChatRWKVstic
- simple-llm-finetuner VS AlpacaDataCleaned