- LLaMA-LoRA-Tuner: UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab, plus a Gradio ChatGPT-like chat UI to demonstrate your language models.
- FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
- text-generation-webui: A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), and Llama models.
If you have access to a data-center-grade GPU, the quickest way to start would be to pick one of the existing fine-tuning efforts, for example Stanford Alpaca (https://github.com/tatsu-lab/stanford_alpaca/) or indeed Vicuna (https://github.com/lm-sys/FastChat), and use your own data. The main issue for home users is that their VRAM is vastly insufficient for standard full fine-tuning: you need room for the original weights, an updated (fp32 master) copy of the weights, the gradients, and Adam's two moment buffers (plus a copy for the AI overlord…), which works out to roughly 16 bytes per parameter with mixed-precision Adam, i.e. over 100 GB for a 7B model before you even count activations.
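This is why parameter-efficient methods like LoRA are attractive for home setups: the base model stays frozen and only small low-rank adapter matrices are trained, so the gradient and optimizer memory shrinks to a tiny fraction of the above. Below is a minimal sketch using the Hugging Face PEFT library; the checkpoint name and all hyperparameters (rank, alpha, target modules) are illustrative assumptions, not recommendations:

```python
# Minimal LoRA fine-tuning setup sketch with Hugging Face PEFT.
# Assumes `transformers`, `peft`, and `accelerate` are installed;
# the checkpoint and hyperparameters below are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # assumption: any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,  # frozen base weights stay in half precision
    device_map="auto",
)

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

With rank-8 adapters on the attention projections, only the adapter weights need gradients and Adam states, so training can fit on a single consumer GPU, especially when combined with 8-bit or 4-bit quantization of the frozen base weights (the QLoRA approach).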
Sure! Here is the link to Oobabooga: https://github.com/oobabooga/text-generation-webui