Text-generation-webui-testing Alternatives
Similar projects and alternatives to text-generation-webui-testing
- text-generation-webui
  A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), and Llama models.
- LLaMA-8bit-LoRA
  Repository for Chat LLaMA - training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only.
text-generation-webui-testing reviews and mentions
- Slow inference on R720 w/P40 (or not)?
  Also autograd from here: https://github.com/Ph0rk0z/text-generation-webui-testing/ and its matching GPTQ: https://github.com/Ph0rk0z/GPTQ-Merged/tree/dual-model
- Call me a fool, but I thought 24 GB of RAM would get me 2048 context with 13B GPTQ
  Some of the novel-ai ones work, but the first preset I tried that kept it from spazzing out was: https://github.com/Ph0rk0z/text-generation-webui-testing/blob/DualModel/presets/MOSS.yaml
- Finetuning on multiple GPUs
  I've never tried that particular one. Everything else I threw at it trained successfully through: https://github.com/Ph0rk0z/text-generation-webui-testing/
- Best current tutorial for training your own LoRA? Also I've got a 24GB 3090, so which models would you recommend fine-tuning on?
  As integrated in https://github.com/Ph0rk0z/text-generation-webui-testing/
- Monkeypatch Issues
  If you like the "monkeypatch", https://github.com/Ph0rk0z/text-generation-webui-testing/ is better. I think someone in the discussion got it running on Windows.
- My Lora training locally experiments
- Any news on training LoRAs in 4-bit mode?
  https://github.com/Ph0rk0z/text-generation-webui-testing — 4-bit LoRA use from the UI on old GPTQ.
- Keep your GPUs cool
  Well, I'm running it with oobabooga/text-generation-webui, and 8-bit now works after I applied this fix: "Add 8bit threshold for my Pascal card. I use 1.5 or 1.0, otherwise NaN" · Ph0rk0z/text-generation-webui-testing@ecad08f (github.com)
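The commit referenced in that mention lowers the LLM.int8() outlier threshold, which defaults to 6.0 in bitsandbytes and is reported to produce NaNs on pre-Ampere (Pascal) cards. A minimal sketch of the equivalent setting through the Hugging Face transformers API — the exact value (1.5 or 1.0) comes from the commenter's report, and how you load the model afterwards is up to you:

```python
# Hedged sketch (config fragment): raising the LLM.int8() outlier threshold,
# as in the linked commit. The commenter reports 1.5 or 1.0 avoids NaNs on
# Pascal cards; the transformers/bitsandbytes default is 6.0.
from transformers import BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=1.5,  # lower outlier threshold for pre-Ampere GPUs
)

# Pass this to AutoModelForCausalLM.from_pretrained(...,
# quantization_config=quant_config) when loading the model.
```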
- 4bit LoRA Guide for Oobabooga!
Stats
Ph0rk0z/text-generation-webui-testing is an open-source project licensed under the GNU Affero General Public License v3.0, an OSI-approved license.
The primary programming language of text-generation-webui-testing is Python.
Popular Comparisons
- text-generation-webui-testing VS bitsandbytes
- text-generation-webui-testing VS LLaMA-8bit-LoRA
- text-generation-webui-testing VS alpaca_lora_4bit
- text-generation-webui-testing VS alpaca_lora_4bit_readme
- text-generation-webui-testing VS private-gpt
- text-generation-webui-testing VS axolotl
- text-generation-webui-testing VS GPTQ-for-LLaMa
- text-generation-webui-testing VS text-generation-webui
- text-generation-webui-testing VS GPTQ-Merged