TencentPretrain vs llama-classification

| | TencentPretrain | llama-classification |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 983 | 90 |
| Growth | 1.1% | - |
| Activity | 7.6 | 3.2 |
| Latest commit | 8 days ago | about 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
"zero-shot / few-shot / fine-tuning classification with Llama?" - sh0416/llama-classification: Text classification with Foundation Language Model LLaMA (github.com)
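The linked repository classifies text by scoring candidate labels with a foundation language model. A minimal sketch of that likelihood-based zero-shot approach, assuming a generic causal LM; the prompt template, function names, and model checkpoint below are illustrative assumptions, not the repo's actual code:

```python
# Sketch of likelihood-based zero-shot classification with a causal LM
# such as LLaMA. The prompt template and helper names are assumptions,
# not the API of sh0416/llama-classification.

def classify(text, labels, log_likelihood):
    """Return the label whose verbalized prompt the LM scores highest.

    log_likelihood(prompt) -> float: total log-probability the model
    assigns to the prompt; higher means more plausible.
    """
    scores = {
        label: log_likelihood(f"Text: {text}\nLabel: {label}")
        for label in labels
    }
    return max(scores, key=scores.get)

# With Hugging Face transformers, a scorer could be built roughly like
# this (left as comments to keep the sketch dependency-free):
#
#   tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
#   model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
#   def log_likelihood(prompt):
#       ids = tok(prompt, return_tensors="pt").input_ids
#       loss = model(ids, labels=ids).loss   # mean negative log-likelihood
#       return -loss.item() * ids.shape[1]   # total log-probability
```

Usage would look like `classify("great movie", ["positive", "negative"], log_likelihood)`; the same scoring function covers few-shot classification if in-context examples are prepended to the prompt.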
What are some alternatives?
simpleT5 - simpleT5 is built on top of PyTorch-lightning⚡️ and Transformers🤗 that lets you quickly train your T5 models.
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
tiger - Open Source LLM toolkit to build trustworthy LLM applications. TigerArmor (AI safety), TigerRAG (embedding, RAG), TigerTune (fine-tuning)
AtomGPT - a Chinese-English pretrained large model, aiming to match ChatGPT's capabilities
alpaca-lora - Instruct-tune LLaMA on consumer hardware
LLM-Adapters - Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.
awesome-pretrained-chinese-nlp-models - Awesome Pretrained Chinese NLP Models: a high-quality collection of Chinese pretrained models, large models, multimodal models, and large language models
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.