 | LLMSurvey | Chinese-LLaMA-Alpaca
---|---|---
Mentions | 3 | 4
Stars | 8,902 | 17,466
Growth | 9.2% | -
Activity | 7.9 | 8.3
Last commit | 4 months ago | 11 days ago
Language | Python | Python
License | - | Apache License 2.0
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLMSurvey
- Ask HN: Textbook Regarding LLMs
  Here's another one - it's older but has some interesting charts and graphs. https://arxiv.org/abs/2303.18223
- Share your favorite materials: intersection of LLMs and business applications
  There have recently been some nice early surveys on progress, pitfalls, and future research directions:
  - A Survey of Large Language Models: https://arxiv.org/abs/2303.18223
Chinese-LLaMA-Alpaca
- Chinese-Alpaca-Plus-13B-GPTQ
  I'd like to share with you today the Chinese-Alpaca-Plus-13B-GPTQ model, a GPTQ-format 4-bit quantised version of Yiming Cui's Chinese-LLaMA-Alpaca 13B for GPU inference.
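The GPTQ format mentioned above stores weights as low-bit integers plus per-group scale factors. The following is a minimal, self-contained sketch of plain round-to-nearest 4-bit group quantization for illustration only (GPTQ proper additionally applies an error-compensating update as it quantizes each column; function names here are illustrative, not from any library):

```python
# Simplified sketch of 4-bit group quantization, the kind of storage scheme
# GPTQ-format models use. This is plain round-to-nearest (RTN), not the full
# GPTQ algorithm, which also redistributes quantization error.

def quantize_group(weights, bits=4):
    """Map a group of floats to signed ints in [-8, 7] plus one shared scale."""
    qmax = 2 ** (bits - 1) - 1                        # 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize_group(q, scale):
    """Recover approximate floats from the stored ints and scale."""
    return [x * scale for x in q]

w = [0.12, -0.50, 0.31, 0.07]
q, s = quantize_group(w)
w_hat = dequantize_group(q, s)
# per-weight reconstruction error is bounded by half a quantization step
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, w_hat))
```

Storing 4-bit integers plus one scale per group cuts weight memory roughly 4x versus fp16, which is why a quantised 13B model fits on a single consumer GPU.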
- How to train a new language that is not in the base model?
  You could follow what people did with Chinese-LLaMA, just for Korean. You might want to have a pure Korean corpus before feeding in a translation dataset. How big is it, by the way?
- Open Source Chinese LLMs
- Is it possible to fine-tune the LLaMA model to better understand another language?
  Chinese: https://github.com/ymcui/Chinese-LLaMA-Alpaca
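The Chinese-LLaMA recipe referenced in these threads extends the base tokenizer's vocabulary with new-language subword pieces and then resizes the model's embedding matrix before continued pretraining. A minimal, self-contained sketch of the vocabulary-merge step, with the same idea applied to Korean (all tokens here are illustrative; the real project merges SentencePiece models):

```python
# Sketch of vocabulary extension, the core idea behind adapting LLaMA to a
# new language: append unseen subword pieces so that existing token ids stay
# stable and the pretrained embedding rows still line up after a resize.

def extend_vocab(base_vocab, new_pieces):
    """Merge new pieces into a token -> id vocab, appending new ids at the end."""
    vocab = dict(base_vocab)              # ids 0..len-1 already assigned
    for piece in new_pieces:
        if piece not in vocab:
            vocab[piece] = len(vocab)     # new ids go after existing ones
    return vocab

base = {"<s>": 0, "</s>": 1, "the": 2, "▁an": 3}
korean = ["▁한국", "어", "the"]           # "the" already exists and is skipped
merged = extend_vocab(base, korean)

assert merged["the"] == 2                 # old ids are unchanged
assert len(merged) == 6                   # two genuinely new pieces were added
```

Because new ids are appended, the embedding matrix can simply grow (e.g. `model.resize_token_embeddings(len(tokenizer))` in HuggingFace Transformers) without disturbing pretrained rows; the new rows are then learned during continued pretraining on the new-language corpus.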
What are some alternatives?
ChatGLM2-6B - An open bilingual chat LLM (open-source bilingual dialogue language model)
mPLUG-Owl - mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model
CodeCapybara - Open-source Self-Instruction Tuning Code LLM
HugNLP - CIKM 2023 Best Demo Paper Award. HugNLP is a unified and comprehensive NLP library based on HuggingFace Transformers. Hugging for NLP now! 😊
paxml - Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimentation and parallelization, and has demonstrated industry leading model flop utilization rates.
safe-rlhf - Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Qwen-VL - The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud.
Qwen - The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
LLM-Agent-Paper-List - The paper list of the 86-page paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.
opening-up-chatgpt.github.io - Tracking instruction-tuned LLM openness. Paper: Liesenfeld, Andreas, Alianda Lopez, and Mark Dingemanse. 2023. “Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators.” In Proceedings of the 5th International Conference on Conversational User Interfaces. doi:10.1145/3571884.3604316.
Cornucopia-LLaMA-Fin-Chinese - Cornucopia (聚宝盆): a series of open-source, commercially usable large language models for Chinese finance, with an efficient, lightweight training framework for vertical-domain LLMs (pretraining, SFT, RLHF, quantization, etc.)