| | Cornucopia-LLaMA-Fin-Chinese | safe-rlhf |
|---|---|---|
| Mentions | 19 | 1 |
| Stars | 536 | 1,160 |
| Growth | - | 4.5% |
| Activity | 4.4 | 8.1 |
| Latest commit | 11 months ago | 23 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
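The exact formula behind the activity number is not published here; as a rough illustration of the idea that "recent commits have higher weight than older ones," the sketch below scores a commit history with an exponential recency decay. The `half_life_days` parameter and the decay shape are assumptions for illustration, not the site's actual method.

```python
from datetime import date

def activity_score(commit_dates, today, half_life_days=30.0):
    """Toy recency-weighted activity score.

    Each commit contributes 0.5 ** (age_in_days / half_life_days),
    so a commit made today counts ~1.0, a commit one half-life old
    counts 0.5, and very old commits contribute almost nothing.
    """
    score = 0.0
    for d in commit_dates:
        age = (today - d).days
        score += 0.5 ** (age / half_life_days)
    return score

today = date(2024, 1, 1)
recent = [date(2023, 12, 31), date(2023, 12, 20), date(2023, 12, 2)]
stale = [date(2023, 6, 1), date(2023, 5, 1), date(2023, 4, 1)]
print(activity_score(recent, today) > activity_score(stale, today))  # recent history scores higher
```

A score like this could then be converted to a percentile rank across all tracked projects, which is how a value such as 9.0 maps to "top 10%."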
What are some alternatives?
Baichuan-7B - A large-scale 7B pretraining language model developed by BaiChuan-Inc.
LLMSurvey - The official GitHub page for the survey paper "A Survey of Large Language Models".
ray-llm - RayLLM - LLMs on Ray
CodeCapybara - Open-source Self-Instruction Tuning Code LLM
tableQA-Chinese - Unsupervised tableQA and databaseQA on chinese finance question and tabular data
AtomGPT - A Chinese-English pretrained large language model, aiming to match ChatGPT's capability level
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment
opening-up-chatgpt.github.io - Tracking instruction-tuned LLM openness. Paper: Liesenfeld, Andreas, Alianda Lopez, and Mark Dingemanse. 2023. “Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators.” In Proceedings of the 5th International Conference on Conversational User Interfaces. doi:10.1145/3571884.3604316.
Huatuo-Llama-Med-Chinese - Repo for BenTsao (original name: HuaTuo, 华驼), a large language model instruction-tuned with Chinese medical knowledge
storium-backend - Source code for the web backend for hosting story generation models in the EMNLP 2020 paper "STORIUM: A Dataset and Evaluation Platform for Human-in-the-Loop Story Generation"
h2o-wizardlm - Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning