ExpertLLaMA
An open-source chatbot built with ExpertPrompting that achieves 96% of ChatGPT's capability. (by OFA-Sys)
CodeCapybara
Open-source Self-Instruction Tuning Code LLM (by FSoft-AI4Code)
| | ExpertLLaMA | CodeCapybara |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 288 | 156 |
| Stars growth (monthly) | 0.0% | 1.3% |
| Activity | 6.1 | 5.9 |
| Latest commit | 12 months ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions we have tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ExpertLLaMA
Posts with mentions or reviews of ExpertLLaMA.
We have used some of these posts to build our list of alternatives and similar projects.
- ExpertPrompting: Instructing Large Language Models to be Distinguished Experts
The answering quality of an aligned large language model (LLM) can be drastically improved if treated with proper crafting of prompts. In this paper, we propose ExpertPrompting to elicit the potential of LLMs to answer as distinguished experts. We first utilize In-Context Learning to automatically synthesize detailed and customized descriptions of the expert identity for each specific instruction, and then ask LLMs to provide answers conditioned on such agent background. Based on this augmented prompting strategy, we produce a new set of instruction-following data using GPT-3.5, and train a competitive open-source chat assistant called ExpertLLaMA. We employ GPT-4-based evaluation to show that 1) the expert data is of significantly higher quality than vanilla answers, and 2) ExpertLLaMA outperforms existing open-source opponents and achieves 96% of the original ChatGPT's capability. All data and the ExpertLLaMA model will be made publicly available at this https URL.
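The two-stage flow described in the abstract (synthesize an expert identity via In-Context Learning, then answer conditioned on it) can be sketched roughly as below. This is a minimal illustration, not the paper's actual prompts or exemplars: `call_llm`, `EXPERT_EXEMPLARS`, and the prompt wording are all hypothetical placeholders.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to any chat-completion API and return the reply."""
    raise NotImplementedError("wire this up to your LLM API of choice")

# Few-shot exemplars for In-Context Learning of expert identities.
# Illustrative only; the paper's actual exemplars are not reproduced here.
EXPERT_EXEMPLARS = [
    ("Explain how vaccines work.",
     "You are an immunologist with 20 years of research experience..."),
    ("Review this Python function for bugs.",
     "You are a senior software engineer specializing in code review..."),
]

def synthesize_expert_identity(instruction: str, llm=call_llm) -> str:
    """Stage 1: ask the LLM to write a detailed expert identity for the instruction."""
    shots = "\n\n".join(
        f"Instruction: {ins}\nExpert identity: {ident}"
        for ins, ident in EXPERT_EXEMPLARS
    )
    prompt = (
        "For each instruction, write a detailed description of the expert "
        "best suited to answer it.\n\n"
        f"{shots}\n\nInstruction: {instruction}\nExpert identity:"
    )
    return llm(prompt)

def expert_answer(instruction: str, llm=call_llm) -> str:
    """Stage 2: answer the instruction conditioned on the synthesized identity."""
    identity = synthesize_expert_identity(instruction, llm)
    prompt = f"{identity}\n\nNow answer the following as this expert:\n{instruction}"
    return llm(prompt)
```

The same pattern was reportedly used with GPT-3.5 to regenerate instruction-following data, which then served as the training set for ExpertLLaMA.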
CodeCapybara
Posts with mentions or reviews of CodeCapybara.
We have used some of these posts to build our list of alternatives and similar projects.
What are some alternatives?
When comparing ExpertLLaMA and CodeCapybara you can also consider the following projects:
mPLUG-Owl - mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model
Chinese-LLaMA-Alpaca - 中文LLaMA&Alpaca大语言模型+本地CPU/GPU训练部署 (Chinese LLaMA & Alpaca LLMs)
CodeCapypara - [Moved to: https://github.com/FSoft-AI4Code/CodeCapybara]
LLaMA-LoRA-Tuner - UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab, plus a Gradio ChatGPT-like chat UI to demonstrate your language models.
LLaMA-Cult-and-More - Large Language Models for All, 🦙 Cult and More. Stay in touch!
safe-rlhf - Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback