| | CodeCapypara | ExpertLLaMA |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 94 | 288 |
| Growth | - | 0.0% |
| Activity | 10.0 | 6.1 |
| Latest commit | about 1 year ago | 12 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
CodeCapypara
[R] CodeCapybara: Another open-source model for code generation based on instruction tuning, outperforming LLaMA and CodeAlpaca
The model can be accessed here: https://github.com/AI4Code-Research/CodeCapypara
ExpertLLaMA
ExpertPrompting: Instructing Large Language Models to be Distinguished Experts
The answering quality of an aligned large language model (LLM) can be drastically improved with proper crafting of prompts. In this paper, we propose ExpertPrompting to elicit the potential of LLMs to answer as distinguished experts. We first utilize In-Context Learning to automatically synthesize a detailed and customized description of the expert identity for each specific instruction, and then ask LLMs to provide answers conditioned on that agent background. Based on this augmented prompting strategy, we produce a new set of instruction-following data using GPT-3.5, and train a competitive open-source chat assistant called ExpertLLaMA. We employ GPT4-based evaluation to show that 1) the expert data is of significantly higher quality than vanilla answers, and 2) ExpertLLaMA outperforms existing open-source opponents and achieves 96% of the original ChatGPT's capability. All data and the ExpertLLaMA model will be made publicly available at this https URL.
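The two-stage scheme described in the abstract (synthesize an expert identity via In-Context Learning, then answer conditioned on it) can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual code: the function names and the few-shot exemplar are hypothetical, and a real pipeline would send both prompts to an LLM API.

```python
# Hedged sketch of ExpertPrompting-style prompt construction.
# The exemplar and helper names below are illustrative assumptions,
# not taken from the ExpertLLaMA repository.

FEW_SHOT_EXEMPLARS = [
    {
        "instruction": "Explain why the sky is blue.",
        "expert_identity": (
            "You are an atmospheric physicist with years of experience "
            "explaining Rayleigh scattering to general audiences."
        ),
    },
]

def build_identity_synthesis_prompt(instruction: str) -> str:
    """Stage 1 (In-Context Learning): ask the LLM to write a detailed,
    customized expert identity for the given instruction, guided by
    a few instruction/identity exemplars."""
    parts = [
        f"Instruction: {ex['instruction']}\nExpert identity: {ex['expert_identity']}"
        for ex in FEW_SHOT_EXEMPLARS
    ]
    parts.append(f"Instruction: {instruction}\nExpert identity:")
    return "\n\n".join(parts)

def build_expert_answer_prompt(instruction: str, expert_identity: str) -> str:
    """Stage 2: condition the final answer on the synthesized identity."""
    return (
        f"{expert_identity}\n\n"
        "Answer the following instruction as that expert.\n\n"
        f"{instruction}"
    )

# Example: the first prompt would be sent to GPT-3.5 to obtain an identity;
# its completion then feeds the second prompt.
stage1 = build_identity_synthesis_prompt("Describe the structure of an atom.")
stage2 = build_expert_answer_prompt(
    "Describe the structure of an atom.",
    "You are a nuclear chemist who teaches introductory atomic theory.",
)
```

In the paper's pipeline, answers collected this way from GPT-3.5 form the instruction-following dataset on which ExpertLLaMA is trained.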
What are some alternatives?
LLaMA-LoRA-Tuner - UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J and more. One-click run on Google Colab. Includes a Gradio ChatGPT-like chat UI to demonstrate your language models.
mPLUG-Owl - mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model
CodeCapybara - Open-source Self-Instruction Tuning Code LLM
example-scalping - A working example algorithm for scalping strategy trading multiple stocks concurrently using python asyncio
LLaMA-Cult-and-More - Large Language Models for All, 🦙 Cult and More, Stay in touch!
example-hftish - Example Order Book Imbalance Algorithm
self-instruct - Aligning pretrained language models with instruction data generated by themselves.
safe-rlhf - Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
alpaca-trade-api-python - Python client for Alpaca's trade API
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)