| | ctrl-sum | MOSS |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 145 | 11,825 |
| Growth | 0.0% | 0.3% |
| Activity | 0.0 | 8.5 |
| Latest commit | 11 months ago | 8 months ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
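The site does not publish its activity formula, only that recent commits are weighted more heavily than older ones. As an illustrative sketch (not the actual implementation), one common way to get that behavior is an exponential decay over commit age; the function name and the 90-day half-life below are assumptions:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, now=None, half_life_days=90.0):
    """Hypothetical recency-weighted activity metric.

    Each commit contributes 0.5 ** (age_in_days / half_life_days),
    so a commit made today counts fully (1.0), a commit one
    half-life ago counts 0.5, and very old commits count near 0.
    """
    now = now or datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score
```

Under this sketch, a project with many recent commits scores far higher than one with the same total number of commits spread over past years, which matches the described ranking behavior.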
ctrl-sum
- [R] [P] CTRLsum: Towards Generic Controllable Text Summarization Web Demo
- GitHub: https://github.com/salesforce/ctrl-sum
MOSS
- Has anyone tried fine-tuning on a dataset of complex tasks that require tool use?
- Benchmarks for Recent LLMs: missing Vicuna, Dolly, BELLE, Phoenix, MOSS, and the ones used by Open Assistant.
- [D] Open-Source LLMs vs APIs
- GitHub - OpenLMLab/MOSS: An open-source tool-augmented conversational language model from Fudan University
What are some alternatives?
gpt-2-simple - Python package to easily retrain OpenAI's GPT-2 text-generating model on new texts
LLMZoo - LLM Zoo is a project that provides data, models, and an evaluation benchmark for large language models.
GPT2-Chinese - Chinese version of GPT2 training code, using BERT tokenizer.
private-gpt - Deploy smart and secure conversational agents for your employees, using Azure. Able to use both private and public data.
databunker - A secure user directory built for developers to comply with the GDPR [Moved to: https://github.com/securitybunker/databunker]
alpaca_farm - A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
Yi - A series of large language models trained from scratch by developers @01-ai
AdaKGC - [EMNLP 2023 (Findings)] Schema-adaptable Knowledge Graph Construction
awesome-totally-open-chatgpt - A list of totally open alternatives to ChatGPT
bolt
badger - Fast key-value DB in Go.
lm-evaluation-harness - A framework for few-shot evaluation of language models.