| | HugNLP | LLMSurvey |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 370 | 8,967 |
| Growth | 0.0% | 9.9% |
| Activity | 7.6 | 7.9 |
| Latest Commit | 7 months ago | 4 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Ask HN: Textbook Regarding LLMs
Here’s another one - it’s older but has some interesting charts and graphs.
https://arxiv.org/abs/2303.18223
-
Share your favorite materials: intersection of LLMs and business applications
There have recently been some nice early surveys on progress, pitfalls, and future research directions:
- A Survey of Large Language Models: https://arxiv.org/abs/2303.18223
What are some alternatives?
prompt-lib - A set of utilities for running few-shot prompting experiments on large-language models
ChatGLM2-6B - ChatGLM2-6B: an open bilingual chat LLM
alpaca_farm - A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
mPLUG-Owl - mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model
ray-llm - RayLLM - LLMs on Ray
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment
CodeTF - CodeTF: One-stop Transformer Library for State-of-the-art Code LLM
safe-rlhf - Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
HugNLP - HugNLP is a unified and comprehensive NLP library based on HuggingFace Transformers. Hugging for NLP now! 😊 HugNLP will be released to @HugAILab
Qwen - The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
zshot - Zero and Few shot named entity & relationships recognition
opening-up-chatgpt.github.io - Tracking instruction-tuned LLM openness. Paper: Liesenfeld, Andreas, Alianda Lopez, and Mark Dingemanse. 2023. “Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators.” In Proceedings of the 5th International Conference on Conversational User Interfaces. doi:10.1145/3571884.3604316.