Woodpecker vs Chinese-LLaMA-Alpaca

| | Woodpecker | Chinese-LLaMA-Alpaca |
|---|---|---|
| Mentions | 2 | 4 |
| Stars | 556 | 17,594 |
| Growth | - | - |
| Activity | 8.9 | 8.3 |
| Latest commit | 4 months ago | 24 days ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Woodpecker
- shining the spotlight on CogVLM
- Woodpecker: Hallucination Correction for Multimodal Large Language Models (https://github.com/BradyFU/Woodpecker)
Chinese-LLaMA-Alpaca
- Chinese-Alpaca-Plus-13B-GPTQ
I'd like to share with you today the Chinese-Alpaca-Plus-13B-GPTQ model, which is the GPTQ-format 4-bit quantised version of Yiming Cui's Chinese-LLaMA-Alpaca 13B for GPU inference.
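To make the "4-bit quantised for GPU inference" part concrete, here is a toy sketch of round-to-nearest 4-bit weight quantisation. GPTQ itself is considerably more sophisticated (it compensates quantisation error layer by layer using second-order information), and the helper names below are hypothetical, not part of any library; the storage idea, however, is the same: each float weight is replaced by a small integer plus a per-group scale, cutting memory roughly 4x versus fp16.

```python
# Toy illustration of 4-bit round-to-nearest weight quantization.
# NOT the actual GPTQ algorithm, just the storage/reconstruction idea.

def quantize_4bit(weights, group_size=4):
    """Quantize a flat list of floats to small ints with per-group scales."""
    quantized, scales = [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        # Map the group onto the signed 4-bit range [-7, 7].
        scale = max(abs(w) for w in group) / 7 or 1.0
        scales.append(scale)
        quantized.append([round(w / scale) for w in group])
    return quantized, scales

def dequantize_4bit(quantized, scales):
    """Reconstruct approximate float weights from ints and scales."""
    return [v * s for group, s in zip(quantized, scales) for v in group]

weights = [0.12, -0.53, 0.07, 0.91, -0.33, 0.45, -0.88, 0.02]
q, s = quantize_4bit(weights)
restored = dequantize_4bit(q, s)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max reconstruction error: {max_err:.4f}")
```

The reconstruction error of round-to-nearest is bounded by half a quantisation step per group; GPTQ improves on this naive scheme by adjusting the remaining weights to absorb each rounding error.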
- How to train a new language that is not in the base model?
You could follow what people did with Chinese-LLaMA, just for Korean. You might want a pure Korean corpus before feeding in a translation dataset. How big is it, by the way?
- Open Source Chinese LLMs
- Is it possible to fine-tune the LLaMA model to better understand another language?
Chinese: https://github.com/ymcui/Chinese-LLaMA-Alpaca
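The approach these threads point at typically starts by extending the base tokenizer's vocabulary with new-language tokens before continued pretraining, which is one of the steps the Chinese-LLaMA-Alpaca project documents. The sketch below is a toy version of that vocabulary-extension step with plain dicts; real pipelines work with SentencePiece models and then resize the model's embedding matrix, and `extend_vocab` is a hypothetical helper, not an API from any of these repos.

```python
# Toy sketch of vocabulary extension for adapting a base LLM to a new
# language: unseen tokens are appended after the existing vocabulary so
# that every original token keeps its ID (and its trained embedding row).

def extend_vocab(base_vocab, new_tokens):
    """Append unseen tokens to a vocab dict, preserving existing IDs."""
    vocab = dict(base_vocab)  # copy; leave the base vocab untouched
    next_id = max(vocab.values()) + 1 if vocab else 0
    for token in new_tokens:
        if token not in vocab:  # skip tokens the base model already has
            vocab[token] = next_id
            next_id += 1
    return vocab

base = {"<s>": 0, "hello": 1, "world": 2}
korean_tokens = ["안녕", "하세요", "world"]  # "world" already exists
extended = extend_vocab(base, korean_tokens)
print(extended)
```

Appending (rather than reordering) matters: the new embedding rows can be initialised and trained during continued pretraining while the original rows remain valid for the base model's tokens.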
What are some alternatives?
hallucination-leaderboard - Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
ChatGLM2-6B - ChatGLM2-6B: An Open Bilingual Chat LLM (open-source bilingual dialogue language model)
unilm - Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
CodeCapybara - Open-source Self-Instruction Tuning Code LLM
Qwen - The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
LLMSurvey - The official GitHub page for the survey paper "A Survey of Large Language Models".
paxml - Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimentation and parallelization, and has demonstrated industry-leading model FLOP utilization rates.
GPT4RoI - GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest
Qwen-VL - The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud.
deeplake - Database for AI. Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, & visualize any AI data. Stream data in real-time to PyTorch/TensorFlow. https://activeloop.ai
LLM-Agent-Paper-List - The paper list of the 86-page paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.