| | Yi | DeepSeek-Coder |
|---|---|---|
| Mentions | 9 | 8 |
| Stars | 7,141 | 5,317 |
| Growth | 2.8% | 4.6% |
| Activity | 9.4 | 8.6 |
| Latest commit | 4 days ago | 17 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Yi
-
Yi: Open Foundation Models by 01.ai
The model license:
https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMEN...
1) Your use of the Yi Series Models must comply with the Laws and Regulations as
-
Chinese Startup Is Winning the Open Source AI Race
01.ai's Yi model has performed well and there are several strong fine-tunes on Huggingface also.
I wonder what definition of "open source" the author is using or if he even read the license Yi is released under.
The Yi model license agreement [1] restricts usage and requires compliance with the "laws and administrative regulations of the mainland of the People's Republic of China" and they have a separate license that you can apply for if you want to use Yi commercially. [2]
Kudos to the 01.ai team on a strong LLM model but I do wonder if Wired and others should be a little more careful with the use of "open source" when describing AI models.
1. https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMEN...
-
EU regulation implications ?
I doubt it. Training a foundation model (important to distinguish this from merely fine-tuning a language model) makes very little economic sense when there are already plenty of open-source options. In fact, almost all AI startups use or fine-tune these models. Only big research centers will do foundational research, as has always been the case. (Mistral and 01.ai are outliers, and I don't see how they're ever going to recoup their costs.)
-
What the heck is so great about this model?
Yi-34b: https://github.com/01-ai/Yi
-
Yi-34B-Chat
The 6B model is unfortunately still a base text completion model. I've been waiting for the Chat version of it to be open-sourced :). The 01-ai team is working on it! https://github.com/01-ai/Yi/issues/173
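The base-vs-chat distinction matters mostly for prompting: a base completion model just continues raw text, while a chat-tuned model expects a structured conversation template. A minimal sketch of the difference (the ChatML-style template below is illustrative only, not necessarily Yi's actual chat format):

```python
# Sketch: prompting a base completion model vs a chat-tuned model.
# The chat template here is a generic ChatML-style layout used for
# illustration; the real Yi chat format may differ.

def base_prompt(text: str) -> str:
    # A base model simply continues whatever text it is given.
    return text

def chat_prompt(messages: list) -> str:
    # A chat model expects role-tagged turns plus a generation cue
    # telling the model it is now the assistant's turn to speak.
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

msgs = [{"role": "user", "content": "Explain recursion in one line."}]
print(base_prompt("Explain recursion in one line."))
print(chat_prompt(msgs))
```

Feeding a chat-style prompt to a base model (or vice versa) tends to produce rambling continuations rather than answers, which is why people wait for the chat release instead of using the base checkpoint directly.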
- 01.AI (零一万物) - a global company for AI 2.0 large-model technology and applications - Source of Yi 34B LLM AI
- Kai-Fu Lee's Yi-34B uses Llama's architecture except for two tensors renamed
- 01-AI/Yi: A series of large language models trained from scratch
DeepSeek-Coder
-
Meta Llama 3
deepseek-coder-instruct 6.7B still looks better than Llama 3 8B on HumanEval [0], and deepseek-coder-instruct 33B is still within reach to run on a 32 GB MacBook M2 Max. Llama 3 70B, on the other hand, will be hard to run locally unless you really have 128 GB of RAM or more. But we will see in the coming days how it performs in real life.
[0] https://github.com/deepseek-ai/deepseek-coder?tab=readme-ov-...
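The "fits in 32 GB" intuition above can be sketched with back-of-envelope arithmetic: weight memory is roughly parameter count times bits per weight, plus some overhead for the KV cache and runtime buffers. The 1.2× overhead factor below is an assumption for illustration, not a measured figure:

```python
# Rough memory estimate for running an LLM locally.
# Assumption: weights dominate; a 1.2x overhead factor stands in for
# KV cache and runtime buffers. Approximations, not benchmarks.

def est_memory_gb(params_billion: float, bits_per_weight: int,
                  overhead: float = 1.2) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# 33B at 4-bit quantization: roughly fits on a 32 GB machine
print(round(est_memory_gb(33, 4), 1))   # ~19.8 GB
# 70B at 4-bit: already tight before long-context overhead
print(round(est_memory_gb(70, 4), 1))   # ~42.0 GB
# 70B at 16-bit: hence the "128 GB of RAM or more" remark
print(round(est_memory_gb(70, 16), 1))  # ~168.0 GB
```

Under these assumptions, a 4-bit 33B model leaves headroom on 32 GB, while 70B only becomes comfortable with aggressive quantization and much more RAM.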
-
Mistral Removes "Committing to open models" from their website
Deepseek (https://github.com/deepseek-ai/DeepSeek-Coder?tab=readme-ov-...) code is MIT and the model license is available too.
- FLaNK Stack 05 Feb 2024
-
Stable Code 3B: Coding on the Edge
https://github.com/deepseek-ai/deepseek-coder
33B Instruct doesn’t beat 6.7B Instruct by much but maybe those % improvements mean more for your usage.
I run 6.7B since I have 16GB RAM.
-
What the heck is so great about this model?
Deepseek Coder: https://github.com/deepseek-ai/DeepSeek-Coder (Best open source coding model right now)
- Deepseek Coder instruct – 6.7B model beats gpt3.5-turbo in coding
- FLaNK Stack Weekly for 13 November 2023
- DeepSeek-Coder: Has anyone tried this one?
What are some alternatives?
ChatGLM2-6B - ChatGLM2-6B: An Open Bilingual Chat LLM (开源双语对话语言模型, "open-source bilingual dialogue language model")
draw-a-ui - Draw a mockup and generate html for it