| | open-r1 | DeepSeek-V3 |
|---|---|---|
| Mentions | 4 | 12 |
| Stars | 22,804 | 91,990 |
| Growth (monthly) | 99.5% | 31.4% |
| Activity | 9.4 | 8.2 |
| Last commit | about 21 hours ago | 20 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
- Analyzing DeepSeek API Instability: What API Gateways Can and Can't Do
DeepSeek, known for its high-performance AI models like R1 and V3, has been a game-changer in the AI landscape. However, recent reports have highlighted issues with API instability, affecting developers and users who rely on these services. Understanding the root causes of this instability is essential for addressing and mitigating these issues.
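One client-side mitigation a gateway (or the caller itself) can apply is retrying transient failures with exponential backoff. A minimal sketch, assuming DeepSeek's OpenAI-compatible `/chat/completions` endpoint; the function name and retry policy are illustrative, not an official recipe:

```python
import random
import time

import requests

def call_with_backoff(payload, api_key, retries=5):
    """Retry transient DeepSeek API failures with exponential backoff."""
    for attempt in range(retries):
        resp = requests.post(
            "https://api.deepseek.com/chat/completions",  # assumed endpoint
            headers={"Authorization": f"Bearer {api_key}"},
            json=payload,
            timeout=30,
        )
        # Retry only on rate limits (429) and server-side errors (5xx).
        if resp.status_code != 429 and resp.status_code < 500:
            resp.raise_for_status()
            return resp.json()
        time.sleep(min(2 ** attempt, 30) + random.random())  # backoff + jitter
    raise RuntimeError("API still failing after retries")
```

This only smooths over transient upstream errors; no amount of gateway logic fixes sustained capacity problems, which is the distinction the post draws.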
- DeepSeek not as disruptive as claimed, firm has 50k GPUs and spent $1.6B
It is not FOSS. The LLM industry has repurposed "open source" to mean "you can run the model yourself." They've released the model, but it does not meet the 'four freedoms' standard: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE...
- Build your next AI Tech Startup with DeepSeek
Typically, training parts of an AI model meant updating the whole thing, even if some parts contributed nothing, which led to a massive waste of resources. To solve this, they introduced Auxiliary-Loss-Free (ALS) Load Balancing, which adds a bias factor to the routing scores to prevent overloading one chip while under-utilizing another (Source). The result: only about 5% of the model's parameters are activated, and therefore updated, per token, and training cost roughly 91% less than GPT-4 (GPT-4 cost $63 million to train (Source); V3 cost $5.576 million (Source)).
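To make the bias-factor idea concrete, here is a minimal sketch of bias-adjusted top-k expert routing in PyTorch. The function names and the fixed update step are illustrative assumptions, not DeepSeek's actual code, but the shape of the trick matches the paper: the bias only influences which experts are selected, the raw scores still weight their outputs, and the bias is nudged after each batch instead of adding an auxiliary loss term.

```python
import torch

def route_with_bias(scores, bias, k):
    # scores: [num_tokens, num_experts] affinities from the gating network.
    # The per-expert bias is added only when *selecting* experts;
    # the original scores still determine each expert's output weight.
    _, topk_idx = torch.topk(scores + bias, k, dim=-1)
    gate_weights = torch.gather(scores, -1, topk_idx).softmax(dim=-1)
    return topk_idx, gate_weights

def update_bias(bias, tokens_per_expert, step=1e-3):
    # After each batch, nudge the bias down for overloaded experts and
    # up for underloaded ones, steering traffic without an auxiliary loss.
    avg_load = tokens_per_expert.float().mean()
    return bias - step * torch.sign(tokens_per_expert.float() - avg_load)
```

Because the balancing signal never enters the loss, it cannot distort the gradients that actually train the experts, which is the motivation for dropping the auxiliary loss term.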
- Is DeepSeek’s Influence Overblown?
According to the official paper, DeepSeek-V3 took only $5.6 million to train, with impressive results. This is a remarkable achievement for a large language model (LLM). In comparison, OpenAI's CEO Sam Altman admitted that training GPT-4 cost over $100 million, without saying how much more. Some AI specialists suspect that the DeepSeek training expense is underreported. Nevertheless, the hidden gem is not how little it cost to train but how drastically it reduced runtime requirements.
- Maybe you missed this file when looking at DeepSeek?
- DeepSeek proves the future of LLMs is open-source
> If the magic values are some kind of microcode or firmware, or something else that is executed in some way, then no, it is not really open source.
To my understanding, the contents of a .safetensors file are purely numerical weights, used by the model defined in MIT-licensed code[0] and described in a technical report[1]. The weights are arguably only "executed" to the same extent the kernel weights of a Gaussian blur filter would be, though there is a large difference in scale and effect.
[0]: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inferen...
[1]: https://arxiv.org/html/2412.19437v1
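To illustrate the point that the weights are just named arrays, here is a minimal sketch that inspects a checkpoint shard with the `safetensors` library; the shard filename is hypothetical:

```python
from safetensors import safe_open

# A .safetensors file holds only a small JSON header plus raw tensor
# data; nothing in it is executable code. Listing the contents shows
# exactly what a weights release ships.
with safe_open("model-00001-of-000163.safetensors", framework="pt") as f:
    for name in f.keys():
        t = f.get_tensor(name)
        print(name, tuple(t.shape), t.dtype)
```

`safe_open` parses the header and lazily reads tensor data on demand, which is why inspecting even very large shards like this is cheap.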
- DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via RL
- AI and Startup Moats
But the cost is _definitely_ falling. For a recent example, see DeepSeek V3[1]: a model competitive with GPT-4 and Claude Sonnet that cost ~$6 million to train.
This is ridiculously cheaper than what we had before. Inference is getting roughly 10x cheaper per year!
We're spending more because bigger models are worth the investment. But the "price per unit of [intelligence/quality]" is getting lower, and _fast_.
Saying that models are getting more expensive confuses the absolute amount spent with the value for money.
[1] https://github.com/deepseek-ai/DeepSeek-V3/tree/main
- DeepSeek-V3
- DeepSeek-v3 Technical Report [pdf]
What are some alternatives?
DeepSeek-R1
TinyZero - Clean, minimal, accessible reproduction of DeepSeek R1-Zero
DeepSeek-LLM - DeepSeek LLM: Let there be answers
