DeepSeek-V3 VS TinyZero

Compare DeepSeek-V3 vs TinyZero and see what their differences are.

TinyZero

Clean, minimal, accessible reproduction of DeepSeek R1-Zero (by Jiayi-Pan)
                  DeepSeek-V3     TinyZero
Mentions          14              9
Stars             96,026          11,654
Growth            6.6%            7.3%
Activity          8.3             9.3
Latest commit     18 days ago     2 days ago
Language          Python          Python
License           MIT License     Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

DeepSeek-V3

Posts with mentions or reviews of DeepSeek-V3. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2025-02-10.
  • DeepSeek V3-0324 vs. Claude 3.7 Sonnet Base: Which AI Codes Better?
    1 project | dev.to | 29 Mar 2025
  • Deepseek API Complete Guide: Mastering the DeepSeek API for Developers
    1 project | dev.to | 19 Mar 2025
    What distinguishes DeepSeek-V3 is its training efficiency—completed using only 2.664M H800 GPU hours on 14.8 trillion tokens, making it remarkably cost-effective for its size. Technical specifications are available on the GitHub page for DeepSeek-V3.
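
    As a rough sanity check on that training-efficiency claim, here is a back-of-the-envelope sketch in Python. The ~$2 per H800 GPU hour rental rate is an assumption for illustration, not a figure from this page:

        # Back-of-the-envelope estimate of DeepSeek-V3 pre-training cost.
        # Assumption: ~$2 per H800 GPU hour (illustrative rental rate, not an official figure).
        gpu_hours = 2.664e6          # pre-training GPU hours quoted above
        usd_per_gpu_hour = 2.0       # assumed rental price

        estimated_cost = gpu_hours * usd_per_gpu_hour
        print(f"Estimated pre-training cost: ${estimated_cost / 1e6:.2f}M")  # ~ $5.33M

    That lands in the same ballpark as the ~$5.6M training cost quoted in the posts further down.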
  • Analyzing DeepSeek API Instability: What API Gateways Can and Can't Do
    2 projects | dev.to | 10 Feb 2025
    DeepSeek, known for its high-performance AI models like R1 and V3, has been a game-changer in the AI landscape. However, recent reports have highlighted issues with API instability, affecting developers and users who rely on these services. Understanding the root causes of this instability is essential for addressing and mitigating these issues.
  • DeepSeek not as disruptive as claimed, firm has 50k GPUs and spent $1.6B
    1 project | news.ycombinator.com | 4 Feb 2025
    It is not FOSS. The LLM industry has repurposed "open source" to mean "you can run the model yourself." They've released the model, but it does not meet the 'four freedoms' standard: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE...
  • Build your next AI Tech Startup with DeepSeek
    6 projects | dev.to | 3 Feb 2025
    Typically, training parts of an AI model meant updating the whole thing, even if some parts didn't contribute anything, which led to a massive waste of resources. To solve this, they introduced auxiliary-loss-free load balancing, which works by introducing a bias factor to prevent overloading one chip while under-utilizing another (Source). This resulted in only 5% of the model's parameters being trained per token, and a roughly 91% lower training cost than GPT-4 (GPT-4 cost $63 million to train (Source), while V3 cost $5.576 million (Source)).
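
    For illustration, here is a minimal toy sketch of the bias-adjusted routing idea described above: a per-expert bias is added to the routing scores only when choosing experts, and is nudged down for overloaded experts and up for under-loaded ones. The expert count, batch size, and update rule are made up for the example and are not DeepSeek's actual implementation:

        import numpy as np

        # Toy bias-adjusted top-k expert routing (auxiliary-loss-free flavour).
        rng = np.random.default_rng(0)
        num_experts, top_k, bias_step = 8, 2, 0.01
        bias = np.zeros(num_experts)

        def route(scores, bias, top_k):
            """Pick top-k experts by score + bias; the bias only steers load."""
            return np.argsort(scores + bias)[-top_k:]

        for step in range(100):
            scores_batch = rng.normal(size=(256, num_experts))  # fake per-token routing scores
            load = np.zeros(num_experts)
            for scores in scores_batch:
                load[route(scores, bias, top_k)] += 1
            # Lower the bias of overloaded experts, raise it for under-loaded ones.
            bias -= bias_step * np.sign(load - load.mean())

        print("per-expert load after balancing:", load.astype(int))

    The key property is that the bias only affects which experts get picked, not how their outputs are weighted, so no auxiliary loss term is needed to keep the load even.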
  • Is DeepSeek’s Influence Overblown?
    1 project | dev.to | 31 Jan 2025
    According to the official paper, DeepSeek-V3 took only about $5.6 million to train, with impressive results. This is a remarkable achievement for a large language model (LLM). In comparison, OpenAI's CEO Sam Altman admitted that training GPT-4 cost over $100 million, without saying how much more. Some AI specialists suspect that the estimate of DeepSeek's training expense is underreported. Nevertheless, the hidden gem is not how much it cost to train but how drastically it improved runtime requirements.
  • Maybe you missed this file when looking at DeepSeek?
    1 project | news.ycombinator.com | 30 Jan 2025
  • DeepSeek proves the future of LLMs is open-source
    4 projects | news.ycombinator.com | 29 Jan 2025
    > If the magic values are some kind of microcode or firmware, or something else that is executed in some way, then no, it is not really open source.

    To my understanding, the contents of a .safetensors file are purely numerical weights, used by the model defined in MIT-licensed code[0] and described in a technical report[1]. The weights are arguably only really "executed" to the same extent the kernel weights of a Gaussian blur filter would be, though there is a large difference in scale and effect.

    [0]: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inferen...

    [1]: https://arxiv.org/html/2412.19437v1
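
    To make that concrete, here is a small sketch using the safetensors library to inspect such a file; the path is a placeholder for any local checkpoint shard, and the point is simply that the file holds named tensors plus JSON metadata, not executable code:

        # Inspect a .safetensors checkpoint: it only contains named tensors.
        # "model.safetensors" is a placeholder path, not a real DeepSeek shard name.
        from safetensors import safe_open

        with safe_open("model.safetensors", framework="pt") as f:
            for name in list(f.keys())[:5]:
                tensor = f.get_tensor(name)
                print(name, tuple(tensor.shape), tensor.dtype)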

  • DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via RL
    8 projects | news.ycombinator.com | 25 Jan 2025
  • AI and Startup Moats
    1 project | news.ycombinator.com | 7 Jan 2025
    But the cost is _definitely_ falling. For a recent example, see DeepSeek V3[1]. It's a model that's competitive with GPT-4 and Claude Sonnet, but cost ~$6 million to train.

    This is ridiculously cheaper than what we had before. Inference is basically getting 10x cheaper per year!

    We're spending more because bigger models are worth the investment. But the "price per unit of [intelligence/quality]" is getting lower and _fast_.

    Saying that models are getting more expensive is confusing the absolute value spent with the value for money.

    - [1] https://github.com/deepseek-ai/DeepSeek-V3/tree/main

TinyZero

Posts with mentions or reviews of TinyZero. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2025-02-09.
  • LIMO: Less Is More for Reasoning
    5 projects | news.ycombinator.com | 9 Feb 2025
    Yes, the authors explicitly highlighted those two points in the abstract as the elicitation threshold for complex reasoning: an extremely complete pre-trained foundation model, and a set of extremely high-quality post-training examples.

    To your question on fine-tuning on the initial 10-million pool: intuitively, it would require a tremendous amount of fine-tuning data to move the needle. You really won't be able to move the gradients much with just 817 examples; that initial pool is effectively enforcing pretty rigid regularization.

    There is now increasing interest in showing that small data combined with inference-time scaling provides significant yield. A couple of recent examples:

    * TinyZero: https://github.com/Jiayi-Pan/TinyZero

  • Mini-R1: Reproduce DeepSeek R1 "Aha Moment"
    2 projects | news.ycombinator.com | 30 Jan 2025
    They do mention it here

    > Note: This blog is inspired by Jiayi Pan [1] who initially explored the idea and proofed it with a small model.

    But I agree, that attribution could be more substantial.

    > Note: This blog is inspired by Jiayi Pan [1] who also reproduced the "Aha Moment" with their TinyZero [2] model.

    [1] https://x.com/jiayi_pirate/status/1882839370505621655 (1.1M views btw)

    [2] https://github.com/Jiayi-Pan/TinyZero

    A lot of people are busy reproing R1 right now. I think this is the spark.

  • Berkeley Researchers Replicate DeepSeek R1's Core Tech for Just $30: A Small Mod
    2 projects | news.ycombinator.com | 28 Jan 2025
  • Berkeley Researchers Replicate DeepSeek R1's Core Tech for Just $30
    1 project | news.ycombinator.com | 27 Jan 2025
    This is blogspam of https://github.com/Jiayi-Pan/TinyZero and https://nitter.lucabased.xyz/jiayi_pirate/status/18828393705.... This also doesn't mention that it's for one specific domain (playing Countdown).
  • Explainer: What's R1 and Everything Else?
    1 project | news.ycombinator.com | 26 Jan 2025
    This is indeed a massive exaggeration; I'm pretty sure the $30 experiment is this one: https://threadreaderapp.com/thread/1882839370505621655.html (github: https://github.com/Jiayi-Pan/TinyZero).

    And while it is true that this experiment shows you can do direct reinforcement learning on an existing LLM in a way that makes it develop reasoning in the same fashion DeepSeek-R1 did, this is very far from a re-creation of R1!

  • DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via RL
    8 projects | news.ycombinator.com | 25 Jan 2025
    >I wonder if this was a deliberate move by PRC or really our own fault in falling for the fallacy that more is always better.

    Well, let's see… hmmm… are we discussing this on a platform run by people who made insane money flipping zero-value companies to greater fools during the dotcom bubble, only to pivot to doing the same thing to big tech during the FANG era, or one for discussing hard ML research among the no-nonsense math elite from some of the world's top universities?

    More seriously, we don't even have to speculate about any of this, because the methods from DeepSeek's work are already being reproduced:

    https://github.com/Jiayi-Pan/TinyZero

  • TinyZero
    1 project | news.ycombinator.com | 24 Jan 2025

What are some alternatives?

When comparing DeepSeek-V3 and TinyZero you can also consider the following projects:

DeepSeek-R1

DeepSeek-LLM - DeepSeek LLM: Let there be answers

open-r1 - Fully open reproduction of DeepSeek-R1

