tensorflow VS Pytorch

Compare tensorflow vs Pytorch and see what their differences are.

Pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration (by pytorch)
                tensorflow            Pytorch
Mentions        216                   300
Stars           177,728               70,847
Stars growth    0.7%                  2.1%
Activity        10.0                  10.0
Last commit     4 days ago            7 days ago
Language        C++                   Python
License         Apache License 2.0    BSD 3-Clause License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

tensorflow

Posts with mentions or reviews of tensorflow. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-04.

Pytorch

Posts with mentions or reviews of Pytorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-10.
  • Deep Learning with “AWS Graviton2 + NVIDIA Tensor T4G” for as low as free* with CUDA 12.2
    2 projects | dev.to | 10 Sep 2023
    # Download and install ccache for faster compilation
    wget https://github.com/ccache/ccache/releases/download/v4.8.3/ccache-4.8.3.tar.xz
    tar -xf ccache-4.8.3.tar.xz
    pushd ccache-4.8.3
    cmake .
    make -j $CPUS
    make install
    popd

    # Install NumPy, a dependency for PyTorch
    dnf install -y numpy

    # Install Python typing extensions for better type-checking
    sudo -u ec2-user pip3 install typing-extensions

    # Clone PyTorch repository and install from source
    git clone --recursive https://github.com/pytorch/pytorch.git
    pushd pytorch
    python3 setup.py install
    popd

    # Refresh the dynamic linker run-time bindings
    ldconfig

    # Install additional Python libraries for PyTorch
    sudo -u ec2-user pip3 install sympy filelock fsspec networkx
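
    A quick sanity check after a from-source build like the one above (not part of the original post; assumes the build completed) could be:

      import torch

      # Version string of the freshly built package
      print(torch.__version__)
      # True if the CUDA toolchain was picked up during the build
      print(torch.cuda.is_available())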
  • Godly – Astronomically good web design inspiration
    2 projects | news.ycombinator.com | 22 Aug 2023
    Given how popular they are, these modern designs must appeal to someone, but personally I find them really bad. It's pure form over function with the huge text that reduces my 24" monitor to the information density of a phone and the annoying fade-ins that interfere with quickly skimming the page. This kind of webpage makes me immediately suspicious. I find these landing pages much better: <https://hypothesis.readthedocs.io/en/latest/>, <https://cmocka.org/>, <https://pytorch.org/>.
  • Building an efficient sparse keyword index in Python
    5 projects | dev.to | 17 Aug 2023
    Large computations in pure Python can also be painfully slow. Luckily, there is a robust landscape of options for numeric processing. The most popular framework is NumPy. There is also PyTorch and other GPU-based tensor processing frameworks.
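
    As a rough illustration of that gap (a minimal sketch, not from the original post; exact timings vary by machine), the same reduction written three ways:

      import time
      import numpy as np
      import torch

      data = [float(i) for i in range(1_000_000)]

      # Pure Python: element-wise multiply-and-sum in the interpreter loop
      start = time.perf_counter()
      total = sum(x * x for x in data)
      print(f"pure Python: {time.perf_counter() - start:.4f}s")

      # NumPy: the same reduction, vectorized in C
      arr = np.asarray(data)
      start = time.perf_counter()
      total = float(arr @ arr)
      print(f"NumPy:       {time.perf_counter() - start:.4f}s")

      # PyTorch: same idea, and the tensor can live on a GPU if one is available
      t = torch.tensor(data, device="cuda" if torch.cuda.is_available() else "cpu")
      start = time.perf_counter()
      total = torch.dot(t, t).item()
      print(f"PyTorch:     {time.perf_counter() - start:.4f}s")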
  • 7 Open-Source Libraries SAVE NOW!
    5 projects | dev.to | 12 Aug 2023
  • A comprehensive guide to running Llama 2 locally
    19 projects | news.ycombinator.com | 25 Jul 2023
    While on the surface, a 192GB Mac Studio seems like a great deal (it's not much more than a 48GB A6000!), there are several reasons why this might not be a good idea:

    * I assume most people have never used llama.cpp Metal w/ large models. It will drop to CPU speeds whenever the context window is full: https://github.com/ggerganov/llama.cpp/issues/1730#issuecomm... - while sure this might be fixed in the future, it's been an issue since Metal support was added, and is a significant problem if you are actually trying to use it for inference. With 192GB of memory, you could probably run larger models w/o quantization, but I've never seen anyone post benchmarks of their experiences. Note that at that point, the limited memory bandwidth will be a big factor.

    * If you are planning on using Apple Silicon for ML/training, I'd also be wary. There are multi-year long open bugs in PyTorch[1], and most major LLM libs like deepspeed, bitsandbytes, etc don't have Apple Silicon support[2][3].

    You can see similar patterns w/ Stable Diffusion support [4][5] - support lagging by months, lots of problems and poor performance with inference, much less fine-tuning. You can apply this to basically any ML application you want (srt, tts, video, etc.).

    Macs are fine to poke around with, but if you actually plan to do more than run a small LLM and say "neat", especially for a business, recommending a Mac for anyone getting started w/ ML workloads is a bad take. (In general, for anyone getting started, unless you're just burning budget, renting cloud GPU is going to be the best cost/perf, although on-prem/local obviously has other advantages.)

    [1] https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3A...

    [2] https://github.com/microsoft/DeepSpeed/issues/1580

    [3] https://github.com/TimDettmers/bitsandbytes/issues/485

    [4] https://github.com/AUTOMATIC1111/stable-diffusion-webui/disc...

    [5] https://forums.macrumors.com/threads/ai-generated-art-stable...
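
    (Illustrative sketch, not from the comment above: PyTorch's Apple Silicon support lives behind the "mps" backend, which is where the referenced open bugs sit, and code typically falls back to CPU when it is unavailable.)

      import torch

      # Pick the best available backend; on Apple Silicon this is "mps"
      if torch.cuda.is_available():
          device = torch.device("cuda")
      elif torch.backends.mps.is_available():
          device = torch.device("mps")
      else:
          device = torch.device("cpu")

      x = torch.randn(4, 4, device=device)
      print(device, (x @ x).sum().item())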

  • [D] Keras 3.0 Announcement: Keras for TensorFlow, JAX, and PyTorch
    3 projects | /r/MachineLearning | 11 Jul 2023
    The lack of engagement in the relevant issue trackers (imports, dtypes, I can't actually find an issue for the pooling padding) hurts the legitimate complaints.
  • Like Diffusion but Faster: The Paella Model for Fast Image Generation
    4 projects | news.ycombinator.com | 26 Jun 2023
    - The gain in stable diffusion is modest (15%-25% last I checked?)

    - Torch 2.0 only supports static inputs. In actual usage scenarios, this means frequent lengthy recompiles. Eventually, these recompiles will overload the compilation cache and torch.compile will stop functioning.

    - Some common augmentations (like TomeSD) break compilation, make it take forever, or kill the performance gains.

    - Other miscellaneous bugs (like freezing the Python thread and causing timeouts in web UIs, or errors with embeddings)

    - Dynamic input in Torch 2.1 nightly fixes a lot of these issues, but was only maybe working a week ago? See https://github.com/pytorch/pytorch/issues/101228#issuecommen...

    - TVM and AITemplate have massive performance gains. ~2x or more for AIT, not sure about an exact number for TVM.

    - AIT supported dynamic input before torch.compile did, and requires no recompilation after the initial compile. Also, weights (models and LORAs) can be swapped out without a recompile.

    - TVM supports very performant Vulkan inference, which would massively expand hardware compatibility.

    Note that the popular SD Web UIs don't support any of this, with two exceptions: VoltaML (with WIP AIT support) and the Windows DirectML fork of A1111 (which uses optimized ONNX models, I think).
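
    To make the static-vs-dynamic shape point concrete (a minimal sketch assuming PyTorch 2.x; not from the original comment):

      import torch

      def f(x):
          return torch.sin(x) + 2 * x

      # The default mode specializes on input shapes, so each new shape
      # below can trigger the lengthy recompile described above.
      compiled = torch.compile(f)
      for n in (64, 65, 66):
          compiled(torch.randn(n))

      # dynamic=True requests shape-polymorphic kernels up front,
      # trading some peak performance for far fewer recompiles.
      compiled_dyn = torch.compile(f, dynamic=True)
      for n in (64, 65, 66):
          compiled_dyn(torch.randn(n))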

  • How popular are libraries in each technology
    21 projects | dev.to | 21 Jun 2023
    Mobile app development is the process of creating applications for mobile devices such as smartphones and tablets. There are many mobile app libraries and languages available, but the most popular by far is Flutter. Flutter is a mobile app development framework developed by Google that enables developers to build high-performance, high-fidelity apps for iOS and Android from a single codebase. It has over 154k stars on GitHub.
  • Falcon LLM – A 40B Model
    6 projects | news.ycombinator.com | 17 Jun 2023
    I found these issues myself when making our own implementation of the model. We test our outputs against upstream models. In decoding without history, our tests passed, but in decoding with history there was a mismatch between our implementation and the upstream implementation. Naturally, I assumed that our implementation was wrong (being the newer implementation, not sharing code with theirs), but while debugging this I found that our implementation is actually correct.

    Then I was planning to report these issues. Someone else found the causal mask issue a week earlier, so there was no need to report it:

    https://github.com/pytorch/pytorch/issues/103082

    I reported the issue with rotary embeddings in a discussion of problems that people were running into trying to use KV caching:

    https://huggingface.co/tiiuae/falcon-40b/discussions/48#648c...

    More generally, I am not sure what the best place is to track these issues. Maybe a model's discussion forums?
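
    The parity test the commenter describes might look roughly like this (a hypothetical sketch assuming a Hugging Face-style interface with "past_key_values" and ".logits"; "ours" and "upstream" are stand-ins, not names from the post):

      import torch

      def outputs_match(ours, upstream, input_ids, past=None):
          # Compare logits with and without decoding history (past=None vs. a KV
          # cache); the Falcon mismatch above only appeared in the with-history case.
          with torch.no_grad():
              a = ours(input_ids, past_key_values=past).logits
              b = upstream(input_ids, past_key_values=past).logits
          return torch.allclose(a, b, atol=1e-4)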

What are some alternatives?

When comparing tensorflow and Pytorch you can also consider the following projects:

PaddlePaddle - PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the "PaddlePaddle" core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning and machine learning)

Prophet - Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth.

Flux.jl - Relax! Flux is the ML library that doesn't make you tensor

Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more

mediapipe - Cross-platform, customizable ML solutions for live and streaming media.

Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing

LightGBM - A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.

flax - Flax is a neural network library for JAX that is designed for flexibility.

scikit-learn - scikit-learn: machine learning in Python

tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]

LightFM - A Python implementation of LightFM, a hybrid recommendation algorithm.

xgboost - Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow