Python Tensorflow

Open-source Python projects categorized as Tensorflow

Top 23 Python Tensorflow Projects

  • transformers

    🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

    Project mention: Codestral Mamba | news.ycombinator.com | 2024-07-16

    I can give a summary of what's happened the past couple of years and what tools are out there.

    After ChatGPT was released, there was a lot of hype in the space, but open source was far behind. IIRC the best open foundation LLM at the time was GPT-2, which was two generations behind.

    A while later Meta released LLaMA[1], a well-trained base foundation model, which brought an explosion to open source. It was soon implemented in the Hugging Face Transformers library[2] and the weights were spread across the Hugging Face website for anyone to use.

    At first, it was difficult to run locally. Few developers had the hardware or money to run it: it required too much RAM, and IIRC Meta's original implementation didn't support running on the CPU. But developers soon came up with methods to make it smaller via quantization. The biggest project for this was Llama.cpp[3], which is probably still the biggest open source project today for running LLMs locally. Hugging Face Transformers also added quantization support through bitsandbytes[4].
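
    The quantization idea above can be sketched in a few lines. This is a conceptual illustration only, not Llama.cpp's or bitsandbytes' actual code (real schemes use per-block scales and smarter rounding); the function names are hypothetical:

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: store int8 values plus one float scale,
    cutting memory to roughly a quarter of float32."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [x * scale for x in q]

weights = [0.12, -0.5, 0.33, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
# approx is close to the original weights, but each value now fits in one byte
```

    The accuracy/size trade-off the comment describes comes from exactly this rounding step: fewer bits per weight means less RAM, at the cost of a small reconstruction error.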

    Over the next months there was rapid development in open source. Quantization techniques improved, which meant LLaMA was able to run with less and less RAM, with greater and greater accuracy, on more and more systems. Tools came out that were capable of finetuning LLaMA, and hundreds of LLaMA finetunes appeared, trained on instruction-following, RLHF, and chat datasets, which drastically increased accuracy even further. During this time, Stanford's Alpaca, Lmsys's Vicuna, Microsoft's Wizard, 01ai's Yi, Mistral, and a few others made their way onto the open LLM scene with some very good LLaMA finetunes.

    A new inference engine (software for running LLMs, such as Llama.cpp or Transformers) called vLLM[5] came out, which was capable of running LLMs more efficiently than was previously possible in open source. Soon it would even get good AMD support, making it possible for those with AMD GPUs to run open LLMs locally and with relative efficiency.

    Then Meta released Llama 2[6]. Llama 2 was by far the best open LLM of its time, released with RLHF instruction finetunes for chat and with human evaluation data that put its open LLM leadership beyond doubt. Existing tools like Llama.cpp and Hugging Face Transformers quickly added support, and users had access to the best LLM open source had to offer.

    At this point in time, despite all the advancements, it was still difficult to run LLMs. Llama.cpp and Transformers were great engines for running LLMs, but the setup process was difficult and time-consuming. You had to find the best LLM, quantize it in the best way for your computer (or figure out how to identify and download one from Hugging Face), set up whatever engine you wanted, figure out how to use your quantized LLM with the engine, fix any bugs you made along the way, and finally figure out how to prompt your specific LLM in a chat-like format.
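
    The last step, prompting in a chat-like format, can be sketched for one concrete case. The template below follows Meta's published single-turn Llama 2 chat format; other models use different templates, and the function name is just an illustration:

```python
def llama2_chat_prompt(system, user):
    """Build a single-turn Llama 2 chat prompt.
    Chat-tuned models expect these [INST] markers; raw text prompts
    underperform because the model was finetuned on this exact layout."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = llama2_chat_prompt(
    "You are a helpful assistant.",
    "What is quantization?",
)
```

    Getting this template wrong was (and still is) one of the most common reasons a locally run model produces worse answers than expected.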

    However, tools started coming out to make this process significantly easier. The first one of these that I remember was GPT4All[7]. GPT4All was a wrapper around Llama.cpp which made it easy to install, easy to select the LLM that you want (pre-quantized options for easy download from a download manager), and a chat UI which made LLMs easy to use. This significantly reduced the barrier to entry for those who were interested in using LLMs.

    The second project that I remember was Ollama[8]. Also a wrapper around Llama.cpp, Ollama gave most of what GPT4All had to offer but in an even simpler way. Today, I believe Ollama is bigger than GPT4All although I think it's missing some of the higher-level features of GPT4All.
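
    As a rough sketch of how a wrapper like Ollama is typically driven beyond its chat UI: Ollama exposes a local REST API (on port 11434 by default). The helper names here are illustrative, and the model is assumed to have been pulled already with `ollama pull llama3`; only the payload-building is shown as certain, the request itself requires a running server:

```python
import json
import urllib.request

def build_generate_request(model, prompt):
    """Build the JSON payload for Ollama's /api/generate endpoint.
    stream=False asks for a single JSON response instead of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model, prompt, host="http://localhost:11434"):
    """POST a prompt to a locally running Ollama server and return its reply."""
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# generate("llama3", "Why is the sky blue?")  # requires a running Ollama server
```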

    Another important tool that came out during this time is called Exllama[9]. Exllama is an inference engine with a focus on modern consumer Nvidia GPUs and advanced quantization support based on GPTQ. It is probably the best inference engine for squeezing performance out of consumer Nvidia GPUs.

    Months later, Nvidia came out with another new inference engine called TensorRT-LLM[10]. TensorRT-LLM is capable of running most LLMs and does so with extreme efficiency; it is the most efficient open source inference engine that exists for Nvidia GPUs. However, it also has the most difficult setup process of any inference engine and is made primarily for production use cases and Nvidia AI GPUs, so don't expect it to work on your personal computer.

    With the rumors of GPT-4 being a Mixture of Experts (MoE) LLM, research breakthroughs in MoE, and some small MoE LLMs coming out, interest in MoE LLMs was at an all-time high. Mistral, a company that had already proven itself with very impressive LLaMA finetunes, capitalized on this interest by releasing Mixtral 8x7b[11], the best accuracy-for-its-size LLM the local LLM community had seen to date. Eventually MoE support was added to all inference engines, and it became a very popular mid-to-large sized LLM.
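
    The Mixture-of-Experts idea behind models like Mixtral can be illustrated with a toy sketch. This is not Mixtral's actual code: in a real model the router is a learned layer producing per-token scores, while here the scores are simply given. The point is that only the top-k experts run per token, so compute stays far below the full parameter count:

```python
import math

def top_k_route(scores, k=2):
    """Pick the k highest-scoring experts and softmax-normalize their weights.
    Returns (expert_index, mixing_weight) pairs; the weights sum to 1."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    exps = [math.exp(scores[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

# 8 experts, but only 2 are activated for this token (as in Mixtral 8x7b):
routing = top_k_route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3], k=2)
# Only the selected experts' feed-forward blocks would run; their outputs
# are combined using the mixing weights.
```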

    Cohere released their own LLM as well, called Command R+[12], built specifically for RAG-related tasks with a context length of 128k. It's quite large and doesn't stand out on many metrics, but it has some interesting RAG features no other LLM has.

    More recently, Llama 3[13] was released, which, like previous Llama releases, blew every other open LLM out of the water. The smallest version (Llama 3 8b) has the greatest accuracy for its size of any open LLM, and the largest version released so far (Llama 3 70b) beats every other open LLM on almost every metric.

    Less than a month ago, Google released Gemma 2[14], the largest of which performs very well under human evaluation despite being less than half the size of Llama 3 70b, though it performs only decently on automated benchmarks.

    If you're looking for a tool to get started running LLMs locally, I'd go with either Ollama or GPT4All. They make the process about as painless as possible. I believe GPT4All has more features like using your local documents for RAG, but you can also use something like PrivateGPT with Ollama to get the same functionality.

    If you want to get into the weeds a bit and extract some more performance out of your machine, I'd go with using Llama.cpp, Exllama, or vLLM depending upon your system. If you have a normal, consumer Nvidia GPU, I'd go with Exllama. If you have an AMD GPU that supports ROCm 5.7 or 6.0, I'd go with vLLM. For anything else, including just running it on your CPU, I'd go with Llama.cpp. TensorRT-LLM only makes sense if you have an AI Nvidia GPU like the A100, V100, A10, H100, etc.

    [1] https://ai.meta.com/blog/large-language-model-llama-meta-ai/

    [2] https://github.com/huggingface/transformers

    [3] https://github.com/ggerganov/llama.cpp

    [4] https://github.com/bitsandbytes-foundation/bitsandbytes

    [5] https://github.com/vllm-project/vllm

    [6] https://ai.meta.com/blog/llama-2/

    [7] https://www.nomic.ai/gpt4all

    [8] http://ollama.ai/

    [9] https://github.com/turboderp/exllamav2

    [10] https://github.com/NVIDIA/TensorRT-LLM

    [11] https://mistral.ai/news/mixtral-of-experts/

    [12] https://cohere.com/blog/command-r-plus-microsoft-azure

    [13] https://ai.meta.com/blog/meta-llama-3/

    [14] https://blog.google/technology/developers/google-gemma-2/

  • Scout Monitoring

    Free Django app performance insights with Scout Monitoring. Get Scout setup in minutes, and let us sweat the small stuff. A couple lines in settings.py is all you need to start monitoring your apps. Sign up for our free tier today.

  • Keras

    Deep Learning for humans

    Project mention: Using Google Magika to build an AI-powered file type detector | dev.to | 2024-06-13

    The core model architecture for Magika was implemented using Keras, a popular open source deep learning framework that enables Google researchers to experiment quickly with new models.

  • Real-Time-Voice-Cloning

    Clone a voice in 5 seconds to generate arbitrary speech in real-time

    Project mention: FLaNK Stack Weekly 12 February 2024 | dev.to | 2024-02-12
  • bert

    TensorFlow code and pre-trained models for BERT

    Project mention: OpenAI Will Terminate Its Services in China: A Comprehensive Analysis | dev.to | 2024-06-25

    BERT

  • Ray

    Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.

    Project mention: Comparison: Dask vs. Ray | news.ycombinator.com | 2024-06-14
  • data-science-ipython-notebooks

    Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.

  • spleeter

    Deezer source separation library including pretrained models.

    Project mention: Are stems a good way of making mashups | /r/Beatmatch | 2023-12-10

    Virtual DJ's and other programs' stem separators are shrunk-down versions of this model (https://github.com/deezer/spleeter); you'll get better results by downloading the original along with their large model.

  • InfluxDB

    Power Real-Time Data Analytics at Scale. Get real-time insights from all types of time series data with InfluxDB. Ingest, query, and analyze billions of data points in real-time with unbounded cardinality.

  • Mask_RCNN

    Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

    Project mention: Intuitively Understanding Harris Corner Detector | news.ycombinator.com | 2023-09-11

    The most widely used algorithms for classical feature detection today are "whatever opencv implements"

    In terms of tech that's advancing at the moment? https://co-tracker.github.io/ if you want to track individual points, https://github.com/matterport/Mask_RCNN and its descendants if you want to detect, say, the cover of a book.

  • d2l-en

    Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.

  • datasets

    🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools

    Project mention: ๐Ÿ๐Ÿ 23 issues to grow yourself as an exceptional open-source Python expert ๐Ÿง‘โ€๐Ÿ’ป ๐Ÿฅ‡ | dev.to | 2023-10-19
  • supervision

    We write your reusable computer vision tools. 💜

    Project mention: Top 15 Open-Source Low-Code Projects with the Most GitHub Stars | dev.to | 2024-07-18

    GitHub: https://github.com/roboflow/supervision
    GitHub Stars: 17.9k
    Most Recent Update on GitHub: Within one day
    Open Source License: MIT
    Number of Active Contributors This Year: 35
    Acceptance of External PRs: Yes
    Official Website: https://supervision.roboflow.com/
    Documentation: https://supervision.roboflow.com/0.22.0/how_to/detect_and_annotate/

  • frigate

    NVR with realtime local object detection for IP cameras

    Project mention: Police warn of thieves using WiFi-jamming tech to disarm cameras, alarms | news.ycombinator.com | 2024-07-18

    No one has mentioned Frigate. It has taken the "homelab"/selfhosted world by storm & utterly dominates. Open source, works great, & by far some of the most sophisticated detection/triggering schemes one can acquire, period. https://frigate.video/

    I have two Hanwha units I never got around to using at my last place. H.265 IP streaming out. Onvif is the main standard everyone seems to use for streaming out.

  • best-of-ml-python

    ๐Ÿ† A ranked list of awesome machine learning Python libraries. Updated weekly.

    Project mention: Top Github repositories for 10+ programming languages | dev.to | 2024-07-16

    Best of ml python

  • horovod

    Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.

  • ivy

    Convert ML Code Between Frameworks

    Project mention: Keras 3.0 | news.ycombinator.com | 2023-11-28

    See also https://github.com/unifyai/ivy which I have not tried but seems along the lines of what you are describing, working with all the major frameworks

  • nni

    An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.

  • facenet

    Face recognition using Tensorflow

  • TFLearn

    Deep learning library featuring a higher-level API for TensorFlow.

  • autokeras

    AutoML library for deep learning

  • wandb

    🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.

    Project mention: 10 Open Source Tools for Building MLOps Pipelines | dev.to | 2024-06-06

    Weights and Biases (W&B) is a tool for visualizing and tracking machine learning experiments. It supports major machine learning frameworks such as TensorFlow and PyTorch. Its key features include:

  • einops

    Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)

    Project mention: NumPy 2.0.0 | news.ycombinator.com | 2024-06-16

    https://einops.rocks/#why-use-einops-notation

  • deeplake

    Database for AI. Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, & visualize any AI data. Stream data in real-time to PyTorch/TensorFlow. https://activeloop.ai

    Project mention: FLaNK AI Weekly 25 March 2024 | dev.to | 2024-03-25
  • python-small-examples

    Say goodbye to dull code: dedicated to building practical little Python examples. More quality Python tutorials at https://ai-jupyter.com

  • SaaSHub

    SaaSHub - Software Alternatives and Reviews. SaaSHub helps you find the best software and product alternatives

NOTE: The open source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).


Python Tensorflow related posts

  • The CrowdStrike file that broke everything was full of null characters

    3 projects | news.ycombinator.com | 19 Jul 2024
  • Police warn of thieves using WiFi-jamming tech to disarm cameras, alarms

    3 projects | news.ycombinator.com | 18 Jul 2024
  • DoLa and MT-Bench - A Quick Eval of a new LLM trick

    3 projects | dev.to | 11 Jul 2024
  • HuggingFace releases major updates to support for tool-use and RAG models

    1 project | news.ycombinator.com | 2 Jul 2024
  • OpenAI Will Terminate Its Services in China: A Comprehensive Analysis

    1 project | dev.to | 25 Jun 2024
  • Show HN: I am using AI to drop hats outside my window onto New Yorkers

    6 projects | news.ycombinator.com | 23 Jun 2024
  • Mathematics secret behind AI on Digit Recognition

    3 projects | dev.to | 15 Jun 2024
  • A note from our sponsor - SaaSHub
    www.saashub.com | 23 Jul 2024
    SaaSHub helps you find the best software and product alternatives Learn more →

Index

What are some of the best open-source Tensorflow projects in Python? This list will help you:

Project Stars
1 transformers 129,472
2 Keras 61,338
3 Real-Time-Voice-Cloning 51,607
4 bert 37,508
5 Ray 32,158
6 data-science-ipython-notebooks 26,931
7 spleeter 25,342
8 Mask_RCNN 24,419
9 d2l-en 22,601
10 datasets 18,778
11 supervision 17,956
12 frigate 16,201
13 best-of-ml-python 16,076
14 horovod 14,073
15 ivy 14,026
16 nni 13,904
17 facenet 13,645
18 TFLearn 9,612
19 autokeras 9,101
20 wandb 8,652
21 einops 8,180
22 deeplake 7,903
23 python-small-examples 7,890


Did you know that Python is
the 1st most popular programming language
based on number of mentions?