C++ Machine learning

Open-source C++ projects categorized as Machine learning

Top 23 C++ Machine learning Projects

  • tensorflow

    An Open Source Machine Learning Framework for Everyone

    Latest mention: Rtx 3090 Is 14 Times Slower On Inference Compared | reddit.com/r/tensorflow | 2021-01-09

    That does seem to be the case. TF is much slower than PyTorch for training, especially in backpropagation (depending on the optimizer): https://github.com/tensorflow/tensorflow/issues/42475

  • pytorch

    Tensors and Dynamic neural networks in Python with strong GPU acceleration

    Latest mention: [P] Implementation of RealFormer using pytorch | reddit.com/r/MachineLearning | 2020-12-27

    Tip: Use torch.bmm instead of torch.einsum; the former is considerably faster. Take a look at PyTorch's own multi-head attention (MHA) implementation to see how to do the reshaping for it.
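
    For reference, a minimal LibTorch (C++) sketch of the same idea, since this list is about C++ projects; the original tip is about the Python API, and the tensor shapes here are illustrative assumptions, not taken from the post:

      // Batched matmul for attention scores, once via torch::einsum and once
      // via torch::bmm after flattening (batch, heads) into one batch dim.
      #include <torch/torch.h>
      #include <iostream>

      int main() {
        const int64_t B = 8, H = 12, L = 128, D = 64;  // batch, heads, seq len, head dim
        auto q = torch::randn({B, H, L, D});
        auto k = torch::randn({B, H, L, D});

        // einsum version: readable, but generally slower.
        auto scores_einsum = torch::einsum("bhld,bhmd->bhlm", {q, k});

        // bmm version: reshape to 3-D first, then batch-matrix-multiply.
        auto q3 = q.reshape({B * H, L, D});
        auto k3 = k.reshape({B * H, L, D});
        auto scores_bmm = torch::bmm(q3, k3.transpose(1, 2)).reshape({B, H, L, L});

        std::cout << torch::allclose(scores_einsum, scores_bmm) << std::endl;
      }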

  • tesseract

    Tesseract Open Source OCR Engine (main repository)

    Latest mention: How do i use matlab ocr to recognize math equations? | reddit.com/r/matlab | 2021-01-16

    The code looks fine; I think the 'MathEquations' network just does a poor job of recognizing the equations for whatever reason. The support package that includes the language is based on this open-source tesseract repo, which seems to struggle with math equation recognition (at least based on this issue).
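
    If you would rather call the open-source engine directly from C++ instead of going through the MATLAB support package, the entry point is tesseract::TessBaseAPI. A minimal sketch, assuming tesseract and Leptonica are installed with the "eng" traineddata; "equation.png" is a placeholder file name:

      // Run Tesseract OCR on a single image and print the recognized text.
      #include <tesseract/baseapi.h>
      #include <leptonica/allheaders.h>
      #include <cstdio>

      int main() {
        tesseract::TessBaseAPI api;
        if (api.Init(nullptr, "eng") != 0) {   // load English language data
          std::fprintf(stderr, "could not initialize tesseract\n");
          return 1;
        }
        Pix* image = pixRead("equation.png");  // load the image via Leptonica
        if (image == nullptr) return 1;
        api.SetImage(image);
        char* text = api.GetUTF8Text();        // run recognition
        std::printf("%s", text);
        delete[] text;
        api.End();
        pixDestroy(&image);
        return 0;
      }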

  • caffe

    Caffe: a fast open framework for deep learning.

  • xgboost

    Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow

  • openpose

    OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation

    Latest mention: Kinect + jetson nano for tracking the bodies of the persons displayed inside of screen ? | reddit.com/r/JetsonNano | 2020-12-22

    There are probably a few projects that could help you out; the first one that comes to mind is openpose.

  • incubator-mxnet

    Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more

    Latest mention: Can Apple's M1 help you train models faster and cheaper than Nvidia's V100? | news.ycombinator.com | 2021-01-14

    > But you still lose something, e.g. if you use half precision on V100 you get virtually double speed, if you do on a 1080 / 2080 you get... nothing because it's not supported.

    That's not true. FP16 is supported and can be fast on 2080, although some frameworks fail to see the speed-up. I filed a bug report about this a year ago: https://github.com/apache/incubator-mxnet/issues/17665

    What consumer GPUs lack is ECC and fast FP64.
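
    One quick way to see whether a given GPU and framework actually get the FP16 speed-up is to time a large matmul in FP32 and FP16. A rough sketch with LibTorch on CUDA (chosen only because it is easy to show in C++; the linked bug report is about MXNet, and the matrix size and iteration count here are arbitrary):

      // Time FP32 vs FP16 matmul; .item() forces the GPU work to finish.
      #include <torch/torch.h>
      #include <chrono>
      #include <iostream>

      double seconds_per_matmul(const torch::Tensor& a, const torch::Tensor& b,
                                int iters = 50) {
        torch::matmul(a, b).sum().item<float>();  // warm-up + sync
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < iters; ++i) torch::matmul(a, b).sum().item<float>();
        auto end = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(end - start).count() / iters;
      }

      int main() {
        auto opts = torch::device(torch::kCUDA);
        auto a = torch::randn({4096, 4096}, opts);
        auto b = torch::randn({4096, 4096}, opts);
        std::cout << "fp32: " << seconds_per_matmul(a, b) << " s\n";
        std::cout << "fp16: " << seconds_per_matmul(a.to(torch::kHalf), b.to(torch::kHalf))
                  << " s\n";
      }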

  • CNTK

    Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

  • DeepSpeech

    DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.

    Latest mention: Any self hosted call transcription software? | reddit.com/r/selfhosted | 2021-01-19

  • mediapipe

    Cross-platform, customizable ML solutions for live and streaming media.

    Latest mention: Weekly Developer Roundup #21 - Sun Nov 08 2020 | dev.to | 2020-11-07

    google/mediapipe (C++): MediaPipe is the simplest way for researchers and developers to build world-class ML solutions and applications for mobile, edge, cloud and the web.

  • dlib

    A toolkit for making real world machine learning and data analysis applications in C++

  • vowpal_wabbit

    Vowpal Wabbit is a machine learning system which pushes the frontier of machine learning with techniques such as online, hashing, allreduce, reductions, learning2search, active, and interactive learning.

  • catboost

    A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.

  • tiny-dnn

    Header-only, dependency-free deep learning framework in C++14

  • jetson-inference

    Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.

    Latest mention: Basic Teaching | reddit.com/r/JetsonNano | 2021-01-18

    https://github.com/dusty-nv/jetson-inference#system-setup

  • Open3D

    Open3D: A Modern Library for 3D Data Processing

    Latest mention: LIDAR to OBJ similar to photogrammetry with Intel RealSense L515? | reddit.com/r/3DScanning | 2021-01-05

  • mlpack

    mlpack: a scalable C++ machine learning library

  • shogun

    Shōgun

  • MITIE

    MITIE: library and tools for information extraction

    Latest mention: Is it possible to build a recommendation system or do sentiment analysis in plain c++? | reddit.com/r/AskComputerScience | 2021-01-14

    I would suggest you use something like LucenePlusPlus as the backbone of the system for processing the text, and maybe something like MITIE for further analysis (I've never used this to be honest).

  • deepdetect

    Deep Learning API and Server in C++11 with support for Caffe, Caffe2, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost and TSNE

    Latest mention: [P] Benchmarking OpenBLAS on an Apple MacBook M1 | reddit.com/r/MachineLearning | 2020-12-30

    Interesting, thanks. I recently benchmarked inference with Vulkan/MoltenVK/NCNN; the M1 GPU is roughly 30% faster than the M1 CPU for single-batch inference (NCNN does not really support batch sizes > 1): https://github.com/jolibrain/deepdetect/pull/1105

  • server

    The Triton Inference Server provides an optimized cloud and edge inferencing solution.

    Latest mention: [D] Deploying ML models - batching | reddit.com/r/MachineLearning | 2020-12-27

    I've seen this called "dynamic batching" in most places at work. Nvidia has the Triton Inference Server, which works fine for us. You'll likely get more speedup from dynamic batching on GPU than on CPU, depending on the model architecture. The overall structure probably looks something like this: one inference thread; when requests come in (from many threads) you add them to a queue, and when the queue is full or the oldest enqueued request times out, you construct your batch and run inference.
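
    A minimal sketch of that structure in plain C++ (run_inference_on_batch(), MAX_BATCH and MAX_WAIT are placeholder assumptions; this is not Triton's actual implementation):

      // Many request threads call submit(); one inference thread flushes a
      // batch when it is full or when the oldest request has waited too long.
      #include <algorithm>
      #include <chrono>
      #include <condition_variable>
      #include <cstddef>
      #include <deque>
      #include <mutex>
      #include <vector>

      struct Request {
        // input tensor, promise for the result, ... (placeholders)
        std::chrono::steady_clock::time_point enqueued =
            std::chrono::steady_clock::now();
      };

      void run_inference_on_batch(const std::vector<Request>& batch) {
        // run the model on the whole batch and fulfil each request's promise
      }

      class DynamicBatcher {
       public:
        void submit(Request r) {                      // called from request threads
          std::lock_guard<std::mutex> lock(mu_);
          queue_.push_back(std::move(r));
          cv_.notify_one();
        }

        void inference_loop() {                       // runs on one dedicated thread
          const std::size_t MAX_BATCH = 32;
          const auto MAX_WAIT = std::chrono::milliseconds(5);
          for (;;) {
            std::unique_lock<std::mutex> lock(mu_);
            cv_.wait(lock, [&] { return !queue_.empty(); });
            // Wait until the batch is full or the oldest request times out.
            auto deadline = queue_.front().enqueued + MAX_WAIT;
            cv_.wait_until(lock, deadline,
                           [&] { return queue_.size() >= MAX_BATCH; });
            std::size_t n = std::min(queue_.size(), MAX_BATCH);
            std::vector<Request> batch(
                std::make_move_iterator(queue_.begin()),
                std::make_move_iterator(queue_.begin() + n));
            queue_.erase(queue_.begin(), queue_.begin() + n);
            lock.unlock();
            run_inference_on_batch(batch);            // actual model call goes here
          }
        }

       private:
        std::mutex mu_;
        std::condition_variable cv_;
        std::deque<Request> queue_;
      };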

  • flashlight

    A C++ standalone library for machine learning

    Latest mention: Facebook To Release XLSR-53, A Wav2vec 2.0 Model | reddit.com/r/speechtech | 2021-01-09

    The project has moved here: https://github.com/facebookresearch/flashlight/tree/master/flashlight/app/asr

  • frugally-deep

    Header-only library for using Keras models in C++.
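
    A minimal sketch of what using it looks like, based on the project's README-style API; "fdeep_model.json" and the input shape are placeholders, and the model is converted from Keras beforehand with the project's conversion script:

      // Load a converted Keras model and run a single forward pass.
      #include <fdeep/fdeep.hpp>
      #include <iostream>
      #include <vector>

      int main() {
        const auto model = fdeep::load_model("fdeep_model.json");
        const auto result = model.predict(
            {fdeep::tensor(fdeep::tensor_shape(static_cast<std::size_t>(4)),
                           std::vector<float>{1.0f, 2.0f, 3.0f, 4.0f})});
        std::cout << fdeep::show_tensors(result) << std::endl;
      }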

NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).

Index

What are some of the best open-source Machine learning projects in C++? This list will help you:

Rank Project Stars
1 tensorflow 152,399
2 pytorch 45,558
3 tesseract 38,313
4 caffe 31,308
5 xgboost 20,386
6 openpose 19,846
7 incubator-mxnet 19,230
8 CNTK 16,958
9 DeepSpeech 16,340
10 mediapipe 10,592
11 dlib 9,803
12 vowpal_wabbit 7,393
13 catboost 5,645
14 tiny-dnn 5,287
15 jetson-inference 4,016
16 Open3D 3,914
17 mlpack 3,527
18 shogun 2,783
19 MITIE 2,560
20 deepdetect 2,192
21 server 1,743
22 flashlight 1,351
23 frugally-deep 698