-
Deeplearning4j
Suite of tools for deploying and training deep learning models on the JVM. Highlights include model import for Keras, TensorFlow, and ONNX/PyTorch; a modular, lightweight C++ library for running math code; and a Java math library on top of the core C++ library. Also includes SameDiff, a PyTorch/TensorFlow-like library for running deep learning with automatic differentiation.
-
neural-network-number-guesser
Java program that tries to guess a number you are drawing, using a neural network trained on the MNIST dataset. Left click to draw, right click to reset the drawing.
-
deeplearning4j-examples
Discontinued Deeplearning4j Examples (DL4J, DL4J Spark, DataVec) (by deeplearning4j)
You can take a look at the GitHub Actions workflows as well: https://github.com/deeplearning4j/deeplearning4j/tree/master/.github/workflows
Well, we've been using Tribuo in production for many years now. The ONNX Runtime Java API that I maintain in MS's ONNX Runtime project has also seen a bunch of uptake in companies, and Amazon have been building DJL for several years too.
For deploying a trained model there are a bunch of options that put Java on top of a native runtime: TF-Java (which I co-lead), ONNX Runtime, and PyTorch's Java inference for TorchScript models. Training deep learning models is harder, though you can do it for some of them in DJL. Training more standard ML models is much simpler, either via Tribuo, by using things like LibSVM and XGBoost directly, or with other libraries like SMILE or WEKA.
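For the ONNX Runtime route mentioned above, scoring an exported model from Java is a short session-based loop. A minimal sketch, with the caveats that `model.onnx`, the input name `"input"`, and the `1x784` shape are all assumptions for illustration (real models declare their own input names and shapes, which you can query from the session):

```java
import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtException;
import ai.onnxruntime.OrtSession;

import java.nio.FloatBuffer;
import java.util.Arrays;
import java.util.Map;

public class OnnxScoring {
    public static void main(String[] args) throws OrtException {
        // The environment is a process-wide singleton.
        OrtEnvironment env = OrtEnvironment.getEnvironment();

        // Hypothetical model file; substitute your exported model.
        try (OrtSession session = env.createSession("model.onnx",
                new OrtSession.SessionOptions())) {

            // Dummy input: one flattened 28x28 image. Shape and name are
            // model-specific; session.getInputInfo() reports the real ones.
            float[] pixels = new float[784];
            long[] shape = {1, 784};

            try (OnnxTensor input = OnnxTensor.createTensor(env,
                         FloatBuffer.wrap(pixels), shape);
                 OrtSession.Result result = session.run(Map.of("input", input))) {

                // For a classifier the first output is typically a [1, nClasses]
                // score matrix.
                float[][] scores = (float[][]) result.get(0).getValue();
                System.out.println(Arrays.toString(scores[0]));
            }
        }
    }
}
```

The try-with-resources blocks matter here: sessions, tensors, and results wrap native memory, so they need explicit closing rather than relying on the garbage collector.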
We've been developing Tribuo on GitHub for two years now, MS are very actively developing ONNX Runtime (the Java layer is a fairly thin wrapper over the same C API used for Node.js and C#), and things like XGBoost and LibSVM have been around for many years, with the Java bindings developed in-tree alongside the rest of the code and updated along with it. Amazon have a team of people working on DJL, though you'd have to ask them what their plans are.
I am using https://github.com/Gleethos/neureka to do some personal machine learning and also basic data science stuff at my workplace. It is inspired by PyTorch and its dynamic autograd system (which records the computation graph eagerly and then traverses it for backpropagation). The library is super lightweight and has a nice API and documentation, but it's still very young and not as feature-rich.
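The record-then-traverse autograd idea described above fits in a few dozen lines of plain Java. This is a generic sketch of eager reverse-mode autodiff on scalars (not Neureka's or PyTorch's actual implementation): each operation stores its parents and a local backward rule as it executes, and `runBackward` replays the recorded graph in reverse topological order.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal eager reverse-mode autograd: every op records its inputs and a
// backward closure; runBackward() walks the tape in reverse to accumulate
// gradients via the chain rule.
final class Value {
    double data;                       // forward value
    double grad;                       // d(output)/d(this), filled by backprop
    final List<Value> parents = new ArrayList<>();
    Runnable backward = () -> {};      // local gradient rule for this node

    Value(double data) { this.data = data; }

    Value add(Value other) {
        Value out = new Value(this.data + other.data);
        out.parents.add(this); out.parents.add(other);
        out.backward = () -> { this.grad += out.grad; other.grad += out.grad; };
        return out;
    }

    Value mul(Value other) {
        Value out = new Value(this.data * other.data);
        out.parents.add(this); out.parents.add(other);
        out.backward = () -> {
            this.grad += other.data * out.grad;   // d(a*b)/da = b
            other.grad += this.data * out.grad;   // d(a*b)/db = a
        };
        return out;
    }

    // Topologically sort the recorded graph, seed d(out)/d(out) = 1,
    // then apply each node's backward rule in reverse order.
    void runBackward() {
        List<Value> topo = new ArrayList<>();
        List<Value> visited = new ArrayList<>();
        buildTopo(this, topo, visited);
        this.grad = 1.0;
        for (int i = topo.size() - 1; i >= 0; i--) topo.get(i).backward.run();
    }

    private static void buildTopo(Value v, List<Value> topo, List<Value> visited) {
        if (visited.contains(v)) return;
        visited.add(v);
        for (Value p : v.parents) buildTopo(p, topo, visited);
        topo.add(v);
    }
}

public class AutogradDemo {
    public static void main(String[] args) {
        Value x = new Value(3.0);
        Value y = new Value(4.0);
        Value z = x.mul(y).add(x);   // z = x*y + x
        z.runBackward();
        System.out.println(x.grad);  // dz/dx = y + 1 = 5.0
        System.out.println(y.grad);  // dz/dy = x = 3.0
    }
}
```

Real libraries do the same thing with tensors instead of scalars and with the topological sort replaced by more efficient bookkeeping, but the core mechanism (record on the forward pass, replay in reverse) is exactly this.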
We actually have a sample with Tribuo working inside SGX: https://github.com/R3Conclave/conclave-samples/tree/master/tribuo-classification
I made a shitty neural net for guessing hand-drawn numbers. It only works about half the time and takes ages to train. In my experience the OOP-ness of Java really slows down the process, and it'd probably just be faster using more functional code. But I'm sure I could've done it better with more experience (am still in school). Here's the repo if you want to check it out: https://github.com/d3nosaur/neural-network-number-guesser
I'm building from source because I made a tiny change to dl4j to get an example running. It's not really anything that should be committed as it wouldn't be a solution to the issue. (Just a hack around. :) )