Tribuo vs server

| | Tribuo | server |
|---|---|---|
| Mentions | 15 | 24 |
| Stars | 1,226 | 7,356 |
| Growth | 0.6% | 2.7% |
| Activity | 4.8 | 9.5 |
| Latest commit | 3 days ago | 7 days ago |
| Language | Java | Python |
| License | Apache 2.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Tribuo
- FLaNK Weekly 08 Jan 2024
- Is deeplearning4j a good choice?
It seems to have been picked up by Eclipse, and there are also Oracle Labs' Tribuo and the Deep Java Library. All seem active, but I don't know much about any of them. I agree it's probably best to follow the community and use a more popular tool like PyTorch.
- Stochastic gradient descent written in SQL
We built model & data provenance into our open source ML library, though it's admittedly not the W3C PROV standard. There were a few gaps in it until we built an automated reproducibility system on top of it, but now it's pretty solid for all the algorithms we implement. Unfortunately some of the things we wrap (notably TensorFlow) aren't reproducible enough due to some unfixed bugs. There's an overview of the provenance system in this reprise of the JavaOne talk I gave: https://www.youtube.com/watch?v=GXOMjq2OS_c. The library is on GitHub: https://github.com/oracle/tribuo.
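As a rough sketch of what that provenance capture looks like from the user side (assuming Tribuo 4.x; the file name and header columns are taken from the project's Iris tutorial, and the exact formatting of the printed provenance will vary by version), every trained model carries a provenance object describing its lineage:

```java
import java.nio.file.Paths;

import org.tribuo.Model;
import org.tribuo.MutableDataset;
import org.tribuo.classification.Label;
import org.tribuo.classification.LabelFactory;
import org.tribuo.classification.sgd.linear.LogisticRegressionTrainer;
import org.tribuo.data.csv.CSVLoader;

public class ProvenanceDemo {
    public static void main(String[] args) throws Exception {
        // Load the classic Iris data; the loader records where the data came from.
        var loader = new CSVLoader<>(new LabelFactory());
        var headers = new String[]{"sepalLength", "sepalWidth", "petalLength", "petalWidth", "species"};
        var source = loader.loadDataSource(Paths.get("bezdekIris.data"), "species", headers);
        var dataset = new MutableDataset<>(source);

        // Train a simple model; the trainer's configuration is recorded as well.
        Model<Label> model = new LogisticRegressionTrainer().train(dataset);

        // The provenance object carries the lineage: data source, trainer
        // hyperparameters, RNG seeds, and library version.
        System.out.println(model.getProvenance());
    }
}
```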
- Just want to vent a bit
Although it may be a bit more work, you can do both machine learning and AI in Java. If you are doing deep learning, you can use DeepJavaLibrary (I work on this one at Amazon). If you are looking for other ML algorithms, I have seen Smile, Tribuo, and some built around Spark.
- Anybody here using Java for machine learning?
We've been developing Tribuo on GitHub for two years now. Microsoft is very actively developing ONNX Runtime (and the Java layer is fairly thin, wrapping the same C API used for Node.js and C#), and things like XGBoost and LibSVM have been around for many years; their Java bits are developed in tree with the rest of the code, so they're updated along with it. Amazon has a team of people working on DJL, though you'd have to ask them what their plans are.
- Java engineer wants to be a researcher
FWIW, Oracle actually did release a Java ML library: https://github.com/oracle/tribuo.
- txtai 3.4 released - Build AI-powered semantic search applications in Java
Tribuo (tribuo.org, github.com/oracle/tribuo). ONNX export support is there for two models at the moment on main, there's a PR for factorization machines which supports ONNX export, and we plan to add another couple of models and maybe ensembles before the upcoming release. Plus I need to write a tutorial on how it all works, but you can check the tests in the meantime.
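For context on what that export path looks like in user code, here's a minimal sketch, assuming a Tribuo version where linear models implement the ONNXExportable interface; the domain string and model version passed to saveONNXModel are placeholder metadata, and the method signature is an assumption based on that interface:

```java
import java.nio.file.Paths;

import org.tribuo.MutableDataset;
import org.tribuo.ONNXExportable;
import org.tribuo.classification.LabelFactory;
import org.tribuo.classification.sgd.linear.LogisticRegressionTrainer;
import org.tribuo.data.csv.CSVLoader;

public class OnnxExportDemo {
    public static void main(String[] args) throws Exception {
        var loader = new CSVLoader<>(new LabelFactory());
        var headers = new String[]{"sepalLength", "sepalWidth", "petalLength", "petalWidth", "species"};
        var dataset = new MutableDataset<>(
                loader.loadDataSource(Paths.get("bezdekIris.data"), "species", headers));

        // The linear SGD model is one of the ONNX-exportable model classes.
        var model = new LogisticRegressionTrainer().train(dataset);

        if (model instanceof ONNXExportable exportable) {
            // Domain and version are user-chosen metadata stamped into the ONNX file.
            exportable.saveONNXModel("org.example.demo", 1L, Paths.get("model.onnx"));
        }
    }
}
```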
- Hottest topics for research for JAVA software engineers
You can do ML & data science in Java (full disclosure: I help run TensorFlow-Java, I maintain ONNX Runtime's Java interface, and I'm the lead developer on Oracle Labs' Java ML library Tribuo, so I'm pretty biased). It tends not to be as favoured in research, though I've published academic ML papers which used Java implementations. People do deploy ML models quite a bit in Java in industry.
- John Snow Labs Spark-NLP 3.1.0: Over 2600+ new models and pipelines in 200+ languages, new DistilBERT, RoBERTa, and XLM-RoBERTa transformers, support for external Transformers, and lots more!
It might be worth having a look at the ONNX Runtime Java API in addition to TF-Java; it'll let you deploy the rest of the HuggingFace PyTorch models that don't have TF equivalents. I built the Java API a few years ago, and it's now a supported part of the ONNX Runtime project. We use it in Tribuo to provide one of our text feature embedding classes (BERTFeatureExtractor).
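For a sense of how thin that Java layer is, here's a minimal sketch of the core ONNX Runtime Java API; the model path, input name, shape, and output type below are placeholders that depend on whatever ONNX model you load:

```java
import java.util.Map;

import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtSession;

public class OrtDemo {
    public static void main(String[] args) throws Exception {
        OrtEnvironment env = OrtEnvironment.getEnvironment();
        try (OrtSession session = env.createSession("model.onnx", new OrtSession.SessionOptions())) {
            // A single example with four features; the name and shape depend on the model.
            float[][] input = {{5.1f, 3.5f, 1.4f, 0.2f}};
            try (OnnxTensor tensor = OnnxTensor.createTensor(env, input);
                 OrtSession.Result result = session.run(Map.of("input", tensor))) {
                // The Java type of the output mirrors the model's output tensor shape.
                float[][] output = (float[][]) result.get(0).getValue();
                System.out.println("First output value: " + output[0][0]);
            }
        }
    }
}
```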
- If it gets better w age, will java become compatible for machine learning and data science?
The IJava notebook kernel works pretty well for data science on top of Java. We use it in Tribuo to write all our tutorials, and if you've got the jar file in the right folder everything is runnable. For example, this is our intro classification tutorial: https://github.com/oracle/tribuo/blob/main/tutorials/irises-tribuo-v4.ipynb.
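To illustrate the workflow, an IJava notebook mixes a jar-loading magic with ordinary Java cells; this is a sketch in the style of the Tribuo tutorials, where the jar name and version are placeholders for whatever build you have on disk:

```java
// Notebook cell 1 (IJava %jars magic, not Java syntax): put Tribuo on the classpath.
// %jars ./tribuo-classification-experiments-4.3.1-jar-with-dependencies.jar

// Notebook cell 2: plain Java, evaluated interactively.
import java.nio.file.Paths;

import org.tribuo.MutableDataset;
import org.tribuo.classification.LabelFactory;
import org.tribuo.data.csv.CSVLoader;

var loader = new CSVLoader<>(new LabelFactory());
var headers = new String[]{"sepalLength", "sepalWidth", "petalLength", "petalWidth", "species"};
var dataset = new MutableDataset<>(
        loader.loadDataSource(Paths.get("bezdekIris.data"), "species", headers));
System.out.println("Loaded " + dataset.size() + " examples");
```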
server
- FLaNK Weekly 08 Jan 2024
- Is there any open source app to load a model and expose API like OpenAI?
- "A matching Triton is not available"
- best way to serve llama V2 (llama.cpp VS triton VS HF text generation inference)
I am wondering what is the best / most cost-efficient way to serve Llama V2: llama.cpp (is it production-ready or just for playing around?), Triton Inference Server, or HF text generation inference?
- Triton Inference Server - Backend
- Single RTX 3080 or two RTX 3060s for deep learning inference?
For inference of CNNs, memory should really not be an issue. If it is, that's a software engineering problem, not a hardware issue. FP16 or Int8 for weights is fine, and weight size won't increase due to the high resolution. During inference, memory used for hidden layer tensors can be reused as soon as the last consumer layer has been processed. You're likely using something designed for training to do inference, which blows up the memory requirement; or, if you are using TensorRT or something like that, you need to be careful to avoid having every task load its own copy of the library code onto the GPU. Maybe look at https://github.com/triton-inference-server/server
- Machine Learning Inference Server in Rust?
I am looking for something like [Triton Inference Server](https://github.com/triton-inference-server/server) or [TFX Serving](https://www.tensorflow.org/tfx/guide/serving), but in Rust. I came across [Orkhon](https://github.com/vertexclique/orkhon), which seems to be dormant, and a bunch of examples off of the [Awesome-Rust-MachineLearning](https://github.com/vaaaaanquish/Awesome-Rust-MachineLearning) list.
- Multi-model serving options
You've already mentioned Seldon Core, which is well worth looking at, but if you're just after the raw multi-model serving aspect rather than a fully-fledged deployment framework, you should maybe take a look at the individual inference servers: Triton Inference Server and MLServer both support multi-model serving for a wide variety of frameworks (and custom Python models). MLServer might be a better option as it has an MLflow runtime, but only you will be able to decide that. There also might be other inference servers that do MMS that I'm not aware of.
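To make the multi-model point concrete: with Triton, every loaded model gets its own endpoint under the same server, following the KServe v2 HTTP inference protocol. Here's a minimal sketch using only the JDK's HTTP client; the host, model names, tensor name, shape, and data are all placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TritonMultiModelDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // KServe v2 inference request body: one FP32 input tensor of shape [1, 4].
        String body = """
                {"inputs": [{"name": "input", "shape": [1, 4],
                             "datatype": "FP32", "data": [5.1, 3.5, 1.4, 0.2]}]}""";

        // Each model served by the same Triton instance has its own /infer endpoint.
        for (String model : new String[]{"model_a", "model_b"}) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8000/v2/models/" + model + "/infer"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(model + " -> " + response.body());
        }
    }
}
```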
- I mean,.. we COULD just make our own lol
[1] https://docs.nvidia.com/launchpad/ai/chatbot/latest/chatbot-triton-overview.html
[2] https://github.com/triton-inference-server/server
[3] https://neptune.ai/blog/deploying-ml-models-on-gpu-with-kyle-morris
[4] https://thechief.io/c/editorial/comparison-cloud-gpu-providers/
[5] https://geekflare.com/best-cloud-gpu-platforms/
- Why TensorFlow for Python is dying a slow death
"TensorFlow has the better deployment infrastructure"
TensorFlow Serving is nice in that it's so tightly integrated with TensorFlow. As usual, that goes both ways: it's so tightly coupled to TensorFlow that if the MLOps side of the solution is using TensorFlow Serving, you're going to get "trapped" in the TensorFlow ecosystem (essentially).
For PyTorch models (and just about anything else) I've been really enjoying Nvidia Triton Server[0]. Of course it further entrenches Nvidia and CUDA in the space (although you can execute models CPU-only), but for a deployment today and for the foreseeable future you're almost certainly going to be using a CUDA stack anyway.
Triton Server is very impressive, and I'm always surprised to see how relatively niche it is.
[0] - https://github.com/triton-inference-server/server
What are some alternatives?
Deep Java Library (DJL) - An Engine-Agnostic Deep Learning Framework in Java
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Deeplearning4j - Suite of tools for deploying and training deep learning models using the JVM. Highlights include model import for Keras, TensorFlow, and ONNX/PyTorch, a modular and tiny C++ library for running math code, and a Java-based math library on top of the core C++ library. Also includes SameDiff: a PyTorch/TensorFlow-like library for running deep learning using automatic differentiation.
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
oj! Algorithms - oj! Algorithms
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
spark-nlp - State of the Art Natural Language Processing
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
Triton - Triton is a dynamic binary analysis library. Build your own program analysis tools, automate your reverse engineering, perform software verification or just emulate code.
grobid - A machine learning software for extracting information from scholarly documents
Megatron-LM - Ongoing research training transformer models at scale