spark-nlp
onnxruntime
| | spark-nlp | onnxruntime |
|---|---|---|
| Mentions | 87 | 51 |
| Stars | 3,651 | 12,386 |
| Growth | 1.6% | 5.4% |
| Activity | 9.4 | 10.0 |
| Last commit | 2 days ago | 3 days ago |
| Language | Scala | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
spark-nlp
-
PySpark for NLP Workshop - Materials and Jupyter Notebooks
I recently had the opportunity to run a workshop at ODSC East, focusing on using PySpark for Natural Language Processing (NLP). I had a great time explaining PySpark's fundamentals and exploring the Spark NLP library.
-
Transformers.js
I'd like to use this transformer model in Rust (because it's on the backend, where I can do the data munging, and it will be faster, among other reasons). It looks like a good model! But it doesn't compile on Apple Silicon due to weird linking issues whose cause isn't apparent - https://github.com/guillaume-be/rust-bert/issues/338. I've spent a large part of today and yesterday trying to find out why. The only other library I've found for doing this kind of thing programmatically (particularly sentiment analysis) is this (https://github.com/JohnSnowLabs/spark-nlp). Some of the models look a little older, which is OK, but it does mean that I'd have to do this in another language.
Does anyone know of any sentiment analysis software that can be tuned - something more along the lines of a transformer model like BERT than VADER - that is pretrained and can be used from Rust or Python? Otherwise I'll probably end up using spark-nlp and having to spin up another process.
Thanks.
-
Release John Snow Labs Spark-NLP 4.3.0: New HuBERT for speech recognition, new Swin Transformer for Image Classification, new Zero-shot annotator for Entity Recognition, CamemBERT for question answering, new Databricks and EMR with support for Spark 3.3, 1000+ state-of-the-art models and many more!
I saw Wav2Vec2 a couple of releases ago: https://github.com/JohnSnowLabs/spark-nlp/releases/tag/4.2.0
-
Data science in Scala
I am not aware of common open frameworks like TensorFlow, PyTorch or scikit-learn for Scala. But specifically for natural language processing, there's SparkNLP from John Snow Labs.
-
Spark-NLP 4.1.0 Released: Vision Transformer (ViT) is here! The very first Computer Vision pipeline for the state-of-the-art Image Classification task, AWS Graviton/ARM64 support, new EMR & Databricks support, 1000+ state-of-the-art models, and more!
NEW: Introducing ViTForImageClassification annotator in Spark NLP 🚀. ViTForImageClassification can load Vision Transformer ViT Models with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This annotator is compatible with all the models trained/fine-tuned by using ViTForImageClassification for PyTorch or TFViTForImageClassification for TensorFlow models in HuggingFace 🤗 (https://github.com/JohnSnowLabs/spark-nlp/pull/11536)
- Spark-NLP 4.0.0 🚀: New modern extractive Question Answering (QA) annotators for ALBERT, BERT, DistilBERT, DeBERTa, RoBERTa, Longformer, and XLM-RoBERTa, official support for Apple Silicon M1, oneDNN support improving CPU performance by up to 97%, transformers on GPU improved by up to 700%, 1000+ SOTA models
- How can you do efficient text preprocessing?
- John Snow Labs Spark-NLP 3.4.0: New OpenAI GPT-2, new ALBERT, XLNet, RoBERTa, XLM-RoBERTa, and Longformer for Sequence Classification, support for Spark 3.2, new distributed Word2Vec, extend support to more Databricks & EMR runtimes, new state-of-the-art transformer models, bug fixes, and lots more!
onnxruntime
- FLaNK Stack 05 Feb 2024
-
Vcc – The Vulkan Clang Compiler
- slang[2] has potential, but its metaprogramming is not as strong as C++'s, and existing libraries cannot be used.
The above conclusion is drawn from my work at https://github.com/microsoft/onnxruntime/tree/dev/opencl; it was a pure nightmare to work with those drivers and JIT compilers. Hopefully Vcc can take compute shaders more seriously.
-
Oracle-samples/sd4j: Stable Diffusion pipeline in Java using ONNX Runtime
I did. It depends what you want; for an overview of how ONNX Runtime works, Microsoft has a bunch of things on https://onnxruntime.ai, but the Java content is a bit lacking there as I've not had time to write much. Eventually I'll probably write something similar to the C# SD tutorial they have on there but for the Java API.
For writing ONNX models from Java we added an ONNX export system to Tribuo in 2022 which can be used by anything on the JVM to export ONNX models in an easier way than writing a protobuf directly. Tribuo doesn't have full coverage of the ONNX spec, but we're happy to accept PRs to expand it, otherwise it'll fill out as we need it.
- Mamba-Chat: A Chat LLM based on State Space Models
-
VectorDB: Vector Database Built by Kagi Search
What about models besides GPT? Most of the popular vector encoding models aren't using this architecture.
If you really didn't want PyTorch/Transformers, you could consider exporting your models to ONNX (https://github.com/microsoft/onnxruntime).
- Onnx Runtime: “Cross-Platform Accelerated Machine Learning”
-
PyTorch Primitives in WebGPU for the Browser
https://news.ycombinator.com/item?id=35696031 ... TIL about wonnx: https://github.com/webonnx/wonnx#in-the-browser-using-webgpu...
microsoft/onnxruntime: https://github.com/microsoft/onnxruntime
Apache/arrow has language-portable Tensors for cpp: https://arrow.apache.org/docs/cpp/api/tensor.html and rust: https://docs.rs/arrow/latest/arrow/tensor/struct.Tensor.html and Python: https://arrow.apache.org/docs/python/api/tables.html#tensors https://arrow.apache.org/docs/python/generated/pyarrow.Tenso...
Fwiw it looks like the llama.cpp Tensor is from ggml, for which there are CUDA and OpenCL implementations (but not yet ROCm, or a WebGPU shim for use with emscripten transpilation to WASM): https://github.com/ggerganov/llama.cpp/blob/master/ggml.h
Are there recommended ways to cast e.g. Arrow Tensors to PyTorch/TensorFlow?
FWIU, Rust has a better compilation to WASM; and that's probably faster than already-compiled-to-JS/ES TensorFlow + WebGPU.
What's a fair benchmark?
-
How to create YOLOv8-based object detection web service using Python, Julia, Node.js, JavaScript, Go and Rust
Before continuing, ensure that the ONNX Runtime is installed on your operating system, because the library integrated into the Rust package may not work correctly. To install it, download the archive for your operating system from here, extract it, and copy the contents of the "lib" subfolder to your operating system's system libraries path.
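On Linux, the install steps described above might look like this; the version number, archive name, and destination directory are assumptions, so pick the matching archive from the onnxruntime releases page for your platform.

```shell
# Hypothetical example for Linux x64 (adjust version and archive to your OS).
ORT_VERSION=1.17.0
curl -LO "https://github.com/microsoft/onnxruntime/releases/download/v${ORT_VERSION}/onnxruntime-linux-x64-${ORT_VERSION}.tgz"
tar -xzf "onnxruntime-linux-x64-${ORT_VERSION}.tgz"
sudo cp "onnxruntime-linux-x64-${ORT_VERSION}/lib/"* /usr/local/lib/
sudo ldconfig   # refresh the linker cache so the Rust crate can find the libs
```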
-
Ask HN: What tech is under the radar with all attention on ChatGPT etc.
I can't seem to figure if the PR for the WebGPU backend for onnxruntime is supposed to land in a 1.14 release, a 1.15 release, has already landed, isn't yet scheduled to land, etc? https://github.com/microsoft/onnxruntime/pull/14579
https://github.com/microsoft/onnxruntime/releases I don't see it in any releases yet?
https://github.com/microsoft/onnxruntime/milestone/4 I don't see it in the upcoming milestone.
I don't see any examples or docs that go with it
https://github.com/microsoft/onnxruntime/wiki/Upcoming-Relea... This seems to be out of date
https://github.com/microsoft/onnxruntime/tree/rel-1.15.0 I do see the js/webgpu work merged into here so I guess it'll be released in 1.15.0
https://onnxruntime.ai/docs/reference/releases-servicing.htm...
> Official releases of ONNX Runtime are managed by the core ONNX Runtime team. A new release is published approximately every quarter, and the upcoming roadmap can be found here.
ONNX Runtime v1.14.0 was released on Feb 10th.
-
You probably don't know how to do Prompt Engineering
Contribute to Microsoft's ONNX runtime, it's helping accelerate non-Nvidia hardware for all sorts of ML goodness: https://onnxruntime.ai/
What are some alternatives?
onnx - Open standard for machine learning interoperability
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
onnx-simplifier - Simplify your onnx model
ONNX-YOLOv7-Object-Detection - Python scripts performing object detection using the YOLOv7 model in ONNX.
onnx-tensorflow - Tensorflow Backend for ONNX
MLflow - Open source platform for the machine learning lifecycle
TensorRT - PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
FasterTransformer - Transformer related optimization, including BERT, GPT
tensorflow-directml - Fork of TensorFlow accelerated by DirectML
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
torch2trt - An easy to use PyTorch to TensorRT converter
tch-rs - Rust bindings for the C++ API of PyTorch.