| | spark-nlp | web-stable-diffusion |
|---|---|---|
| Mentions | 87 | 21 |
| Stars | 3,695 | 3,440 |
| Growth | 1.2% | 1.2% |
| Activity | 9.3 | 4.4 |
| Last commit | 12 days ago | about 2 months ago |
| Language | Scala | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
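The site's exact formulas aren't published, but the Growth figure above is described as month-over-month growth in stars, which can be sketched as a simple relative change (the star counts in the example are illustrative assumptions):

```python
def star_growth(stars_now: int, stars_month_ago: int) -> float:
    """Month-over-month star growth as a percentage.

    E.g. a project that went from 3,651 to 3,695 stars grew about 1.2%.
    """
    return (stars_now - stars_month_ago) / stars_month_ago * 100


print(round(star_growth(3695, 3651), 1))  # → 1.2
```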
spark-nlp
- Spark NLP 5.1.0: Introducing state-of-the-art OpenAI Whisper speech-to-text, OpenAI Embeddings and Completion transformers, MPNet text embeddings, ONNX support for E5 text embeddings, new multi-lingual BART Zero-Shot text classification, and much more!
- PySpark for NLP Workshop - Materials and Jupyter Notebooks
I recently had the opportunity to run a workshop at ODSC East, focusing on using PySpark for Natural Language Processing (NLP). I had a great time explaining PySpark's fundamentals and exploring the Spark NLP library.
- Spark-NLP 4.4.0: New BART for Text Translation & Summarization, new ConvNeXT Transformer for Image Classification, new Zero-Shot Text Classification by BERT, 4,000+ state-of-the-art models, and many more! · JohnSnowLabs/spark-nlp
- Transformers.js
I'd like to use this transformer model in Rust (it's on the backend, I can do the data munging there, it will be faster, and for other reasons). It looks like a good model! But it doesn't compile on Apple Silicon due to weird linking issues that aren't apparent - https://github.com/guillaume-be/rust-bert/issues/338. I've spent a large part of today and yesterday trying to find out why. The only other library I've found for doing this kind of thing programmatically (particularly sentiment analysis) is this (https://github.com/JohnSnowLabs/spark-nlp). Some of the models look a little older, which is OK, but it does mean I'd have to do this in another language.
Does anyone know of any sentiment analysis software that can be tuned (other than VADER - I'm looking for more along the lines of a transformer model, like BERT) that is pretrained and can be used in Rust or Python? Otherwise I'll probably end up using spark-nlp and having to spin up another process.
Thanks.
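For the Python side of that question, one common option (a suggestion, not something the post itself settles on) is a pretrained transformer via the Hugging Face `transformers` pipeline, which is roughly the tunable BERT-style setup being asked about:

```python
from transformers import pipeline

# Downloads a small pretrained DistilBERT fine-tuned on SST-2 on first use.
classifier = pipeline("sentiment-analysis")

for result in classifier(["I love this library!", "The linker errors are maddening."]):
    # Each result is a dict with a "label" (POSITIVE/NEGATIVE) and a "score".
    print(result["label"], round(result["score"], 3))
```

Like spark-nlp, this would still mean running a separate Python process from Rust, but the model can be swapped or fine-tuned without changing the calling code.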
- Release John Snow Labs Spark-NLP 4.3.0: New HuBERT for speech recognition, new Swin Transformer for Image Classification, new Zero-shot annotator for Entity Recognition, CamemBERT for question answering, new Databricks and EMR with support for Spark 3.3, 1000+ state-of-the-art models and many more!
web-stable-diffusion
- GPU-Accelerated LLM on a $100 Orange Pi
Yup, here's their web stable diffusion repo: https://github.com/mlc-ai/web-stable-diffusion
The input is a model (weights + runtime lib) compiled via the mlc-llm project: https://mlc.ai/mlc-llm/docs/compilation/compile_models.html
- StableDiffusion can now run directly in the browser on WebGPU
The MLC team got that working back in March: https://github.com/mlc-ai/web-stable-diffusion
Even more impressively, they followed up with support for several Large Language Models: https://webllm.mlc.ai/
- Web StableDiffusion
- [Stable Diffusion] Web Stable Diffusion: running Stable Diffusion directly in the browser without a GPU server
https://github.com/mlc-ai/web-stable-diffusion
- Now that they've started banning Stable Diffusion on Google Colab, what's the cheapest and best way to deploy Stable Diffusion?
You can run it directly in the browser with WebGPU, https://mlc.ai/web-stable-diffusion/
- I've got Stable Diffusion integrated into my site now, fully client side with no setup or servers.
Using the amazing work of https://mlc.ai/web-stable-diffusion/ I've got the code moved into a Web Worker and running fully local, client side. It does require 2 GB of model files to be downloaded (automatically), and it takes a few minutes for the first load, but it works, and once it's going it only takes 20s to make a 512x512 image.
- Chrome Ships WebGPU
The Apache TVM machine learning compiler has a WASM and WebGPU backend, and can import from most DNN frameworks. Here's a project running Stable Diffusion with webgpu and TVM [1].
Questions remain around the pre- and post-processing code in folks' Python stacks, e.g. with NumPy and OpenCV. There are some NumPy-to-JS transpilers out there, but they aren't feature complete or fully integrated.
[1] https://github.com/mlc-ai/web-stable-diffusion
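To make the pre/post-processing point concrete, here is a generic sketch (not code from any of the projects above) of the kind of NumPy step that would need porting: converting a diffusion decoder's float output into a displayable image, using the common [-1, 1] output convention.

```python
import numpy as np


def to_uint8_image(decoded: np.ndarray) -> np.ndarray:
    """Map a decoder output in [-1, 1] with shape (H, W, 3) to a uint8 RGB image.

    A widely used Stable Diffusion post-processing step: x/2 + 0.5, clip, scale.
    """
    img = decoded / 2 + 0.5
    img = np.clip(img, 0.0, 1.0)
    return (img * 255).round().astype(np.uint8)


out = to_uint8_image(np.full((512, 512, 3), -1.0))
print(out.shape, out.dtype, out.max())  # → (512, 512, 3) uint8 0
```

Trivial in NumPy, but each such step has to be re-expressed in JS/WGSL (or run through a transpiler) to keep everything in the browser.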
- Bringing stable diffusion models to web browsers
- mlc-ai/web-stable-diffusion: Bringing stable diffusion models to web browsers. Everything runs inside the browser with no server support.
- Web Stable Diffusion: Running Diffusion Models with WebGPU
What are some alternatives?
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
stable-diffusion-webui-directml - Stable Diffusion web UI
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
rust-bert - Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
nlu - 1 line for thousands of state-of-the-art NLP models in hundreds of languages. The fastest and most accurate way to solve text problems.
SHA256-WebGPU - Implementation of sha256 in WGSL
pytorch-sentiment-analysis - Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
wgpu-py - Next generation GPU API for Python
clj-djl - Clojure wrapper for Deep Java Library (DJL.ai)
Tribuo - A Java machine learning library
js-promise-integration - JavaScript Promise Integration