| | spark-nlp-workshop | magika |
|---|---|---|
| Mentions | 16 | 4 |
| Stars | 999 | 7,344 |
| Growth | 1.1% | 1.6% |
| Activity | 9.6 | 9.8 |
| Latest commit | 2 days ago | 6 days ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
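The recency weighting described above can be sketched as a simple exponential-decay score. This is a hypothetical formula for illustration only; the site does not publish its actual weighting, and the `half_life_days` parameter is an assumption.

```python
from math import exp

def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted activity: each commit contributes
    exponentially less the older it is (hypothetical formula --
    not the tracker's actual metric)."""
    return sum(exp(-age / half_life_days) for age in commit_ages_days)

# A project with many recent commits scores higher than one with
# the same number of commits made long ago.
recent = activity_score([1, 2, 3, 5])
stale = activity_score([200, 250, 300, 365])
print(recent > stale)  # True
```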
spark-nlp-workshop
- FLaNK Stack Weekly 19 Feb 2024
- Spark-NLP 4.1.0 Released: Vision Transformer (ViT) is here! The very first Computer Vision pipeline for the state-of-the-art Image Classification task, AWS Graviton/ARM64 support, new EMR & Databricks support, 1000+ state-of-the-art models, and more!
You can visit Spark NLP Workshop for 100+ examples
- Spark-NLP 4.0.0 🚀: new modern extractive question answering (QA) annotators for ALBERT, BERT, DistilBERT, DeBERTa, RoBERTa, Longformer, and XLM-RoBERTa; official support for Apple Silicon M1; oneDNN support that improves CPU performance by up to 97%; transformers on GPU improved by up to 700%; 1000+ SOTA models
I submitted a pull request here: https://github.com/JohnSnowLabs/spark-nlp-workshop/pull/552 that I think addresses both of those.
- How AI is used for mental health therapy
In John Snow Labs' implementation, for example, they wrote a search function called get_clinical_entities that finds all mentions of medications for 100 patients, along with any details about the quantity and frequency at which each medication is taken. The location of the sentence within the overall document is also recorded, making the information easier to locate.
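The shape of such a function can be sketched in plain Python. The real get_clinical_entities uses a Spark NLP clinical NER pipeline; this is only a hypothetical re-creation using a toy medication list and regex matching, to show the kind of output described (mention, dose/frequency details, sentence location).

```python
import re

# Hypothetical stand-in for a clinical NER pipeline: a toy
# medication list and a dose/frequency pattern. The non-capturing
# group keeps findall() returning whole matches.
MEDICATIONS = ["metformin", "lisinopril", "ibuprofen"]
DOSE_FREQ = re.compile(r"\b\d+\s?mg\b|\b(?:once|twice)\s+daily\b",
                       re.IGNORECASE)

def get_clinical_entities(notes):
    """Return medication mentions with any dose/frequency details
    and the index of the sentence they occur in."""
    hits = []
    for sent_idx, sentence in enumerate(notes):
        lower = sentence.lower()
        for med in MEDICATIONS:
            if med in lower:
                hits.append({
                    "medication": med,
                    "details": DOSE_FREQ.findall(sentence),
                    "sentence_index": sent_idx,  # location in the note
                })
    return hits

notes = ["Patient reports mild headaches.",
         "Started metformin 500 mg twice daily last month."]
print(get_clinical_entities(notes))
```

Recording the sentence index alongside each hit is what lets a reviewer jump straight to the relevant passage in a long clinical note.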
- John Snow Labs Spark-NLP 3.4.0: New OpenAI GPT-2, new ALBERT, XLNet, RoBERTa, XLM-RoBERTa, and Longformer for Sequence Classification, support for Spark 3.2, new distributed Word2Vec, extended support for more Databricks & EMR runtimes, new state-of-the-art transformer models, bug fixes, and lots more!
There are so many examples here for Python users (I would start from tutorials/Certificate_Trainings): https://github.com/JohnSnowLabs/spark-nlp-workshop
- John Snow Labs Spark-NLP 3.1.0: Over 2600+ new models and pipelines in 200+ languages, new DistilBERT, RoBERTa, and XLM-RoBERTa transformers, support for external Transformers, and lots more!
Spark NLP Workshop notebooks
- Release John Snow Labs Spark-NLP 2.7.0: New T5 and MarianMT seq2seq transformers, detect up to 375 languages, word segmentation, over 720+ models and pipelines, support for 192+ languages, and many more! · JohnSnowLabs/spark-nlp
Spark NLP training certification notebooks for Google Colab and Databricks
magika
- FLaNK Stack Weekly 19 Feb 2024
- Magika: AI powered fast and efficient file type identification
As someone who has worked in a space that has to deal with uploaded files for the last few years, and who maintains a WASM libmagic Node package ( https://github.com/moshen/wasmagic ), I have to say I really love seeing new entries in the file type detection space.
Though I have to say, looking at the Node module, I don't understand why they released it.
Their docs say it's slow:
https://github.com/google/magika/blob/120205323e260dad4e5877...
It loads the model at runtime:
https://github.com/google/magika/blob/120205323e260dad4e5877...
They mark it as Experimental in the documentation, but it seems like it was just made for the web demo.
Also, as others have mentioned, the model appears to only detect 116 file types:
https://github.com/google/magika/blob/120205323e260dad4e5877...
Where libmagic detects... a lot. Over 1600 last time I checked:
https://github.com/file/file/tree/4cbd5c8f0851201d203755b76c...
I guess I'm confused by this release. Sure, it detected most of my list of sample files, but in a sample set of 4 zip files, it misidentified one.
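The libmagic-style approach the commenter is comparing against can be sketched as a lookup of a file's leading "magic bytes" against a signature table. This is a toy table of four signatures for illustration, not libmagic's actual database of 1,600+ entries.

```python
# Minimal sketch of magic-bytes detection: match the file's leading
# bytes against known signatures, first match wins.
SIGNATURES = [
    (b"PK\x03\x04", "zip"),           # also docx/xlsx/jar, which share it
    (b"\x89PNG\r\n\x1a\n", "png"),
    (b"%PDF-", "pdf"),
    (b"\x1f\x8b", "gzip"),
]

def identify(data: bytes) -> str:
    for magic, name in SIGNATURES:
        if data.startswith(magic):
            return name
    return "unknown"

print(identify(b"PK\x03\x04rest-of-archive"))  # zip
print(identify(b"plain text"))                 # unknown
```

The zip comment hints at why a model-based tool can still add value: many formats share the same leading bytes, so signature lookup alone cannot tell a bare zip from a docx without inspecting the archive contents.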
- Show HN: Magika: AI powered fast and efficient file type identification
We are very excited to announce the release of Magika, our AI-powered, fast and efficient file type identification library and tool - https://opensource.googleblog.com/2024/02/magika-ai-powered-fast-and-efficient-file-type-identification.html
Thanks to its optimized Keras model, large-scale training dataset, and ONNX, Magika massively outperforms other file identification tools while being very fast even on CPU.
Magika's Python code and model are open-sourced on GitHub: https://github.com/google/magika and we also provide an experimental TFJS-based npm package: https://www.npmjs.com/package/magika
On behalf of the team, we hope you will find Magika useful for your own projects. Let us know what you think or if you have any questions!
What are some alternatives?
spark-nlp - State of the Art Natural Language Processing
file - Read-only mirror of file CVS repository, updated every half hour. NOTE: do not make pull requests here, nor comment any commits, submit them usual way to bug tracker or to the mailing list. Maintainer(s) are not tracking this git mirror.
spark-nlp-display - A library for the simple visualization of different types of Spark NLP annotations.
magic - Racket implementation of the Unix file command's magic language
proton - A streaming SQL engine, a fast and lightweight alternative to ksqlDB and Apache Flink, 🚀 powered by ClickHouse.
Space-Maker
TensorRT-LLM - TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
osv.dev - Open source vulnerability DB and triage service.
noseyparker - Nosey Parker is a command-line program that finds secrets and sensitive information in textual data and Git history.