spark-nlp-workshop vs prql

| | spark-nlp-workshop | prql |
|---|---|---|
| Mentions | 16 | 106 |
| Stars | 999 | 9,436 |
| Growth | 1.1% | 0.8% |
| Activity | 9.6 | 9.9 |
| Latest commit | 2 days ago | 1 day ago |
| Language | Jupyter Notebook | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
spark-nlp-workshop
- FLaNK Stack Weekly 19 Feb 2024
-
Spark-NLP 4.1.0 Released: Vision Transformer (ViT) is here! The very first Computer Vision pipeline for the state-of-the-art Image Classification task, AWS Graviton/ARM64 support, new EMR & Databricks support, 1000+ state-of-the-art models, and more!
You can visit Spark NLP Workshop for 100+ examples
-
Spark-NLP 4.0.0 🚀: New modern extractive Question answering (QA) annotators for ALBERT, BERT, DistilBERT, DeBERTa, RoBERTa, Longformer, and XLM-RoBERTa, official support for Apple silicon M1, support oneDNN to improve CPU up to 97%, improved transformers on GPU up to +700%, 1000+ SOTA models
I submitted a pull request here: https://github.com/JohnSnowLabs/spark-nlp-workshop/pull/552 that I think addresses both of those.
-
How AI is used for mental health therapy
In John Snow Labs’ implementation, for example, they wrote a search function called get_clinical_entities that finds all mentions of medications for 100 patients, along with any details about the quantity and frequency with which each medication is taken. The location of the sentence in the overall document is also recorded, making the information easier to find.
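To illustrate the shape of that output, here is a minimal, hypothetical sketch in plain Python. The real get_clinical_entities uses Spark NLP clinical NER models, not regexes; the drug list, patterns, and result fields below are invented for illustration only.

```python
import re

# Hypothetical illustration only -- the real get_clinical_entities relies on
# Spark NLP clinical NER models, not a hard-coded drug list and regexes.
DRUGS = ("metformin", "lisinopril", "ibuprofen")

def get_clinical_entities(text):
    """Find drug mentions plus any trailing dose/frequency phrase,
    recording which sentence each mention appears in."""
    results = []
    for sent_idx, sentence in enumerate(re.split(r"(?<=[.!?])\s+", text)):
        for word in DRUGS:
            pattern = rf"\b{word}\b(\s+\d+\s*mg)?(\s+\w+\s+daily)?"
            for match in re.finditer(pattern, sentence, re.IGNORECASE):
                results.append({
                    "drug": word,
                    "detail": match.group(0),   # mention + dose/frequency, if any
                    "sentence": sent_idx,       # where in the note it appeared
                })
    return results

note = "Patient started metformin 500 mg twice daily. Ibuprofen as needed."
for hit in get_clinical_entities(note):
    print(hit["drug"], "->", hit["detail"], "(sentence", str(hit["sentence"]) + ")")
```

The point is the structure of each result (mention, quantity/frequency detail, sentence location), not the matching logic itself.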
-
John Snow Labs Spark-NLP 3.4.0: New OpenAI GPT-2, new ALBERT, XLNet, RoBERTa, XLM-RoBERTa, and Longformer for Sequence Classification, support for Spark 3.2, new distributed Word2Vec, extend support to more Databricks & EMR runtimes, new state-of-the-art transformer models, bug fixes, and lots more!
There are so many examples here for Python users (I would start from tutorials/Certificate_Trainings): https://github.com/JohnSnowLabs/spark-nlp-workshop
-
John Snow Labs Spark-NLP 3.1.0: Over 2600+ new models and pipelines in 200+ languages, new DistilBERT, RoBERTa, and XLM-RoBERTa transformers, support for external Transformers, and lots more!
Spark NLP Workshop notebooks
-
Release John Snow Labs Spark-NLP 2.7.0: New T5 and MarianMT seq2seq transformers, detect up to 375 languages, word segmentation, over 720+ models and pipelines, support for 192+ languages, and many more! · JohnSnowLabs/spark-nlp
Spark NLP training certification notebooks for Google Colab and Databricks
prql
- Prolog language for PostgreSQL proof of concept
-
SQL is syntactic sugar for relational algebra
> I completely attribute this to SQL being difficult or "backwards" to parse. I mean backwards in the sense that in SQL you start with what you want (the SELECT) rather than starting with what you have and whittling it down.
> The turning point for me was to just accept SQL for what it is.
Or just write PRQL and compile it to SQL
https://github.com/PRQL/prql
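To make the ordering complaint concrete, here is a toy, hypothetical sketch (plain Python, not the PRQL compiler) of how a pipeline that starts from the table and narrows it down step by step maps onto a SQL statement whose SELECT clause must come first even though it is logically evaluated last:

```python
# Toy illustration only -- not the PRQL compiler. A pipeline is a list of
# (verb, argument) steps read top to bottom, starting from the table.
def compile_pipeline(steps):
    table = None
    filters = None
    select = None
    limit = None
    for verb, arg in steps:
        if verb == "from":
            table = arg
        elif verb == "filter":
            filters = arg
        elif verb == "select":
            select = ", ".join(arg)
        elif verb == "take":
            limit = arg
    # SQL forces SELECT to the front, reversing the pipeline's natural order.
    sql = f"SELECT {select or '*'} FROM {table}"
    if filters:
        sql += f" WHERE {filters}"
    if limit is not None:
        sql += f" LIMIT {limit}"
    return sql

print(compile_pipeline([
    ("from", "employees"),
    ("filter", "country = 'USA'"),
    ("select", ["name", "salary"]),
    ("take", 10),
]))
# SELECT name, salary FROM employees WHERE country = 'USA' LIMIT 10
```

The pipeline reads in execution order (what you have, then how you whittle it down); the generated SQL does not.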
- Transpile Any SQL to PostgreSQL Dialect
-
Show HN: Open-source, browser-local data exploration using DuckDB-WASM and PRQL
Hey HN! We’ve built Pretzel, an open-source data exploration and visualization tool that runs fully in the browser and can handle large files (a 200 MB CSV is snappy on my 8 GB MacBook Air). It’s also reactive: if, for example, you change a filter, all the data transform blocks after it re-evaluate automatically. You can try it here: https://pretzelai.github.io/ (a statically hosted webpage) or see a demo video here: https://www.youtube.com/watch?v=73wNEun_L7w
You can play with the demo CSV that’s pre-loaded (GitHub data of text-editor adjacent projects) or upload your own CSV/XLSX file. The tool runs fully in-browser—you can disconnect from the internet once the website loads—so feel free to use sensitive data if you like.
Here’s how it works: you upload a CSV file and then explore your data as a series of successive data transforms and plots. For example, you might: (1) remove some columns; (2) apply some filters (remove nulls, remove outliers, restrict the time range, etc.); (3) do a pivot (i.e., a group-by but fancier); (4) plot a chart; (5) download the chart and the transformed data. See screenshot: https://imgur.com/a/qO4yURI
In the UI, each transform step appears as a “Block”. You can always see the result of the full transform in a table on the right. The transform blocks are editable: for instance, in the example above, you can go to step 2, change some filters, and the reactivity will take care of re-computing all the cells that follow, including the charts.
We wanted Pretzel to run locally in the browser and be extremely performant on large files. So we parse CSVs with the fastest CSV parser (uDSV: https://github.com/leeoniya/uDSV) and use DuckDB-Wasm (https://github.com/duckdb/duckdb-wasm) to do all the heavy lifting of processing the data. We also wanted to allow chained data transformations where each new block operates on the result of the previous block. For this we use PRQL (https://prql-lang.org/), since it maps 1:1 onto chained data transform blocks: each block maps to a chunk of PRQL, and the chunks combined describe the full data transform chain. (PRQL doesn’t support DuckDB’s PIVOT statement, though, so we had to make some CTE-based hacks.)
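The block-to-PRQL mapping can be sketched like this (a simplified, hypothetical model, not Pretzel's actual code): because PRQL is a pipeline language, concatenating the blocks' fragments in order yields the query for the whole chain, and the table shown after any block is just the prefix of fragments up to that block.

```python
# Simplified, hypothetical model of chained transform blocks -- not Pretzel's
# actual implementation. Each UI block owns one PRQL fragment; joining the
# fragments in order gives the query for the full chain.
blocks = [
    "from uploaded_csv",
    "filter stars > 100",   # block 2: a filter the user can edit
    "group language (aggregate {n = count this})",
    "sort {-n}",
]

def full_query(blocks):
    return "\n".join(blocks)

def query_up_to(blocks, i):
    """PRQL for the table shown after block i (0-based)."""
    return "\n".join(blocks[: i + 1])

# Editing block 2 invalidates exactly the queries that include it -- every
# block from index 1 onward re-evaluates. That is the "reactivity".
blocks[1] = "filter stars > 500"
print(query_up_to(blocks, 1))
```

In this model, reactivity falls out of the representation: an edit at block i only changes queries whose prefix includes fragment i.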
There’s also an AI block: this is the only (optional) feature that requires an internet connection, but we’re working on adding local model support via Ollama. For now, you can use your own OpenAI API key or use an AI server we provide (a GPT-4 proxy, loaded with a few credits), specify a transform in plain English, and get back the SQL for the transform, which you can edit.
Our roadmap includes allowing API calls to create new columns, support for a SQL block with nice autocomplete features, and a Python block (using Pyodide to run Python in the browser) that operates on the results of the data transforms, much like a Jupyter notebook.
There are two of us and we’ve only spent about a week coding this and fixing major bugs, so there are still some bugs to iron out. We’d love for you to try this and get your feedback!
-
Pql, a pipelined query language that compiles to SQL (written in Go)
> Looks like PRQL doesn't have a Go library so I guess they just really wanted something in Go?
There are C bindings, and the example in the README shows integration with Go:
https://github.com/PRQL/prql/tree/main/prqlc/bindings/prqlc-...
- FLaNK Stack 26 February 2024
- FLaNK Stack Weekly 19 Feb 2024
-
PRQL as a DuckDB Extension
Can someone tell me why PRQL is better? I went here: https://github.com/PRQL/prql
It looks nice, but what are its strengths compared to SQL?
-
Shouldn't FROM come before SELECT in SQL?
PRQL [1] is a compile-to-SQL relational querying language that puts FROM first.
[1] https://prql-lang.org
-
Vanna.ai: Chat with your SQL database
https://prql-lang.org/ might be an answer for this. As a cross-database pipelined language, it would allow RAG to be intermixed with the query, and the syntax may(?) be more reliable to generate
What are some alternatives?
spark-nlp - State of the Art Natural Language Processing
malloy - Malloy is an experimental language for describing data relationships and transformations.
spark-nlp-display - A library for the simple visualization of different types of Spark NLP annotations.
Preql - An interpreted relational query language that compiles to SQL.
proton - A streaming SQL engine, a fast and lightweight alternative to ksqlDB and Apache Flink, 🚀 powered by ClickHouse.
bustub - The BusTub Relational Database Management System (Educational)
TensorRT-LLM - TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
tresql - Shorthand SQL/JDBC wrapper language, providing nested results as JSON and more
magika - Detect file content types with deep learning
spyql - Query data on the command line with SQL-like SELECTs powered by Python expressions
toydb - Distributed SQL database in Rust, written as a learning project
rfcs - RFCs for major changes to EdgeDB