snowpark-python vs jupysql

 | snowpark-python | jupysql
---|---|---
Mentions | 1 | 8
Stars | 231 | 611
Growth | 3.0% | 5.4%
Activity | 9.6 | 9.1
Latest commit | 3 days ago | 4 days ago
Language | Python | Python
License | Apache License 2.0 | Apache License 2.0
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
- Show HN: JupySQL – a SQL client for Jupyter (ipython-SQL successor)
Hey, HN community!
We're stoked to launch JupySQL today! JupySQL is an open-source library that brings a modern SQL experience to Jupyter. JupySQL is compatible with all major databases, such as Snowflake, Redshift, PostgreSQL, MySQL, MariaDB, DuckDB, SQL Server, ClickHouse, Trino, and more!
To get started, check out our tutorial: https://jupysql.ploomber.io/en/latest/quick-start.html
SQL is the de facto language for data analysis; however, analysis often requires a mix of SQL and Python. JupySQL bridges this gap, allowing users to execute SQL queries seamlessly in Jupyter and continue their analysis in Python. Add %%sql to the top of your cell and start writing SQL.
Here are some of JupySQL's main features:
- Syntax highlighting
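The %%sql workflow described in the post can be sketched as notebook cells. This is a minimal example, assuming jupysql and a DuckDB driver (e.g. duckdb-engine) are installed; the in-memory DuckDB connection string is just one of the supported options:

```
%load_ext sql
%sql duckdb://

%%sql
SELECT 42 AS answer
```

The first cell loads the extension and opens a connection; after that, any cell starting with %%sql is executed as SQL against that connection.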
- JupySQL: Connecting to a SQL database from Jupyter
Please show your support with a 🌟: https://github.com/ploomber/jupysql
- SQL CTEs in Jupyter notebooks, DuckDB integration and more
- TL;DR: JupySQL brings SQL functionality into Jupyter and makes access to modern data-processing databases (like DuckDB), Polars, and data exploration through plotting easier.
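The CTE composition mentioned above can be sketched as a pair of notebook cells. This assumes JupySQL's --save/--with flags for snippet composition, and the table and snippet names here are hypothetical:

```
%%sql --save high_value
SELECT * FROM orders WHERE amount > 100

%%sql --with high_value
SELECT region, COUNT(*) AS n FROM high_value GROUP BY region
```

The first cell saves a named snippet; the second references it, and JupySQL composes the two into a single query with a CTE before sending it to the database.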
- Evidence – Business Intelligence as Code
If anyone is looking for something like this in Python/Jupyter, check out JupySQL: https://github.com/ploomber/jupysql
- A full-featured SQL client for Jupyter
- Pandas v2.0 Released
How are people managing the existence of data frame APIs like pandas/polars with SQL engines like BigQuery, Snowflake, and DuckDB?
Most of my notebooks are a mix of SQL and Python: SQL for most processing, dump the results as a pandas dataframe (via https://github.com/ploomber/jupysql), and then use Python for operations that are difficult to express in SQL (or that I don't know how to do in SQL), so I end up with about 80% SQL, 20% Python.
Unsure if this is the best workflow but it's the most efficient one I've come up with.
Disclaimer: my team develops JupySQL.
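The SQL-first workflow described in that comment can be sketched in plain Python. This is a minimal stand-in using the stdlib sqlite3 module instead of a real warehouse connection; the table and column names are hypothetical:

```python
import sqlite3

# Set up a throwaway in-memory database standing in for the warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("east", 120.0), ("west", 80.0), ("east", 50.0)],
)

# Step 1: do the heavy lifting in SQL (aggregation, filtering, joins).
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM orders GROUP BY region ORDER BY region"
).fetchall()

# Step 2: finish in Python for logic that is awkward to express in SQL.
totals = {region: total for region, total in rows}
print(totals)  # {'east': 170.0, 'west': 80.0}
```

With JupySQL the first step would instead be a %%sql cell whose result is converted to a dataframe, but the division of labor is the same.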
What are some alternatives?
hamilton - Hamilton helps data scientists and engineers define testable, modular, self-documenting dataflows that encode lineage and metadata. Runs and scales everywhere Python does.
grai-core
Apache Superset - Apache Superset is a Data Visualization and Data Exploration Platform [Moved to: https://github.com/apache/superset]
tpch
Skytrax-Data-Warehouse - A full data warehouse infrastructure with ETL pipelines running inside Docker, with Apache Airflow for data orchestration, AWS Redshift as the cloud data warehouse, and Metabase to serve data-visualization needs such as analytical dashboards.
chdb-server-bak - API Server for chDB, an in-process SQL OLAP Engine powered by ClickHouse
data-diff - Compare tables within or across databases
nba-monte-carlo - Monte Carlo simulation of the NBA season, leveraging dbt, duckdb and evidence.dev
versatile-data-kit - One framework to develop, deploy and operate data workflows with Python and SQL.
datapane - Build and share data reports in 100% Python
pytest-mock-resources - Pytest fixtures that let you actually test code that depends on external resources (Postgres, Mongo, Redshift...).
prism - Prism is the easiest way to develop, orchestrate, and execute data pipelines in Python.