RasgoQL vs ploomber
| | RasgoQL | ploomber |
|---|---|---|
| Mentions | 11 | 121 |
| Stars | 267 | 3,374 |
| Growth | 0.4% | 1.0% |
| Activity | 0.0 | 7.4 |
| Latest commit | almost 2 years ago | 20 days ago |
| Language | Jupyter Notebook | Python |
| License | GNU Affero General Public License v3.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RasgoQL
- dbt vs Python scripts
I built an open source package to bridge the gap between python and dbt, would love your feedback if you have a chance to check it out: https://github.com/rasgointelligence/RasgoQL
- How to balance multiple time series data?
I've actually solved a similar problem several times in a variety of settings, and I've had success with boosted trees plus feature engineering on the sensor readings over time. I treat each reading as an observation and set the target to the value I want to forecast (e.g. one hour ahead, the sum over the next day, or the value at the same time the next day). A recent paper that compared boosted trees to deep learning techniques found that the boosted trees performed really well.

Next, I engineer features that aggregate the data up to the current time: the current value, lagged values over multiple observations for that sensor, more complicated features from moving statistics over different time scales, and so on. I wrote a blog post about creating these features using the open-source package RasgoQL, and similar types of features are shared in the open-source repository here. I've also had success creating these sorts of historical features with the tsfresh package.

Finally, when evaluating the forecast, use a time-based split so that earlier data trains the model and later data evaluates it.
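As a rough illustration of those features, here is a minimal pandas sketch of per-sensor lagged values, shifted moving statistics, and a time-based split; the sensor data, column names, and window sizes are all made up for illustration:

```python
import pandas as pd

# Hypothetical sensor readings: one row per (sensor, timestamp) observation.
df = pd.DataFrame({
    "sensor_id": ["a"] * 6 + ["b"] * 6,
    "timestamp": list(pd.date_range("2024-01-01", periods=6, freq="h")) * 2,
    "value": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 10.0, 9.0, 8.0, 7.0, 6.0, 5.0],
}).sort_values(["sensor_id", "timestamp"])

g = df.groupby("sensor_id")["value"]

# Lagged values of the same sensor.
df["lag_1"] = g.shift(1)
df["lag_3"] = g.shift(3)

# Moving statistics, shifted one step so they only use past observations.
df["rolling_mean_3"] = g.transform(lambda s: s.shift(1).rolling(3).mean())
df["rolling_std_3"] = g.transform(lambda s: s.shift(1).rolling(3).std())

# Target: the value one step (here, one hour) ahead.
df["target"] = g.shift(-1)

# Time-based split: earlier data trains the model, later data evaluates it.
cutoff = df["timestamp"].sort_values().iloc[int(len(df) * 0.8)]
train = df[df["timestamp"] <= cutoff].dropna()
test = df[df["timestamp"] > cutoff].dropna()
```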
- RasgoQL - Open source data transformations in Python, without having to write SQL.
I created RasgoQL to give anyone a pandas-like syntax that you can use to quickly generate hundreds of lines of SQL that will run directly in your Snowflake or BigQuery data warehouse (with more data warehouse support coming soon). The best part? In one line of code, you can export this SQL to your dbt project so that it can run in production alongside other data pipelines.
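As a rough sketch of that workflow (method names such as connect, dataset, aggregate, sql, and to_dbt follow the project's README, but treat the exact signatures, credentials, and table name here as assumptions):

```python
import rasgoql

# Assumes Snowflake credentials are available as environment variables.
creds = rasgoql.SnowflakeCredentials.from_env()
rql = rasgoql.connect(creds)

# Hypothetical warehouse table; transforms chain like pandas methods.
ds = rql.dataset('MY_DB.MY_SCHEMA.SALES')
chain = ds.aggregate(
    group_items=['PRODUCT_ID'],
    aggregations={'AMOUNT': ['SUM', 'AVG']},
)

print(chain.sql())  # inspect the SQL that will run in the warehouse

# Export the generated SQL as a model in an existing dbt project.
chain.to_dbt(project_directory='path/to/dbt_project')
```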
- RasgoQL - Transform tables directly with Python, without writing SQL
- RasgoQL - Open data transformations in Python, no SQL required
- [P] Open data transformations in Python, no SQL required
You can check it out here: https://github.com/rasgointelligence/RasgoQL
- [Project] Open data transformations in Python, no SQL required
- Open data transformations in Python, no SQL required
ploomber
- Show HN: JupySQL – a SQL client for Jupyter (ipython-sql successor)
- One-click sharing powered by Ploomber Cloud: https://ploomber.io
- Documentation: https://jupysql.ploomber.io
Note that JupySQL is a fork of ipython-sql, which is no longer actively developed. Catherine, ipython-sql's creator, was kind enough to pass the project on to us (check out ipython-sql's README).
We'd love to learn what you think and what features we can ship to make JupySQL the best SQL client! Please let us know in the comments!
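For readers who haven't used it, here is roughly what the workflow looks like in a notebook, using JupySQL's %sql/%%sql magics; the in-memory DuckDB connection is just an example (assumes `pip install jupysql duckdb-engine`):

```python
# Cell 1: load the extension and open an in-memory DuckDB connection.
%load_ext sql
%sql duckdb://

# Cell 2: a %%sql cell runs its body as SQL against the active connection.
%%sql
SELECT 42 AS answer
```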
- Runme – Interactive Runbooks Built with Markdown
For those who don't know, Jupyter has a bash kernel: https://github.com/takluyver/bash_kernel
And you can run Jupyter notebooks from the CLI with Ploomber: https://github.com/ploomber/ploomber
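As a minimal sketch of the latter: Ploomber pipelines are declared in a pipeline.yaml spec, and `ploomber build` executes them from the terminal. The notebook name and output path below are hypothetical:

```yaml
# pipeline.yaml - a minimal Ploomber spec with a single notebook task.
tasks:
  - source: clean.ipynb        # hypothetical notebook to execute
    product:
      nb: output/clean-executed.ipynb  # where the executed copy is written
```

With this file in place, running `ploomber build` executes the notebook and writes the result to the product path.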
- Rant: Jupyter notebooks are trash.
Develop notebook-based pipelines
- Who needs MLflow when you have SQLite?
Fair point. MLflow has a lot of features to cover the end-to-end dev cycle. This SQLite tracker only covers the experiment tracking part.
We have another project that covers the orchestration/pipelines aspect: https://github.com/ploomber/ploomber, and we plan to work on the rest of the features. For now, we're focusing on those two.
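For anyone unfamiliar with the idea being discussed, here is a generic sketch of SQLite-backed experiment tracking using only Python's standard library; it illustrates the concept, not the project's actual tracker API:

```python
import json
import sqlite3

# One SQLite file holds every run; params and metrics are stored as JSON.
conn = sqlite3.connect("experiments.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS experiments ("
    "id INTEGER PRIMARY KEY, params TEXT, metrics TEXT)"
)

def log_experiment(params: dict, metrics: dict) -> None:
    """Store one run's hyperparameters and evaluation metrics."""
    conn.execute(
        "INSERT INTO experiments (params, metrics) VALUES (?, ?)",
        (json.dumps(params), json.dumps(metrics)),
    )
    conn.commit()

log_experiment({"model": "rf", "n_estimators": 100}, {"accuracy": 0.92})
```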
- New to large SW projects in Python, best practices to organize code
I recommend taking a look at the Ploomber open-source project. It helps you structure your code and parameterize it in a way that's easier to maintain and test. Our blog has lots of resources on this, from testing your code to building a data science platform on AWS.
- A three-part series on deploying a Data Science Platform on AWS
Developing end-to-end data science infrastructure can get complex. For example, many of us have struggled to integrate AWS services and deal with configuration, permissions, etc. At Ploomber, we've worked with companies in a wide range of industries, such as energy, entertainment, computational chemistry, and genomics, so we are constantly looking for simple ways to get them started with data science in the cloud.
- Ploomber Cloud - Parametrizing and running notebooks in the cloud in parallel
- Is Colab still the place to go?
If you like working locally with notebooks, you can run them via Ploomber's free tier, which gives you the RAM/compute you need for the bigger models. It also keeps a history of executions, so you don't need to remember what you ran an hour later!
- Alternatives to nextflow?
It really depends on your use case. I've seen a lot of these tools lock you into a particular syntax, framework, or unusual language (for instance, Groovy). If you'd like to use plain Python or Jupyter notebooks, I'd recommend Ploomber: the community support is really strong, there's an emphasis on observability, and you can deploy it on any executor, such as Slurm, AWS Batch, or Airflow. In addition, there's a free managed compute tier (cloud edition) where you can run certain bioinformatics workflows like AlphaFold or CRISPResso2.
- Saving log files
That's what we do for lineage with https://ploomber.io/
What are some alternatives?
pygwalker - PyGWalker: Turn your pandas dataframe into an interactive UI for visual analysis
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
fugue - A unified interface for distributed computing. Fugue executes SQL, Python, Pandas, and Polars code on Spark, Dask and Ray without any rewrites.
papermill - 📚 Parameterize, execute, and analyze notebooks
Data-Science-For-Beginners - 10 Weeks, 20 Lessons, Data Science for All!
dagster - An orchestration platform for the development, production, and observation of data assets.
tempo - API for manipulating time series on top of Apache Spark: lagged time values, rolling statistics (mean, avg, sum, count, etc), AS OF joins, downsampling, and interpolation
dvc - 🦉 ML Experiments and Data Management with Git
dbt-core - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
argo - Workflow Engine for Kubernetes
ickle - DataFrame, analysis & manipulation library for tiny labeled datasets
MLflow - Open source platform for the machine learning lifecycle