RasgoQL vs 100-pandas-puzzles
| | RasgoQL | 100-pandas-puzzles |
|---|---|---|
| Mentions | 11 | 6 |
| Stars | 267 | 2,194 |
| Growth | 0.4% | - |
| Activity | 0.0 | 0.0 |
| Latest commit | almost 2 years ago | 1 day ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RasgoQL
- Dbt Vs python scripts
I built an open-source package to bridge the gap between Python and dbt; I'd love your feedback if you have a chance to check it out: https://github.com/rasgointelligence/RasgoQL
- How to balance multiple time series data?
I’ve actually solved a similar problem several times in a variety of settings. I’ve had success with boosted trees and feature engineering on the sensor readings over time. I treat each reading as an observation and set the target to be the value I want to forecast (e.g. one hour ahead, the sum over the next day, the value at the same time the next day). There was a recent paper that compared boosted trees to deep learning techniques and found the boosted trees performed really well.

Next, I perform feature engineering to aggregate the data up to the current time. These features include the current value, lagged values over multiple observations for that sensor, more complicated features from moving statistics over different time scales, etc. I wrote a blog post about creating these features using the open-source package RasgoQL and have similar types of features shared in the open-source repository here. I have also had success creating these sorts of historical features using the tsfresh package.

Finally, when evaluating the forecast, use a time-based split so earlier data is used to train the model and later data to evaluate it.
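The recipe above (target shifted ahead, lagged values, moving statistics, time-based split) can be sketched in plain pandas. This is a minimal illustration on made-up hourly sensor data, not the code from the blog post; all column names here are assumptions for the example.

```python
import numpy as np
import pandas as pd

# Hypothetical sensor readings: one value per hour.
rng = pd.date_range("2024-01-01", periods=200, freq="h")
df = pd.DataFrame({
    "timestamp": rng,
    "value": np.sin(np.arange(200) / 12)
             + np.random.default_rng(0).normal(0, 0.1, 200),
})

# Target: the value one hour ahead of each observation.
df["target"] = df["value"].shift(-1)

# Lagged values over multiple past observations for this sensor.
for lag in (1, 2, 3, 24):
    df[f"lag_{lag}"] = df["value"].shift(lag)

# Moving statistics over different time scales.
for window in (3, 12, 24):
    df[f"roll_mean_{window}"] = df["value"].rolling(window).mean()
    df[f"roll_std_{window}"] = df["value"].rolling(window).std()

# Drop rows whose lags/windows reach before the start of the series.
df = df.dropna().reset_index(drop=True)

# Time-based split: earlier data trains the model, later data evaluates it.
cutoff = int(len(df) * 0.8)
train, test = df.iloc[:cutoff], df.iloc[cutoff:]
```

The same lagged and rolling features could then be fed to a boosted-tree model; the key point is that the split is by time, never random, so no future information leaks into training.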
- RasgoQL - Open source data transformations in Python, without having to write SQL.
I created RasgoQL to give anyone a pandas-like syntax that you can use to quickly generate hundreds of lines of SQL that will run directly in your Snowflake or BigQuery data warehouse (with more data warehouse support coming soon). The best part? In one line of code, you can export this SQL to your dbt project so that it can run in production alongside other data pipelines.
- RasgoQL - Transform tables directly with Python, without writing SQL
- RasgoQL - Open data transformations in Python, no SQL required
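The core idea described above, chainable Python calls that accumulate into one SQL statement executed in the warehouse, can be illustrated with a toy builder. This is emphatically NOT RasgoQL's actual API (see the repo for the real interface); every class and method name below is invented for the sketch.

```python
# Toy sketch of a transform chain that renders to SQL instead of
# pulling data into memory. Each method returns a new object and
# nothing executes until .sql() is called.
class Table:
    def __init__(self, name, select="*", where=None):
        self.name, self.select, self.where = name, select, where

    def filter(self, condition):
        # Record a WHERE clause; no query runs yet.
        return Table(self.name, self.select, condition)

    def aggregate(self, col, func):
        # Record an aggregate projection; still no query runs.
        return Table(self.name, f"{func}({col}) AS {func}_{col}", self.where)

    def sql(self):
        # Render the accumulated chain as a single SQL statement.
        query = f"SELECT {self.select} FROM {self.name}"
        if self.where:
            query += f" WHERE {self.where}"
        return query

q = Table("sales").filter("region = 'EU'").aggregate("amount", "SUM").sql()
# q == "SELECT SUM(amount) AS SUM_amount FROM sales WHERE region = 'EU'"
```

The generated string is what would be handed to the warehouse (or exported to a dbt model); the Python side never materializes the data.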
- [P] Open data transformations in Python, no SQL required
You can check it out here: https://github.com/rasgointelligence/RasgoQL
- [Project] Open data transformations in Python, no SQL required
- Open data transformations in Python, no SQL required
100-pandas-puzzles
- What are the best Python libraries to learn for beginners?
#1: Welcome to df[pandas]!
#2: 100 data puzzles for pandas, ranging from short and simple to super tricky (3 comments)
#3: Happy Halloween, Pandas! 🎃🤓 (0 comments)
- 100 data puzzles for pandas, ranging from short and simple to super tricky
- pandas practice resources?
I remember someone sharing this with me earlier: https://github.com/ajcr/100-pandas-puzzles Let me know if you think it's comprehensive and a good resource.
- How important is learning the data manipulation libraries?
If you want to get better with pandas specifically, you could work through the 100 pandas puzzles repo in your spare time: https://github.com/ajcr/100-pandas-puzzles
- Can anyone recommend resources to prepare for Pandas and Numpy interview questions?
- Is there anything AoC-like for Machine Learning or Data Science?
What are some alternatives?
pygwalker - PyGWalker: Turn your pandas dataframe into an interactive UI for visual analysis
numpy-100 - 100 numpy exercises (with solutions)
fugue - A unified interface for distributed computing. Fugue executes SQL, Python, Pandas, and Polars code on Spark, Dask and Ray without any rewrites.
tempo - API for manipulating time series on top of Apache Spark: lagged time values, rolling statistics (mean, avg, sum, count, etc), AS OF joins, downsampling, and interpolation
Data-Science-For-Beginners - 10 Weeks, 20 Lessons, Data Science for All!
pandas_exercises - Practice your pandas skills!
idx2numpy_array - Convert data in IDX format in MNIST Dataset to Numpy Array using Python
dbt-core - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
tempo - Grafana Tempo is a high volume, minimal dependency distributed tracing backend.
ickle - DataFrame, analysis & manipulation library for tiny labeled datasets
pyjanitor - Clean APIs for data cleaning. Python implementation of R package Janitor