quinn vs dagster
| | quinn | dagster |
|---|---|---|
| Mentions | 6 | 39 |
| Stars | 392 | 6,364 |
| Stars growth (month over month) | - | 7.2% |
| Activity (0-10) | 0.0 | 10.0 |
| Latest commit | 4 days ago | 3 days ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
quinn
-
Invitation to collaborate on open source PySpark projects
quinn is a library with PySpark helper functions. I need to work through all the open issues / PRs and bump all versions. I should do another release. This library gets around 600,000 monthly downloads.
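As a rough illustration of the kind of helper quinn provides, here is a minimal sketch (assumes a local SparkSession; `validate_presence_of_columns` is one of quinn's DataFrame validation helpers, and the sample data is illustrative):

```python
import quinn
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame([("jose", 1), ("li", 2)], ["name", "age"])

# Raises an error listing any missing columns, which gives a much
# clearer failure than a downstream AnalysisException would.
quinn.validate_presence_of_columns(df, ["name", "age"])
```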
-
PySpark now provides a native Pandas API
Pandas syntax is far inferior to regular PySpark in my opinion. Goes to show how much data analysts value a syntax that they're already familiar with. Pandas syntax makes it harder to reason about queries, abstract DataFrame transformations, etc. I've authored some popular PySpark libraries like quinn and chispa and am not excited to add Pandas syntax support, haha.
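For context, a small sketch contrasting the two syntaxes (`pyspark.pandas` ships with Spark 3.2+; the data here is illustrative):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.master("local[*]").getOrCreate()
sdf = spark.createDataFrame([("a", 1), ("a", 2), ("b", 3)], ["key", "val"])

# Regular PySpark: the query reads as an explicit transformation pipeline.
sdf.groupBy("key").agg(F.sum("val").alias("total")).show()

# Pandas API on Spark: familiar to pandas users, but chained index and
# column mutations are harder to reason about as a single query.
pdf = sdf.pandas_api()
print(pdf.groupby("key")["val"].sum())
```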
-
Is Spark - The Definitive Guide outdated?
They spent a lot of effort improving the Catalyst engine under the hood too, making it easier to extend and improve in the future and making it easy to add your own native code to Spark itself. Shameless plug of a blog post I wrote on this subject, which basically reiterates what Matthew Powers, author of spark-daria and quinn, wrote here.
-
Ask HN: What are some tools / libraries you built yourself?
I built spark-daria (https://github.com/MrPowers/spark-daria) to make it easier to write Spark code and spark-fast-tests (https://github.com/MrPowers/spark-fast-tests) to provide a good testing workflow.
quinn (https://github.com/MrPowers/quinn) and chispa (https://github.com/MrPowers/chispa) are the PySpark equivalents.
Built bebe (https://github.com/MrPowers/bebe) to expose the Spark Catalyst expressions that aren't exposed to the Scala / Python APIs.
Also built spark-sbt.g8 to create a Spark project with a single command: https://github.com/MrPowers/spark-sbt.g8
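A minimal sketch of the testing workflow chispa (mentioned above) enables; `assert_df_equality` is chispa's main entry point, and the DataFrames here are illustrative:

```python
import chispa
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()

expected = spark.createDataFrame([("jose", 1)], ["name", "age"])
actual = spark.createDataFrame([("jose", 1)], ["name", "age"])

# Fails with a readable row-by-row diff when the DataFrames differ.
chispa.assert_df_equality(actual, expected)
```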
-
Open source contributions for a Data Engineer?
I've built popular PySpark (quinn, chispa) and Scala Spark (spark-daria, spark-fast-tests) libraries.
dagster
-
dbt Cloud Alternatives?
Dagster? https://dagster.io
-
What's the best thing/library you learned this year?
One that I haven't seen on here yet: dagster
-
Can we take a moment to appreciate how much of data engineering is open source?
-
Dagger Python SDK: Develop Your CI/CD Pipelines as Code
I wondered how it related to https://dagster.io/
-
Data Engineer GitHub Profile?
You can find all current, closed, and resolved issues in the “Issues” section and explore them using filters, e.g. the issues for dagster. Look into some of the issues and feel free to ask a question or post your idea; it’s much less toxic here (compared to SO, for example).
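The same filtered view can be fetched programmatically; here is a sketch using GitHub's public REST API (the `good first issue` label is an assumption about how the dagster repo tags beginner-friendly work):

```python
import requests

# List open issues on dagster-io/dagster carrying a given label.
resp = requests.get(
    "https://api.github.com/repos/dagster-io/dagster/issues",
    params={"labels": "good first issue", "state": "open"},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
for issue in resp.json():
    print(issue["number"], issue["title"])
```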
-
[D] Should I go with Prefect, Argo or Flyte for Model Training and ML workflow orchestration?
You could also consider Dagster, which aims to address Apache Airflow's shortcomings. Also, take a look at MyMLOps, where you can get a quick overview of open-source orchestration tools.
-
What aspects of Python should I learn that are most important for Data Engineering?
Python is one of the most accessible programming languages. My favorite tool is dagster, which forces you to write functional blocks of code; coming from a SQL, T-SQL, and PL/SQL background, I find its features superior. As a data engineer, you're not expected to write perfect code; it's better to know Big-O notation so you can avoid long-running data pipelines, even if your code doesn't look the prettiest. Static type checking with mypy might be another good thing to learn, as it detects errors before runtime, which addresses one of Python's biggest weaknesses.
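A minimal sketch of the "functional blocks" style dagster encourages (dagster 1.x `@op`/`@job` API; the op names are illustrative, and the type annotations are the kind mypy can check):

```python
from typing import List

from dagster import job, op

@op
def extract() -> List[int]:
    return [1, 2, 3]

@op
def transform(numbers: List[int]) -> int:
    return sum(numbers)

@op
def load(total: int) -> None:
    print(f"loaded total: {total}")

@job
def etl():
    # Dependencies are expressed by composing the ops functionally.
    load(transform(extract()))

if __name__ == "__main__":
    etl.execute_in_process()
```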
-
Show HN: Airflow is cool but have you tried this for data pipelines?
This is cool, but looks like https://github.com/dagster-io/dagster
The issue with less popular data pipeline projects is that they're less stable in production.
-
Tips for using Jupyter Notebooks with GitHub
Papermill can also target cloud storage outputs for hosting rendered notebooks, execute notebooks from custom Python code, and even be used within distributed data pipelines like Dagster (see Dagstermill). For more information, see the papermill documentation.
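For reference, papermill's Python API in a nutshell (the notebook names, bucket, and parameter are illustrative):

```python
import papermill as pm

# Injects the parameters into a copy of the notebook, executes it, and
# writes the rendered result to the output path, which may be cloud storage.
pm.execute_notebook(
    "analysis.ipynb",
    "s3://my-bucket/rendered/analysis.ipynb",
    parameters={"run_date": "2023-01-01"},
)
```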
-
Field Lineage
There are specialized tools like DataHub (see this for column-level lineage reporting: https://feature-requests.datahubproject.io/roadmap/541) that would help. But really, in a good data platform the orchestration layer should be aggregating metadata and giving you everything you need to trace lineage. A tool like Dagster does this well if you make full use of the Software-Defined Assets capability, but that is fairly new, so not many people have embraced it yet.
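A minimal sketch of the Software-Defined Assets idea (asset names are illustrative): Dagster infers that `cleaned_orders` depends on `raw_orders` from the parameter name, so the asset graph doubles as a lineage graph.

```python
from dagster import asset

@asset
def raw_orders():
    return [{"id": 1, "amount": 10.0}]

@asset
def cleaned_orders(raw_orders):
    # The parameter name declares the upstream dependency, which is
    # exactly the lineage metadata the orchestration layer can aggregate.
    return [o for o in raw_orders if o["amount"] > 0]
```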
What are some alternatives?
Prefect - The easiest way to build, run, and monitor data pipelines at scale.
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
airbyte - Data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes.
MLflow - Open source platform for the machine learning lifecycle
meltano - Open source ELT platform built around Singer taps and targets
OpenLineage - An Open Standard for lineage metadata collection
ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️
streamlit - Streamlit — The fastest way to build data apps in Python
superset - Apache Superset is a Data Visualization and Data Exploration Platform
hashi-ui - A modern user interface for @hashicorp Consul & Nomad
Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai
dataform - Dataform is a framework for managing SQL based data operations in BigQuery, Snowflake, and Redshift