etl-markup-toolkit vs sparkmagic

| | etl-markup-toolkit | sparkmagic |
|---|---|---|
| Mentions | 7 | 4 |
| Stars | 5 | 1,284 |
| Growth | - | 0.5% |
| Activity | 0.0 | 7.6 |
| Latest commit | about 3 years ago | 10 days ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
etl-markup-toolkit
-
How do you serialize and save "transformations" in your pipeline?
I have a side project (https://github.com/leozqin/etl-markup-toolkit, if you're interested) that takes transformations as YAML files and outputs step-level logs about each step of the transformation. I've always felt that both artifacts could be made searchable using an ELK stack or something... Do you have similar artifacts? Or perhaps there's a way to turn SQL into a structured or semi-structured form to aid searchability?
-
Alternative tools to DBT / SQL and Python for writing business logic? Trying to prevent creating a mountain of undocumented spaghetti
My current side project (https://github.com/leozqin/etl-markup-toolkit) is a low-code way to express transformations as configuration and run them on PySpark. It also supports abstraction, so you can call business logic like a function, and has step-level reporting you can load into a metadata table. The usual disclaimers about OSS apply, although I'm happy to answer questions and take contributions.
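To make the "transformations as configuration" idea concrete, here is a purely illustrative YAML sketch of a config-driven pipeline step. The field names (`name`, `steps`, `op`, and so on) are hypothetical and are not etl-markup-toolkit's actual schema; see the repo for the real format.

```yaml
# Purely illustrative sketch of config-driven transformations.
# Field names are hypothetical, NOT etl-markup-toolkit's actual schema.
name: clean_orders
steps:
  - op: read            # load a source table
    format: parquet
    path: s3://bucket/raw/orders/
  - op: filter          # keep only completed orders
    condition: "status = 'COMPLETE'"
  - op: select          # project and rename columns
    columns:
      order_id: id
      order_total: total_usd
  - op: write           # persist the result
    format: parquet
    path: s3://bucket/clean/orders/
```

Because the pipeline is data rather than code, each step can also be logged as a structured record (the step-level reporting mentioned above), which is what makes the artifacts searchable.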
-
How to keep track of the different Transformations done in an ETL pipeline?
Not sure if it meets your exact requirements, but I maintain an open source project that enables Spark transformations as configuration, and part of that capability is reporting, including logging of columns in vs. columns out, row counts, etc. It's very early stage but perhaps could be useful - https://github.com/leozqin/etl-markup-toolkit
- ETL Markup Toolkit - a Spark-native tool for expressing ETL transformations as configuration
sparkmagic
-
Doing ML work in AWS. Need help installing cartopy
Please file an issue at https://github.com/jupyter-incubator/sparkmagic
-
Ask HN: Who's an open source maintainer/project that needs sponsorship or help?
I maintain several open source projects, most notably:
Sparkmagic (https://github.com/jupyter-incubator/sparkmagic)
Sparkmagic provides Jupyter magics and kernels for working with remote Spark clusters. It's used by thousands of developers and companies like Pinterest, Amazon, and more!
I've been maintaining it for the past few years and would love help!
KSOPS (https://github.com/viaduct-ai/kustomize-sops)
KSOPS, or kustomize-SOPS, is a kustomize KRM exec plugin for SOPS encrypted resources. KSOPS can be used to decrypt any Kubernetes resource, but is most commonly used to decrypt encrypted Kubernetes Secrets and ConfigMaps. As a kustomize plugin, KSOPS allows you to manage, build, and apply encrypted manifests the same way you manage the rest of your Kubernetes manifests.
KSOPS is the most popular kustomize plugin and I'd love help maintaining and improving it from fellow GitOps fanatics.
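For anyone unfamiliar, KSOPS plugs in as a kustomize KRM exec generator: your kustomization points at a small ksops manifest that lists the SOPS-encrypted files to decrypt at build time. A minimal sketch, assuming `ksops` is on your PATH and `secret.enc.yaml` is a SOPS-encrypted Secret (the file names are placeholders):

```yaml
# kustomization.yaml - register KSOPS as a generator
generators:
  - ./secret-generator.yaml
```

```yaml
# secret-generator.yaml - KRM exec plugin manifest for KSOPS
apiVersion: viaduct.ai/v1
kind: ksops
metadata:
  name: secret-generator
  annotations:
    config.kubernetes.io/function: |
      exec:
        path: ksops
files:
  - ./secret.enc.yaml
```

With recent kustomize versions, `kustomize build --enable-alpha-plugins --enable-exec .` then decrypts the listed files and emits the Secret as part of the build output.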
-
Spark is lit once again
Things get a bit more complicated with interactive sessions. We've created a Sparkmagic-compatible REST API so that the Sparkmagic kernel can communicate with Lighter the same way it does with Apache Livy. When a user creates an interactive session, the Lighter server submits a custom PySpark application that contains an infinite loop constantly checking for new commands to execute. Each Sparkmagic command is saved in a Java collection, retrieved by the PySpark application through the Py4J Gateway, and executed.
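A minimal sketch of that driver-side loop, assuming a Py4J entry point on the JVM side that exposes the command queue; `fetchCommand` and `reportResult` are hypothetical method names standing in for whatever Lighter actually exposes:

```python
import time

from py4j.java_gateway import GatewayParameters, JavaGateway


def command_loop(poll_interval: float = 1.0) -> None:
    # Connect to the JVM that holds this session's command queue.
    gateway = JavaGateway(
        gateway_parameters=GatewayParameters(auto_convert=True)
    )
    entry = gateway.entry_point  # hypothetical: exposes the command collection

    session_globals: dict = {}
    while True:  # the "infinite loop" described in the post
        command = entry.fetchCommand()  # hypothetical; None when queue is empty
        if command is None:
            time.sleep(poll_interval)
            continue
        try:
            # Run the Sparkmagic statement in a persistent namespace so state
            # (the SparkSession, defined variables) survives across commands.
            exec(command, session_globals)
            entry.reportResult("ok")  # hypothetical result callback
        except Exception as exc:
            entry.reportResult(f"error: {exc}")


if __name__ == "__main__":
    command_loop()
```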
-
An SQL Solution for Jupyter
Jupyter would be even better if it supported the seamless combination of Python and SQL code cells.
My notebook code typically involves a data-prep stage that queries a SQL database, then pulls the results into Python for more complex analysis, ML modelling, integration with external data sources, etc. So the notebook has a Python kernel, with SQL usually embedded as """-quoted strings.
Does anyone have a solution to treating selected code cells as SQL - with SQL highlighting and tooltips - exposed as string variables to the Python code?
Sparkmagic [1] does part of this for Python/SQL/Spark interoperability, but as far as I recall, doesn't support SQL syntax highlighting.
[1] https://github.com/jupyter-incubator/sparkmagic
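For what it's worth, sparkmagic's %%sql cell magic covers the "run this cell as SQL, get the result back in Python" part of this: against a Livy-backed Spark session, a cell like the one below executes as Spark SQL, and the -o flag pulls the result into the local kernel as a pandas DataFrame (the table and variable names here are placeholders):

```
%%sql -o completed_orders
SELECT order_id, order_total
FROM orders
WHERE status = 'COMPLETE'
```

The resulting completed_orders DataFrame is then available on the local Python side for further analysis, though as noted above the SQL itself doesn't get syntax highlighting.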
What are some alternatives?
mara-pipelines - A lightweight opinionated ETL framework, halfway between plain scripts and Apache Airflow
lighter - REST API for Apache Spark on K8S or YARN
PySpark-Boilerplate - A boilerplate for writing PySpark Jobs
Jupyter Scala - A Scala kernel for Jupyter
quinn - pyspark methods to enhance developer productivity 📣 👯 🎉
Apache Spark - A unified analytics engine for large-scale data processing
tdigest - t-Digest data structure in Python. Useful for percentiles and quantiles, including distributed environments like PySpark
nbmake - 📝 Pytest plugin for testing notebooks
nbgrader - A system for assigning and grading notebooks
xeus-sql - Jupyter kernel for SQL databases
incubator-livy - Apache Livy is an open source REST interface for interacting with Apache Spark from anywhere.