docker-airflow
ploomber
| | docker-airflow | ploomber |
|---|---|---|
| Mentions | 10 | 121 |
| Stars | 3,703 | 3,355 |
| Growth | - | 1.1% |
| Activity | 0.0 | 7.8 |
| Latest commit | about 1 year ago | about 1 month ago |
| Language | Shell | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
docker-airflow
-
The amount of effort to stand up, integrate, and manage a small Airflow implementation
We used a custom version of the Puckel Airflow Docker image (we spent a lot of time customising it to our needs, but the default Airflow container should still work)
-
The Unbundling of Airflow
I understand it is subjective. But I use a forked version of https://github.com/puckel/docker-airflow on our managed K8s cluster, and it points to a cloud-managed Postgres. It has worked pretty well for over 3 years with no one actually managing it from an infra POV. YMMV. This is driving a product whose ARR is well into the hundreds of millions.
If you have simple needs that are more or less set, I agree Airflow is overkill and a simple Jenkins instance is all you need.
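The setup described above can be sketched as a compose fragment. This is a hypothetical configuration, not the poster's actual deployment: the image tag, credentials, and hostname are placeholders, though `AIRFLOW__CORE__SQL_ALCHEMY_CONN` is the real Airflow 1.x environment variable for overriding the metadata database connection.

```yaml
# Hypothetical compose fragment: point a puckel/docker-airflow container
# at an externally managed Postgres instead of the bundled database.
version: "3"
services:
  webserver:
    image: puckel/docker-airflow:1.10.9
    environment:
      # Connection string for the cloud-managed Postgres (placeholder values)
      AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:secret@managed-pg.example.com:5432/airflow
      AIRFLOW__CORE__EXECUTOR: LocalExecutor
    ports:
      - "8080:8080"
    command: webserver
```

With the metadata database managed externally, the Airflow containers themselves hold no state, which is what makes the "no one managing it" setup plausible.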
-
ETL with Apache Airflow, Web Scraping, AWS S3, Apache Spark and Redshift | Part 1
The Docker image used was puckel/docker-airflow, to which I added BeautifulSoup as a dependency when building the image on my machine.
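Adding a dependency on top of the base image amounts to a short Dockerfile. A minimal sketch, assuming the puckel image (the tag shown is illustrative) and the `beautifulsoup4` package:

```dockerfile
# Hypothetical Dockerfile: extend puckel/docker-airflow with BeautifulSoup
FROM puckel/docker-airflow:1.10.9
USER root
RUN pip install --no-cache-dir beautifulsoup4
USER airflow
```

Building with `docker build -t my-airflow .` then swapping the image name into the compose file is enough to make the library available inside DAG tasks.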
ploomber
-
Show HN: JupySQL – a SQL client for Jupyter (ipython-SQL successor)
- One-click sharing powered by Ploomber Cloud: https://ploomber.io
Documentation: https://jupysql.ploomber.io
Note that JupySQL is a fork of ipython-sql, which is no longer actively developed. Catherine, ipython-sql's creator, was kind enough to pass the project to us (check out ipython-sql's README).
We'd love to learn what you think and what features we can ship for JupySQL to be the best SQL client! Please let us know in the comments!
-
Runme – Interactive Runbooks Built with Markdown
For those who don't know, Jupyter has a bash kernel: https://github.com/takluyver/bash_kernel
And you can run Jupyter notebooks from the CLI with Ploomber: https://github.com/ploomber/ploomber
-
Rant: Jupyter notebooks are trash.
Develop notebook-based pipelines
-
Who needs MLflow when you have SQLite?
Fair point. MLflow has a lot of features to cover the end-to-end dev cycle. This SQLite tracker only covers the experiment tracking part.
We have another project to cover the orchestration/pipelines aspect: https://github.com/ploomber/ploomber and we have plans to work on the rest of the features. For now, we're focusing on those two.
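The "SQLite as experiment tracker" idea above can be sketched with nothing but the standard library. The schema and function names below are illustrative, not the actual project's API: one row per run, with parameters and metrics stored as JSON so they can be queried directly in SQL.

```python
import json
import sqlite3

# Illustrative schema: one row per experiment run,
# parameters and metrics serialized as JSON text.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS experiments (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           name TEXT,
           params TEXT,
           metrics TEXT
       )"""
)

def log_experiment(name, params, metrics):
    """Store one experiment run; params/metrics are plain dicts."""
    conn.execute(
        "INSERT INTO experiments (name, params, metrics) VALUES (?, ?, ?)",
        (name, json.dumps(params), json.dumps(metrics)),
    )
    conn.commit()

log_experiment("baseline", {"lr": 0.01}, {"accuracy": 0.91})
log_experiment("deeper-net", {"lr": 0.001}, {"accuracy": 0.94})

# Query the best run by accuracy, directly in SQL over the JSON column
# (json_extract requires SQLite compiled with the JSON1 extension,
# which is standard in modern builds).
best = conn.execute(
    "SELECT name, json_extract(metrics, '$.accuracy') AS acc "
    "FROM experiments ORDER BY acc DESC LIMIT 1"
).fetchone()
print(best)
```

A single file on disk (swap `":memory:"` for a path) gives you a queryable, versionable experiment log with zero server infrastructure, which is the core of the argument in the thread.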
-
Ploomber Cloud - Parametrizing and running notebooks in the cloud in parallel
We started with an open-source framework to help data practitioners make their work reproducible. However, after months of building and learning from our community, we realized that many needed help with the setup: getting Python installed, getting dependencies, running experiments locally, etc.
-
Alternatives to nextflow?
It really depends on your use case. I've seen a lot of those tools lock you into a certain syntax, framework, or niche language (for instance, Groovy). If you'd like to use core Python or Jupyter notebooks, I'd recommend Ploomber: the community support is really strong, there's an emphasis on observability, and you can deploy it on any executor, such as Slurm, AWS Batch, or Airflow. In addition, there's free managed compute (the cloud edition) where you can run certain bioinformatics flows such as AlphaFold or CRISPResso2.
-
"Do I need to know {insert advanced math} to get a Data Science job?" [Rant]
btw, you can export Ploomber to Argo and Airflow!
-
Running Jupyter notebooks in parallel
As a second option, we will use Ploomber with serial execution, which also has a Python API that allows us to execute different notebooks using the NotebookRunner task:
-
How do you deal with parallelising parts of an ML pipeline especially on Python?
I also recommend checking out Ploomber; this open-source tool can help you build code from templates, parallelize it, and parameterize it. There are also some reporting and debugging tools included!
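The core pattern behind parallelizing independent pipeline steps (parametrized notebook runs, model configurations) can be shown with the standard library alone. `run_step` below is a stand-in, not any framework's API; it represents one independent unit of work:

```python
from concurrent.futures import ThreadPoolExecutor

def run_step(params):
    # Stand-in for an expensive, independent pipeline step, e.g. executing
    # one parametrized notebook or training one model configuration.
    lr = params["lr"]
    return {"lr": lr, "score": round(1.0 - lr, 3)}  # fake metric

# A small parameter grid; each entry is independent of the others.
grid = [{"lr": 0.1}, {"lr": 0.01}, {"lr": 0.001}]

# Independent steps can run concurrently. Threads suit I/O-bound steps
# (subprocess calls, network, disk); for CPU-bound Python work, swap in
# ProcessPoolExecutor so steps run on separate cores.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_step, grid))

print(results)
```

Tools like Ploomber or papermill layer DAG resolution, products, and retries on top, but this fan-out over independent tasks is the underlying mechanism.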
What are some alternatives?
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
papermill - 📚 Parameterize, execute, and analyze notebooks
dagster - An orchestration platform for the development, production, and observation of data assets.
dvc - 🦉 ML Experiments and Data Management with Git
argo - Workflow Engine for Kubernetes
orchest - Build data pipelines, the easy way 🛠️
MLflow - Open source platform for the machine learning lifecycle
nbdev - Create delightful software with Jupyter Notebooks
fastapi-dramatiq-data-ingestion - Sample project showing reliable data ingestion application using FastAPI and dramatiq
clearml - ClearML - Auto-Magical CI/CD to streamline your ML workflow. Experiment Manager, MLOps and Data-Management
jupytext - Jupyter Notebooks as Markdown Documents, Julia, Python or R scripts