ploomber VS projects

Compare ploomber vs projects and see what their differences are.

projects

Sample projects using Ploomber. (by ploomber)
                    ploomber              projects
Mentions            121                   19
Stars               3,369                 77
Growth              0.9%                  -
Activity            7.8                   4.7
Latest commit       16 days ago           3 months ago
Language            Python                Jupyter Notebook
License             Apache License 2.0    Apache License 2.0
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have a higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

ploomber

Posts with mentions or reviews of ploomber. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-06.
  • Show HN: JupySQL – a SQL client for Jupyter (ipython-SQL successor)
    2 projects | news.ycombinator.com | 6 Dec 2023
    - One-click sharing powered by Ploomber Cloud: https://ploomber.io

    Documentation: https://jupysql.ploomber.io

    Note that JupySQL is a fork of ipython-sql, which is no longer actively developed. Catherine, ipython-sql's creator, was kind enough to pass the project on to us (check out ipython-sql's README).

    We'd love to learn what you think and what features we can ship for JupySQL to be the best SQL client! Please let us know in the comments!
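    A minimal JupySQL session looks roughly like the following (a sketch, not from the original post; the DuckDB connection and the toy table are illustrative, and the magics run inside a Jupyter notebook after `pip install jupysql duckdb-engine`):

      %load_ext sql                  # load the JupySQL extension
      %sql duckdb://                 # connect to an in-memory DuckDB database
      %sql CREATE TABLE numbers AS SELECT * FROM range(5)
      %sql SELECT COUNT(*) AS n FROM numbers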

  • Runme – Interactive Runbooks Built with Markdown
    7 projects | news.ycombinator.com | 24 Aug 2023
    For those who don't know, Jupyter has a bash kernel: https://github.com/takluyver/bash_kernel

    And you can run Jupyter notebooks from the CLI with Ploomber: https://github.com/ploomber/ploomber
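    As a rough illustration of that CLI workflow: Ploomber reads a pipeline.yaml that lists notebook tasks and executes them. The CLI equivalent is `ploomber build`; the sketch below drives the same thing from Python and assumes a pipeline.yaml already exists in the working directory.

      # minimal sketch: execute a Ploomber pipeline programmatically
      from ploomber.spec import DAGSpec

      dag = DAGSpec('pipeline.yaml').to_dag()  # parse the spec into a DAG of tasks
      dag.build()                              # run the notebook tasks (only outdated tasks re-run)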

  • Rant: Jupyter notebooks are trash.
    6 projects | /r/datascience | 24 Jan 2023
    Develop notebook-based pipelines
  • Who needs MLflow when you have SQLite?
    5 projects | news.ycombinator.com | 16 Nov 2022
    Fair point. MLflow has a lot of features to cover the end-to-end dev cycle. This SQLite tracker only covers the experiment tracking part.

    We have another project to cover the orchestration/pipelines aspect: https://github.com/ploomber/ploomber, and we plan to work on the rest of the features. For now, we're focusing on those two.

  • New to large SW projects in Python, best practices to organize code
    1 project | /r/Python | 11 Nov 2022
    I recommend taking a look at the Ploomber open-source project. It helps you structure your code and parameterize it in a way that's easier to maintain and test (a small example is sketched below). Our blog has lots of resources on this, from testing your code to building a data science platform on AWS.
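    A sketch of what that parameterization looks like with Ploomber's Python API (the notebook name and the parameter are made up; the source notebook is assumed to have the "parameters" cell Ploomber expects):

      from pathlib import Path

      from ploomber import DAG
      from ploomber.tasks import NotebookRunner
      from ploomber.products import File

      dag = DAG()

      # the same notebook can be reused with different settings via `params`
      NotebookRunner(Path('fit.ipynb'),
                     File('fit-report.ipynb'),
                     dag=dag,
                     name='fit',
                     params={'n_samples': 1000})

      dag.build()
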
  • A three-part series on deploying a Data Science Platform on AWS
    1 project | /r/dataengineering | 4 Nov 2022
    Developing end-to-end data science infrastructure can get complex. For example, many of us have struggled trying to integrate AWS services and deal with configuration, permissions, etc. At Ploomber, we've worked with many companies across a wide range of industries, such as energy, entertainment, computational chemistry, and genomics, so we are constantly looking for simple ways to get them started with data science in the cloud.
  • Ploomber Cloud - Parametrizing and running notebooks in the cloud in parallel
    3 projects | /r/IPython | 3 Nov 2022
  • Is Colab still the place to go?
    1 project | /r/deeplearning | 2 Nov 2022
    If you like working locally with notebooks, you can run them via Ploomber's free tier, which gives you the RAM/compute you need for the bigger models. It also keeps historical executions, so you don't need to remember what you executed an hour later!
  • Alternatives to nextflow?
    6 projects | /r/bioinformatics | 26 Oct 2022
    It really depends on your use case. I've seen a lot of these tools lock you into a certain syntax, framework, or niche language (for instance, Groovy). If you'd like to use plain Python or Jupyter notebooks, I'd recommend Ploomber: the community support is really strong, there's an emphasis on observability, and you can deploy to any executor, such as SLURM, AWS Batch, or Airflow. In addition, there's free managed compute (the cloud edition) where you can run certain bioinformatics flows like AlphaFold or CRISPResso2.
  • Saving log files
    1 project | /r/docker | 26 Oct 2022
    That's what we do for lineage with https://ploomber.io/

projects

Posts with mentions or reviews of projects. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-08-08.
  • Analyze and plot 5.5M records in 20s with BigQuery and Ploomber
    2 projects | dev.to | 8 Aug 2022
    You can look at the files in detail here. For this tutorial, I'll quickly mention a few crucial details.
  • Three Tools for Executing Jupyter Notebooks
    6 projects | dev.to | 25 Jul 2022
    Ploomber is the complete solution for notebook execution. It builds on top of papermill and extends it so you can write multi-stage workflows where each task is a notebook, and it manages orchestration automatically, so you can run notebooks in parallel without writing extra code (see the sketch below).
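    A sketch of the parallel case with the Python API (notebook names are illustrative, and each source notebook is assumed to have a "parameters" cell):

      from pathlib import Path

      from ploomber import DAG
      from ploomber.executors import Parallel
      from ploomber.tasks import NotebookRunner
      from ploomber.products import File

      dag = DAG(executor=Parallel())  # independent tasks run concurrently

      NotebookRunner(Path('clean.ipynb'), File('clean-out.ipynb'), dag=dag, name='clean')
      NotebookRunner(Path('plot.ipynb'), File('plot-out.ipynb'), dag=dag, name='plot')

      dag.build()  # both notebooks execute in parallel since neither depends on the other
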
  • OOP in python ETL?
    3 projects | /r/dataengineering | 14 Mar 2022
    The answer is YES, you can take advantage of OOP best practices to write good ETLs. For instance, in this Ploomber sample ETL you can see a mix of .sql and .py files organized into modular components, so it's easier to test, deploy, and execute. It's way easier than Airflow since there's no infra work involved; you only have to set up your pipeline.yaml file. This also makes the code far more maintainable and scalable, avoids redundant code, and lets you deploy faster :)
  • What are some good DS/ML repos where I can learn about structuring a DS/ML project?
    3 projects | /r/datascience | 27 Feb 2022
    We have tons of examples that follow a standard layout; here's one: https://github.com/ploomber/projects/tree/master/templates/ml-intermediate
  • Anyone's org using Airflow as a generalized job orchestator, not just for data engineering/ETL?
    2 projects | /r/dataengineering | 23 Feb 2022
    I can talk about the open-source project I'm working on, Ploomber (https://github.com/ploomber/ploomber); it focuses on seamless integration with Jupyter and IDEs. It gives you an easy mechanism to orchestrate work (for instance, here's an example SQL ETL), and then you can deploy it anywhere, so if you're working with Airflow, it'll deploy there too, but without the complexity. You wouldn't have to maintain Docker images, etc.
  • ETL with python
    3 projects | /r/ETL | 20 Feb 2022
    I recommend using Ploomber, which helps you build once and automate a lot of the work, and it works with Python natively. It's open source, so you can start with one of the examples, like the ML basic example or the ETL one. It lets you define the pipeline and then easily explain the flow with the DAG plot (see the sketch below). Feel free to ask questions; I'm happy to help (I've built hundreds of data pipelines over the years).
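    A sketch of loading one of those example pipelines and rendering its DAG (assumes a pipeline.yaml in the working directory; plotting requires an optional dependency such as pygraphviz):

      from ploomber.spec import DAGSpec

      dag = DAGSpec('pipeline.yaml').to_dag()
      print(dag.status())  # one row per task: whether it's up to date, last run, etc.
      dag.plot()           # renders the task dependency diagram
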
  • What tools do you use for data quality?
    2 projects | /r/dataengineering | 8 Feb 2022
    I'm not sure which pipeline frameworks support this kind of testing, but after successfully implementing this workflow, I added the feature to Ploomber, the project I'm working on. Here's what a pipeline looks like, and here's a tutorial (a minimal version is sketched below).
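    A sketch of that pattern: a data quality check attached as an on_finish hook, which runs right after the task and fails the build if it raises (the task, the check, and the file name are made up for illustration):

      import pandas as pd

      from ploomber import DAG
      from ploomber.tasks import PythonCallable
      from ploomber.products import File

      def clean(product):
          # toy task: write a "cleaned" table to the product path
          pd.DataFrame({'id': [1, 2, 3]}).to_csv(str(product), index=False)

      def no_missing_ids(product):
          # quality check: raising here marks the task (and the build) as failed
          df = pd.read_csv(str(product))
          assert df['id'].notna().all(), 'found rows with a missing id'

      dag = DAG()
      task = PythonCallable(clean, File('clean.csv'), dag=dag, name='clean')
      task.on_finish = no_missing_ids  # the check becomes part of the pipeline

      dag.build()
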
  • Data pipeline suggestions
    13 projects | /r/dataengineering | 4 Feb 2022
    Check out Ploomber (disclaimer: I'm the author). It has a simple API, and you can export to Airflow, AWS, and Kubernetes. It supports all databases that work with Python, and you can seamlessly move from a SQL step to a Python step. Here's an example.
  • ETL Tools
    2 projects | /r/BusinessIntelligence | 4 Feb 2022
    Without more specifics about your use case, it's hard to give concrete advice, but check out Ploomber (disclaimer: I'm the creator) - here's an example ETL pipeline. I've used it in past projects to develop Oracle ETL pipelines. Modularizing the analysis into several parts helps a lot with maintenance.
  • Whats something hot rn or whats going to be next thing we should focus on in data engineering?
    4 projects | /r/dataengineering | 3 Feb 2022
    Yes! (Tell your friend.) You can write shell scripts, so you can execute that 2002 code :) You can test it locally and then run it on AWS Batch/Argo. Here's an example.

What are some alternatives?

When comparing ploomber and projects you can also consider the following projects:

Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.

cookiecutter-data-science - A logical, reasonably standardized, but flexible project structure for doing and sharing data science work.

papermill - 📚 Parameterize, execute, and analyze notebooks

dagster - An orchestration platform for the development, production, and observation of data assets.

dbt-core - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.

dvc - 🦉 ML Experiments and Data Management with Git

jitsu - Jitsu is an open-source Segment alternative. Fully-scriptable data ingestion engine for modern data teams. Set up a real-time data pipeline in minutes, not days.

argo - Workflow Engine for Kubernetes

Python Packages Project Generator - 🚀 Your next Python package needs a bleeding-edge project structure.

MLflow - Open source platform for the machine learning lifecycle

castled - Castled is an open-source reverse ETL solution that helps you periodically sync the data in your db/warehouse into sales, marketing, support, or custom apps without any help from engineering teams.