| | ploomber | orchest |
|---|---|---|
| Mentions | 121 | 44 |
| Stars | 3,557 | 4,112 |
| Growth | 0.2% | 0.6% |
| Activity | 6.4 | 4.5 |
| Latest Commit | 7 months ago | almost 2 years ago |
| Language | Python | TypeScript |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ploomber
-
Show HN: JupySQL – a SQL client for Jupyter (ipython-SQL successor)
- One-click sharing powered by Ploomber Cloud: https://ploomber.io
Documentation: https://jupysql.ploomber.io
Note that JupySQL is a fork of ipython-sql, which is no longer actively developed. Catherine, ipython-sql's creator, was kind enough to pass the project to us (check out ipython-sql's README).
We'd love to learn what you think and what features we can ship for JupySQL to be the best SQL client! Please let us know in the comments!
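For context, a minimal JupySQL session looks roughly like this (run in a notebook after `pip install jupysql`; the in-memory SQLite connection string is just for illustration):

```python
# Cell 1: load the extension and open an in-memory SQLite connection
%load_ext sql
%sql sqlite://

# Cell 2: run SQL directly in a cell
%%sql
CREATE TABLE points (x INTEGER, y INTEGER);
INSERT INTO points VALUES (1, 2), (3, 4);
SELECT * FROM points;
```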
-
Runme – Interactive Runbooks Built with Markdown
For those who don't know, Jupyter has a bash kernel: https://github.com/takluyver/bash_kernel
And you can run Jupyter notebooks from the CLI with Ploomber: https://github.com/ploomber/ploomber
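As a rough sketch of what that looks like (file names here are made up; see the Ploomber docs for the exact `pipeline.yaml` schema), you declare notebooks as tasks and run them from the CLI:

```yaml
# pipeline.yaml -- each task is a notebook/script; products are its outputs
tasks:
  - source: notebooks/clean.ipynb
    product:
      nb: output/clean.ipynb      # executed copy of the notebook
      data: output/clean.csv
  - source: notebooks/report.ipynb
    product:
      nb: output/report.ipynb
```

Running `ploomber build` then executes the tasks in dependency order and stores the executed notebooks as outputs.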
-
Rant: Jupyter notebooks are trash.
Develop notebook-based pipelines
-
Who needs MLflow when you have SQLite?
Fair point. MLflow has a lot of features to cover the end-to-end dev cycle. This SQLite tracker only covers the experiment tracking part.
We have another project to cover the orchestration/pipelines aspect: https://github.com/ploomber/ploomber and we have plans to work on the rest of the features. For now, we're focusing on those two.
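To make the idea concrete, here is a minimal sketch of experiment tracking on top of SQLite using only the standard library (this is not the tracker's actual API, just the gist):

```python
# Sketch: log each run's hyperparameters and metrics as JSON rows in SQLite,
# then query them later with plain SQL (or JupySQL).
import json
import sqlite3

conn = sqlite3.connect("experiments.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS experiments ("
    "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
    "  params TEXT,"   # hyperparameters as JSON
    "  metrics TEXT)"  # evaluation metrics as JSON
)

def log_experiment(params: dict, metrics: dict) -> None:
    """Store one run."""
    conn.execute(
        "INSERT INTO experiments (params, metrics) VALUES (?, ?)",
        (json.dumps(params), json.dumps(metrics)),
    )
    conn.commit()

log_experiment({"model": "rf", "n_estimators": 100}, {"accuracy": 0.91})
```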
-
New to large SW projects in Python, best practices to organize code
I recommend taking a look at the Ploomber open source project. It helps you structure your code and parameterize it in a way that's easier to maintain and test. Our blog has lots of resources about it, from testing your code to building a data science platform on AWS.
-
A three-part series on deploying a Data Science Platform on AWS
Developing end-to-end data science infrastructure can get complex. For example, many of us have struggled trying to integrate AWS services and deal with configuration, permissions, etc. At Ploomber, we’ve worked with many companies in a wide range of industries, such as energy, entertainment, computational chemistry, and genomics, so we are constantly looking for simple solutions to get them started with Data Science in the cloud.
- Ploomber Cloud - Parametrizing and running notebooks in the cloud in parallel
-
Is Colab still the place to go?
If you like working locally with notebooks, you can run them via Ploomber's free tier, which gives you the RAM/compute you need for the bigger models. It also keeps historical executions, so you don't need to remember what you executed an hour later!
-
Alternatives to nextflow?
It really depends on your use case. A lot of these tools lock you into a particular syntax, framework, or niche language (Groovy, for instance). If you'd like to use plain Python or Jupyter notebooks, I'd recommend Ploomber: the community support is really strong, there's an emphasis on observability, and you can deploy it on any executor such as Slurm, AWS Batch, or Airflow. In addition, there's free managed compute (cloud edition) where you can run certain bioinformatics workflows like AlphaFold or CRISPResso2.
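As a sketch of what targeting another executor can look like with Ploomber's companion soopervisor package (backend names and flags here are from memory; double-check them against the soopervisor docs before relying on this):

```sh
pip install soopervisor
soopervisor add training --backend slurm   # other backends include aws-batch, airflow, argo-workflows
soopervisor export training                # generate submission files and run the pipeline there
```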
-
Saving log files
That's what we do for lineage with https://ploomber.io/
orchest
-
Decent low code options for orchestration and building data flows?
You can check out our OSS https://github.com/orchest/orchest
- Build ML workflows with Jupyter notebooks
-
Building container images in Kubernetes, how would you approach it?
The code example is part of our ELT/data pipeline tool called Orchest: https://github.com/orchest/orchest/
-
Launch HN: Patterns (YC S21) – A much faster way to build and deploy data apps
First want to say congrats to the Patterns team for creating a gorgeous looking tool. Very minimal and approachable. Massive kudos!
Disclaimer: we're building something very similar and I'm curious about a couple of things.
One of the questions our users have asked us often is how to minimize the dependence on "product specific" components/nodes/steps. For example, if you write CI for GitHub Actions you may use a bunch of GitHub Action references.
Looking at the `graph.yml` in some of the examples you shared, you use a similar approach (e.g. patterns/openai-completion@v4). That means that whenever you depend on such components your automation/data pipeline becomes more tied to the specific tool (GitHub Actions/Patterns), effectively locking in users.
How are you helping users feel comfortable with that problem (I don't want to invest in something that's not portable)? It's something we've struggled with ourselves as we're expanding the "out of the box" capabilities you get.
Furthermore, I would have loved to see this as an open source project. But I guess the second best thing to open source is some open source contributions, and `dcp` and `common-model` look quite interesting!
For those who are curious, I'm one of the authors of https://github.com/orchest/orchest
-
Argo became a graduated CNCF project
Haven't tried it. In its favor, Argo is vendor neutral and really easy to set up in a local k8s environment like Docker Desktop or minikube. If you already use k8s for configuration, service discovery, secret management, etc., it's dead simple to set up and use (avoiding having to learn a whole new workflow configuration language in addition to k8s). The big downside is that it doesn't have a visual DAG editor (although that might be a positive for engineers having to fix workflows written by non-programmers), but the relatively bare-metal nature of Argo means it's fairly easy to use as the underlying engine for a more opinionated or lower-code framework (Orchest is a notable one out now).
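For reference, a minimal Argo Workflow spec is just a small piece of YAML (roughly the hello-world example from the Argo docs; submit it with `argo submit` or `kubectl create -f` once the Workflows controller is installed in the cluster):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: busybox
        command: [echo]
        args: ["hello world"]
```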
- Ideas for infrastructure and tooling to use for frequent model retraining?
-
Looking for a mentor in MLOps. I am a lead developer.
If you’d like to try something for you data workflows that’s vendor agnostic (k8s based) and open source you can check out our project: https://github.com/orchest/orchest
-
Is there a good way to trigger data pipelines by event instead of cron?
You can find it here: https://github.com/orchest/orchest Convenience install script: https://github.com/orchest/orchest#installation
-
How do you deal with parallelising parts of an ML pipeline especially on Python?
We automatically provide container level parallelism in Orchest: https://github.com/orchest/orchest
-
Launch HN: Sematic (YC S22) – Open-source framework to build ML pipelines faster
For people in this thread interested in what this tool is an alternative to: Airflow, Luigi, Kubeflow, Kedro, Flyte, Metaflow, Sagemaker Pipelines, GCP Vertex Workbench, Azure Data Factory, Azure ML, Dagster, DVC, ClearML, Prefect, Pachyderm, and Orchest.
Disclaimer: author of Orchest https://github.com/orchest/orchest
What are some alternatives?
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
docker-airflow - Docker Apache Airflow
papermill - 📚 Parameterize, execute, and analyze notebooks
hookdeck-cli - Alternative to ngrok for localhost asynchronous web development (e.g. webhooks). No account required.
nbdev - Create delightful software with Jupyter Notebooks
ExpansionCards - Reference designs and documentation to create Expansion Cards for the Framework Laptop