ploomber
papermill
| | ploomber | papermill |
|---|---|---|
| Mentions | 121 | 26 |
| Stars | 3,369 | 5,623 |
| Growth | 0.9% | 1.3% |
| Activity | 7.8 | 7.9 |
| Last commit | 15 days ago | 13 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ploomber
-
Show HN: JupySQL – a SQL client for Jupyter (ipython-SQL successor)
- One-click sharing powered by Ploomber Cloud: https://ploomber.io
Documentation: https://jupysql.ploomber.io
Note that JupySQL is a fork of ipython-sql, which is no longer actively developed. Catherine, ipython-sql's creator, was kind enough to pass the project on to us (check out ipython-sql's README).
We'd love to learn what you think and what features we can ship for JupySQL to be the best SQL client! Please let us know in the comments!
-
Runme – Interactive Runbooks Built with Markdown
For those who don't know, Jupyter has a bash kernel: https://github.com/takluyver/bash_kernel
And you can run Jupyter notebooks from the CLI with Ploomber: https://github.com/ploomber/ploomber
-
Rant: Jupyter notebooks are trash.
Develop notebook-based pipelines
-
Who needs MLflow when you have SQLite?
Fair point. MLflow has a lot of features to cover the end-to-end dev cycle. This SQLite tracker only covers the experiment tracking part.
We have another project to cover the orchestration/pipelines aspect: https://github.com/ploomber/ploomber and we have plans to work on the rest of the features. For now, we're focusing on those two.
-
New to large SW projects in Python, best practices to organize code
I recommend taking a look at the Ploomber open-source project. It helps you structure your code and parameterize it in a way that's easier to maintain and test. Our blog has lots of resources about it, from testing your code to building a data science platform on AWS.
-
A three-part series on deploying a Data Science Platform on AWS
Developing end-to-end data science infrastructure can get complex. For example, many of us have struggled trying to integrate AWS services and dealing with configuration, permissions, etc. At Ploomber, we've worked with many companies in a wide range of industries, such as energy, entertainment, computational chemistry, and genomics, so we are constantly looking for simple solutions to get them started with data science in the cloud.
- Ploomber Cloud - Parametrizing and running notebooks in the cloud in parallel
-
Is Colab still the place to go?
If you like working locally with notebooks, you can run them via Ploomber's free tier, which gives you the RAM/compute you need for the bigger models. It also keeps historical executions, so you don't need to remember what you ran an hour later!
-
Alternatives to nextflow?
It really depends on your use case. I've seen a lot of these tools lock you into a certain syntax, framework, or niche language (for instance, Groovy). If you'd like to use core Python or Jupyter notebooks, I'd recommend Ploomber: the community support is really strong, there's an emphasis on observability, and you can deploy it on any executor, such as Slurm, AWS Batch, or Airflow. In addition, there's free managed compute (the cloud edition) where you can run certain bioinformatics workflows such as AlphaFold or CRISPResso2.
-
Saving log files
That's what we do for lineage with https://ploomber.io/
papermill
-
Spreadsheet errors can have disastrous consequences – yet we keep making them
Pandas docs > Comparison with spreadsheets: https://pandas.pydata.org/docs/getting_started/comparison/co...
Pandas docs > I/O > Excel files: https://pandas.pydata.org/docs/user_guide/io.html#excel-file...
nteract/papermill: https://github.com/nteract/papermill :
> papermill is a tool for parameterizing, executing, and analyzing Jupyter Notebooks. [...]
> This opens up new opportunities for how notebooks can be used. For example:
> - Perhaps you have a financial report that you wish to run with different values on the first or last day of a month or at the beginning or end of the year, using parameters makes this task easier.
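A minimal sketch of that month-end use case (the notebook and parameter names here are hypothetical; it assumes papermill is installed and the notebook has a cell tagged `parameters`):

```python
from calendar import monthrange
from datetime import date

def month_end_params(year):
    """Yield one parameters dict per month, set to that month's last day."""
    for month in range(1, 13):
        last_day = monthrange(year, month)[1]
        yield {"run_date": date(year, month, last_day).isoformat()}

# With papermill installed, each month-end report would be produced with:
# import papermill as pm
# for params in month_end_params(2023):
#     pm.execute_notebook("financial_report.ipynb",
#                         f"report-{params['run_date']}.ipynb",
#                         parameters=params)
```

Papermill injects the `parameters` dict as a new cell after the one tagged `parameters`, so the same notebook runs unchanged for every date.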
"The World Excel Championship is being broadcast on ESPN" (2022) https://news.ycombinator.com/item?id=32420925 :
> Computational notebook speedrun ideas:
-
Jupyter Kernel Architecture
There is Papermill ... https://github.com/nteract/papermill
-
Git and Jupyter Notebooks Guide
https://github.com/jupyter/enhancement-proposals/pull/103#is...
Papermill is one tool for running Jupyter notebooks as reports, with the date in the filename. https://papermill.readthedocs.io/en/latest/
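A sketch of that dated-filename pattern (the file names are made up; the commented papermill call assumes the package is installed):

```python
from datetime import date

def dated_report_path(name, when=None):
    """Build an output path like reports/sales-2024-01-31.ipynb."""
    when = when or date.today()
    return f"reports/{name}-{when.isoformat()}.ipynb"

# With papermill installed, today's report would be produced with:
# import papermill as pm
# pm.execute_notebook("sales.ipynb", dated_report_path("sales"))
```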
-
JupyterLab 4.0
You may be interested in papermill to address the parametrized analysis problem [1]. I think (but I'm not positive) this is what the data team at a previous job used to automate running notebooks for all sorts of nightly reports.
[1] https://papermill.readthedocs.io/en/latest/#
-
Show HN: Mercury – convert Jupyter Notebooks to Web Apps without code rewriting
I'm using Papermill to operationalize notebooks (https://github.com/nteract/papermill); for example, it also has Airflow support. I'm really happy with Papermill for automatic notebook execution. In my field it's nice that we can go very quickly from analysis to operations, while having super-transparent "logging" in the executed notebooks.
-
What's the best thing/library you learned this year ?
papermill, bcpandas, fastapi
-
Does the Jupyter API allow using Jupyter from the CL?
But you can execute your notebook using `jupyter run` or papermill.
-
Running Jupyter notebooks in parallel
As a first option, we will use Papermill, which has a Python API that lets us execute different notebooks programmatically:
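One way to sketch that parallel setup is to fan the notebooks out across worker processes (the notebook names here are hypothetical, and the commented execution step assumes papermill is installed):

```python
from concurrent.futures import ProcessPoolExecutor

def output_name(nb):
    """Derive the executed notebook's filename from the input name."""
    stem, ext = nb.rsplit(".", 1)
    return f"{stem}-output.{ext}"

def run(nb):
    # assumes papermill is installed
    import papermill as pm
    pm.execute_notebook(nb, output_name(nb))
    return output_name(nb)

notebooks = ["clean.ipynb", "train.ipynb", "report.ipynb"]

# Each notebook runs in its own process, so they execute concurrently:
# with ProcessPoolExecutor(max_workers=3) as pool:
#     executed = list(pool.map(run, notebooks))
```

Processes (rather than threads) are a reasonable choice here, since each notebook execution spawns its own kernel and is largely independent.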
-
Tips for using Jupyter Notebooks with GitHub
Papermill can also target cloud storage outputs for hosting rendered notebooks, execute notebooks from custom Python code, and even be used within distributed data pipelines like Dagster (see Dagstermill). For more information, see the papermill documentation.
-
Three Tools for Executing Jupyter Notebooks
Papermill Source Code
What are some alternatives?
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
nbconvert - Jupyter Notebook Conversion
dagster - An orchestration platform for the development, production, and observation of data assets.
airflow-notebook - This repository is no longer maintained.
dvc - 🦉 ML Experiments and Data Management with Git
nbdev - Create delightful software with Jupyter Notebooks
argo - Workflow Engine for Kubernetes
voila - Voilà turns Jupyter notebooks into standalone web applications
MLflow - Open source platform for the machine learning lifecycle
jupytext - Jupyter Notebooks as Markdown Documents, Julia, Python or R scripts
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows