| | luigi | orchest |
|---|---|---|
| Mentions | 14 | 44 |
| Stars | 17,327 | 4,022 |
| Growth | 0.5% | 0.1% |
| Activity | 6.3 | 4.5 |
| Last commit | 9 days ago | 11 months ago |
| Language | Python | TypeScript |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
luigi
-
Ask HN: What is the correct way to deal with pipelines?
I agree there are many options in this space. Two others to consider:
- https://airflow.apache.org/
- https://github.com/spotify/luigi
There are also many Kubernetes-based options out there. For the specific use case you specified, you might even consider a plain old Makefile and incrond if you expect these all to run on a single host and be triggered by a new file showing up in a directory…
-
In the context of Python what is a Bob Job?
Maybe if your use case is "smallish" and doesn't require the whole studio suite you could check out apscheduler for doing python "tasks" on a schedule and luigi to build pipelines.
-
Lessons Learned from Running Apache Airflow at Scale
What are you trying to do? Distributed scheduler with a single instance? No database? Are you sure you don't just mean "a scheduler" à la Luigi? https://github.com/spotify/luigi
-
Apache Airflow. How to make the complex workflow as an easy job
It's good to know that Airflow is not the only one on the market. There are Dagster, Spotify's Luigi, and others, but they have different pros and cons, so be sure to investigate the market thoroughly to choose the tool best suited to your tasks.
-
DevOps Fundamentals for Deep Learning Engineers
MLOps is a HUGE area to explore, and not surprisingly, there are many startups showing up in this space. If you want to get in on the latest trends, then I would look at workflow orchestration frameworks such as Metaflow (started off at Netflix, is now spinning off into its own enterprise business, https://metaflow.org/), Kubeflow (used at Google, https://www.kubeflow.org/), Airflow (used at Airbnb, https://airflow.apache.org/), and Luigi (used at Spotify, https://github.com/spotify/luigi). Then you have the model serving itself, so there is Seldon (https://www.seldon.io/), Torchserve (https://pytorch.org/serve/), and TensorFlow Serving (https://www.tensorflow.org/tfx/guide/serving). You also have the actual export and transfer of DL models, and ONNX is the most popular here (https://onnx.ai/). Spark (https://spark.apache.org/) still holds up nicely after all these years, especially if you are doing batch predictions on massive amounts of data. There is also the GitFlow way of doing things, and Data Version Control (DVC, https://dvc.org/) has taken pole position there.
-
Data pipelines with Luigi
At Wonderflow we're doing a lot of ML / NLP using Python, and recently we've been enjoying writing data pipelines with Spotify's Luigi.
- Noobie who is trying to use K8s needs confirmation to know if this is the way or he is overestimating Kubernetes.
-
Open Source ETL Project For Startups
💡【About Luigi】 https://github.com/spotify/luigi Luigi has been developed at Spotify since 2012; it's open source and mainly used for getting data insights: recommendations, toplists, A/B test analysis, external reports, internal dashboards, etc.
- Resources/tutorials to help me learn about ETL?
-
Using Terraform to make my many side-projects 'pick up and play'
So to sum that up, I went from having nothing for my side-project set up in AWS to having a Kubernetes cluster with the basic metrics and dashboard, proper IAM-linked ServiceAccount support for a smooth IAM experience in K8s, and Luigi deployed so that I could then run a Luigi workflow using an ad-hoc run of a CronJob. That's quite remarkable to me. All that took hours to figure out and define when I first did it, over six months ago.
orchest
-
Decent low code options for orchestration and building data flows?
You can check out our OSS https://github.com/orchest/orchest
- Build ML workflows with Jupyter notebooks
-
Building container images in Kubernetes, how would you approach it?
The code example is part of our ELT/data pipeline tool called Orchest: https://github.com/orchest/orchest/
-
Launch HN: Patterns (YC S21) – A much faster way to build and deploy data apps
First want to say congrats to the Patterns team for creating a gorgeous looking tool. Very minimal and approachable. Massive kudos!
Disclaimer: we're building something very similar and I'm curious about a couple of things.
One of the questions our users have asked us often is how to minimize the dependence on "product specific" components/nodes/steps. For example, if you write CI for GitHub Actions you may use a bunch of GitHub Action references.
Looking at the `graph.yml` in some of the examples you shared you use a similar approach (e.g. patterns/openai-completion@v4). That means that whenever you depend on such components your automation/data pipeline becomes more tied to the specific tool (GitHub Actions/Patterns), effectively locking in users.
How are you helping users feel comfortable with that problem (I don't want to invest in something that's not portable)? It's something we've struggled with ourselves as we're expanding the "out of the box" capabilities you get.
Furthermore, would have loved to see this as an open source project. But I guess the second best thing to open source is some open source contributions and `dcp` and `common-model` look quite interesting!
For those who are curious, I'm one of the authors of https://github.com/orchest/orchest
-
Argo became a graduated CNCF project
Haven't tried it. In its favor, Argo is vendor neutral and is really easy to set up in a local k8s environment like Docker Desktop or minikube. If you already use k8s for configuration, service discovery, secret management, etc., it's dead simple to set up and use (avoiding having to learn a whole new workflow configuration language in addition to k8s). The big downside is that it doesn't have a visual DAG editor (although that might be a positive for engineers having to fix workflows written by non-programmers), but the relatively bare-metal nature of Argo means that it's fairly easy to use it as an underlying engine for a more opinionated or lower-code framework (orchest is a notable one out now).
- Ideas for infrastructure and tooling to use for frequent model retraining?
-
Looking for a mentor in MLOps. I am a lead developer.
If you'd like to try something for your data workflows that's vendor agnostic (k8s based) and open source, you can check out our project: https://github.com/orchest/orchest
-
Is there a good way to trigger data pipelines by event instead of cron?
You can find it here: https://github.com/orchest/orchest Convenience install script: https://github.com/orchest/orchest#installation
-
How do you deal with parallelising parts of an ML pipeline especially on Python?
We automatically provide container level parallelism in Orchest: https://github.com/orchest/orchest
-
Launch HN: Sematic (YC S22) â Open-source framework to build ML pipelines faster
For people in this thread interested in what this tool is an alternative to: Airflow, Luigi, Kubeflow, Kedro, Flyte, Metaflow, Sagemaker Pipelines, GCP Vertex Workbench, Azure Data Factory, Azure ML, Dagster, DVC, ClearML, Prefect, Pachyderm, and Orchest.
Disclaimer: author of Orchest https://github.com/orchest/orchest
What are some alternatives?
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
docker-airflow - Docker Apache Airflow
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
hookdeck-cli - Receive events (e.g. webhooks) in your development environment
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.
Dask - Parallel computing with task scheduling
label-studio - Label Studio is a multi-type data labeling and annotation tool with standardized output format
Pinball
Node RED - Low-code programming for event-driven applications