| | Apache Spark | luigi |
|---|---|---|
| Mentions | 121 | 14 |
| Stars | 41,083 | 18,270 |
| Growth | 0.6% | 0.5% |
| Activity | 10.0 | 8.7 |
| Latest commit | 6 days ago | 22 days ago |
| Language | Scala | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Apache Spark
-
Every Database Will Support Iceberg — Here's Why
Apache Iceberg defines a table format that separates how data is stored from how data is queried. Any engine that implements the Iceberg integration — Spark, Flink, Trino, DuckDB, Snowflake, RisingWave — can read and/or write Iceberg data directly.
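As a minimal PySpark sketch of that read/write path: this assumes the iceberg-spark runtime package matching your Spark version is on the classpath, and it uses a local Hadoop-style catalog at `/tmp/iceberg` purely for illustration.

```python
# Minimal sketch: writing and reading an Iceberg table from PySpark.
# Assumes the iceberg-spark runtime JAR for your Spark version is on the classpath;
# the local Hadoop catalog path below is an illustrative placeholder.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-demo")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg")
    .getOrCreate()
)

# Create an Iceberg table and append a couple of rows.
spark.sql("CREATE TABLE IF NOT EXISTS local.db.events (id BIGINT, name STRING) USING iceberg")
spark.createDataFrame([(1, "click"), (2, "view")], ["id", "name"]) \
    .writeTo("local.db.events").append()

# Any other Iceberg-aware engine (Trino, Flink, DuckDB, ...) could now read the same table files.
spark.table("local.db.events").show()
```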
-
How to Reduce Big Data Analytics Costs by 90% with Karpenter and Spark
Apache Spark powers large-scale data analytics and machine learning, but as workloads grow exponentially, traditional static resource allocation leads to 30–50% resource waste due to idle Executors and suboptimal instance selection.
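As a rough illustration of the dynamic-allocation side of that problem, the sketch below turns on Spark's built-in dynamic allocation so idle executors are released instead of sitting reserved; the bounds and timeout values are placeholders, not recommendations.

```python
# Illustrative sketch only: enabling Spark dynamic allocation so idle executors
# get released. The numeric values are placeholders for demonstration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("dynamic-allocation-demo")
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "1")
    .config("spark.dynamicAllocation.maxExecutors", "50")
    .config("spark.dynamicAllocation.executorIdleTimeout", "60s")
    # On Kubernetes (e.g. with Karpenter-provisioned nodes), shuffle tracking
    # lets executors be removed without an external shuffle service.
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .getOrCreate()
)
```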
-
Apache Spark VS cocoindex - a user suggested alternative
2 projects | 1 Apr 2025
-
Unveiling the Apache License 2.0: A Deep Dive into Open Source Freedom
One of the key attributes of Apache License 2.0 is its flexible nature. Permitting use in both proprietary and open source environments, it has become the go-to choice for innovative projects ranging from the Apache HTTP Server to large-scale initiatives like Apache Spark and Hadoop. This flexibility is not solely legal; it is also philosophical. The license is designed to encourage transparency and maintain a healthy balance between freedom and accountability, ultimately making it easier for developers to adapt and contribute without restrictive legal barriers. Another modern twist discussed in the article is the concept of dual licensing. Dual licensing can offer an attractive method for additional commercial exploitation while still upholding open source principles. However, as the article cautions, dual licensing involves legal intricacy and demands rigor in managing Contributor License Agreements (CLAs), a challenge that the open source community navigates with ongoing debates. For developers looking to understand similar innovative approaches to licensing, further information can be explored at License Token.
-
The Application of Java Programming In Data Analysis and Artificial Intelligence
-
Apache Spark: Revolutionizing Big Data with Sustainable Open Source Funding
Apache Spark isn’t just a framework for distributed data processing; it’s a rich ecosystem that includes libraries for machine learning, stream processing, and graph processing. A key aspect of Spark’s ecosystem is its reliance on community contributions. Developers from around the world collaborate on its GitHub repository, ensuring that Spark remains at the cutting edge of technology. The governance process, characterized by transparency and meritocracy, builds trust among contributors and sponsors alike. An essential component of Apache Spark’s model is its use of the Apache 2.0 license. This permissive license not only shields contributors with patent protection but also allows enterprises to integrate Spark into proprietary systems without legal hurdles. The license enables a free flow of innovation—companies can both use and contribute to Spark’s codebase, leading to enhancements that benefit the entire community. The funding mechanisms sustaining Apache Spark are as diverse as they are innovative. Corporate sponsorships play a significant role, with companies dedicating resources and finances to support ongoing development. Additionally, grant programs and community donations help maintain an ecosystem where improvements and new features are continuously shared with users worldwide. These sustainable funding practices ensure that Apache Spark can meet the demands of real-time analytics and high-volume data processing.
-
Automating Enhanced Due Diligence in Regulated Applications
If you're designing an event-based pipeline, you can use a data streaming tool like Kafka to process data as it's collected by the pipeline. For a setup that already has data stored, you can use tools like Apache Spark to batch process and clean it before moving ahead with the pipeline.
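A hedged sketch of what such a batch clean-up step might look like in PySpark; the bucket paths and column names here are hypothetical.

```python
# Hypothetical batch-cleaning step for data already landed in storage;
# the paths and column names are made up for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("edd-batch-clean").getOrCreate()

raw = spark.read.parquet("s3a://example-bucket/raw/customers/")

cleaned = (
    raw.dropDuplicates(["customer_id"])            # remove duplicate records
       .filter(F.col("customer_id").isNotNull())   # drop rows missing the key field
       .withColumn("name", F.trim(F.col("name")))  # normalize whitespace
)

cleaned.write.mode("overwrite").parquet("s3a://example-bucket/clean/customers/")
```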
-
Run PySpark Local Python Windows Notebook
PySpark is the Python API for Apache Spark, an open-source distributed computing system that enables fast, scalable data processing. PySpark allows Python developers to leverage the powerful capabilities of Spark for big data analytics, machine learning, and data engineering tasks without needing to delve into the complexities of Java or Scala.
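For context, a minimal local-mode session of the kind such a notebook setup usually starts with; this assumes `pip install pyspark` and a JDK available on the PATH.

```python
# A minimal local-mode PySpark session for a local notebook.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[*]")          # use all local cores instead of a cluster
    .appName("local-notebook")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
df.groupBy("name").count().show()

spark.stop()
```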
- Infrastructure for data analysis with Jupyter, Cassandra, Pyspark and Docker
- His Startup Is Now Worth $62B. It Gave Away Its First Product Free
luigi
-
Ask HN: What is the correct way to deal with pipelines?
I agree there are many options in this space. Two others to consider:
- https://airflow.apache.org/
- https://github.com/spotify/luigi
There are also many Kubernetes-based options out there. For the specific use case you described, you might even consider a plain old Makefile and incrond if you expect these all to run on a single host and be triggered by a new file showing up in a directory…
-
In the context of Python what is a Bob Job?
Maybe if your use case is “smallish” and doesn’t require the whole studio suite, you could check out apscheduler for running Python “tasks” on a schedule and luigi to build pipelines.
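A rough sketch of the APScheduler side of that suggestion (APScheduler 3.x API); the job body and interval are placeholders.

```python
# Sketch: run a Python task on a fixed interval with APScheduler.
from apscheduler.schedulers.blocking import BlockingScheduler

def refresh_report():
    # Hypothetical task body; this could also kick off a Luigi pipeline run.
    print("refreshing report...")

scheduler = BlockingScheduler()
scheduler.add_job(refresh_report, "interval", minutes=15)
scheduler.start()  # blocks and runs the job every 15 minutes
```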
-
Lessons Learned from Running Apache Airflow at Scale
What are you trying to do? Distributed scheduler with a single instance? No database? Are you sure you don't just mean "a scheduler" ala Luigi? https://github.com/spotify/luigi
-
Apache Airflow. How to make the complex workflow as an easy job
It's good to know that Airflow is not the only option on the market. There are Dagster, Spotify's Luigi, and others, but they have different pros and cons, so be sure to investigate the market properly to choose the tool best suited to your tasks.
-
DevOps Fundamentals for Deep Learning Engineers
MLOps is a HUGE area to explore, and not surprisingly, there are many startups showing up in this space. If you want to get in on the latest trends, then I would look at workflow orchestration frameworks such as Metaflow (started at Netflix, now spinning off into its own enterprise business, https://metaflow.org/), Kubeflow (used at Google, https://www.kubeflow.org/), Airflow (used at Airbnb, https://airflow.apache.org/), and Luigi (used at Spotify, https://github.com/spotify/luigi). Then you have the model serving itself, so there is Seldon (https://www.seldon.io/), Torchserve (https://pytorch.org/serve/), and TensorFlow Serving (https://www.tensorflow.org/tfx/guide/serving). You also have the actual export and transfer of DL models, and ONNX is the most popular here (https://onnx.ai/). Spark (https://spark.apache.org/) still holds up nicely after all these years, especially if you are doing batch predictions on massive amounts of data. There is also the GitFlow way of doing things, and Data Version Control (DVC, https://dvc.org/) has taken pole position there.
-
Data pipelines with Luigi
At Wonderflow we're doing a lot of ML / NLP work in Python, and recently we've been enjoying writing data pipelines with Spotify's Luigi.
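For a sense of what such a pipeline looks like, here is a minimal two-task Luigi sketch; the task names and file names are made up for illustration.

```python
# Minimal two-task Luigi pipeline: one task produces a file, the next consumes it.
import luigi

class ExtractData(luigi.Task):
    def output(self):
        return luigi.LocalTarget("raw.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write("hello\nworld\n")

class CountLines(luigi.Task):
    def requires(self):
        return ExtractData()  # declares the upstream dependency

    def output(self):
        return luigi.LocalTarget("line_count.txt")

    def run(self):
        with self.input().open("r") as src, self.output().open("w") as dst:
            dst.write(str(sum(1 for _ in src)))

if __name__ == "__main__":
    # local_scheduler=True runs the DAG without a central luigid instance.
    luigi.build([CountLines()], local_scheduler=True)
```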
- Noobie who is trying to use K8s needs confirmation to know if this is the way or he is overestimating Kubernetes.
-
Open Source ETL Project For Startups
💡【About Luigi】 https://github.com/spotify/luigi Luigi has been developed at Spotify since 2012; it's open source and mainly used for getting data insights by powering recommendations, top lists, A/B test analysis, external reports, internal dashboards, etc.
- Resources/tutorials to help me learn about ETL?
-
Using Terraform to make my many side-projects 'pick up and play'
So to sum that up, I went from having nothing for my side-project set up in AWS to having a Kubernetes cluster with the basic metrics and dashboard, proper IAM-linked ServiceAccount support for a smooth IAM experience in K8s, and Luigi deployed so that I could then run a Luigi workflow via an ad-hoc run of a CronJob. That's quite remarkable to me. All that took hours to figure out and define when I first did it, over six months ago.
What are some alternatives?
Smile - Statistical Machine Intelligence & Learning Engine
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
Scalding - A Scala API for Cascading
streamparse - Run Python in Apache Storm topologies. Pythonic API, CLI tooling, and a topology DSL.