kukulcan vs Apache Spark

|  | kukulcan | Apache Spark |
| --- | --- | --- |
| Mentions | 1 | 105 |
| Stars | 115 | 39,259 |
| Growth | - | 0.6% |
| Activity | 0.0 | 10.0 |
| Latest commit | over 3 years ago | 4 days ago |
| Language | Scala | Scala |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kukulcan
- New version of the REPL for Apache Kafka is out! Check it out: https://github.com/mmolimar/kukulcan
Apache Spark
- Avoid These Top 10 Mistakes When Using Apache Spark
We all know how easy it is to overlook small parts of our code, especially when we have powerful tools like Apache Spark to handle the heavy lifting. Spark's core engine is great at optimizing our messy, complex code into a sleek, efficient physical plan. But here's the catch: Spark isn't flawless. It's on a journey to perfection, sure, but it still has its limits. And Spark is upfront about those limitations, listing them out in the documentation (sometimes as little notes).
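To see what the excerpt means by a physical plan, you can ask Spark to print the plan Catalyst produces for a query. Here is a minimal, self-contained sketch (the data and column names are made up for illustration):

```scala
import org.apache.spark.sql.SparkSession

object ExplainDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("explain-demo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // A deliberately redundant query: Catalyst will combine the two
    // filters into a single predicate in the physical plan.
    val df = Seq(("a", 1), ("b", 2), ("c", 3)).toDF("key", "value")
    val query = df.filter($"value" > 0).filter($"value" < 3).select("key")

    // Prints the parsed, analyzed, optimized, and physical plans.
    query.explain(true)

    spark.stop()
  }
}
```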
- IaaS vs PaaS vs SaaS: The Key Differences
One specific use case of the IaaS model is deploying software that would otherwise be bought as a SaaS. There is plenty of such software, from email servers to databases: you can choose to deploy MySQL on your own infrastructure rather than buying from a MySQL SaaS provider. Other things you can deploy using the IaaS model include Mattermost for team collaboration, Apache Spark for data analytics, and SAP for enterprise resource planning.
- How I've implemented the Medallion architecture using Apache Spark and Apache Hadoop
In this project, I'm exploring the Medallion architecture, a data design pattern that organizes data into layers based on structure and/or quality. I'm using a fictional scenario: a large enterprise with several branches across the country. Each branch receives purchase orders from an app and delivers the goods to its customers. The enterprise wants to identify the branch that receives the most purchase requests and the branch with the lowest average delivery time. To achieve that, I've used Apache Spark as a distributed compute engine and Apache Hadoop, in particular HDFS, as my data storage layer. Apache Spark ingests, processes, and stores the app's data on HDFS, where it is served to a custom dashboard app. You can find out all about it in this GitHub repo.
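As a rough idea of how those layers can look in Spark code, here is a minimal sketch (the HDFS paths, schema, and column names are hypothetical, not taken from the project):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object MedallionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("medallion-sketch")
      .getOrCreate()

    // Bronze: raw purchase orders from the app, landed on HDFS as-is.
    val bronze = spark.read.json("hdfs:///lake/bronze/purchase_orders")

    // Silver: cleaned, deduplicated, properly typed records.
    val silver = bronze
      .filter(col("order_id").isNotNull)
      .withColumn("ordered_at", to_timestamp(col("ordered_at")))
      .withColumn("delivered_at", to_timestamp(col("delivered_at")))
      .dropDuplicates("order_id")
    silver.write.mode("overwrite").parquet("hdfs:///lake/silver/purchase_orders")

    // Gold: per-branch aggregates answering the two business questions.
    val gold = silver.groupBy("branch_id").agg(
      count("order_id").as("purchase_requests"),
      avg(unix_timestamp(col("delivered_at")) - unix_timestamp(col("ordered_at")))
        .as("avg_delivery_seconds")
    )
    gold.write.mode("overwrite").parquet("hdfs:///lake/gold/branch_metrics")

    spark.stop()
  }
}
```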
- Shades of Open Source - Understanding The Many Meanings of "Open"
In contrast, Databricks maintains internal forks of Spark, Delta Lake, and Unity Catalog, using the same names for both the open-source versions and the features specific to the Databricks platform. While they do provide separate documentation, online discussions often reflect confusion about how to use features in the open-source versions that only exist on the Databricks platform. This creates a "muddying of the waters" between what is open and what is proprietary. This isn't an issue if you are a Databricks user, but it can be quite confusing for those who want to use these tools outside of the Databricks ecosystem.
- "xAI will open source Grok"
- Groovy 🎷 Cheat Sheet - 01 Say "Hello" from Groovy
Recently I had to revisit the "JVM languages universe" again. Yes, languages, plural! Java isn't the only language that runs on the JVM. I previously used Scala, another JVM language, with Apache Spark for data engineering workloads, but that's a story for another post 😉.
- 🦿🛴 Smart-city garbage reporting automation w/ ollama
Consume the data in third-party software such as OpenSearch, Apache Spark, or Apache Pinot for analysis/data science, in GIS systems (so you can put reports on a map), or in any ticket management system.
- Go concurrency simplified. Part 4: Post office as a data pipeline
Also, this knowledge applies to learning more about data engineering, since that field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc.
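For a taste of that event-driven style on the Spark side, here is a minimal Structured Streaming sketch that reads from a Kafka topic (the broker address and topic name are invented, and it assumes the spark-sql-kafka connector is on the classpath):

```scala
import org.apache.spark.sql.SparkSession

object KafkaPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-pipeline-sketch")
      .getOrCreate()

    // Subscribe to a (hypothetical) topic of events, like parcels
    // arriving at the post office.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "parcel-events")
      .load()

    // Kafka records arrive as binary key/value; decode them to strings.
    val decoded = events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Print to the console; a real pipeline would write to a durable sink.
    decoded.writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/parcel-events-checkpoint")
      .start()
      .awaitTermination()
  }
}
```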
- Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform built around the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: SeaTunnel's own Zeta engine, or wrappers around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features.
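For flavor, a SeaTunnel job is described in a HOCON config file with exactly those three pillars. The sketch below is modeled on the project's quickstart; connector names and options may vary between SeaTunnel versions:

```hocon
env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  # Built-in generator connector, handy for smoke tests.
  FakeSource {
    result_table_name = "fake"
    row.num = 16
    schema = {
      fields {
        name = "string"
        age = "int"
      }
    }
  }
}

transform {
  # Optional: SQL, field renaming, filtering, etc. would go here.
}

sink {
  # Print rows to stdout.
  Console {}
}
```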
- Apache Spark VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
What are some alternatives?
ammonite-spark - Run spark calculations from Ammonite
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL
kmq - Kafka-based message queue
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
kattlo-cli - Kattlo CLI Project
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Reactive-kafka - Alpakka Kafka connector - Alpakka is a Reactive Enterprise Integration library for Java and Scala, based on Reactive Streams and Akka.
Scalding - A Scala API for Cascading
librdkafka - The Apache Kafka C/C++ library
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing