build-server-protocol vs Apache Spark

| | build-server-protocol | Apache Spark |
|---|---|---|
| Mentions | 3 | 108 |
| Stars | 452 | 39,471 |
| Growth | 1.6% | 0.6% |
| Activity | 7.7 | 10.0 |
| Latest commit | 2 days ago | 1 day ago |
| Language | Scala | Scala |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
build-server-protocol

Build Server Protocol - an LSP-like protocol for "compile, run, test, debug and more"

https://build-server-protocol.github.io/
https://github.com/build-server-protocol/build-server-protocol
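BSP is wire-compatible with LSP: messages are JSON-RPC 2.0 framed with Content-Length headers, and a session starts with a `build/initialize` request from the client. Below is a minimal sketch of that first message, assuming a hypothetical client name and workspace path (all field values are illustrative, not lifted from the spec):

```scala
// Minimal sketch of a BSP handshake message. BSP reuses LSP's base
// protocol: a Content-Length header, a blank line, then a JSON-RPC body.
// The params shown (displayName, rootUri, capabilities) use illustrative values.
object BspInitializeSketch {
  def main(args: Array[String]): Unit = {
    val body =
      """{"jsonrpc":"2.0","id":1,"method":"build/initialize","params":{
        |  "displayName":"my-client",
        |  "version":"0.1.0",
        |  "bspVersion":"2.1.0",
        |  "rootUri":"file:///path/to/workspace",
        |  "capabilities":{"languageIds":["scala"]}
        |}}""".stripMargin
    // Frame the body exactly as a build server expects to read it on stdin.
    print(s"Content-Length: ${body.getBytes("UTF-8").length}\r\n\r\n$body")
  }
}
```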
Apache Spark

- Why is Apache Spark's RDD immutable?

Apache Spark is a powerful and widely used framework for distributed data processing, beloved for its efficiency and scalability. At the heart of Spark’s magic lies the RDD, an abstraction that is more than a mere data collection. In this blog post, we’ll explore why RDDs are immutable and the benefits this immutability provides in the context of Apache Spark.
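That immutability is visible directly in the API: a transformation such as `map` never modifies the RDD it is called on, it returns a new RDD whose lineage records the transformation. A minimal sketch (the local-mode SparkSession and sample numbers are illustrative):

```scala
import org.apache.spark.sql.SparkSession

object RddImmutability {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("rdd-immutability")
      .master("local[*]") // illustrative: run locally for the demo
      .getOrCreate()
    val sc = spark.sparkContext

    val numbers = sc.parallelize(Seq(1, 2, 3, 4, 5))
    // map() does not mutate `numbers`; it returns a brand-new RDD
    // whose lineage records "numbers -> map(_ * 2)".
    val doubled = numbers.map(_ * 2)

    println(numbers.collect().mkString(", ")) // 1, 2, 3, 4, 5 (unchanged)
    println(doubled.collect().mkString(", ")) // 2, 4, 6, 8, 10

    spark.stop()
  }
}
```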
- Spark SQL is getting pipe syntax
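Pipe syntax lets a SQL query read top-to-bottom as a chain of `|>` operators instead of inside-out nested subqueries. A sketch of what that could look like from Scala, assuming a Spark version that ships the feature and a hypothetical `orders` table (operator spellings such as `AGGREGATE` follow the pipe-syntax proposal and may differ in your release):

```scala
import org.apache.spark.sql.SparkSession

object PipeSyntaxSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("pipe-syntax")
      .master("local[*]") // illustrative local session
      .getOrCreate()
    import spark.implicits._

    // Hypothetical input table for the demo.
    Seq(("eu", "shipped"), ("us", "shipped"), ("eu", "pending"))
      .toDF("region", "status")
      .createOrReplaceTempView("orders")

    // Each |> step consumes the previous step's output, so the query
    // reads in execution order rather than inside-out.
    spark.sql("""
      FROM orders
      |> WHERE status = 'shipped'
      |> AGGREGATE COUNT(*) AS n GROUP BY region
      |> ORDER BY n DESC
    """).show()

    spark.stop()
  }
}
```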
- Intro to Ray on GKE

The Python library components of Ray could be considered analogous to solutions like numpy, scipy, and pandas (with pandas being the closest analogue to the Ray Data library specifically). As a framework and distributed computing solution, Ray could be used in place of a tool like Apache Spark or Python Dask. It’s also worth noting that Ray Clusters can be used as a distributed computing solution within Kubernetes, as we’ve explored here, but they can also be created independently of Kubernetes.
- Avoid These Top 10 Mistakes When Using Apache Spark

We all know how easy it is to overlook small parts of our code, especially when we have powerful tools like Apache Spark to handle the heavy lifting. Spark's core engine is great at optimizing our messy, complex code into a sleek, efficient physical plan. But here's the catch: Spark isn't flawless. It's on a journey to perfection, sure, but it still has its limits. And Spark is upfront about those limitations, listing them out in the documentation (sometimes as little notes).
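A habit that catches several of those documented pitfalls early is checking the physical plan Spark actually generated rather than trusting the code as written. A minimal sketch using the DataFrame `explain` API (the sample data and filter are illustrative):

```scala
import org.apache.spark.sql.SparkSession

object InspectPlan {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("inspect-plan")
      .master("local[*]") // illustrative local session
      .getOrCreate()
    import spark.implicits._

    val df = Seq((1, "a"), (2, "b"), (3, "a")).toDF("id", "tag")

    // "formatted" mode prints the optimized physical plan with per-operator
    // details, which is where pushed-down filters (or their absence) show up.
    df.filter($"tag" === "a")
      .groupBy($"tag")
      .count()
      .explain("formatted")

    spark.stop()
  }
}
```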
- IaaS vs PaaS vs SaaS: The Key Differences

One specific use case of the IaaS model is deploying software that would otherwise have been bought as SaaS. There is a lot of such software, from email servers to databases. You can choose to deploy MySQL on your own infrastructure rather than buying from a MySQL SaaS provider. Other things you can deploy using the IaaS model include Mattermost for team collaboration, Apache Spark for data analytics, and SAP for enterprise resource planning.
- How I've implemented the Medallion architecture using Apache Spark and Apache Hadoop

In this project, I'm exploring the Medallion Architecture, a data design pattern that organizes data into different layers based on structure and/or quality. I'm creating a fictional scenario: a large enterprise with several branches across the country. Each branch receives purchase orders from an app and delivers the goods to its customers. The enterprise wants to identify the branch that receives the most purchase requests and the branch with the minimum average delivery time. To achieve that, I've used Apache Spark as a distributed compute engine and Apache Hadoop, in particular HDFS, as my data storage layer. Apache Spark ingests, processes, and stores the app's data on HDFS to be served to a custom dashboard app. You can find out all about it in this GitHub repo.
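In Spark the layering reduces to a read, a transformation, and a write per layer, each targeting the next HDFS path. A condensed sketch of the bronze-to-silver-to-gold flow, under assumed paths and column names (`hdfs://namenode:8020/...`, `order_id`, `branch_id`, `delivery_hours` are all hypothetical):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object MedallionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("medallion").getOrCreate()

    // Bronze: raw app events landed on HDFS as-is (paths are illustrative).
    val bronze = spark.read.json("hdfs://namenode:8020/lake/landing/orders.json")
    bronze.write.mode("overwrite").parquet("hdfs://namenode:8020/lake/bronze/orders")

    // Silver: cleaned, deduplicated, conformed records (columns are illustrative).
    val silver = bronze
      .dropDuplicates("order_id")
      .filter(col("branch_id").isNotNull)
    silver.write.mode("overwrite").parquet("hdfs://namenode:8020/lake/silver/orders")

    // Gold: the two business aggregates the scenario asks for.
    val ordersPerBranch = silver.groupBy("branch_id").count()
    val avgDelivery = silver.groupBy("branch_id")
      .agg(avg("delivery_hours").as("avg_delivery_hours"))
    ordersPerBranch.write.mode("overwrite")
      .parquet("hdfs://namenode:8020/lake/gold/orders_per_branch")
    avgDelivery.write.mode("overwrite")
      .parquet("hdfs://namenode:8020/lake/gold/avg_delivery")

    spark.stop()
  }
}
```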
- Shades of Open Source - Understanding The Many Meanings of "Open"

In contrast, Databricks maintains internal forks of Spark, Delta Lake, and Unity Catalog, using the same names for both the open-source versions and the features specific to the Databricks platform. While they do provide separate documentation, online discussions often reflect confusion about how to use features in the open-source versions that only exist on the Databricks platform. This creates a "muddying of the waters" between what is open and what is proprietary. This isn't an issue if you are a Databricks user, but it can be quite confusing for those who want to use these tools outside of the Databricks ecosystem.
- "xAI will open source Grok"
- Groovy 🎷 Cheat Sheet - 01 Say "Hello" from Groovy

Recently I had to revisit the "JVM languages universe" again. Yes, language(s), plural! Java isn't the only language that runs on the JVM. I previously used Scala, a JVM language, to work with Apache Spark for data engineering workloads, but that's for another post 😉.
- 🦿🛴 Smart-city garbage reporting automation w/ ollama

Consume the data into third-party software for analysis/data science (e.g. OpenSearch, Apache Spark, or Apache Pinot), GIS systems (so you can put reports on a map), or any ticket-management system.
What are some alternatives?
seed - Build tool for Scala projects
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL
sbt-dependency-graph - sbt plugin to create a dependency graph for your project [Moved to: https://github.com/sbt/sbt-dependency-graph]
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
bazel-bsp - An implementation of the Build Server Protocol for Bazel
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Play - The Community Maintained High Velocity Web Framework For Java and Scala.
Scalding - A Scala API for Cascading
sbt - sbt, the interactive build tool
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
Mill - Mill is a fast JVM build tool that supports Java and Scala. 2-3x faster than Gradle and 5-10x faster than Maven for common workflows, Mill aims to make your project’s build process performant, maintainable, and flexible
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services