Top 23 Scala Big Data Projects
Apache Spark - A unified analytics engine for large-scale data processing.
Project mention: Why should I invest in raptoreum? What makes it different | reddit.com/r/raptoreum | 2021-09-25
For your first question, if you are interested I encourage you to read the smart contracts paper here: https://docs.raptoreum.com/_media/Raptoreum_Contracts_EN.pdf and then to dig into what Apache Spark can do here: https://spark.apache.org/
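Spark's core abstraction is a distributed collection whose API deliberately mirrors Scala's own collections, which is part of why it pairs so naturally with Scala. As a sketch (plain Scala collections, not Spark itself), the classic word count looks almost identical to its Spark RDD counterpart; swapping `lines` for `sc.textFile(...)` and keeping the same transformation chain gives the Spark version:

```scala
// Word count over a plain Scala collection. In Spark, `lines` would be an
// RDD from sc.textFile(...), but the map/group/count chain is the same shape.
val lines = Seq("spark makes big data simple", "big data at scale")

val counts: Map[String, Int] =
  lines
    .flatMap(_.split("\\s+"))              // split each line into words
    .groupBy(identity)                     // group equal words together
    .map { case (w, ws) => (w, ws.size) }  // count each group
```

Here `counts("big")` and `counts("data")` both come out to 2, since those words appear in both lines.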
CMAK - A tool for managing Apache Kafka clusters.
Project mention: We tried to make monitoring tool for Kafka | reddit.com/r/apachekafka | 2021-07-22
BigDL - Distributed Deep Learning Framework for Apache Spark.
Project mention: Machine learning on JVM | reddit.com/r/scala | 2021-04-05
Intel's BigDL, a deep learning framework that runs on Spark.
Delta Lake - An open-source storage layer that brings scalable, ACID transactions to Apache Spark™ and big data workloads. (by delta-io)
Project mention: SnowFlake vs DataBricks lakehouse or both together | reddit.com/r/datascience | 2021-08-31
There have also been huge strides in data lake tech: data lakes now support ACID transactions through Delta, which enables useful features like rolling back through a transaction log. Once Delta Live Tables (DLT) comes out of preview, you can also use it to track your data lineage in the lake itself.
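The "rolling back through a transaction log" idea is easy to see in miniature: if every write appends a new table version instead of mutating data in place, any past version can still be read. A toy versioned table in plain Scala (illustrative only; Delta Lake's real log stores file-level deltas plus checkpoints, but the reader-facing "time travel" idea is the same):

```scala
// Minimal append-only "transaction log": each commit records a full snapshot,
// so reading version N is just indexing into the log.
final case class VersionedTable[A](log: Vector[Vector[A]] = Vector(Vector.empty[A])) {
  def commit(rows: Vector[A]): VersionedTable[A] =
    copy(log = log :+ (log.last ++ rows))           // append a new version
  def latest: Vector[A] = log.last                   // current table state
  def asOf(version: Int): Vector[A] = log(version)   // "time travel" read
}

val t0 = VersionedTable[String]()
val t1 = t0.commit(Vector("alice", "bob"))
val t2 = t1.commit(Vector("carol"))
// t2.latest sees all three rows; t2.asOf(1) still sees only the first commit.
```

Because commits never destroy earlier versions, "rollback" is simply reading (or re-committing) an older version from the log.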
Scalding - A Scala API for Cascading.
Scio - A Scala API for Apache Beam and Google Cloud Dataflow.
Project mention: ELT, Data Pipeline | dev.to | 2021-01-01
To counter the problem described above, we decided to move our data to a Pub/Sub-based stream model, where we would continue to push data as it arrives. As fluentd is the primary tool used on all our servers to gather data, rather than replacing it we leveraged its plugin architecture, using a plugin to stream data into a sink of our choosing. Our initial inclination was towards Google Pub/Sub and Google Dataflow, since our data scientists and engineers use BigQuery extensively and keeping the data in the same cloud made sense. The inspiration for using these tools came from Spotify's Event Delivery – The Road to the Cloud.

We set this up on one of our staging servers with Google Pub/Sub and Dataflow. Neither really worked out for us. The Pub/Sub model requires a subscriber to be attached to the topic a publisher streams messages to, otherwise the messages are not stored; on top of that, there was no way to see which messages were arriving. The strangest thing we encountered was that the topic would be orphaned, losing its subscribers, when working with Dataflow.

Pub/Sub we might have managed to live with; the wall in our path was Dataflow. We started off using Scio from Spotify to work with Dataflow, but there is a considerable lack of documentation around it, and we found the community to be very reserved on GitHub, something quite evident in the world of Scala, for which they came up with a Code of Conduct for their user base to follow. One thing we required from Dataflow was support for batched writes to GCS. After trying our hand at Dataflow without success, Google's staff on Stack Overflow were quite responsive, and their answer confirmed that this was not available in Dataflow; the supported option was streaming data into BigQuery, Datastore or Bigtable as a datastore.
The reason we didn't do that was to avoid the high streaming costs these services would incur for storing data, since the majority of our data team's jobs are based on batched hourly data. The initial proposal for the updated pipeline is shown below.
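The batching requirement in the account above, turning a continuous event stream into hourly batches before writing them out to GCS, reduces to bucketing events by the hour of their timestamp. A minimal sketch in plain Scala (the `Event` shape and field names are hypothetical, not from the original pipeline):

```scala
import java.time.Instant
import java.time.temporal.ChronoUnit

// Hypothetical event: a payload plus its arrival time.
final case class Event(payload: String, at: Instant)

// Bucket events by the hour they arrived in; each bucket would become one
// batch object written to GCS (e.g. one file per hour partition).
def byHour(events: Seq[Event]): Map[Instant, Seq[Event]] =
  events.groupBy(e => e.at.truncatedTo(ChronoUnit.HOURS))

val buckets = byHour(Seq(
  Event("a", Instant.parse("2021-01-01T13:05:00Z")),
  Event("b", Instant.parse("2021-01-01T13:59:00Z")),
  Event("c", Instant.parse("2021-01-01T14:00:01Z"))
))
// Two buckets: the 13:00 hour holds two events, the 14:00 hour holds one.
```

A real pipeline would accumulate each bucket until its hour closes and then flush it as a single write, which is exactly the batch-to-GCS behaviour the team wanted from Dataflow.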
Summingbird - Streaming MapReduce with Scalding and Storm.
Almond - A Scala kernel for Jupyter.
Project mention: EDA libraries for Scala and Spark? | reddit.com/r/scala | 2021-06-23
What about https://github.com/alexarchambault/plotly-scala and https://almond.sh/
Alpakka Kafka connector - Alpakka is a Reactive Enterprise Integration library for Java and Scala, based on Reactive Streams and Akka.
CPU and GPU-accelerated Machine Learning Library
Apache Gearpump - Lightweight real-time big data streaming engine over Akka.
The missing MatPlotLib for Scala + Spark (by vegas-viz)
Real Time Analytics and Data Pipelines based on Spark Streaming (by Stratio)
Scoobi - A Scala productivity framework for Hadoop. (by NICTA)
Metorikku - A simplified, lightweight ETL framework based on Apache Spark.
C4E, a JVM-friendly library written in Scala for both local and distributed (Spark) clustering.
Schema registry for CSV, TSV, JSON, AVRO and Parquet schema. Supports schema inference and GraphQL API.
Scala DSL on top of Oozie XML
Deploy a Spark cluster in an easy way.
Spark package to "plug" holes in data using SQL based rules ⚡️ 🔌
Basic framework utilities to quickly start writing production ready Apache Spark applications
Scala library for accessing various file and batch systems, job schedulers and grid middleware.
Executable Apache Spark Tools: Format Converter & SQL Processor
What are some of the best open-source Big Data projects in Scala? This list should help you find them.