s3-sqs-connector vs Apache Spark
| | s3-sqs-connector | Apache Spark |
|---|---|---|
| Mentions | 6 | 101 |
| Stars | 16 | 38,378 |
| Growth | - | 0.6% |
| Activity | 0.0 | 10.0 |
| Latest commit | 12 days ago | 5 days ago |
| Language | Scala | Scala |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
s3-sqs-connector
- Upload to S3 -> AWS lambda with some Scala Spark code -> Process -> Write back to S3
Are you planning on uploading and processing many files to S3? If so, I would use something like Structured Streaming with the FileSource, which can detect new files uploaded to S3 and process them on a "standard" Spark cluster. You can then build a cluster on EKS/Kubernetes that is very easy to deploy and operate. I would check out https://github.com/qubole/s3-sqs-connector once the number of files you upload starts to get really large. Glue could also be used to achieve roughly the same thing, without the hassle of managing the EKS/K8s clusters.
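For reference, here is a minimal sketch (in Scala) of the file-source approach described above; the bucket, prefixes, and schema are placeholders, not part of the connector:

```scala
// Minimal sketch: Structured Streaming with the file source watching an
// S3 prefix. Bucket names, prefixes, and the schema are placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

object S3FileStreamDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("s3-file-source-demo")
      .getOrCreate()

    // Streaming file reads require an explicit schema up front.
    val schema = new StructType()
      .add("id", LongType)
      .add("payload", StringType)

    // Each new object that lands under the prefix is picked up automatically.
    val events = spark.readStream
      .schema(schema)
      .json("s3a://my-bucket/incoming/") // hypothetical bucket/prefix

    val query = events.writeStream
      .format("parquet")
      .option("path", "s3a://my-bucket/processed/")         // hypothetical
      .option("checkpointLocation", "s3a://my-bucket/chk/") // hypothetical
      .start()

    query.awaitTermination()
  }
}
```

Note that the file source discovers new objects by listing the prefix, which is exactly the overhead the SQS-based connector is designed to avoid once file counts get large.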
Apache Spark
- "xAI will open source Grok"
- Groovy 🎷 Cheat Sheet - 01 Say "Hello" from Groovy
Recently I had to revisit the "JVM languages universe" again. Yes, languages, plural! Java isn't the only language that runs on the JVM. I previously used Scala, another JVM language, with Apache Spark for data engineering workloads, but that's for another post 😉.
- 🦿🛴Smarcity garbage reporting automation w/ ollama
Consume the data into third-party software for analysis/data science (then let Open Search, Apache Spark, or Apache Pinot handle it), GIS systems (so you can put reports on a map), or any ticket-management system.
- Go concurrency simplified. Part 4: Post office as a data pipeline
Also, this knowledge applies to learning more about data engineering, as that field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc.
- Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: SeaTunnel's own Zeta engine, or wrappers around Apache Spark and Apache Flink. Be careful, as each engine comes with its own set of features.
- Apache Spark VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
- Integrate Pyspark Structured Streaming with confluent-kafka
Apache Spark - https://spark.apache.org/
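The post itself covers PySpark with confluent-kafka; as a rough illustration, the equivalent Kafka source read in Scala might look like the sketch below. The broker, topic, and checkpoint path are placeholders, and the spark-sql-kafka-0-10 package is assumed to be on the classpath:

```scala
// Minimal sketch: consuming a Kafka topic with Structured Streaming.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("kafka-stream-demo")
  .getOrCreate()

val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
  .option("subscribe", "events")                       // placeholder topic
  .load()

// Kafka delivers key/value as binary columns; cast them to strings here.
val messages = stream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

val query = messages.writeStream
  .format("console")
  .option("checkpointLocation", "/tmp/kafka-chk") // placeholder path
  .start()

query.awaitTermination()
```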
- Spark – A micro framework for creating web applications in Kotlin and Java
A JVM-based framework named "Spark", when https://spark.apache.org exists?
- Rest in Peas: The Unrecognized Death of Speech Recognition (2010)
- PySpark SparkSession Builder with Kubernetes Master
I recently saw a pull request that was merged into the apache/spark repository that apparently adds initial Python bindings for PySpark on K8s. I posted a comment on the PR asking how to use spark-on-k8s in a Python Jupyter notebook, and was told to ask my question here.
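For illustration only, here is a sketch in Scala of pointing a SparkSession at a Kubernetes master (the PySpark builder accepts the same master URL and conf keys); the API server URL and container image below are placeholders:

```scala
// Minimal sketch: SparkSession configured for a Kubernetes master.
// In practice these settings are often passed via spark-submit --conf flags.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("spark-on-k8s-demo")
  .master("k8s://https://kubernetes.example.com:6443")       // placeholder API server
  .config("spark.kubernetes.container.image", "spark:3.5.0") // placeholder image
  .config("spark.executor.instances", "2")
  .getOrCreate()

spark.range(100).count() // trivial job to exercise the remote executors
spark.stop()
```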
What are some alternatives?
Jupyter Scala - A Scala kernel for Jupyter
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
deequ - Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
LearningSparkV2 - This is the github repo for Learning Spark: Lightning-Fast Data Analytics [2nd Edition]
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Spark Utils - Basic framework utilities to quickly start writing production ready Apache Spark applications
Scalding - A Scala API for Cascading
mmlspark - Simple and Distributed Machine Learning [Moved to: https://github.com/microsoft/SynapseML]
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing