mc2
Apache Spark
| | mc2 | Apache Spark |
|---|---|---|
| Mentions | 8 | 101 |
| Stars | 291 | 38,320 |
| Growth | 0.7% | 1.1% |
| Activity | 0.7 | 10.0 |
| Latest commit | about 1 year ago | 6 days ago |
| Language | C++ | Scala |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mc2
- Intel deprecates SGX on Core series processors
  Analytics and ML on confidential data are some interesting server side use cases. See the MC2 open source project, for example: https://github.com/mc2-project/mc2
- How to Run Spark SQL on Encrypted Data
  Check out more blog posts on how to securely process data with the MC² Project. We would love your contributions ✋ and support ⭐! Please check out the GitHub repo to see how you can contribute. No contribution is too small.
- Show HN: MC² – Secure collaborative analytics and ML
- MC2: Secure Collaborative Analytics and ML
- Announcing MC²: Securely perform analytics and machine learning on confidential data
  MC2 is a platform for running secure analytics on data that stays encrypted even while in use. By doing so, the project also enables secure collaboration among multiple organizations, where individual data owners can use our platform to jointly analyze their collective data without revealing it to one another. To learn more and to see the individual projects’ documentation, visit our landing page.
- Secure collaborative analytics and ML on encrypted data using MC²
  Whoops. My bad: https://github.com/mc2-project/mc2
- [P] Secure collaborative analytics and ML on encrypted data using MC²
  Our team @ UC Berkeley has been working on a platform for secure analytics and machine learning called MC2, and today we are excited to announce the initial release v0.1 of the platform! With MC2, you can take encrypted data and run various analytics and machine learning workloads at near processor speeds, while keeping the data confidential. MC2 also enables secure collaboration: mutually distrustful data owners can jointly analyze or train models on their data without revealing that data to each other. GitHub: https://github.com/mc2-project/mc2
Apache Spark
- "xAI will open source Grok"
- Groovy 🎷 Cheat Sheet - 01 Say "Hello" from Groovy
  Recently I had to revisit the "JVM languages universe" again. Yes, language(s), plural! Java isn't the only language that uses the JVM. I previously used Scala, a JVM language, with Apache Spark for data engineering workloads, but that is for another post 😉.
- 🦿🛴Smarcity garbage reporting automation w/ ollama
  Consume the data into third-party software (such as OpenSearch, Apache Spark, or Apache Pinot) for analysis/data science, GIS systems (so you can put reports on a map), or any ticket management system.
- Go concurrency simplified. Part 4: Post office as a data pipeline
  Also, this knowledge applies to learning more about data engineering, as that field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc.
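The event-driven pipeline idea mentioned above can be sketched without any of those tools. A minimal sketch in plain Python, using a queue as the "event bus" between a producer and a consumer (the doubling transformation and the sentinel value are illustrative choices, not anything from the original post):

```python
import queue
import threading

def producer(q: queue.Queue) -> None:
    # Emit a stream of events, then a sentinel to signal completion.
    for i in range(5):
        q.put(i)
    q.put(None)

def consumer(q: queue.Queue, results: list) -> None:
    # React to each event as it arrives -- the event-driven part.
    while True:
        event = q.get()
        if event is None:
            break
        results.append(event * 2)  # a stand-in transformation

q: queue.Queue = queue.Queue()
results: list = []
t1 = threading.Thread(target=producer, args=(q,))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 2, 4, 6, 8]
```

Systems like Kafka, Spark, and Flink generalize exactly this shape: durable queues between stages, and consumers that react to events rather than polling batches.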
- Five Apache projects you probably didn't know about
  Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: the Zeta engine from SeaTunnel, or a wrapper around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features.
- Apache Spark VS quix-streams - a user suggested alternative
  2 projects | 7 Dec 2023
- Integrate PySpark Structured Streaming with confluent-kafka
  Apache Spark - https://spark.apache.org/
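As a rough sketch of what that integration looks like on the Spark side: the option keys below follow Spark's Structured Streaming + Kafka integration guide, while the broker address and topic name are placeholders. The actual reader call is shown in comments since it needs a running Spark cluster and Kafka broker.

```python
# Hedged sketch: Kafka source options for Spark Structured Streaming.
# Broker address and topic are assumptions; adjust for your deployment.
kafka_options = {
    "kafka.bootstrap.servers": "localhost:9092",  # Kafka broker(s)
    "subscribe": "events",                        # topic(s) to read
    "startingOffsets": "latest",                  # where to begin reading
}

# With pyspark installed and a broker reachable, the stream would be built as:
# from pyspark.sql import SparkSession
# spark = SparkSession.builder.appName("kafka-stream").getOrCreate()
# df = spark.readStream.format("kafka").options(**kafka_options).load()
# The resulting DataFrame exposes `key` and `value` as binary columns,
# so payloads are typically decoded with a cast to string plus a schema.
```

Note that the `spark-sql-kafka` connector package must also be on the classpath; it is not bundled with a stock PySpark install.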
- Spark – A micro framework for creating web applications in Kotlin and Java
  A JVM based framework named "Spark", when https://spark.apache.org exists?
- Rest in Peas: The Unrecognized Death of Speech Recognition (2010)
- PySpark SparkSession Builder with Kubernetes Master
  I recently saw a pull request merged into the apache/spark repository that apparently adds initial Python bindings for Spark on Kubernetes. I posted a comment on the PR asking how to use spark-on-k8s from a Python Jupyter notebook, and was told to ask my question here.
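For reference, pointing a SparkSession at a Kubernetes master mostly comes down to the `k8s://` master URL plus a few `spark.kubernetes.*` settings. A minimal sketch, where the API server address and container image are placeholders (the config keys themselves come from Spark's Kubernetes documentation):

```python
# Hedged sketch: configuration for running PySpark against a Kubernetes master.
# Host, port, and image name are assumptions -- substitute your own values.
conf = {
    "spark.master": "k8s://https://kubernetes.example.com:6443",
    "spark.kubernetes.container.image": "my-registry/spark-py:latest",
    "spark.executor.instances": "2",
}

# With pyspark installed and the cluster reachable, the session would be:
# from pyspark.sql import SparkSession
# builder = SparkSession.builder.appName("pyspark-on-k8s")
# for key, value in conf.items():
#     builder = builder.config(key, value)
# spark = builder.getOrCreate()
```

From a Jupyter notebook this runs the driver locally (client mode), so the notebook host must be routable from the executor pods.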
What are some alternatives?
delphi - A Cryptographic Inference Service for Neural Networks
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
opaque-sql - An encrypted data analytics platform
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
secure-xgboost - Secure collaborative training and inference for XGBoost.
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
cerebro - Cerebro: A platform for Secure Coopetitive Learning
Scalding - A Scala API for Cascading
secure-aggregation - Secure aggregation for federated learning using enclaves
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
federated-xgboost - Federated gradient boosted decision tree learning
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.