Zeppelin
Apache Spark
| | Zeppelin | Apache Spark |
|---|---|---|
| Mentions | 8 | 101 |
| Stars | 6,248 | 38,104 |
| Growth | 0.4% | 1.1% |
| Activity | 8.7 | 10.0 |
| Latest commit | 7 days ago | 6 days ago |
| Language | Java | Scala |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Zeppelin
- Serverless Apache Zeppelin on AWS
Now we can proceed with the definition of Apache Zeppelin. It is a web-based notebook that enables data-driven, interactive data analytics and collaborative documents with Python, Scala, SQL, Spark, and more. You can execute code and even schedule a job (via cron) to run at regular intervals.
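As a concrete illustration (not from the original post), here is a minimal sketch of a Zeppelin paragraph using the PySpark interpreter; the sample data is made up, and cron scheduling itself is configured from the notebook toolbar rather than in code:

```python
%pyspark
# In a Zeppelin paragraph, the %pyspark directive selects the Spark Python
# interpreter; Zeppelin pre-creates the SparkSession and exposes it as `spark`.
df = spark.createDataFrame(
    [("2024-01-01", 42), ("2024-01-02", 57)],
    ["day", "events"],
)
df.createOrReplaceTempView("daily_events")
spark.sql("SELECT day, events FROM daily_events ORDER BY day").show()
```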
- Visualization using Pyspark Dataframe
Have you tried Apache Zeppelin? I remember that you can pretty-print Spark DataFrames directly in it with z.show(df).
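A rough sketch of what that looks like, assuming Zeppelin's %pyspark interpreter, where the ZeppelinContext is exposed as z:

```python
%pyspark
from pyspark.sql import functions as F

# z is the ZeppelinContext that Zeppelin injects into the paragraph;
# z.show renders the DataFrame as an interactive table with chart options,
# instead of the plain-text output of df.show().
df = spark.range(100).withColumn("squared", F.col("id") * F.col("id"))
z.show(df)
```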
- Fast CSV Processing with SIMD
I used to use Zeppelin, a kind of Jupyter Notebook for Spark (that supports Parquet). But there may be better alternatives.
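For context, reading Parquet with Spark is a one-liner whether or not you are in a notebook; a minimal sketch, assuming a local file named events.parquet:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-demo").getOrCreate()

# Parquet is Spark's default columnar format; the schema comes from the file footer.
df = spark.read.parquet("events.parquet")
df.printSchema()
df.show(5)
```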
- What libraries do you use for machine learning and data visualizing in scala?
Another, more widely used notebook for Scala and Spark: https://zeppelin.apache.org/
- How to use IPython in Apache Zeppelin Notebook
[1] Apache Zeppelin: http://zeppelin.apache.org/
[2] Zeppelin notebooks website: http://zeppelin-notebook.com/
[3] Zeppelin notebooks git repo: https://github.com/zjffdu/zeppelin-notebook
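As a rough sketch of the idea (not taken from those references): Zeppelin's Python interpreter group includes an IPython-backed interpreter, so a paragraph bound to it can use IPython features directly; the %python.ipython directive below assumes that interpreter is installed and enabled:

```python
%python.ipython
# With the IPython interpreter bound to this paragraph, IPython line magics
# and rich display output become available inside Zeppelin.
import math

%timeit math.sqrt(12345)
```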
- BI Application in Golang.
Apache Zeppelin
- Is there a way to collaborate in real-time for Jupyter Notebooks?
Check out Zeppelin. It's similar to Jupyter and allows real-time editing by multiple users. https://zeppelin.apache.org/
Apache Spark
- "xAI will open source Grok"
- Groovy 🎷 Cheat Sheet - 01 Say "Hello" from Groovy
Recently I had to revisit the "JVM languages universe". Yes, language(s), plural! Java isn't the only language that runs on the JVM. I previously used Scala, another JVM language, with Apache Spark for data engineering workloads, but that's a topic for another post 😉.
- 🦿🛴Smarcity garbage reporting automation w/ ollama
Consume the data into third-party software for analysis/data science (then let OpenSearch, Apache Spark, or Apache Pinot take over), GIS systems (so you can put reports on a map), or any ticket management system.
- Go concurrency simplified. Part 4: Post office as a data pipeline
Also, this knowledge applies to learning more about data engineering, as that field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc.
- Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It provides an abstract API over three possible engines: SeaTunnel's own Zeta engine, or a wrapper around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features.
- Apache Spark VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
- Integrate Pyspark Structured Streaming with confluent-kafka
Apache Spark - https://spark.apache.org/
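A minimal sketch of the Spark side of such an integration, assuming a Kafka broker at localhost:9092, a topic named events, and the spark-sql-kafka-0-10 package on the classpath (the confluent-kafka producer side is omitted):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Subscribe to the Kafka topic as an unbounded streaming DataFrame.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka values arrive as bytes; cast them to strings before further parsing.
events = raw.select(F.col("value").cast("string").alias("value"))

# Print each micro-batch to the console for demonstration purposes.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```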
- Rest in Peas: The Unrecognized Death of Speech Recognition (2010)
- Gotta write this on my resume
So, for example, contributing to, say, Spark may be better for experience (and your resume) than contributing to Twitter's the-algorithm repo.
- Query Real Time Data in Kafka Using SQL
Additionally, one of the challenges of working with Kafka is how to efficiently analyze and extract insights from the large volumes of data stored in Kafka topics. Traditional batch processing approaches, such as Hadoop MapReduce or Apache Spark, can be slow and expensive, and may not be suitable for real-time analytics. To address this challenge, you can use SQL queries with Kafka to analyze and extract insights from the data in real time.
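One way to stay in the Spark ecosystem and still query a live topic with SQL is Structured Streaming plus a temporary view, rather than batch jobs; a hedged sketch, with the same assumed broker (localhost:9092) and topic (events) as above:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-sql-demo").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
    .select(F.col("value").cast("string").alias("value"))
)

# A streaming DataFrame can back a temp view, so plain SQL runs continuously over it.
stream.createOrReplaceTempView("kafka_events")
counts = spark.sql("SELECT value, COUNT(*) AS n FROM kafka_events GROUP BY value")

counts.writeStream.format("console").outputMode("complete").start().awaitTermination()
```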
What are some alternatives?
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Scalding - A Scala API for Cascading
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
Weka - Collection of machine learning algorithms for data mining tasks
Smile - Statistical Machine Intelligence & Learning Engine
Apache Calcite - Dynamic data management framework
Scio - A Scala API for Apache Beam and Google Cloud Dataflow.
Deeplearning4j - Suite of tools for deploying and training deep learning models using the JVM. Highlights include model import for Keras, TensorFlow, and ONNX/PyTorch; a modular and tiny C++ library for running math code; and a Java-based math library on top of the core C++ library. Also includes SameDiff: a PyTorch/TensorFlow-like library for running deep learning using automatic differentiation.