mdoc vs Apache Spark

| | mdoc | Apache Spark |
|---|---|---|
| Mentions | 4 | 101 |
| Stars | 387 | 38,378 |
| Growth | 0.5% | 0.6% |
| Activity | 8.4 | 10.0 |
| Latest commit | 15 days ago | 5 days ago |
| Language | Scala | Scala |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mdoc
- Optimal decision-making with examples built using scala
- Friction-less scala - Tell us what is causing friction in your day-to-day life with Scala
That's literally what scaladoc is, and it comes with sbt. It's even better when enhanced with mdoc, though, so that you get the standard microsite template like these. It would be nice to have an sbt serveDocs task, and nicer still if everyone hosted their docs for external linking, but javadoc doesn't do that either.
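For reference, the built-in route this comment refers to is sbt's stock `doc` task; a minimal, illustrative build where it applies (project and module names here are made up):

```scala
// build.sbt (illustrative)
ThisBuild / scalaVersion := "2.13.12"

lazy val core = (project in file("core"))
  .settings(
    name := "example-core"
  )

// `sbt core/doc` generates scaladoc under core/target/scala-2.13/api/ with
// no extra plugins, and `publish` bundles the same output as the -javadoc
// artifact by default.
```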
- A Scala rant
The good news is that scaladoc is produced and published by default by sbt, so you can often pull it from the same repository your library jar came from, extract it (it's just a zip), and read the docs. But even that is unnecessary: javadoc.io lets you put in your module coordinates and serves the docs for you, so you can reach the documentation for an older version that way. Rely on the type signatures, since they can't lie, whereas comments (including scaladoc comments) can. Honestly, library authors should be using mdoc and including examples on every public method, and that type of documentation is something you can almost always contribute to a project for quick PR kudos.
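As a concrete note on the javadoc.io route mentioned above: the docs are addressed by Maven coordinates, so no download or extraction is needed. The coordinates below are just an example:

```scala
// javadoc.io serves published scaladoc/javadoc pulled from Maven Central:
//   https://javadoc.io/doc/<groupId>/<artifactId>/<version>
val group    = "org.typelevel"
val artifact = "cats-core_2.13" // example module; substitute your own
val version  = "2.10.0"
val docsUrl  = s"https://javadoc.io/doc/$group/$artifact/$version"
```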
- The future of Scaladoc
I know it's not new, but the "Snippet validation and results" feature in mdoc is so cool. It really takes some of the tedium out of working with documentation, since you know that as you evolve your code the compiler will make sure the docs stay in sync. A whole new level of Readme-Driven Development.
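For illustration, this is roughly what an mdoc-checked snippet looks like: in the markdown source, the block below sits inside a fence marked `scala mdoc`, and running the sbt-mdoc plugin (commonly `sbt docs/mdoc`) compiles and evaluates it and inlines the results, so a stale example fails the build. The example type here is made up:

```scala
// Lives inside a `scala mdoc` fence in e.g. docs/readme.md
final case class Greeting(to: String) {
  def message: String = s"Hello, $to"
}

val rendered = Greeting("Scala").message
// In the generated output, mdoc appends the evaluated value here:
// rendered: String = "Hello, Scala"
```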
Apache Spark
- "xAI will open source Grok"
- Groovy 🎷 Cheat Sheet - 01 Say "Hello" from Groovy
Recently I had to revisit the "JVM languages universe" again. Yes, languages, plural! Java isn't the only language that runs on the JVM. I previously used Scala, another JVM language, to work with Apache Spark for data engineering workloads, but that's for another post 😉.
- 🦿🛴Smarcity garbage reporting automation w/ ollama
Consume the data into third-party software for analysis/data science (OpenSearch, Apache Spark, or Apache Pinot), into GIS systems (so you can put reports on a map), or into any ticket management system.
- Go concurrency simplified. Part 4: Post office as a data pipeline
Also, this knowledge applies to learning more about data engineering, since that field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc.
- Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It exposes an abstract API over three possible engines: SeaTunnel's own Zeta engine, or wrappers around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features.
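To make the "sources, transforms, sinks over pluggable engines" shape more concrete, here is a toy Scala sketch of that abstraction; it is illustrative only and is not SeaTunnel's actual API:

```scala
// Toy engine-agnostic pipeline API: not SeaTunnel code, just the shape.
trait Source[A]       { def read(): Iterator[A] }
trait Transform[A, B] { def apply(in: Iterator[A]): Iterator[B] }
trait Sink[B]         { def write(out: Iterator[B]): Unit }

sealed trait Engine
case object Zeta  extends Engine
case object Spark extends Engine
case object Flink extends Engine

final case class Pipeline[A, B](
    source: Source[A],
    transform: Transform[A, B],
    sink: Sink[B]
) {
  // A real runner would translate the pipeline for the chosen engine;
  // this sketch just executes it in-process and ignores the engine tag.
  def run(engine: Engine): Unit = sink.write(transform(source.read()))
}
```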
- Apache Spark VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
- Integrate Pyspark Structured Streaming with confluent-kafka
Apache Spark - https://spark.apache.org/
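The linked post covers PySpark with confluent-kafka; as a rough Scala sketch of the same Structured Streaming + Kafka integration (broker address and topic name are placeholders, and the spark-sql-kafka-0-10 connector is assumed to be on the classpath):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("kafka-stream-example")
  .getOrCreate()

// Subscribe to a Kafka topic and cast the raw key/value bytes to strings.
val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events")
  .load()
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

// Print each micro-batch to the console; useful for local testing only.
val query = stream.writeStream
  .format("console")
  .outputMode("append")
  .start()

query.awaitTermination()
```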
- Spark – A micro framework for creating web applications in Kotlin and Java
A JVM-based framework named "Spark", when https://spark.apache.org exists?
- Rest in Peas: The Unrecognized Death of Speech Recognition (2010)
- PySpark SparkSession Builder with Kubernetes Master
I recently saw a pull request merged into the apache/spark repository that apparently adds initial Python bindings for PySpark on K8s. I posted a comment on the PR asking how to use spark-on-k8s from a Python Jupyter notebook, and was told to ask my question here.
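For context, a minimal Scala sketch of pointing a SparkSession at a Kubernetes master (the PySpark builder mirrors this); the API-server URL, container image, and namespace below are placeholders:

```scala
import org.apache.spark.sql.SparkSession

// Submitting against Kubernetes: the master URL is the k8s API server
// prefixed with "k8s://"; executors run as pods using the given image.
val spark = SparkSession.builder()
  .appName("spark-on-k8s-example")
  .master("k8s://https://kubernetes.example.com:6443")
  .config("spark.kubernetes.container.image", "example/spark:3.5.0")
  .config("spark.kubernetes.namespace", "analytics")
  .config("spark.executor.instances", "2")
  .getOrCreate()
```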
What are some alternatives?
sbt-unidoc - sbt plugin to create a unified Scaladoc or Javadoc API document across multiple subprojects.
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
sbt-mima-plugin - A tool for catching binary incompatibility in Scala
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
sbt-revolver - An SBT plugin for dangerously fast development turnaround in Scala
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
sbt-pack - An sbt plugin for creating distributable Scala packages.
Scalding - A Scala API for Cascading
coursier - Pure Scala Artifact Fetching
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
sbt-updates - sbt plugin that can check Maven and Ivy repositories for dependency updates
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.