Skytrax-Data-Warehouse VS spark-daria

Compare Skytrax-Data-Warehouse vs spark-daria and see what their differences are.

Skytrax-Data-Warehouse

A full data warehouse infrastructure with ETL pipelines running inside docker on Apache Airflow for data orchestration, AWS Redshift for cloud data warehouse and Metabase to serve the needs of data visualizations such as analytical dashboards. (by iam-mhaseeb)
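
To make the architecture described above a bit more concrete, here is a minimal, hypothetical sketch of an Airflow DAG that stages data and then loads it into Redshift. The DAG id, task names, and callables are placeholders for illustration, not the project's actual pipeline.

    # Minimal Airflow 2.x DAG sketch: extract data, then load it into Redshift.
    # All names and the "business logic" below are illustrative placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_reviews(**context):
        # Placeholder: pull raw review data and write it to a staging area (e.g. S3).
        pass


    def load_to_redshift(**context):
        # Placeholder: issue a COPY command against the Redshift cluster.
        pass


    with DAG(
        dag_id="skytrax_etl_sketch",      # hypothetical DAG id
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_reviews", python_callable=extract_reviews)
        load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)

        extract >> load                   # load only after extraction succeeds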

spark-daria

Essential Spark extensions and helper methods ✨😲 (by MrPowers)
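
spark-daria itself is a Scala library; to keep the sketches on this page in one language, the snippet below shows the same idea in PySpark (quinn and chispa, mentioned further down, are the actual PySpark counterparts). The helper names are illustrative, not spark-daria's API.

    # Illustrative PySpark sketch of the kind of helpers spark-daria provides in Scala:
    # small reusable column functions and fail-fast DataFrame validations.
    # Names are made up for this sketch; see quinn/chispa for real PySpark equivalents.
    from pyspark.sql import Column, DataFrame
    from pyspark.sql import functions as F


    def single_space(col: Column) -> Column:
        # Collapse runs of whitespace into a single space.
        return F.regexp_replace(F.trim(col), r"\s+", " ")


    def validate_presence_of_columns(df: DataFrame, required: list) -> None:
        # Raise a clear error if expected columns are missing, before the job runs further.
        missing = set(required) - set(df.columns)
        if missing:
            raise ValueError(f"DataFrame is missing required columns: {sorted(missing)}")

The point in either language is that common cleanup and validation logic lives in a small, tested library instead of being copy-pasted across jobs.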
              Skytrax-Data-Warehouse   spark-daria
Mentions      1                        4
Stars         131                      742
Growth        -                        -
Activity      0.0                      0.0
Last commit   about 4 years ago        about 2 years ago
Language      Python                   Scala
License       MIT License              MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Skytrax-Data-Warehouse

Posts with mentions or reviews of Skytrax-Data-Warehouse. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-16.
  • Open source contributions for a Data Engineer?
    17 projects | /r/dataengineering | 16 Apr 2021
    Always open to accepting contributions to my project (Skytrax Data Warehouse). If you are into data stuff, support my work on YouTube as well (One Developer Pirate); I mostly make data-oriented videos. These days I'm making a SQL course from a data analysis perspective that is expected to release next week.

spark-daria

Posts with mentions or reviews of spark-daria. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-13.
  • Lakehouse architecture in Azure Synapse without Databricks?
    2 projects | /r/dataengineering | 13 Apr 2023
    I was a Databricks user for 5 years and spent 95% of my time developing Spark code in IDEs. See the spark-daria and spark-fast-tests projects as Scala examples. I developed internal libraries with all the business logic. The Databricks notebooks would consist of a few lines of code that would invoke a function in the proprietary Spark codebase. The proprietary Spark codebase would depend on the OSS libraries I developed in parallel. [A minimal sketch of this library-plus-thin-notebook pattern follows these posts.]
  • Is Spark - The Definitive Guide outdated?
    2 projects | /r/apachespark | 1 Jul 2021
    They spent a lot of effort improving the Catalyst engine under the hood too, making it easier to extend and improve in the future and making it easy to add your own native code to Spark itself. Shameless plug of a blog post I wrote on this subject, which basically reiterates what Matthew Powers, author of Spark Daria and quinn, wrote here.
  • Ask HN: What are some tools / libraries you built yourself?
    264 projects | news.ycombinator.com | 16 May 2021
    I built daria (https://github.com/MrPowers/spark-daria) to make it easier to write Spark code and spark-fast-tests (https://github.com/MrPowers/spark-fast-tests) to provide a good testing workflow. [A sketch of that testing workflow, using the PySpark equivalents, follows these posts.]

    quinn (https://github.com/MrPowers/quinn) and chispa (https://github.com/MrPowers/chispa) are the PySpark equivalents.

    Built bebe (https://github.com/MrPowers/bebe) to expose the Spark Catalyst expressions that aren't exposed to the Scala / Python APIs.

    Also built spark-sbt.g8 to create a Spark project with a single command: https://github.com/MrPowers/spark-sbt.g8

  • Open source contributions for a Data Engineer?
    17 projects | /r/dataengineering | 16 Apr 2021
    I've built popular PySpark (quinn, chispa) and Scala Spark (spark-daria, spark-fast-tests) libraries.
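
A minimal sketch of the "business logic in a library, thin notebook on top" workflow described in the first post above; the module path and function are hypothetical:

    # my_company/transforms.py -- hypothetical internal library holding the business logic.
    from pyspark.sql import DataFrame
    from pyspark.sql import functions as F


    def with_normalized_country(df: DataFrame) -> DataFrame:
        # Example transformation: trim and upper-case a country column.
        return df.withColumn("country", F.upper(F.trim(F.col("country"))))


    # Notebook cell -- the notebook stays a few lines long and just calls into the library:
    # clean_df = raw_df.transform(with_normalized_country)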

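And a sketch of the testing workflow those libraries support, using chispa's assert_df_equality (the PySpark counterpart of spark-fast-tests). The fixture, test data, and imported transform are illustrative and assume the hypothetical module from the previous sketch.

    # Illustrative pytest test using chispa (PySpark counterpart of spark-fast-tests).
    import pytest
    from chispa import assert_df_equality
    from pyspark.sql import SparkSession

    # Hypothetical import from the library module sketched above.
    from my_company.transforms import with_normalized_country


    @pytest.fixture(scope="session")
    def spark():
        return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


    def test_with_normalized_country(spark):
        source = spark.createDataFrame([("  france ",), ("SPAIN",)], ["country"])
        expected = spark.createDataFrame([("FRANCE",), ("SPAIN",)], ["country"])

        actual = source.transform(with_normalized_country)

        assert_df_equality(actual, expected)
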
What are some alternatives?

When comparing Skytrax-Data-Warehouse and spark-daria you can also consider the following projects:

dbd - dbd is a database prototyping tool that enables data analysts and engineers to quickly load and transform data in SQL databases.

chispa - PySpark test helper methods with beautiful error messages

sqlfluff - A modular SQL linter and auto-formatter with support for multiple dialects and templated code.

quinn - pyspark methods to enhance developer productivity 📣 👯 🎉

jaydebeapi - The JayDeBeApi module allows you to connect from Python code to databases using Java JDBC. It provides a Python DB-API v2.0 interface to that database.

Task - A task runner / simpler Make alternative written in Go

dbt-spotify-analytics - Containerized end-to-end analytics of Spotify data using Python, dbt, Postgres, and Metabase

Prefect - The easiest way to build, run, and monitor data pipelines at scale.

airflow-api-tests - A collection of pytest tests for the Apache Airflow 2.0 stable REST API. There is a companion repo for setting up Airflow locally to run them against. (The author is used to RestAssured but is trying out pytest here.)

spark-fast-tests - Apache Spark testing helpers (dependency free & works with Scalatest, uTest, and MUnit)

dagster - An orchestration platform for the development, production, and observation of data assets.