spark-fast-tests
airbyte
| | spark-fast-tests | airbyte |
|---|---|---|
| Mentions | 6 | 139 |
| Stars | 417 | 13,646 |
| Growth | - | 5.2% |
| Activity | 0.0 | 10.0 |
| Latest commit | 2 months ago | 4 days ago |
| Language | Scala | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
spark-fast-tests
-
Lakehouse architecture in Azure Synapse without Databricks?
I was a Databricks user for 5 years and spent 95% of my time developing Spark code in IDEs. See the spark-daria and spark-fast-tests projects as Scala examples. I developed internal libraries with all the business logic. The Databricks notebooks would consist of a few lines of code that would invoke a function in the proprietary Spark codebase. The proprietary Spark codebase would depend on the OSS libraries I developed in parallel.
-
Well designed scala/spark project
https://github.com/MrPowers/spark-fast-tests
https://github.com/97arushisharma/Scala_Practice/tree/master/BigData_Analysis_with_Scala_and_Spark/wikipedia
-
Unit & integration testing in Databricks
If the majority of your stuff is not UDF-based, there is an open-source solution for running assertion tests against full data frames called spark-fast-tests. The idea is that you have a test notebook that calls your actual notebook against a staged input, reads the output, and compares it to a prefabricated expected output. This does take a bit of setup and trial and error, but it's the closest I've been able to get to proper automated regression testing in Databricks.
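The workflow described above (run the job on staged input, then compare the result to a prefabricated expected output) is the core pattern behind spark-fast-tests and chispa. As a rough sketch of the idea only — this is not the actual API of either library, and `assert_frame_equality` and `upper_names` are hypothetical names — a DataFrame can be modeled as a `(columns, rows)` pair:

```python
# Illustrative sketch of the "compare actual output to expected output"
# testing pattern. A DataFrame is modeled as (columns, rows), where rows
# is a list of tuples. Not the spark-fast-tests or chispa API.

def assert_frame_equality(actual, expected):
    """Raise AssertionError with a readable message on the first mismatch."""
    a_cols, a_rows = actual
    e_cols, e_rows = expected
    if a_cols != e_cols:
        raise AssertionError(f"Schema mismatch: {a_cols} != {e_cols}")
    for i, (a, e) in enumerate(zip(a_rows, e_rows)):
        if a != e:
            raise AssertionError(f"Row {i} differs: {a} != {e}")
    if len(a_rows) != len(e_rows):
        raise AssertionError(f"Row count differs: {len(a_rows)} != {len(e_rows)}")

# A hypothetical transformation under test: upper-case a name column.
def upper_names(frame):
    cols, rows = frame
    return cols, [(name.upper(),) for (name,) in rows]

staged_input = (["name"], [("alice",), ("bob",)])
expected_output = (["name"], [("ALICE",), ("BOB",)])
assert_frame_equality(upper_names(staged_input), expected_output)  # passes
```

The real libraries do the same comparison on Spark DataFrames and add conveniences such as ignoring row order, ignoring nullability, and pretty-printing the mismatched rows.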
-
Show dataengineering: beavis, a library for unit testing Pandas/Dask code
I am the author of spark-fast-tests and chispa, libraries for unit testing Scala Spark / PySpark code.
-
Ask HN: What are some tools / libraries you built yourself?
I built daria (https://github.com/MrPowers/spark-daria) to make it easier to write Spark code and spark-fast-tests (https://github.com/MrPowers/spark-fast-tests) to provide a good testing workflow.
quinn (https://github.com/MrPowers/quinn) and chispa (https://github.com/MrPowers/chispa) are the PySpark equivalents.
Built bebe (https://github.com/MrPowers/bebe) to expose the Spark Catalyst expressions that aren't exposed to the Scala / Python APIs.
Also built spark-sbt.g8 to create a Spark project with a single command: https://github.com/MrPowers/spark-sbt.g8
-
Open source contributions for a Data Engineer?
I've built popular PySpark (quinn, chispa) and Scala Spark (spark-daria, spark-fast-tests) libraries.
airbyte
-
Who's hiring developer advocates? (October 2023)
- All the ways to capture changes in Postgres
-
Is it impossible to contribute to open source as a data engineer?
You can try and contribute some new connectors/operators for workflow managers like Airflow or Airbyte
-
airbyte VS cloudquery - a user suggested alternative
2 projects | 2 Jun 2023
-
New age ETL products every data team needs to know
- https://airbyte.com/
2. Reverse ETL:
-
Is it safe to update docker/docker-compose?
Here's the docker-compose file https://github.com/airbytehq/airbyte/blob/master/docker-compose.yaml
I'm trying to install https://airbyte.com/, which is a great self-hosted ELT platform. In common words, it's an app that can access all kinds of APIs to pull in data and put it in a database. I really like the idea of being able to own my data and run all kinds of analyses on it.
-
Top 10 Best Open Source GitHub repos for Developers 2023
Airbyte GitHub: https://github.com/airbytehq/airbyte
What are some alternatives?
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
dagster - An orchestration platform for the development, production, and observation of data assets.
Prefect - The easiest way to build, run, and monitor data pipelines at scale.
meltano
jitsu - Jitsu is an open-source Segment alternative. Fully-scriptable data ingestion engine for modern data teams. Set-up a real-time data pipeline in minutes, not days
spark-rapids - Spark RAPIDS plugin - accelerate Apache Spark with GPUs
dbt-core - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
supabase - The open source Firebase alternative.
dbt - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications. [Moved to: https://github.com/dbt-labs/dbt-core]
n8n-docs - Documentation for n8n, a fair-code licensed automation tool with a free community edition and powerful enterprise options. Build AI functionality into your workflows.
superset - Apache Superset is a Data Visualization and Data Exploration Platform