spark-fast-tests
airbyte
| | spark-fast-tests | airbyte |
|---|---|---|
| Mentions | 5 | 112 |
| Stars | 372 | 9,359 |
| Growth | - | 3.3% |
| Activity | 4.0 | 10.0 |
| Last commit | 9 months ago | 4 days ago |
| Language | Scala | Java |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
spark-fast-tests
-
Well designed scala/spark project
https://github.com/MrPowers/spark-fast-tests
https://github.com/97arushisharma/Scala_Practice/tree/master/BigData_Analysis_with_Scala_and_Spark/wikipedia
-
Unit & integration testing in Databricks
If the majority of your code is not UDF-based, there is an open-source solution called spark-fast-tests that runs assertion tests against full data frames. The idea is similar: you have an integration-test notebook that calls your actual notebook against a staged input, reads the output, and compares it to a prefabricated expected output. This takes a bit of setup and trial and error, but it's the closest I've been able to get to proper automated regression testing in Databricks.
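The comparison idea behind this workflow can be sketched in plain Python. This is a simplified stand-in for illustration only, not the actual spark-fast-tests or chispa implementation (those operate on Spark DataFrames and also compare schemas); it shows the core pattern of diffing actual output against a prefabricated expected output row by row and failing with a readable report.

```python
def assert_rows_equality(actual, expected):
    """Compare two lists of row tuples and raise with a readable diff.

    Simplified sketch of the DataFrame-equality assertion pattern used
    by libraries like spark-fast-tests and chispa.
    """
    if len(actual) != len(expected):
        raise AssertionError(
            f"Row counts differ: {len(actual)} actual vs {len(expected)} expected"
        )
    # Collect every mismatched row so the failure message shows the full diff
    mismatches = [
        (i, a, e)
        for i, (a, e) in enumerate(zip(actual, expected))
        if a != e
    ]
    if mismatches:
        lines = [f"row {i}: {a!r} != {e!r}" for i, a, e in mismatches]
        raise AssertionError("Rows not equal:\n" + "\n".join(lines))


# Passing case: staged output matches the prefabricated expected output
assert_rows_equality([("jose", 1), ("li", 2)], [("jose", 1), ("li", 2)])
```

In the real libraries the analogous calls compare whole DataFrames (including schema), which is what makes them useful for notebook-level regression tests.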
-
Show dataengineering: beavis, a library for unit testing Pandas/Dask code
I am the author of spark-fast-tests and chispa, libraries for unit testing Scala Spark / PySpark code.
-
Ask HN: What are some tools / libraries you built yourself?
I built daria (https://github.com/MrPowers/spark-daria) to make it easier to write Spark code, and spark-fast-tests (https://github.com/MrPowers/spark-fast-tests) to provide a good testing workflow.
quinn (https://github.com/MrPowers/quinn) and chispa (https://github.com/MrPowers/chispa) are the PySpark equivalents.
Built bebe (https://github.com/MrPowers/bebe) to expose the Spark Catalyst expressions that aren't exposed to the Scala / Python APIs.
Also built spark-sbt.g8 to create a Spark project with a single command: https://github.com/MrPowers/spark-sbt.g8
-
Open source contributions for a Data Engineer?
I've built popular PySpark (quinn, chispa) and Scala Spark (spark-daria, spark-fast-tests) libraries.
airbyte
-
What are your thoughts on projects using the Elastic License?
Doing a quick GitHub search reveals quite a few projects using the ELv2 license, including Airbyte and InvoiceNinja. Elastic (the company) aside, what are your thoughts on the Elastic License v2? Does your employer allow projects with an ELv2 license? Do you consider it open source? I understand that it's not OSI approved, but wondering where people stand when it comes to commercial open source software.
-
Airbyte Source Connectors performance bottleneck
I have been using Airbyte sources, mainly S3, and it is slow: I'm getting 1k-3k records per second on a high-end machine (4 CPUs, 16 GB RAM). I checked the stats of the Docker container and it is barely utilising the resources, consuming only CPU and almost no memory. I read on https://github.com/airbytehq/airbyte/issues/12532 that the connectors are slow because they traverse one record at a time and print it. What can I do? I need 20k-30k records per second.
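The one-record-at-a-time issue described above can be illustrated with a small sketch. This is not Airbyte's actual code; it is a hypothetical example showing why per-record emission is slower than batching: every individual write pays a fixed per-call overhead, while batching amortises that overhead across many records and produces identical output.

```python
import io
import json

# Hypothetical record stream standing in for a source connector's output
records = [{"id": i, "value": "x" * 20} for i in range(10_000)]


def emit_per_record(out, records):
    """One write call per record: fixed per-call overhead paid every time."""
    for r in records:
        out.write(json.dumps(r) + "\n")


def emit_batched(out, records, batch_size=1000):
    """Buffer records and write in chunks, amortising the per-call cost."""
    batch = []
    for r in records:
        batch.append(json.dumps(r))
        if len(batch) >= batch_size:
            out.write("\n".join(batch) + "\n")
            batch.clear()
    if batch:
        out.write("\n".join(batch) + "\n")


buf_single, buf_batched = io.StringIO(), io.StringIO()
emit_per_record(buf_single, records)
emit_batched(buf_batched, records)
# Same bytes out, but the batched version makes ~10 write calls
# instead of 10,000
assert buf_single.getvalue() == buf_batched.getvalue()
```

With real sinks (sockets, files, stdout pipes between containers) the per-call cost is much larger than with an in-memory buffer, which is where batching pays off.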
-
Data Pipeline: From ETL to EL plus T
Yes, absolutely, Airbyte, and there are many similar solutions, but Airbyte is open source and relatively easy to use.
-
Show HN: Data integration platform with 300 open-source connectors
-
Airbyte: Data integration platform with 300+ open-source connectors
Just an advisory here, the (majority of the) shared platform itself is not Open Source. They provide an overview here.
-
What is data integration?
Airbyte
-
Cloud ETL Repo to Warehouse & Visualize Personal Data
-
Transfer data in Fauna to your analytics tool using Airbyte
git clone https://github.com/airbytehq/airbyte.git
cd airbyte
docker-compose up
We are excited to introduce Fauna's new open-source Airbyte connector. This connector lets you replicate Fauna data into your data warehouses, lakes, and analytical databases, such as Snowflake, Redshift, S3, and more.
What are some alternatives?
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
dagster - An orchestration platform for the development, production, and observation of data assets.
Prefect - The easiest way to build, run, and monitor data pipelines at scale.
meltano
spark-rapids - Spark RAPIDS plugin - accelerate Apache Spark with GPUs
dbt-core - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
jitsu - Jitsu is an open-source Segment alternative. Fully-scriptable data ingestion engine for modern data teams. Set-up a real-time data pipeline in minutes, not days
dbt - dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications. [Moved to: https://github.com/dbt-labs/dbt-core]
supabase - The open source Firebase alternative. Follow to stay updated about our public Beta.
superset - Apache Superset is a Data Visualization and Data Exploration Platform
singer-sdk