getting-started
airbyte

|  | getting-started | airbyte |
| --- | --- | --- |
| Mentions | 16 | 139 |
| Stars | 1,220 | 13,923 |
| Growth | 0.1% | 4.7% |
| Activity | 0.0 | 10.0 |
| Last commit | about 1 year ago | 6 days ago |
| Language | Makefile | Python |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
getting-started
-
Why do companies still build data ingestion tooling instead of using a third-party tool like Airbyte?
Coincidentally, I saw a presentation today on a nice halfway-house solution: using embeddable Python libraries like Sling and dlt, both open source. See https://www.youtube.com/watch?v=gAqOLgG2iYY There is also singer.io, which is more of a protocol than a library but can also be installed, although it looks like it is a true community effort and not so well maintained.
-
Data sources episode 2: AWS S3 to Postgres Data Sync using Singer
Singer is an open-source framework for data ingestion, which provides a standardized way to move data between various data sources and destinations (such as databases, APIs, and data warehouses). Singer offers a modular approach to data extraction and loading by leveraging two main components: Taps (data extractors) and Targets (data loaders). This design makes it an attractive option for data ingestion for several reasons:
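To make the Taps/Targets split concrete, here is a minimal sketch of the message stream a Singer tap writes to stdout: a SCHEMA message describes a stream, RECORD messages carry rows, and a STATE message carries a bookmark. The `users` stream and its fields are invented for illustration, not taken from any real tap.

```python
import json
import sys

def emit(message):
    """Write one Singer message as a JSON line on stdout."""
    sys.stdout.write(json.dumps(message) + "\n")

def run_tap():
    # SCHEMA: describes the shape of records for a stream.
    emit({
        "type": "SCHEMA",
        "stream": "users",
        "schema": {
            "type": "object",
            "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
        },
        "key_properties": ["id"],
    })
    # RECORD: one row of data for that stream.
    emit({"type": "RECORD", "stream": "users", "record": {"id": 1, "name": "Ada"}})
    # STATE: a bookmark a target/runner can persist for incremental runs.
    emit({"type": "STATE", "value": {"users": {"last_id": 1}}})

if __name__ == "__main__":
    run_tap()
```

Because the output is just newline-delimited JSON on stdout, any target that understands the spec can consume it over a pipe.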
- Design pattern for Python ETL
-
Launch HN: Patterns (YC S21) – A much faster way to build and deploy data apps
Thanks for chipping in.
I’ve been leaning towards this direction. I think I/O is the biggest part that still needs fixing in the case of plain-code steps: input being data/stream plus parameterization/config, and output being some sort of typed data/stream.
My “let’s not reinvent the wheel” alarm is going off when I write that, though. Examples that come to mind are text-based (Unix / https://scale.com/blog/text-universal-interface) but also the Singer tap protocol (https://github.com/singer-io/getting-started/blob/master/doc...). And config obviously has many standard forms: ini, YAML, JSON, environment key-value pairs, and more.
At the same time, text feels horribly inefficient as encoding for some of the data objects being passed around in these flows. More specialized and optimized binary formats come to mind (Arrow, HDF5, Protobuf).
Plenty of directions to explore, each with their own advantages and disadvantages. I wonder which direction is favored by users of tools like ours. Will be good to poll (do they even care?).
PS Windmill looks equally impressive! Nice job
-
After Airflow. Where next for DE?
Mage uses the Singer Spec (https://github.com/singer-io/getting-started/blob/master/docs/SPEC.md), the data engineer community standard for building data integrations. This was created by Stitch and is widely adopted.
-
Basic data engineering question.
I like the Singer Protocol and the various tools that use it. These include Meltano, Airbyte, Stitch, PipelineWise, and a few others.
-
I have hundreds of API data endpoints with different schemas. How do I organize?
Have you looked into using a dedicated data integration tool? Have you heard of Singer and the Singer Spec? https://github.com/singer-io/getting-started/blob/master/docs/SPEC.md
-
CDC (Change Data Capture) with 3rd party APIs
Or you could build your own such system and run it on Airflow, Prefect, Dagster, etc. Check out the Singer project for a suite of Python packages designed for such a task. Quality varies greatly, though.
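That "suite of Python packages" composes because every tap and target speaks the same line-delimited JSON over a pipe, so an Airflow/Prefect/Dagster task can simply run a tap's stdout into a target's stdin. A minimal target-side sketch (the `orders` stream and its messages are invented for illustration):

```python
import json

def run_target(lines):
    """Consume a Singer message stream: collect RECORDs, keep the last STATE."""
    records, state = [], None
    for line in lines:
        msg = json.loads(line)
        if msg["type"] == "RECORD":
            records.append(msg["record"])   # a real target would write these to storage
        elif msg["type"] == "STATE":
            state = msg["value"]            # the runner persists this bookmark
        # SCHEMA messages would drive table/column creation in a real target
    return records, state

if __name__ == "__main__":
    stream = [
        '{"type": "SCHEMA", "stream": "orders", "schema": {}, "key_properties": ["id"]}',
        '{"type": "RECORD", "stream": "orders", "record": {"id": 7}}',
        '{"type": "STATE", "value": {"orders": {"last_id": 7}}}',
    ]
    print(run_target(stream))
```

In practice the two sides are separate processes wired together by the orchestrator, roughly `tap-something --config tap.json | target-something --config target.json`.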
-
Questions about Integration Singer Specification with AWS Glue
Our team is building out a data platform on AWS Glue, and we pull from a variety of data sources, including application databases and third-party SaaS APIs. I have been looking into ways to standardize pulling data from different sources. The other day I came across the [Singer Specification](https://github.com/singer-io/getting-started) and was interested in learning more about it. If anyone has experience working with Singer specifications, I would love to hear more about:
-
Anybody have experience creating singer taps and targets?
I just read the readme of the Singer getting started repo and am excited to write my first tap! I’m thinking instead of writing a new Airflow DAG whenever I want to pipe API data into our data warehouse I could write a singer tap and use Stitch instead. Is that a stupid idea?
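What makes the one-tap-instead-of-one-DAG-per-source idea work is the STATE bookmark: each run of the tap can resume where the last one stopped, and the runner (Stitch or otherwise) just hands the last STATE back in. A toy sketch of that incremental loop, where the in-memory `ROWS` stands in for a real API:

```python
import json

# Hypothetical in-memory source; a real tap would page through an HTTP API.
ROWS = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}, {"id": 3, "name": "Edsger"}]

def sync(state=None):
    """Emit Singer messages only for rows newer than the bookmark in `state`."""
    last_id = (state or {}).get("last_id", 0)
    messages = []
    for row in ROWS:
        if row["id"] > last_id:
            messages.append({"type": "RECORD", "stream": "users", "record": row})
            last_id = row["id"]
    # The final STATE is the bookmark the runner hands back on the next run.
    messages.append({"type": "STATE", "value": {"last_id": last_id}})
    return messages

if __name__ == "__main__":
    for msg in sync():
        print(json.dumps(msg))
```

Running `sync()` once emits all three records plus a bookmark; feeding that bookmark back emits no records, only the unchanged STATE.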
airbyte
-
Launch HN: Bracket (YC W22) – Two-Way Sync Between Salesforce and Postgres
I'll also give a shout-out to Airbyte (https://airbyte.com/), with which I've had some limited success integrating Salesforce to a local database. The particular pull for Airbyte is that we can self-host the open source version, rather than pay Fivetran a significant sum to do this for us.
It's an immature tool, so I don't yet know that I can claim we've spent _less_ than Fivetran on the additional engineering and ops time, but it feels like it has potential to do so once stabilized.
-
Who's hiring developer advocates? (October 2023)
- All the ways to capture changes in Postgres
-
Airbyte API and Terraform Provider – available in open source
When it says "available in open source", is that under the main airbyte repo's licensing [1], hence primarily licensed under the Elastic License v2 and therefore not typically considered open source by many?
Airbyte has a history of advertising their offering as open source while not really being so per the OSD [2]. This has been raised with them before, without response [3][4]. They've also been extending their use of ELv2, recently relicensing many of their existing MIT-licensed connectors to ELv2 [5].
[1] https://github.com/airbytehq/airbyte/blob/master/LICENSE
-
Need help moving 16gb of mongodb data to tableau
As a possible solution, I can suggest Airbyte (https://airbyte.com/). It's more performant than a generic Python script.
-
Connecting data sources to Xata with Airbyte and Zapier integrations
Airbyte, an open-source data integration engine that offers hundreds of connectors with data warehouses and databases, has gained popularity for its seamless integration and data syncing capabilities. Xata's integration with Airbyte offers a streamlined data ingestion process from any Airbyte input source directly into your Xata database.
- Data replication from PostgreSQL to MSSQL
- Testing
-
Is it impossible to contribute to open source as a data engineer?
You can try and contribute some new connectors/operators for workflow managers like Airflow or Airbyte
-
airbyte VS cloudquery - a user suggested alternative
2 projects | 2 Jun 2023
What are some alternatives?
AWS Data Wrangler - pandas on AWS - Easy integration with Athena, Glue, Redshift, Timestream, Neptune, OpenSearch, QuickSight, Chime, CloudWatchLogs, DynamoDB, EMR, SecretManager, PostgreSQL, MySQL, SQLServer and S3 (Parquet, CSV, JSON and EXCEL).
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
meltano
dagster - An orchestration platform for the development, production, and observation of data assets.
tap-hubspot
Prefect - The easiest way to build, run, and monitor data pipelines at scale.
Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai
tap-spreadsheets-anywhere
jitsu - Jitsu is an open-source Segment alternative. Fully-scriptable data ingestion engine for modern data teams. Set-up a real-time data pipeline in minutes, not days
singer-sdk
spark-rapids - Spark RAPIDS plugin - accelerate Apache Spark with GPUs