getting-started vs proposals

Compare getting-started and proposals to see how they differ.

                  getting-started     proposals
Mentions          16                  60
Stars             1,220               63
Growth            0.1%                -
Activity          0.0                 4.0
Last commit       about 1 year ago    4 days ago
Language          Makefile            -
License           -                   MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

getting-started

Posts with mentions or reviews of getting-started. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-04.
  • Why do companies still build data ingestion tooling instead of using a third-party tool like Airbyte?
    1 project | /r/dataengineering | 6 Dec 2023
    Coincidentally, I saw a presentation today on a nice halfway-house solution: using embeddable Python libraries like Sling and dlt - both open-source. See https://www.youtube.com/watch?v=gAqOLgG2iYY There is also singer.io, which is more of a protocol than a library but can also be installed, although it looks like a true community effort and is not so well maintained.
  • Data sources episode 2: AWS S3 to Postgres Data Sync using Singer
    2 projects | dev.to | 4 May 2023
    Singer is an open-source framework for data ingestion, which provides a standardized way to move data between various data sources and destinations (such as databases, APIs, and data warehouses). Singer offers a modular approach to data extraction and loading by leveraging two main components: Taps (data extractors) and Targets (data loaders). This design makes it an attractive option for data ingestion for several reasons:
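
    To make the Taps-and-Targets split concrete, here is a minimal sketch of the three message types a tap writes to stdout, per the Singer spec (the "users" stream and its fields are made up for illustration):

        import json
        import sys

        # SCHEMA describes the stream, RECORD carries one row, STATE is a
        # resumption checkpoint that the target persists for the next run.
        schema = {
            "type": "SCHEMA",
            "stream": "users",
            "schema": {"properties": {"id": {"type": "integer"},
                                      "name": {"type": "string"}}},
            "key_properties": ["id"],
        }
        record = {"type": "RECORD", "stream": "users",
                  "record": {"id": 1, "name": "Ada"}}
        state = {"type": "STATE", "value": {"users": {"last_id": 1}}}

        for message in (schema, record, state):
            json.dump(message, sys.stdout)
            sys.stdout.write("\n")

    A target reads these lines from stdin, loads the RECORD messages, and persists the latest STATE so the next run can resume incrementally.
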
  • Design pattern for Python ETL
    2 projects | /r/dataengineering | 2 Dec 2022
  • Launch HN: Patterns (YC S21) – A much faster way to build and deploy data apps
    6 projects | news.ycombinator.com | 30 Nov 2022
    Thanks for chipping in.

    I’ve been leaning towards this direction. I think I/O is the biggest part that still needs fixing in the case of plain code steps: input being data/stream plus parameterization/config, and output being some sort of typed data/stream.

    My “let’s not reinvent the wheel” alarm is going off when I write that, though. Examples that come to mind are text-based (Unix / https://scale.com/blog/text-universal-interface) but also the Singer tap protocol (https://github.com/singer-io/getting-started/blob/master/doc...). And config obviously has many standard forms like ini, yaml, json, environment key-value pairs and more.

    At the same time, text feels horribly inefficient as encoding for some of the data objects being passed around in these flows. More specialized and optimized binary formats come to mind (Arrow, HDF5, Protobuf).

    Plenty of directions to explore, each with their own advantages and disadvantages. I wonder which direction is favored by users of tools like ours. Will be good to poll (do they even care?).

    PS Windmill looks equally impressive! Nice job
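
    To make the text-versus-binary trade-off in the comment above concrete, a small sketch (assuming pyarrow is installed; the records and field names are invented) that serializes the same rows as JSON lines and as an Arrow IPC stream:

        import json
        import pyarrow as pa

        # Invented sample data; the point is only to compare encodings.
        records = [{"id": i, "value": f"row-{i}"} for i in range(10_000)]

        # Text encoding: one JSON object per line, Singer-style.
        json_bytes = "\n".join(json.dumps(r) for r in records).encode("utf-8")

        # Binary encoding: the same rows as a columnar Arrow IPC stream.
        table = pa.table({
            "id": [r["id"] for r in records],
            "value": [r["value"] for r in records],
        })
        sink = pa.BufferOutputStream()
        with pa.ipc.new_stream(sink, table.schema) as writer:
            writer.write_table(table)

        print(f"json lines: {len(json_bytes)} bytes, "
              f"arrow ipc: {sink.getvalue().size} bytes")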

  • After Airflow. Where next for DE?
    13 projects | /r/dataengineering | 15 Nov 2022
    Mage uses the Singer Spec (https://github.com/singer-io/getting-started/blob/master/docs/SPEC.md), the data engineering community standard for building data integrations. This was created by Stitch and is widely adopted.
  • Basic data engineering question.
    2 projects | /r/dataengineering | 16 Oct 2022
    I like the Singer Protocol, and the various tools that use it. These include Meltano, Airbyte, Stitch, PipelineWise, and a few others.
  • I have hundreds of API data endpoints with different schemas. How do I organize?
    1 project | /r/dataengineering | 10 Oct 2022
    Have you looked into using a dedicated data integration tool? Have you heard of Singer and the Singer Spec? https://github.com/singer-io/getting-started/blob/master/docs/SPEC.md
  • CDC (Change Data Capture) with 3rd party APIs
    1 project | /r/dataengineering | 23 Sep 2022
    Or you could build your own such system and run it on Airflow, Prefect, Dagster, etc. Check out the Singer project for a suite of Python packages designed for such a task. Quality varies greatly, though.
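
    For reference, Singer taps and targets compose as Unix pipes, so an orchestrator like Airflow typically just shells out to a pipeline such as the sketch below (tap-exchangeratesapi and target-csv come from the getting-started README; the config file names are placeholders):

        import subprocess

        # Equivalent to:
        #   tap-exchangeratesapi -c tap_config.json | target-csv -c target_config.json
        tap = subprocess.Popen(
            ["tap-exchangeratesapi", "-c", "tap_config.json"],
            stdout=subprocess.PIPE,
        )
        target = subprocess.Popen(
            ["target-csv", "-c", "target_config.json"],
            stdin=tap.stdout,
        )
        tap.stdout.close()  # let the tap receive SIGPIPE if the target exits early
        target.wait()
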
  • Questions about Integration Singer Specification with AWS Glue
    1 project | /r/dataengineering | 26 Aug 2022
    Our team is building out a data platform on AWS Glue, and we pull from a variety of data sources, including application databases and third-party SaaS APIs. I have been looking into ways to standardize pulling data from different sources. The other day I came across the [Singer Specification](https://github.com/singer-io/getting-started) and was interested in learning more about it. If anyone has experience working with Singer specifications, I would love to hear more about:
  • Anybody have experience creating singer taps and targets?
    1 project | /r/dataengineering | 30 Jan 2022
    I just read the readme of the Singer getting-started repo and am excited to write my first tap! I’m thinking that instead of writing a new Airflow DAG whenever I want to pipe API data into our data warehouse, I could write a Singer tap and use Stitch instead. Is that a stupid idea?
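
    For anyone in the same position as the poster above, a minimal sketch of a tap built with the singer Python package (pip install singer-python); the stream name, schema, and API endpoint are hypothetical:

        import requests
        import singer

        SCHEMA = {
            "properties": {
                "id": {"type": "integer"},
                "name": {"type": "string"},
            }
        }

        def main():
            # Hypothetical API; replace with the service you extract from.
            users = requests.get("https://api.example.com/users").json()

            singer.write_schema("users", SCHEMA, key_properties=["id"])
            singer.write_records("users", users)
            singer.write_state({"users": {"synced": True}})

        if __name__ == "__main__":
            main()

    Piped into any target (python tap_users.py | target-csv), the same tap also runs unchanged under Stitch or another Singer-aware orchestrator.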

proposals

Posts with mentions or reviews of proposals. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-12-21.
  • Is there an alternative for Airflow for running thousands of dynamic tasks?
    3 projects | /r/dataengineering | 21 Dec 2022
    Check out the temporal.io open-source project. It was built at Uber for large-scale business-level processes, so any data pipeline is a low-rate use case by definition.
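
    For context, a minimal sketch of what a pipeline step looks like in Temporal's Python SDK, temporalio (the workflow, activity, and their bodies are hypothetical):

        from datetime import timedelta

        from temporalio import activity, workflow

        @activity.defn
        async def extract_batch(source: str) -> int:
            # Hypothetical extraction step; return the number of rows pulled.
            return 0

        @workflow.defn
        class IngestWorkflow:
            @workflow.run
            async def run(self, source: str) -> int:
                # Temporal retries this call and persists its progress
                # across worker restarts.
                return await workflow.execute_activity(
                    extract_batch,
                    source,
                    start_to_close_timeout=timedelta(minutes=5),
                )
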
  • KuFlow as a Temporal.io-based Workflow Orchestrator
    1 project | dev.to | 16 Dec 2022
    With KuFlow it is also possible to work with serverless workflows apart from Temporal.io, as we explain in this blog entry. In summary, it is almost a no-code tool (more precisely, a rather low-code tool): in just a matter of minutes with our drag-and-drop tool, you can have a workflow that interacts with one or more users of the organization.
  • How to handle background jobs in Rust?
    5 projects | /r/rust | 1 Dec 2022
    Otherwise you may want to look into Kafka or Fluvio to ensure that a task runs at least once. If you're doing something like batch operations as a background task, Temporal is another great option.
  • No-code or Workflow as code? Better both
    4 projects | dev.to | 29 Nov 2022
    The runtime is developed using Temporal, which is one of the main tools that we are currently using at KuFlow. Thanks to it, all the workflow executions are robust: your application will be durable, reliable, and scalable.
  • Temporal Programming, a new name for an old paradigm
    2 projects | news.ycombinator.com | 27 Nov 2022
    Hmmm, I got confused by the name. I thought it was related to https://temporal.io/
  • Possible innovations in Event Sourcing frameworks.
    2 projects | /r/microservices | 21 Nov 2022
    Have you looked at temporal.io open source platform? It uses event sourcing as an implementation detail. But it greatly simplifies the user experience compared to "raw event sourcing."
  • After Airflow. Where next for DE?
    13 projects | /r/dataengineering | 15 Nov 2022
    Rewrite Airflow on top of temporal.io. This way, you get unlimited scalability and very high reliability out of the box and would be able to innovate on the features that matter for DE.
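
    To illustrate the suggestion, Temporal can already cover Airflow's basic scheduling with a cron workflow. A sketch using the temporalio Python SDK (IngestWorkflow and extract_batch are the hypothetical definitions from the earlier sketch; the module name, IDs, and queue names are made up):

        import asyncio

        from temporalio.client import Client
        from temporalio.worker import Worker

        from ingest import IngestWorkflow, extract_batch  # hypothetical module

        async def main():
            client = await Client.connect("localhost:7233")

            # Run a worker serving the workflow while we schedule it.
            async with Worker(
                client,
                task_queue="etl",
                workflows=[IngestWorkflow],
                activities=[extract_batch],
            ):
                # Kick off a nightly run, Airflow-style.
                await client.start_workflow(
                    IngestWorkflow.run,
                    "postgres://example",
                    id="nightly-ingest",
                    task_queue="etl",
                    cron_schedule="0 2 * * *",
                )

        asyncio.run(main())
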
  • Show HN: Retool Workflows – Cronjobs, but better
    1 project | news.ycombinator.com | 15 Nov 2022
    Hi all, founder @ Retool here. Over the past year, we’ve been working on Retool Workflows: a fast way for engineers to automate tasks with code. We started building the product because we ourselves (as developers) were looking for something in-between writing cron jobs (which involves a lot of boilerplate) and Zapier (which oftentimes isn’t customizable enough, since it doesn’t _really_ support writing code).

    Workflows is a code-first automation tool: you’re _expected_ to write code, but we handle all the boilerplate for you. For example: out-of-the-box integration with 80+ resources (you probably don’t want to be trying to figure out OAuth 2.0 with Salesforce!), monitoring and observability (so you can see the output of every run in the past, and immediately be notified if something goes wrong), and permissions (e.g. some Okta groups can see the outputs of Workflows, but can’t change the code itself).

    Right now, the product is cloud-only, but we’re hard at work on an on-prem, self-hosted version (in a Docker image). If you’re interested in that version, feel free to email us at [email protected]. We aim to get it out in the next few weeks. Self-hosted Retool is responsible for a large portion of our usage today, and we’re excited to be supporting Workflows too.

    All Retool plans now include 1GB of Workflows throughput, which we think is quite generous (80% of active Workflows users are below 1GB). We don’t bill by run at all, so you’re welcome to run as many workflows as you want.

    We use a bunch of interesting technology for Workflows; we are, for example, using Temporal (https://temporal.io/) under the hood. That’s something we’re going to be writing a blog post about later. (We’ve been hard at work on the launch, hah.)

  • How KuFlow supports Temporal as a workflows engine for our processes?
    3 projects | dev.to | 15 Nov 2022
    In such a diverse world, it would be boring to have a single way of doing things. That's why at KuFlow we support different ways to implement the logic of our processes and tasks. And in this post, we will talk about one of them, the orchestration through Temporal, which gives us a powerful way to manage our workflows.
  • Library for managing tasks when building a workflow automation.
    1 project | /r/softwarearchitecture | 13 Nov 2022

What are some alternatives?

When comparing getting-started and proposals you can also consider the following projects:

airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.

conductor - Conductor is a microservices orchestration engine.

AWS Data Wrangler - pandas on AWS - Easy integration with Athena, Glue, Redshift, Timestream, Neptune, OpenSearch, QuickSight, Chime, CloudWatchLogs, DynamoDB, EMR, SecretManager, PostgreSQL, MySQL, SQLServer and S3 (Parquet, CSV, JSON and EXCEL).

temporalite-archived - An experimental distribution of Temporal that runs as a single process

meltano

zenml - ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.

tap-hubspot

seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models

Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai

kubemq-community - KubeMQ is a Kubernetes native message queue broker

tap-spreadsheets-anywhere

nextjs-cron - Cron jobs with GitHub Actions for Next.js apps on Vercel