First data lake pipeline advice - multitenancy

This page summarizes the projects mentioned and recommended in the original post on /r/dataengineering

  • Airflow

    Apache Airflow - A platform to programmatically author, schedule, and monitor workflows

  • If you are set on developing a solution in house, you have options that don't require many additional tools. Aurora Postgres already supports exporting data to S3. Use an orchestration tool such as AWS ECS Scheduled Tasks, Airflow, or Prefect to run a script (probably Python). That script can ask for all the distinct tenant ids ("SELECT DISTINCT tenant_id FROM ..."), then iterate through them and run a query that copies each tenant's data to that tenant's folder in an S3 bucket. Finally, in Athena, you can create an external table for each tenant that points at the corresponding folder in S3 (see the sketch just below). This will work at first, but there are a bunch of other things to consider in terms of maintaining it: How will you handle schema evolution? Monitoring? Type differences between Aurora and Athena? Data integrity checks? Do you rewrite the data each time or do an incremental upsert? If this is going to be an important feature for your product, you should check out what we are building, because we might be able to make your life a lot easier.
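
Below is a minimal Python sketch of that flow, under assumptions not in the original comment: a hypothetical events table with a numeric tenant_id column, an S3 bucket named my-data-lake, an Athena database named data_lake, and the aws_s3 extension enabled on the Aurora Postgres cluster. Table schema, names, and credentials are placeholders, and the maintenance concerns raised above (schema evolution, monitoring, incremental loads) are deliberately left out.

    # Sketch only: per-tenant export from Aurora Postgres to S3, then an
    # Athena external table per tenant. All names below are hypothetical.
    import boto3
    import psycopg2

    S3_BUCKET = "my-data-lake"      # hypothetical bucket
    AWS_REGION = "us-east-1"        # hypothetical region
    ATHENA_OUTPUT = f"s3://{S3_BUCKET}/athena-query-results/"

    conn = psycopg2.connect(
        host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder
        dbname="app", user="exporter", password="...",                 # placeholder
    )
    athena = boto3.client("athena", region_name=AWS_REGION)

    with conn, conn.cursor() as cur:
        # 1. Ask for all the distinct tenant ids.
        cur.execute("SELECT DISTINCT tenant_id FROM events")
        tenant_ids = [row[0] for row in cur.fetchall()]

        for tenant_id in tenant_ids:
            # 2. Copy that tenant's data to a per-tenant folder in S3 using
            #    Aurora's S3 export function (aws_s3.query_export_to_s3).
            #    int() guards the interpolation; tenant_id is assumed numeric.
            tenant_query = f"SELECT * FROM events WHERE tenant_id = {int(tenant_id)}"
            cur.execute(
                """
                SELECT * FROM aws_s3.query_export_to_s3(
                    %s,
                    aws_commons.create_s3_uri(%s, %s, %s),
                    options := 'format csv'
                )
                """,
                (tenant_query, S3_BUCKET, f"tenants/{tenant_id}/events.csv", AWS_REGION),
            )

            # 3. Create an Athena external table that points at the
            #    corresponding folder. The column list is illustrative only.
            athena.start_query_execution(
                QueryString=f"""
                    CREATE EXTERNAL TABLE IF NOT EXISTS tenant_{tenant_id}_events (
                        event_id bigint,
                        tenant_id bigint,
                        payload string,
                        created_at timestamp
                    )
                    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
                    LOCATION 's3://{S3_BUCKET}/tenants/{tenant_id}/'
                """,
                QueryExecutionContext={"Database": "data_lake"},
                ResultConfiguration={"OutputLocation": ATHENA_OUTPUT},
            )

In practice this script would be run on a schedule by whichever orchestrator you pick (ECS Scheduled Task, Airflow, Prefect). Whether you rewrite each tenant's folder on every run or move to an incremental upsert, as the comment asks, changes only the inner query and the S3 key layout.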


