peerdb VS cli

Compare peerdb vs cli and see what their differences are.

peerdb

Fast, Simple and a cost effective tool to replicate data from Postgres to Data Warehouses, Queues and Storage (by PeerDB-io)

cli

🖥️ Depot CLI, build your Docker images in the cloud (by depot)
             peerdb                                      cli
Mentions     7                                           67
Stars        1,842                                       110
Growth       16.6%                                       8.1%
Activity     9.9                                         9.3
Last commit  5 days ago                                  about 23 hours ago
Language     Go                                          Go
License      GNU General Public License v3.0 or later    MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

peerdb

Posts with mentions or reviews of peerdb. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-06.
  • PeerDB Streams – Simple, Native Postgres Change Data Capture
    4 projects | news.ycombinator.com | 6 May 2024
  • Pgwire: a Rust library for PostgreSQL compatible application
    2 projects | news.ycombinator.com | 20 Mar 2024
    We at PeerDB (https://github.com/PeerDB-io/peerdb) were early adopters of Pgwire to implement our Postgres-compatible SQL layer for ETL. It was very easy to work with and saved us multiple months of effort compared to building it from scratch.
  • FLaNK AI Weekly 18 March 2024
    39 projects | dev.to | 18 Mar 2024
  • Show HN: Open-source x64 and Arm GitHub runners. Reduces GitHub Actions bill 10x
    7 projects | news.ycombinator.com | 30 Jan 2024
    We've been using the Ubicloud runners for a while at PeerDB[1]. Great value, and especially the Arm runners have been helpful in getting our CI costs down. The team is really responsive and added Arm runner support within a few weeks of us requesting it.

    [1] https://github.com/PeerDB-io/peerdb

  • Benchmarking Postgres Replication: PeerDB vs. Airbyte
    1 project | news.ycombinator.com | 10 Oct 2023
    Thanks for posting this question. Composite primary key support is actively being worked on and should be available in 1-2 weeks :) - https://github.com/PeerDB-io/peerdb/pull/499
  • Launch HN: PeerDB (YC S23) – Fast, Native ETL/ELT for Postgres
    2 projects | news.ycombinator.com | 27 Jul 2023
    Hi HN! I'm Sai, the co-founder and CEO of PeerDB (https://www.peerdb.io/), a Postgres-first data-movement platform that makes moving data in and out of Postgres fast and simple. PeerDB is free and open source, and we provide a Docker stack for users to try it out. Our repo is at https://github.com/PeerDB-io/peerdb and there's a 5-minute quickstart here: https://docs.peerdb.io/quickstart.

    For the past 8 years, working on Postgres on Azure at Microsoft and before that at Citus Data, I've worked closely with customers running Postgres at the heart of their data stack, storing anywhere from tens of GB to tens of TB of data.

    This was when I got exposed to the challenges customers faced when moving data in and out of Postgres. Usually they would try existing ETL tools, fail, and decide to build in-house solutions. Common issues with these tools included painfully slow syncs (syncing 100s of GB of data took days), flakiness and unreliability (frequent crashes, loss of data precision on the target, etc.), and limited features (lack of configurability, unsupported data types, and so on).

    I remember a specific scenario where a tool didn't support something as simple as Postgres' COPY command for ingesting data, which would have improved throughput by orders of magnitude. The customer and I reached out to that company to request this feature. They couldn't prioritize it because it wasn't easy for them: their tech stack was designed to support 100s of connectors rather than to support a native Postgres feature.

    After multiple such occurrences, I thought, why not build a tool specialized for Postgres, making the lives of many Postgres users easier. I reached out to my long-time buddy Kaushik, who was building operating systems at Google and had led data teams at Safegraph and Palantir. We spent a few weeks building an MVP that streamed data in real-time from Postgres to BigQuery. It was 10 times faster than existing tools and maintained data freshness of less than 30 seconds. We realized that there were many Postgres native and infrastructural optimizations we could do to provide a rich data-movement experience for Postgres users. This is when we decided to start PeerDB!

    We started with two main use cases: Real-time Change Data Capture from Postgres (demo: https://docs.peerdb.io/usecases/realtime-cdc#demo) and Real-time Streaming of query results from Postgres (demo: https://docs.peerdb.io/usecases/realtime-streaming-of-query-...). The 2nd demo shows PeerDB streaming a table with 100M rows from Postgres to Snowflake.

    We implement multiple optimizations to provide a fast, reliable, feature-rich experience. For performance, we can parallelize the initial load of a large table while still ensuring consistency, so syncing 100s of GB goes from days to minutes. We do this by logically partitioning the table based on internal tuple identifiers (CTID) and streaming those partitions in parallel (inspired by this DuckDB blog - https://duckdb.org/2022/09/30/postgres-scanner.html#parallel...)
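
    To make the partition-and-stream idea above concrete, here is a minimal Go sketch (using pgx) that reads disjoint ctid page ranges of a table concurrently. The table, columns, connection string, and partition size are made up for illustration, and the sketch omits the snapshot handling a real tool needs for consistency; it is not PeerDB's actual implementation.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"sync"

	"github.com/jackc/pgx/v5"
)

// copyCtidRange streams one logical partition of a (hypothetical) big_table,
// identified by a half-open range of heap pages, toward a destination.
func copyCtidRange(ctx context.Context, connStr string, startPage, endPage int) error {
	conn, err := pgx.Connect(ctx, connStr)
	if err != nil {
		return err
	}
	defer conn.Close(ctx)

	// Each worker reads a disjoint slice of the heap via a ctid predicate.
	query := fmt.Sprintf(
		"SELECT id, payload FROM big_table WHERE ctid >= '(%d,0)' AND ctid < '(%d,0)'",
		startPage, endPage)
	rows, err := conn.Query(ctx, query)
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var id int64
		var payload string
		if err := rows.Scan(&id, &payload); err != nil {
			return err
		}
		// ... write the row to the warehouse / queue / storage target ...
	}
	return rows.Err()
}

func main() {
	const pagesPerPartition = 100_000 // arbitrary partition size for the sketch
	var wg sync.WaitGroup
	for p := 0; p < 8; p++ {
		wg.Add(1)
		go func(p int) {
			defer wg.Done()
			if err := copyCtidRange(context.Background(), "postgres://localhost/source",
				p*pagesPerPartition, (p+1)*pagesPerPartition); err != nil {
				log.Println("partition", p, "failed:", err)
			}
		}(p)
	}
	wg.Wait()
}
```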

    For CDC, we don't use Debezium; instead we handle replication natively: reading the slot, replicating the changes, keeping state, etc. We made this choice mainly for flexibility. Staying native helps us use existing and future Postgres enhancements more effectively. For example, if the order of rows across tables on the target is not important, we can parallelize reading of a single slot across multiple tables and improve performance. Our architecture is designed for real-time syncs, which enables data freshness of a few 10s of seconds even at large throughputs (10k+ tps).
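
    As a simplified illustration of consuming a replication slot directly (rather than going through Debezium), the Go sketch below polls a logical slot with Postgres' pg_logical_slot_get_changes() function and the test_decoding output plugin. A production CDC reader would typically use the streaming replication protocol rather than polling; the slot name and flow here are a hypothetical sketch, not PeerDB's code.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://localhost/source")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Create a logical replication slot using the test_decoding output plugin
	// that ships with Postgres. (A real tool would handle "already exists".)
	_, _ = conn.Exec(ctx,
		"SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding')")

	// Poll the slot and consume whatever changes have accumulated so far.
	rows, err := conn.Query(ctx,
		"SELECT lsn::text, xid::text, data FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL)")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var lsn, xid, data string
		if err := rows.Scan(&lsn, &xid, &data); err != nil {
			log.Fatal(err)
		}
		// A real pipeline would decode the change, batch it, write it to the
		// target, and persist the confirmed LSN as its replication state.
		fmt.Println(lsn, xid, data)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```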

    We have fault-tolerance mechanisms for reliability (https://blog.peerdb.io/using-temporal-to-scale-data-synchron...) and support multiple features, including log-based (CDC) and query-based streaming, efficient syncing of tables with large (TOAST) columns, and configurable batching and parallelism to prevent OOMs and crashes.

    For usability, we provide a Postgres-compatible SQL layer for data movement. This makes the life of data engineers much easier: they can develop pipelines using a framework they are familiar with, without needing to deal with custom UIs and REST APIs, and they can use Postgres' 100s of integrations to build and manage ETL. We extend Postgres' SQL grammar with a few new intuitive SQL commands to enable real-time data streaming across stores. Because of this, we were able to add dbt integration via Dagster (in private preview) in a few hours! We expect data engineers to build similar integrations with PeerDB easily, and we plan to make this grammar richer as we evolve.
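
    As an illustration of what driving pipelines from such a SQL layer can look like, the Go sketch below connects to a PeerDB-style server over the normal Postgres wire protocol and issues peer/mirror commands. The host, port, credentials, peer names, and the exact command grammar shown are assumptions made for illustration; consult the PeerDB docs for the real syntax.

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()

	// The server speaks the Postgres wire protocol, so any Postgres driver or
	// psql can talk to it. Host, port, and credentials here are placeholders.
	conn, err := pgx.Connect(ctx, "postgres://peerdb:peerdb@localhost:9900/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Illustrative command shapes only; the real grammar and options may differ.
	stmts := []string{
		`CREATE PEER source_pg FROM POSTGRES WITH (
			host = 'source.example.com', port = '5432',
			user = 'replicator', password = '...', database = 'app')`,
		`CREATE PEER target_sf FROM SNOWFLAKE WITH (
			account_id = '...', warehouse = 'COMPUTE_WH',
			database = 'ANALYTICS', username = 'loader', private_key = '...')`,
		`CREATE MIRROR orders_cdc FROM source_pg TO target_sf
			WITH TABLE MAPPING (public.orders:public.orders)`,
	}
	for _, s := range stmts {
		if _, err := conn.Exec(ctx, s); err != nil {
			log.Fatalf("statement failed: %v", err)
		}
	}
}
```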

    PeerDB consists of the following components to handle data replication: (1) the PeerDB Server uses the pgwire protocol to mimic a PostgreSQL server; it is responsible for query routing and generating gRPC requests to the Flow API, relying on AST analysis to make informed routing decisions. (2) The Flow API is an API layer that handles gRPC commands and orchestrates the data sync operations. (3) Flow Workers execute the data read-write operations from the source to the destination; built to scale horizontally, they interact with Temporal for increased resilience, do all of the heavy lifting, and have data-store-specific optimizations. The types of data replication supported include CDC streaming replication and query-based batch replication.

    Currently we support 6 target data stores (BigQuery, Snowflake, Postgres, S3, Kafka etc) for data movement from Postgres. This doc captures the current status of the connectors: https://docs.peerdb.io/sql/commands/supported-connectors.

    As we spoke to more customers, we realized that getting data into PostgreSQL at scale is equally important and hard. For example, one of our customers wants to periodically sync data from multiple SQL Server instances (running on the edge) to their centralized Postgres database. Requests for Oracle to Postgres migrations are also common. So now we're also supporting source data stores with Postgres as the target (currently SQL Server and Postgres itself, with more to come).

    We are actively working with customers to onboard them to our self-hosted enterprise offering. Our fully hosted offering on the cloud is in private preview. We haven't yet decided on pricing. One common concern we've heard from customers is that existing tools are expensive and charge based on the amount of data transferred. To address this, we are considering a more transparent way of pricing—for example, pricing based on provisioned hardware (cpu, memory, disk). We're open to feedback on this!

    Check out our github repo - https://github.com/PeerDB-io/peerdb and go ahead and give it a spin (5-minute quickstart https://docs.peerdb.io/quickstart).

    We want to provide the world’s best data-movement experience for Postgres. We would love to get your feedback on product experience, our thesis and anything else that comes to your mind. It would be super useful for us. Thank you!

cli

Posts with mentions or reviews of cli. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-04.
  • Show HN: Managed GitHub Actions Runners for AWS
    6 projects | news.ycombinator.com | 4 Apr 2024
    Hey HN! I'm Jacob, one of the founders of Depot (https://depot.dev), a build service for Docker images, and I'm excited to show what we’ve been working on for the past few months: run GitHub Actions jobs in AWS, orchestrated by Depot!

    Here's a video demo: https://www.youtube.com/watch?v=VX5Z-k1mGc8, and here’s our blog post: https://depot.dev/blog/depot-github-actions-runners.

    While GitHub Actions is one of the most prevalent CI providers, Actions is slow, for a few reasons: GitHub uses underpowered CPUs, network throughput for cache and the internet at large is capped at 1 Gbps, and total cache storage is limited to 10GB per repo. It is also rather expensive for runners with more than 2 CPUs, and larger runners frequently take a long time to start running jobs.

    Depot-managed runners solve this! Rather than your CI jobs running on GitHub's slow compute, Depot routes those same jobs to fast EC2 instances. And not only is this faster, it’s also 1/2 the cost of GitHub Actions!

    We do this by launching a dedicated instance for each job, registering that instance as a self-hosted Actions runner in your GitHub organization, then terminating the instance when the job is finished. Using AWS as the compute provider has a few advantages:

    - CPUs are typically 30%+ more performant than alternatives (the m7a instance type).

    - Each instance has high-throughput networking of up to 12.5 Gbps, hosted in us-east-1, so interacting with artifacts, cache, container registries, or the internet at large is quick.

    - Each instance has a public IPv4 address, so it does not share rate limits with anyone else.

    We integrated the runners with the distributed cache system (backed by S3 and Ceph) that we use for Docker build cache, so jobs automatically save / restore cache from this cache system, with speeds of up to 1 GB/s, and without the default 10 GB per repo limit.

    Building this was a fun challenge; some matrix workflows start 40+ jobs at once, requiring 40 EC2 instances to launch at once.

    We've gotten very good at starting EC2 instances quickly with a "warm pool" system: we prepare many EC2 instances to run jobs, stop them, then resize and start them when an actual job request arrives, keeping job queue times around 5 seconds. We're using a homegrown orchestration system, as alternatives like autoscaling groups or Kubernetes weren't fast or secure enough.
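
    As a rough, hypothetical sketch of the per-job lifecycle described above (not Depot's actual orchestrator), the Go snippet below models the warm-pool idea against made-up compute and GitHub interfaces: instances are prepared and stopped ahead of time, and a job request resizes and starts one, registers it as a self-hosted runner, and tears it down when the job finishes.

```go
package orchestrator

import (
	"context"
	"fmt"
)

// compute abstracts the cloud provider. In practice this would wrap EC2 calls
// (RunInstances, StopInstances, ModifyInstanceAttribute, StartInstances,
// TerminateInstances). The interface itself is hypothetical.
type compute interface {
	ResizeAndStart(ctx context.Context, instanceID, instanceType string) error
	Terminate(ctx context.Context, instanceID string) error
}

// github abstracts self-hosted runner registration for an organization.
type github interface {
	RegisterRunner(ctx context.Context, org, instanceID string) error
	RemoveRunner(ctx context.Context, org, instanceID string) error
}

type jobRequest struct {
	Org          string
	InstanceType string // size matched to the requested runner labels
}

// runJob takes a pre-warmed (stopped) instance from the pool, starts it at the
// requested size, registers it as a runner, and tears everything down after.
func runJob(ctx context.Context, c compute, gh github, pool <-chan string, job jobRequest) error {
	instanceID := <-pool // prepared and stopped ahead of time, so startup is fast

	if err := c.ResizeAndStart(ctx, instanceID, job.InstanceType); err != nil {
		return err
	}
	defer c.Terminate(ctx, instanceID) // single-use: never reused across jobs

	if err := gh.RegisterRunner(ctx, job.Org, instanceID); err != nil {
		return err
	}
	defer gh.RemoveRunner(ctx, job.Org, instanceID)

	fmt.Println("runner", instanceID, "is picking up the job")
	// ... wait for the Actions job on this runner to complete ...
	return nil
}
```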

    There are three alternatives to our managed runners currently:

    1. GitHub offers larger runners: these have more CPUs, but still have slow network and cache. Depot runners are also 1/2 the cost per minute of GitHub's runners.

    2. You can self-host the Actions runner on your own compute: this requires ongoing maintenance, and it can be difficult to ensure that the runner image or container matches GitHub's.

    3. There are other companies offering hosted GitHub Actions runners, though they frequently use cheaper compute hosting providers that are bottlenecked on network throughput or geography.

    Any feedback is very welcome! You can sign up at https://depot.dev/sign-up for a free trial if you'd like to try it out on your own workflows. We aren't able to offer a trial without a signup gate, both because using it requires installing a GitHub app, and we're offering build compute, so we need some way to keep out the cryptominers :)

  • Show HN: Open-source x64 and Arm GitHub runners. Reduces GitHub Actions bill 10x
    7 projects | news.ycombinator.com | 30 Jan 2024
    Depot [0] founder here. Thanks for the mention. We're also planning on bringing a bit of a different take to GitHub Action runners that's not tied to Hetzner directly. It will be entirely open-source as well, so you can take it and run it on your own instances if you'd like. Similar to how Depot supports self-hosted builders in your own AWS account [1].

    [0] https://depot.dev/

    [1] https://depot.dev/docs/self-hosted/architecture

  • Dive: A tool for exploring a Docker image, layer contents and more
    4 projects | news.ycombinator.com | 8 Jan 2024
    Dive is an amazing tool in the container/Docker space. It makes it so much easier to debug what is actually in your container. When we were first getting started with Depot [0], we often got asked how to reduce image size as well as make builds faster. So we wrote up a quick blog post that shows how to use Dive to help with that problem [1]. It might be a bit dated now, but it's here in case it helps a future person.

    Dive also inspired us to make it easier to surface what is actually in your build context, on every build. So we shipped that as a feature in Depot a few weeks back.

    [0] https://depot.dev

  • Build Docker images faster using build cache
    1 project | dev.to | 7 Jan 2024
    If you want to learn more about how Depot can help you optimize your Docker image builds, sign up for our free trial.
  • Show HN: WarpBuild – x86-64 and arm GitHub Action runners for 30% faster builds
    10 projects | news.ycombinator.com | 8 Dec 2023
    We have this with https://depot.dev out of the box. You connect to a native BuildKit and run your Docker image build on native Intel and Arm CPUs with fast persistent SSD cache orchestrated across builds. It’s immediately there on the next build without having to save/load it over the network.
  • Launch HN: Loops (YC W22) – Email for SaaS Companies
    2 projects | news.ycombinator.com | 21 Sep 2023
    We use Loops to power the core of our email things for Depot [0] and it's been quite a breeze to use.

    I think there are some logic things to get right at the API level, like whether to use events or contact properties to trigger loops. We're working on some of that and wish the guidance was a bit better/clearer. At the moment, any properties you send with an event get added to the contact, so it seems like contact properties are the way to go.

    My last request would be to support array properties on contacts as a given contact could be in multiple "things".

    [0] https://depot.dev/

  • Show HN: An OIDC issuer for GitHub Actions pull_request workflows
    2 projects | news.ycombinator.com | 19 Jul 2023
    We encountered a specific GitHub Actions restriction at Depot[0]: for pull_request workflows that originate from open-source forks, Actions disables access to all repository secrets and to the Actions OIDC issuer, as a security mechanism to deny untrusted code access to those secrets.

    But we needed a way to authenticate our CLI within those public workflows. This OIDC issuer is the result of that need, and works like so:

    1. The pull_request workflow makes a "claim request" to the OIDC issuer, claiming certain details about the workflow like the ID, run ID, repository, etc.

    2. The OIDC issuer responds with a "challenge code" that the workflow must periodically print to its logs

    3. The OIDC issuer connects to the GitHub Actions websocket endpoint for log streaming, validates that the challenge code is being printed, then returns a new OIDC token to the workflow

    This is working well for us, and lets us acquire an OIDC token similar to the GitHub Actions native OIDC token. The issuer itself runs as a Cloudflare Worker.
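
    From the workflow's point of view, the exchange above amounts to: request a challenge, print it to the job log, then poll until the issuer has verified the log line and returns a token. The Go sketch below shows that client side; the issuer URL, endpoint paths, and JSON fields are hypothetical placeholders, not the real API.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

// Hypothetical issuer request/response shapes, for illustration only.
type claimRequest struct {
	Repository string `json:"repository"`
	RunID      string `json:"run_id"`
}

type claimResponse struct {
	ChallengeCode string `json:"challenge_code"`
	ExchangeURL   string `json:"exchange_url"`
}

func main() {
	issuer := "https://oidc.example.dev" // placeholder issuer URL

	// 1. Claim: tell the issuer which workflow run we say we are.
	body, _ := json.Marshal(claimRequest{Repository: "acme/widgets", RunID: "1234567890"})
	resp, err := http.Post(issuer+"/claim", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var claim claimResponse
	if err := json.NewDecoder(resp.Body).Decode(&claim); err != nil {
		log.Fatal(err)
	}

	// 2. Print the challenge code to the job log; the issuer watches the
	//    GitHub Actions log stream for this line to prove we control the run.
	fmt.Println("oidc-challenge:", claim.ChallengeCode)

	// 3. Poll the exchange endpoint until the issuer has verified the log
	//    line and minted an OIDC token for this run.
	for i := 0; i < 30; i++ {
		time.Sleep(2 * time.Second)
		r, err := http.Get(claim.ExchangeURL)
		if err != nil || r.StatusCode != http.StatusOK {
			if r != nil {
				r.Body.Close()
			}
			continue
		}
		var out struct {
			Token string `json:"token"`
		}
		err = json.NewDecoder(r.Body).Decode(&out)
		r.Body.Close()
		if err == nil && out.Token != "" {
			fmt.Println("received OIDC token of length", len(out.Token))
			return
		}
	}
	log.Fatal("timed out waiting for OIDC token")
}
```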

    Happy to answer questions and I'd love any feedback you may have!

    [0] https://depot.dev

  • Show HN: depot.ai – easily embed ML / AI models in your Dockerfile
    3 projects | news.ycombinator.com | 18 Jul 2023
    To optimize build speed, cache hits, and registry storage, we're building each image reproducibly and indexing the contents with eStargz[0]. The image is stored on Cloudflare R2, and served via a Cloudflare Worker. Everything is open source[1]!

    Compared to alternatives like `git lfs clone` or downloading your model at runtime, embedding it with `COPY` produces layers that are cache-stable, with identical hash digests across rebuilds. This means they can be fully cached, even if your base image or source code changes.

    And for Docker builders that enable eStargz, copying single files from the image will download only the requested files. eStargz can be enabled in a variety of image builders[2], and we’ve enabled it by default on Depot[3].

    Here’s an announcement post with more details: https://depot.dev/blog/depot-ai.

    We’d love to hear any feedback you may have!

    [0] https://github.com/containerd/stargz-snapshotter/blob/main/docs/estargz.md

    [1] https://github.com/depot/depot.ai

    [2] https://github.com/containerd/stargz-snapshotter/blob/main/docs/integration.md#image-builders

    [3] https://depot.dev

  • Launch HN: Resend (YC W23) – Email API for Developers Using React
    11 projects | news.ycombinator.com | 13 Jun 2023
    We use Resend for our transactional email at https://depot.dev after migrating away from Postmark following their acquisition. It's been awesome so far and because our app is Remix underneath the hood, it was delightfully easy to get our emails exactly how we wanted them.

    The visibility into what emails have been sent, to whom, and what the content was is also incredibly helpful when we are talking about transactional emails. Double bonus for being able to share that email as well.

  • Docker layer cache is better when shared with your team.
    1 project | /r/u_depotdev | 19 May 2023

What are some alternatives?

When comparing peerdb and cli you can also consider the following projects:

pglogical - Logical Replication extension for PostgreSQL 15, 14, 13, 12, 11, 10, 9.6, 9.5, 9.4 (Postgres), providing much faster replication than Slony, Bucardo or Londiste, as well as cross-version upgrades.

lime - New standard library and runtime for the D programming language

transfer - Database replication platform that leverages change data capture. Stream production data from databases to your data warehouse (Snowflake, BigQuery, Redshift) in real-time.

plane - A distributed system for running WebSocket services at scale.

realtime - Broadcast, Presence, and Postgres Changes via WebSockets

windmill - Open-source developer platform to turn scripts into workflows and UIs. Fastest workflow engine (5x vs Airflow). Open-source alternative to Airplane and Retool.

cloudquery - The open source high performance ELT framework powered by Apache Arrow

fasten-onprem - Fasten is an open-source, self-hosted, personal/family electronic medical record aggregator, designed to integrate with 100,000's of insurances/hospitals/clinics

materialize - The data warehouse for operational workloads.

resend-node - resend's node.js sdk

bytebase - The GitHub/GitLab for database DevOps. World's most advanced database DevOps and CI/CD for Developer, DBA and Platform Engineering teams.

resend-java - Resend's Java SDK