dbd VS ethereum-etl

Compare dbd vs ethereum-etl and see how they differ.

dbd

dbd is a database prototyping tool that enables data analysts and engineers to quickly load and transform data in SQL databases. (by zsvoboda)

ethereum-etl

Python scripts for ETL (extract, transform and load) jobs for Ethereum blocks, transactions, ERC20 / ERC721 tokens, transfers, receipts, logs, contracts, internal transactions. Data is available in Google BigQuery https://goo.gl/oY5BCQ (by blockchain-etl)
                 dbd                                   ethereum-etl
Mentions         4                                     3
Stars            55                                    2,823
Growth           -                                     1.6%
Activity         0.0                                   6.9
Latest commit    about 2 years ago                     18 days ago
Language         Python                                Python
License          BSD 3-clause "New" or "Revised"       MIT
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

dbd

Posts with mentions or reviews of dbd. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-01-23.
  • Easy loading Kaggle dataset to a database
    2 projects | /r/datascience | 23 Jan 2022
    I've created two examples of how to use the dbd tool to load Kaggle dataset data files (csv, json, xls, parquet) into your Postgres, MySQL, or SQLite database. Basically, you don't have to create any tables or run any SQL INSERT or COPY statements. Everything is automated. You just reference the datasets and files with a URL and execute a 'dbd run' command. The examples are here. Perhaps you'll find them useful. Let me know what you think! (The first sketch after this list illustrates the kind of load step dbd automates.)
  • Easy loading dataset files to a database
    2 projects | /r/kaggle | 23 Jan 2022
    I've created two examples of how to use the [dbd](https://github.com/zsvoboda/dbd) tool to load Kaggle dataset data files (csv, json, xls, parquet) to your Postgres, MySQL, or SQLite database.
  • dbd: create your database from data files on your directory
    1 project | /r/SQL | 15 Jan 2022
    I work on a new open-source tool called dbd that enables you to load data from your local data files into your database and transform it using insert-from-select statements. The tool supports Jinja2 templating. It works with Postgres, MySQL, SQLite, Snowflake, Redshift, and BigQuery. (The second sketch after this list shows the templated insert-from-select pattern in plain Python.)
  • New opensource ELT tool
    1 project | /r/dataengineering | 9 Jan 2022
    I was looking for a declarative ELT tool for building my analytics solutions, and DBT was the closest I found. I liked its concept, but I ran into quite a few limitations when I wanted to use it: I couldn't specify and create basic things like data types, indexes, primary/foreign keys, etc. In the end, I decided to implement my own - more straightforward and more flexible - and I've published the result, dbd, on GitHub. Perhaps you'll find it helpful. Your feedback is greatly appreciated!
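
For context on the Kaggle-loading post above, here is a minimal sketch of the kind of work dbd automates: inferring a table from a CSV and loading it into SQLite without hand-written DDL or INSERT statements. It uses pandas and SQLAlchemy directly rather than dbd itself, and the file, database, and table names are placeholders.

```python
# Sketch of the load step dbd automates: infer a table from a CSV and
# load it without writing CREATE TABLE or INSERT statements by hand.
# Uses pandas + SQLAlchemy directly; "titanic.csv", "kaggle.db" and the
# table name are placeholders, not part of dbd itself.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///kaggle.db")

df = pd.read_csv("titanic.csv")   # any Kaggle CSV export
df.to_sql(                        # table is created with inferred column types
    "titanic", engine,
    if_exists="replace", index=False,
)

print(pd.read_sql("SELECT COUNT(*) AS n FROM titanic", engine))
```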
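
The insert-from-select transformation with Jinja2 templating described in the /r/SQL post can likewise be approximated in plain Python. The sketch below renders a templated SQL statement and executes it with SQLAlchemy; the table and column names are hypothetical, and the pattern is only an illustration of what dbd wraps in configuration.

```python
# Rough illustration of a templated insert-from-select transform, the
# pattern dbd (and dbt) wrap in configuration. Table/column names are
# hypothetical; runs against the SQLite database from the previous
# sketch or any other SQLAlchemy-supported engine.
from jinja2 import Template
from sqlalchemy import create_engine, text

TRANSFORM = Template("""
    INSERT INTO {{ target }} (pclass, passengers)
    SELECT Pclass, COUNT(*) FROM {{ source }} GROUP BY Pclass
""")

engine = create_engine("sqlite:///kaggle.db")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE IF NOT EXISTS passengers_by_class "
        "(pclass INTEGER, passengers INTEGER)"
    ))
    conn.execute(text(TRANSFORM.render(
        target="passengers_by_class", source="titanic"
    )))
```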

ethereum-etl

Posts with mentions or reviews of ethereum-etl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-27.
  • Blockchain transactions decoding: making wallet activity understandable
    5 projects | dev.to | 27 Oct 2023
    An event is a log entry that EVM smart contracts can emit during transaction execution. Events are very good at signalling that some action has taken place on-chain. Applications can subscribe to events to trigger off-chain logic, or they can index, transform, and store events in off-chain storage (see The Graph protocol or Ethereum ETL). (The first sketch after this list shows how to pull such event logs directly from a node.)
  • data engineering in web3
    1 project | /r/dataengineering | 20 May 2022
    I'm surprised this is the only good response in this thread so far. Blockchain data is completely open, but it requires some organization before you can run analytics on it. Nansen, for example, is a product built on top of ethereum-etl, which you can check out here.
  • Trying To Recover Old ETH
    1 project | /r/ethereum | 1 Jan 2021
    You can use https://github.com/blockchain-etl/ethereum-etl. (The second sketch below shows a minimal block/transaction export with its CLI.)
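
As a companion to the event discussion in the dev.to post above, here is a minimal sketch of reading ERC-20 Transfer logs directly from a node with web3.py - the raw material ethereum-etl extracts at scale. It assumes a recent web3.py (v6-style API), and the RPC endpoint URL is a placeholder you must replace.

```python
# Minimal example of pulling ERC-20 Transfer event logs from a node,
# the raw data ethereum-etl exports in bulk. Assumes web3.py v6; the
# provider URL is a placeholder.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://YOUR-RPC-ENDPOINT"))

# keccak256("Transfer(address,address,uint256)") -- the ERC-20 Transfer topic
TRANSFER_TOPIC = Web3.keccak(text="Transfer(address,address,uint256)").hex()

latest = w3.eth.block_number
logs = w3.eth.get_logs({
    "fromBlock": latest - 10,   # small window; wide ranges need pagination
    "toBlock": latest,
    "topics": [TRANSFER_TOPIC],
})

for log in logs[:5]:
    print(log["address"], log["transactionHash"].hex())
```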
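
For the export itself, ethereum-etl is typically driven from its CLI. The sketch below wraps the export_blocks_and_transactions command (as documented in the project's README) in a Python subprocess call; the block range is arbitrary and the provider URI is a placeholder.

```python
# Drive the ethereum-etl CLI from Python to export a block range to CSV.
# Flags follow the project's README (export_blocks_and_transactions);
# the provider URI is a placeholder. Install first: pip install ethereum-etl
import subprocess

subprocess.run(
    [
        "ethereumetl", "export_blocks_and_transactions",
        "--start-block", "0",
        "--end-block", "1000",
        "--provider-uri", "https://YOUR-RPC-ENDPOINT",
        "--blocks-output", "blocks.csv",
        "--transactions-output", "transactions.csv",
    ],
    check=True,
)
```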

What are some alternatives?

When comparing dbd and ethereum-etl you can also consider the following projects:

Skytrax-Data-Warehouse - A full data warehouse infrastructure with ETL pipelines running inside Docker on Apache Airflow for data orchestration, AWS Redshift as the cloud data warehouse, and Metabase for data visualizations such as analytical dashboards.

CueObserve - Timeseries Anomaly detection and Root Cause Analysis on data in SQL data warehouses and databases

pgsync - Postgres to Elasticsearch/OpenSearch sync

helium-etl-queries - A collection of SQL views used to enrich data produced by a Helium blockchain-etl

data-toolset - Upgrade from avro-tools and parquet-tools jars to a more user-friendly Python package.

rainbow_csv - 🌈Rainbow CSV - Vim plugin: Highlight columns in CSV and TSV files and run queries in SQL-like language

api - Moved to https://github.com/covid19india/data/

NetXML-to-CSV - Convert .netxml file into CSV file

sqlmesh - Efficient data transformation and modeling framework that is backwards compatible with dbt.

spotty - Training deep learning models on AWS and GCP instances

pydwt - A modeling tool like dbt for using SQLAlchemy Core with a DataFrame-like interface

gcptree - Like the unix tree command but for GCP Org Hierarchy