| | dlt | verified-sources |
|---|---|---|
| Mentions | 6 | 2 |
| Stars | 1,758 | 42 |
| Growth | 9.0% | - |
| Activity | 9.9 | 9.0 |
| Latest commit | 5 days ago | 6 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dlt
- Ask HN: Freelancer? Seeking freelancer? (December 2023)
SEEKING FREELANCER | REMOTE | GERMANY
dltHub is looking for freelance help in the following repos:
- https://github.com/dlt-hub/dlt
- Show HN: Data load tool (dlt) – Python library to automate the creation of datasets
You can use pydantic models to define schemas, validate data (we also load instances of the models natively): https://dlthub.com/docs/general-usage/resource#define-a-sche...
We have a PR (https://github.com/dlt-hub/dlt/pull/594) that is about to be merged and that makes the above highly configurable, anywhere between full schema evolution and hard stopping:
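The comment above points at defining schemas and validating data with Pydantic models; a minimal sketch, assuming `pydantic` is installed (the `User` model and sample rows are illustrative, not from the thread):

```python
# Sketch of schema definition + validation with Pydantic, as described above.
# The User model and sample rows are illustrative assumptions, not dlt's own code.
from typing import Optional

import pydantic


class User(pydantic.BaseModel):
    id: int
    name: str
    email: Optional[str] = None


# Valid rows construct fine; dlt can also take such a model (e.g. via a
# resource's columns argument) to drive the table schema — see the docs link above.
row = User(id=1, name="alice")

# Invalid rows raise ValidationError, which is where the evolve-vs-hard-stop
# behavior discussed in the PR comes into play.
try:
    User(id="not-an-int", name="bob")
except pydantic.ValidationError:
    print("row rejected")
```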
- Data load tool (dlt) – open-source Python library that makes data loading easy
- [Discussion] How to implement Data Contracts generically? Seeking advice from data contract users.
customize pipeline with hooks
- Which ETL/ELT tools do you think have a future in the data engineering space?
verified-sources
- Ask HN: Freelancer? Seeking freelancer? (December 2023)
- https://github.com/dlt-hub/verified-sources
Please look at the issues, README, and CONTRIBUTING guides - those are the tasks you'll be working on. We will make sure you are onboarded and will give you comprehensive reviews of your code. We expect you to work with us a minimum of 20 hours/week, and ideally you will be flexible to do more upon request.
dlt is an open-source library that automatically creates datasets out of messy, unstructured data sources. You can use it to move data from just about anywhere into most well-known SQL and vector stores, data lakes, storage buckets, or local engines like DuckDB. It automates many cumbersome data engineering tasks and can be used by anyone who knows Python. You can read more about us here: https://news.ycombinator.com/item?id=37999527
---
Your Task and Responsibilities:
- Show HN: Data load tool (dlt) – Python library to automate the creation of datasets
- get data from any storage bucket: https://github.com/dlt-hub/verified-sources/tree/master/sour...
What are some alternatives?
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai
Proxmox-load-balancer - Designed to keep a Proxmox cluster constantly in balance
Prefect - The easiest way to build, run, and monitor data pipelines at scale.
grablinks - A simple and streamlined Python script to extract and filter links from a remote HTML resource.
Apache Superset - Apache Superset is a Data Visualization and Data Exploration Platform [Moved to: https://github.com/apache/superset]
Udacity-Data-Engineering-Projects - Few projects related to Data Engineering including Data Modeling, Infrastructure setup on cloud, Data Warehousing and Data Lake development.
prism - Prism is the easiest way to develop, orchestrate, and execute data pipelines in Python.
data-engineering-zoomcamp - Free Data Engineering course!
AWS Data Wrangler - pandas on AWS - Easy integration with Athena, Glue, Redshift, Timestream, Neptune, OpenSearch, QuickSight, Chime, CloudWatchLogs, DynamoDB, EMR, SecretManager, PostgreSQL, MySQL, SQLServer and S3 (Parquet, CSV, JSON and EXCEL).
DXY-COVID-19-Data - COVID-19/2019-nCoV infection time-series data warehouse
versatile-data-kit - One framework to develop, deploy and operate data workflows with Python and SQL.