| | patterns-devkit | prism |
|---|---|---|
| Mentions | 5 | 7 |
| Stars | 106 | 79 |
| Stars growth (MoM) | 0.0% | - |
| Activity | 2.9 | 8.9 |
| Latest commit | about 1 year ago | about 1 month ago |
| Language | Python | Python |
| License | BSD 3-Clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
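The activity number above is a recency-weighted measure of development: newer commits count for more than older ones. The site's exact formula isn't published, so the sketch below is purely illustrative, using exponential decay by commit age with an assumed 90-day half-life and a hypothetical `activity_score` helper:

```python
from datetime import datetime, timedelta

def activity_score(commit_dates, now, half_life_days=90.0):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age_in_days / half_life_days), so recent commits
    carry more weight than older ones. Illustrative only --
    not the site's actual (unpublished) formula."""
    score = 0.0
    for d in commit_dates:
        age_days = max((now - d).total_seconds() / 86400.0, 0.0)
        score += 0.5 ** (age_days / half_life_days)
    return score

now = datetime(2024, 1, 1)
recent = [now - timedelta(days=k) for k in (1, 5, 10)]
old = [now - timedelta(days=k) for k in (300, 400, 500)]
# Three recent commits outscore three much older ones.
print(activity_score(recent, now) > activity_score(old, now))  # True
```

Under this scheme a project with a burst of commits last week scores higher than one with the same number of commits a year ago, which matches the described behavior.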
Mentions of prism:
- Prism: the easiest way to create robust data workflows. Accessible via CLI
- Show HN: Prism – a framework for creating robust data science workflows
- Show HN: Prism – Data Orchestration in Python
- Introducing Prism: A Novel, Open-Source Data Orchestration Software. Feedback needed!
🔗 Website: https://runprism.com/
By joining our Alpha testing phase, you have the opportunity to be among the first users to experience Prism in action. Your feedback will directly shape the platform's development, helping us make it more stable and better tailored to your needs.

- Website: https://runprism.com - learn more about the platform and its features.
- Documentation: https://docs.runprism.com - get started right away.
- GitHub: https://github.com/runprism/prism - view the source code, report issues, and contribute to the project.

Try out Prism in your own workflow environment and let us know what you think! Share your thoughts, suggestions, and bug reports directly in this thread, or raise issues on GitHub. Your input is invaluable, and together we can shape Prism into the go-to tool for data workflow orchestration.
- Prism - a lightweight, yet powerful data orchestration platform in Python. Accessible via CLI
What are some alternatives?
pyspark-example-project - Implementing best practices for PySpark ETL jobs and applications.
datavault4dbt - Scalefree's dbt package for a Data Vault 2.0 implementation congruent to the original Data Vault 2.0 definition by Dan Linstedt including the Staging Area, DV2.0 main entities, PITs and Snapshot Tables.
Dataplane - Dataplane is a data platform that makes it easy to construct a data mesh with automated data pipelines and workflows.
JDR - Job Dependency Runner
pipebird - Pipebird is open source infrastructure for securely sharing data with customers.
retake - PostgreSQL for Search [Moved to: https://github.com/paradedb/paradedb]
SmartPipeline - A framework for rapid development of robust data pipelines following a simple design pattern
paradedb - Postgres for Search and Analytics
hamilton - Hamilton helps data scientists and engineers define testable, modular, self-documenting dataflows that encode lineage and metadata. Runs and scales everywhere Python does.
data-diff - Compare tables within or across databases
AWS Data Wrangler - pandas on AWS - easy integration with Athena, Glue, Redshift, Timestream, Neptune, OpenSearch, QuickSight, Chime, CloudWatch Logs, DynamoDB, EMR, Secrets Manager, PostgreSQL, MySQL, SQL Server and S3 (Parquet, CSV, JSON and Excel).
jupysql - Better SQL in Jupyter. 📊