prosto vs hamilton

| | prosto | hamilton |
| --- | --- | --- |
| Mentions | 9 | 26 |
| Stars | 91 | 878 |
| Growth | - | - |
| Activity | 3.6 | 8.1 |
| Latest commit | about 3 years ago | almost 2 years ago |
| Language | Python | Python |
| License | MIT License | BSD 3-clause Clear License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
prosto
-
Show HN: PRQL 0.2 – Releasing a better SQL
> Joins are what makes relational modeling interesting!
It is the central part of the relational model (RM), which is difficult to model using other methods and which requires high expertise in non-trivial use cases. One alternative, in which multiple tables are analyzed without joins, is proposed in the concept-oriented model [1], which relies on two equally important modeling constructs: sets (as in RM) and functions. In particular, it is implemented in the Prosto data processing toolkit [2] and its Column-SQL language. The idea is that links between tables are used instead of joins. A link is formally a function from one set to another set.
[1] Joins vs. Links or Relational Join Considered Harmful https://www.researchgate.net/publication/301764816_Joins_vs_...
[2] https://github.com/asavinov/prosto - a data processing toolkit that radically changes how data is processed by relying heavily on functions and operations with functions; an alternative to map-reduce and join-groupby
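To make the join/link distinction concrete, here is a small pandas sketch (my own illustration, not Prosto's API): a link column stores, for each row of one table, a reference to a row of another table, so related values are read through the link instead of materializing a joined table.

```python
import pandas as pd

products = pd.DataFrame({"name": ["apple", "pear"], "price": [1.0, 2.0]})
sales = pd.DataFrame({"product_name": ["apple", "pear", "apple"], "quantity": [3, 1, 2]})

# Join style: materialize a new, wider table.
joined = sales.merge(products, left_on="product_name", right_on="name")

# Link style: a column mapping each Sales row to a Products row, i.e. a function
# from the Sales set to the Products set; attributes are then read via the link.
link = sales["product_name"].map(pd.Series(products.index, index=products["name"]))
sales["product"] = link                                          # the link column
sales["price"] = products["price"].to_numpy()[link.to_numpy()]   # value read through the link

print(sales)
```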
-
Excel 2.0 – Is there a better visual data model than a grid of cells?
One idea is to use columns instead of cells. Each column has a definition in terms of other columns, which might themselves be defined in terms of other columns. If you change values in some source column, these changes propagate through the graph of column definitions. Fragments of this general idea have been implemented in different systems, for example Power BI or Airtable.
This approach was formalized in the concept-oriented model of data, which relies on two basic elements: mathematical functions and mathematical sets. In contrast, most traditional data models rely only on sets. Functions are implemented as columns. The main difficulty in any formalization is how to deal with columns in multiple tables.
This approach was implemented in the Prosto data processing toolkit: https://github.com/asavinov/prosto
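A toy sketch of the "columns instead of cells" idea (my own illustration, not Prosto's implementation): every derived column is a function of other columns, and changing a source column propagates through the dependency graph of column definitions.

```python
import pandas as pd

table = pd.DataFrame({"quantity": [3, 1, 2], "price": [1.0, 2.0, 4.0]})

# Each derived column is (input columns, function) - a tiny dependency graph.
definitions = {
    "amount": (["quantity", "price"], lambda q, p: q * p),
    "amount_share": (["amount"], lambda a: a / a.sum()),
}

def propagate(df, defs):
    """Recompute derived columns; definitions are listed in dependency order."""
    for col, (inputs, fn) in defs.items():
        df[col] = fn(*[df[c] for c in inputs])

propagate(table, definitions)
table.loc[0, "price"] = 10.0   # edit a value in a source column...
propagate(table, definitions)  # ...and the change flows into the derived columns
print(table)
```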
-
Show HN: Query any kind of data with SQL powered by Python
Having Python expressions within a declarative language is a really good idea because it lets us combine the low-level logic of computing values with the high-level logic of set processing.
A similar approach is implemented in the Prosto data processing toolkit:
https://github.com/asavinov/prosto
Although Prosto is viewed as an alternative to map-reduce because it relies on functions, it also supports Python user-defined functions in its Column-SQL.
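As a rough illustration of mixing the two levels (a generic sketch, not Prosto's actual Column-SQL syntax): the declarative part only names which column is derived from which inputs, while an ordinary Python UDF supplies the value-level logic.

```python
import pandas as pd

def profit(price, cost):
    """Value-level logic: an ordinary Python user-defined function."""
    return price - cost

# Set-level logic: declare which column to derive from which input columns.
column_defs = [("profit", ["price", "cost"], profit)]

products = pd.DataFrame({"price": [10.0, 20.0], "cost": [6.0, 15.0]})
for name, inputs, udf in column_defs:
    products[name] = udf(*[products[c] for c in inputs])

print(products)
```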
- No-Code Self-Service BI/Data Analytics Tool
-
Show HN: Hamilton, a Microframework for Creating Dataframes
Hamilton is more similar to the Prosto data processing toolkit which also relies on column operations defined via Python functions:
https://github.com/asavinov/prosto
However, Prosto allows data processing via column operations across many tables (implemented as pandas data frames) by providing column-oriented equivalents for join and groupby (hence it has no joins and no groupbys, which are known to be quite difficult and to require high expertise).
Prosto also provides Column-SQL which might be simpler and more natural in many use cases.
The whole approach is based on the concept-oriented model of data which makes functions first-class elements of the model as opposed to having only sets in the relational model.
-
Against SQL
One alternative to SQL (and its type of thinking) is Column-SQL [1], which is based on a new data model. This model relies on two equally important constructs: sets (tables) and functions (columns). It is opposed to relational algebra, which is based only on sets and set operations. One benefit of Column-SQL is that it does not use joins and group-by for connectivity and aggregation, respectively, which are known to be quite difficult to understand and error-prone in use. Instead, many typical data processing patterns are implemented by defining new columns: link columns instead of join, and aggregate columns instead of group-by.
More details about "Why functions and column-orientation" (as opposed to sets) can be found in [2]. In short, the problems with set-orientation and SQL arise because producing sets is not what we frequently need: we need new columns, not new tables. Applying set operations is therefore a kind of workaround for the absence of column operations.
This approach is implemented in the Prosto data processing toolkit [0], and Column-SQL [1] is a syntactic way to define its operations.
[0] https://github.com/asavinov/prosto Prosto is a data processing toolkit - an alternative to map-reduce and join-groupby
[1] https://prosto.readthedocs.io/en/latest/text/column-sql.html Column-SQL (work in progress)
[2] https://prosto.readthedocs.io/en/latest/text/why.html Why functions and column-orientation?
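For the aggregation side, a small pandas sketch of the difference (my own illustration, not Column-SQL itself): instead of producing a separate grouped table, an aggregate column attaches the aggregated values to the table they logically describe.

```python
import pandas as pd

customers = pd.DataFrame({"name": ["alice", "bob"]}, index=[0, 1])
orders = pd.DataFrame({"customer": [0, 0, 1], "amount": [10.0, 5.0, 7.0]})

# group-by style: the result is a new, separate set keyed by group.
totals = orders.groupby("customer")["amount"].sum()

# aggregate-column style: the same totals defined as a new column of Customers,
# so the result stays attached to the set it describes.
customers["total_amount"] = totals.reindex(customers.index, fill_value=0.0)

print(customers)
```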
- Functions matter – an alternative to SQL and map-reduce for data processing
-
NoSQL Data Modeling Techniques
> This is closer to the way that humans perceive the world — mapping between whatever aspect of external reality you are interested in and the data model is an order of magnitude easier than with relational databases.
One approach to modeling data based on mappings (mathematical functions) is the concept-oriented model [1] implemented in [2]. Its main feature is that it gets rid of joins, groupby and map-reduce by manipulating data using operations with functions (mappings).
> Everything is pre-joined — you don’t have to disassemble objects into normalised tables and reassemble them with joins.
One old, related idea is to assume the existence of a universal relation. Such an approach is referred to as the universal relation model (URM) [3, 4].
[1] A. Savinov, Concept-oriented model: Modeling and processing data using functions, Eprint: arXiv:1911.07225 [cs.DB], 2019 https://www.researchgate.net/publication/337336089_Concept-o...
[2] https://github.com/asavinov/prosto Prosto Data Processing Toolkit: No join-groupby, No map-reduce
[3] https://en.wikipedia.org/wiki/Universal_relation_assumption
[4] R. Fagin, A.O. Mendelzon and J.D. Ullman, A Simplified Universal Relation Assumption and Its Properties. ACM Trans. Database Syst., 7(3), 343-360 (1982).
-
Feature Processing in Go
(Currently, it is not actively developed; the focus has moved to a similar project - https://github.com/asavinov/prosto - which is also focused on data preprocessing and feature engineering.)
hamilton
-
Write production grade pandas (and other libraries!) with Hamilton
And find the repository here: https://github.com/dagworks-inc/hamilton/
-
Useful libraries for data engineering in various programming languages
Python - https://github.com/stitchfix/hamilton (author here). It's great if you want your code to always be unit testable and documentation friendly, and if you want to be able to visualize execution. Blog post on using it with Pandas: https://link.medium.com/XhyYD9BAntb.
-
Cognitive Loads in Programming
Yes! As one of the creators of https://github.com/stitchfix/hamilton, this was one of the aims: simplifying the cognitive burden for those developing and managing data transforms over the course of years, and in particular for ones they didn't write!
For example, in Hamilton we force people to write "declarative functions" which are then stitched together to create a dataflow.
E.g. a function like the one sketched below: my guess is that you can read and understand/guess what it does very easily.
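A minimal sketch in the declarative style Hamilton's documentation uses (the names here are illustrative, not taken from the original comment): the function name is the output it defines, and the parameter names declare the upstream inputs it depends on, which is how the framework stitches functions into a dataflow.

```python
import pandas as pd

def avg_3wk_spend(spend: pd.Series) -> pd.Series:
    """Rolling 3-week average of marketing spend."""
    return spend.rolling(3).mean()

def spend_per_signup(spend: pd.Series, signups: pd.Series) -> pd.Series:
    """Marketing spend per signup."""
    return spend / signups
```

Requesting spend_per_signup pulls in spend and signups automatically; nothing in the function body refers to Hamilton at all.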
-
Prefect vs other things question
For (1) there are quite a few options - prefect is one, metaflow is another, airflow, dagster, even https://github.com/stitchfix/hamilton (core contributor here), etc.
-
Field Lineage
If you want to do more in Python, https://github.com/stitchfix/hamilton allows you to model dependencies at a columnar (field) level.
- Show HN
-
[D] Is anyone working on interesting ML libraries and looking for contributors?
Take a look at https://github.com/stitchfix/hamilton - we're after contributors who can help us grow the project, e.g. by making the documentation great, dogfooding features, and suggesting/contributing usability improvements.
-
Useful Python decorators for Data Scientists
For a real world example of their power, we built an entire framework (https://github.com/stitchfix/hamilton) at Stitch Fix, where a lot of the cool magic is provided via decorators - see https://hamilton-docs.gitbook.io/docs/reference/api-reference/available-decorators and these two source files (https://github.com/stitchfix/hamilton/blob/main/hamilton/function_modifiers_base.py, https://github.com/stitchfix/hamilton/blob/main/hamilton/function_modifiers.py). Note we do some non-trivial stuff via them.
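As a small standalone illustration of why decorators suit transform code (a generic example, not one of Hamilton's actual decorators; see the linked docs for those): a decorator can wrap any dataframe-returning function with cross-cutting behaviour such as logging the output shape.

```python
import functools
import pandas as pd

def log_shape(func):
    """Print the shape of the dataframe returned by a transform."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        print(f"{func.__name__} -> {result.shape[0]} rows x {result.shape[1]} cols")
        return result
    return wrapper

@log_shape
def drop_missing(df: pd.DataFrame) -> pd.DataFrame:
    return df.dropna()

drop_missing(pd.DataFrame({"a": [1.0, None, 3.0]}))
```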
-
unit tests
For data processing/transform code, I would recommend looking at https://github.com/stitchfix/hamilton, especially if you're trying to test pandas code. Short getting started here - https://towardsdatascience.com/how-to-use-hamilton-with-pandas-in-5-minutes-89f63e5af8f5 (disclaimer: I'm one of the authors).
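The testing benefit comes from each transform being a plain Python function with explicit inputs, so a unit test needs no framework setup. A hypothetical example (function and test names are illustrative):

```python
import pandas as pd

def spend_per_signup(spend: pd.Series, signups: pd.Series) -> pd.Series:
    """A Hamilton-style transform: a plain function with explicit inputs."""
    return spend / signups

def test_spend_per_signup():
    # Pure pandas in, pandas out - no driver or framework needed in the test.
    result = spend_per_signup(pd.Series([10.0, 20.0]), pd.Series([5.0, 4.0]))
    assert result.tolist() == [2.0, 5.0]
```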
-
Dealing with hundreds of customer/computed columns
The Python package hamilton, from Stitch Fix (https://hamilton-docs.gitbook.io/docs/), can help manage transformations on pandas dataframes. The DAG of transformations is managed separately, in a file, so it can be versioned in case the transformations change. The memory required is reduced because only the API call tables and the mapping parameter table have to be in memory; the calculated columns can be produced as needed. Just like dbt, transformations are separate from the source tables, but hamilton can be used on any Python object, not just dataframes; dbt is SQL based.
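A hedged sketch of what "calculated columns produced as needed" looks like, assuming Hamilton's documented Driver API (the module and column names are illustrative; check the linked docs for exact signatures): only the requested outputs and their upstream dependencies get computed.

```python
import pandas as pd
from hamilton import driver

import my_transforms  # an illustrative module holding the Hamilton-style functions

dr = driver.Driver({}, my_transforms)

# Only 'spend_per_signup' and whatever it depends on is computed; the other
# defined columns stay untouched until something requests them.
df = dr.execute(
    ["spend_per_signup"],
    inputs={"spend": pd.Series([10.0, 20.0]), "signups": pd.Series([5.0, 4.0])},
)
print(df)
```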
What are some alternatives?
Preql - An interpreted relational query language that compiles to SQL.
versatile-data-kit - One framework to develop, deploy and operate data workflows with Python and SQL.
Optimus - Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark
OpenLineage - An Open Standard for lineage metadata collection
opaleye
plumbing - Prismatic's Clojure(Script) utility belt
mito - The mitosheet package, trymito.io, and other public Mito code.
llrt - Local Learning Rule Tensors neural network library
spyql - Query data on the command line with SQL-like SELECTs powered by Python expressions
fn_graph - Lightweight function pipelines for Python
rel8 - Hey! Hey! Can u rel8?
datahub - The Metadata Platform for your Data and AI Stack