| | prosto | mito |
|---|---|---|
| Mentions | 9 | 19 |
| Stars | 91 | 2,319 |
| Growth | - | 0.4% |
| Activity | 3.6 | 9.9 |
| Last commit | about 3 years ago | 7 days ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
prosto
-
Show HN: PRQL 0.2 – Releasing a better SQL
> Joins are what makes relational modeling interesting!
They are the central part of RM that is difficult to model with other methods and that requires high expertise in non-trivial use cases. One alternative for analyzing multiple tables without joins is proposed in the concept-oriented model [1], which relies on two equally important modeling constructs: sets (as in RM) and functions. In particular, it is implemented in the Prosto data processing toolkit [2] and its Column-SQL language. The idea is that links between tables are used instead of joins. A link is formally a function from one set to another.
[1] Joins vs. Links or Relational Join Considered Harmful https://www.researchgate.net/publication/301764816_Joins_vs_...
[2] https://github.com/asavinov/prosto - a data processing toolkit that radically changes how data is processed by relying heavily on functions and operations with functions; an alternative to map-reduce and join-groupby
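The link-column idea can be illustrated in plain pandas (this is a conceptual sketch, not Prosto's actual API): a link column stores, for each row of one table, the index of a matching row in another table, and attribute access follows the link instead of performing a join.

```python
import pandas as pd

products = pd.DataFrame({"name": ["apple", "pear"], "price": [2.0, 3.0]})
sales = pd.DataFrame({"product": ["pear", "apple", "pear"], "qty": [1, 5, 2]})

# Link column: map each sale to the row index of its product, i.e. a
# function from the sales set to the products set.
name_to_idx = {n: i for i, n in enumerate(products["name"])}
sales["product_link"] = sales["product"].map(name_to_idx)

# Attribute access follows the link instead of joining the two tables.
sales["price"] = sales["product_link"].map(products["price"])
sales["amount"] = sales["price"] * sales["qty"]
print(sales["amount"].tolist())  # [3.0, 10.0, 6.0]
```

The original tables keep their shape throughout; no intermediate joined table is ever materialized.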
-
Excel 2.0 – Is there a better visual data model than a grid of cells?
One idea is to use columns instead of cells. Each column has a definition in terms of other columns, which may themselves be defined in terms of other columns. If you change values in a source column, the changes propagate through the graph of column definitions. Fragments of this general idea are implemented in different systems, for example Power BI or Airtable.
This approach was formalized in the concept-oriented model of data, which relies on two basic elements: mathematical functions and mathematical sets. In contrast, most traditional data models rely only on sets. Functions are implemented as columns. The main difficulty in any such formalization is how to deal with columns spread across multiple tables.
This approach was implemented in the Prosto data processing toolkit: https://github.com/asavinov/prosto
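A toy sketch of the column-propagation idea (hypothetical code, not Power BI, Airtable, or Prosto): each derived column records its inputs and its function, and a change to a source column flows through the definition graph.

```python
defs = {}          # column name -> (input columns, function)
data = {"a": [1, 2, 3]}

def define(name, inputs, fn):
    """Register a derived column and compute it immediately."""
    defs[name] = (inputs, fn)
    recompute(name)

def recompute(name):
    inputs, fn = defs[name]
    data[name] = [fn(*vals) for vals in zip(*(data[i] for i in inputs))]
    # Propagate to every column that depends on this one.
    for other, (ins, _) in defs.items():
        if name in ins:
            recompute(other)

define("b", ["a"], lambda a: a * 10)   # b is defined in terms of a
define("c", ["b"], lambda b: b + 1)    # c is defined in terms of b

data["a"] = [5, 6, 7]                  # change the source column...
recompute("b")                         # ...and the change flows through b to c
print(data["c"])                       # [51, 61, 71]
```

Real systems also track dependency order and avoid redundant recomputation; this sketch only shows the propagation itself.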
-
Show HN: Query any kind of data with SQL powered by Python
Having Python expressions within a declarative language is a really good idea because we can combine the low-level logic of computing values with the high-level logic of set processing.
A similar approach is implemented in the Prosto data processing toolkit:
https://github.com/asavinov/prosto
Although Prosto is viewed as an alternative to map-reduce by relying on functions, it also supports Python user-defined functions in its Column-SQL.
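The combination can be sketched with a hypothetical helper (this is not Prosto's real Column-SQL syntax): the column definition is declarative, while the per-row logic is an ordinary Python function.

```python
import pandas as pd

def calculate(df, inputs, udf, output):
    """Declarative 'CALCULATE inputs -> output' where the per-row
    computation is supplied as a Python UDF."""
    df[output] = df[inputs].apply(lambda row: udf(*row), axis=1)
    return df

sales = pd.DataFrame({"quantity": [2, 3], "price": [10.0, 1.5]})
calculate(sales, ["quantity", "price"], lambda q, p: q * p, "amount")
print(sales["amount"].tolist())  # [20.0, 4.5]
```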
- No-Code Self-Service BI/Data Analytics Tool
-
Show HN: Hamilton, a Microframework for Creating Dataframes
Hamilton is more similar to the Prosto data processing toolkit which also relies on column operations defined via Python functions:
https://github.com/asavinov/prosto
However, Prosto allows for data processing via column operations across many tables (implemented as pandas data frames) by providing column-oriented equivalents of join and groupby (hence it has no joins and no groupbys, which are known to be quite difficult and to require high expertise).
Prosto also provides Column-SQL, which might be simpler and more natural in many use cases.
The whole approach is based on the concept-oriented model of data, which makes functions first-class elements of the model, as opposed to the relational model, which has only sets.
-
Against SQL
One alternative to SQL (the type of thinking) is Column-SQL [1], which is based on a new data model. This model relies on two equal constructs: sets (tables) and functions (columns). It is opposed to the relational algebra, which is based only on sets and set operations. One benefit of Column-SQL is that it does not use joins and group-by for connectivity and aggregation, respectively, which are known to be quite difficult to understand and error-prone in use. Instead, many typical data processing patterns are implemented by defining new columns: link columns instead of join, and aggregate columns instead of group-by.
More details about "Why functions and column-orientation" (as opposed to sets) can be found in [2]. In short, the problem with set orientation and SQL is that producing sets is frequently not what we need: we need new columns, not new tables. Applying set operations is therefore a workaround for the absence of column operations.
This approach is implemented in the Prosto data processing toolkit [0], and Column-SQL [1] is a syntactic way to define its operations.
[0] https://github.com/asavinov/prosto Prosto is a data processing toolkit - an alternative to map-reduce and join-groupby
[1] https://prosto.readthedocs.io/en/latest/text/column-sql.html Column-SQL (work in progress)
[2] https://prosto.readthedocs.io/en/latest/text/why.html Why functions and column-orientation?
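The aggregate-column pattern can be shown in plain pandas (an illustrative sketch, not Prosto's actual Column-SQL): instead of a GROUP BY producing a separate result table that must be joined back, the aggregate becomes a new column of the existing group table.

```python
import pandas as pd

customers = pd.DataFrame({"name": ["alice", "bob", "carol"]})
orders = pd.DataFrame({"customer": [0, 1, 0], "amount": [10.0, 20.0, 5.0]})

# Aggregate column on customers: the sum of each customer's linked orders.
# Index alignment places each total on the right customer row.
customers["total"] = orders.groupby("customer")["amount"].sum()
customers["total"] = customers["total"].fillna(0.0)  # customers with no orders
print(customers["total"].tolist())  # [15.0, 20.0, 0.0]
```

Note that pandas still groups internally; the point is the shape of the result: the customers table simply gains a column, and no separate aggregated table has to be joined back.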
- Functions matter – an alternative to SQL and map-reduce for data processing
-
NoSQL Data Modeling Techniques
> This is closer to the way that humans perceive the world — mapping between whatever aspect of external reality you are interested in and the data model is an order of magnitude easier than with relational databases.
One approach to modeling data based on mappings (mathematical functions) is the concept-oriented model [1] implemented in [2]. Its main feature is that it gets rid of joins, groupby and map-reduce by manipulating data using operations with functions (mappings).
> Everything is pre-joined — you don’t have to disassemble objects into normalised tables and reassemble them with joins.
A related older idea is to assume the existence of a universal relation. This approach is referred to as the universal relation model (URM) [3, 4].
[1] A. Savinov, Concept-oriented model: Modeling and processing data using functions, Eprint: arXiv:1911.07225 [cs.DB], 2019 https://www.researchgate.net/publication/337336089_Concept-o...
[2] https://github.com/asavinov/prosto Prosto Data Processing Toolkit: No join-groupby, No map-reduce
[3] https://en.wikipedia.org/wiki/Universal_relation_assumption
[4] R. Fagin, A.O. Mendelzon and J.D. Ullman, A Simplified Universal Relation Assumption and Its Properties. ACM Trans. Database Syst., 7(3), 343-360 (1982).
-
Feature Processing in Go
(Currently, it is not actively developed and the focus has moved to a similar project - https://github.com/asavinov/prosto - also focused on data preprocessing and feature engineering)
mito
-
Show HN: Excel to Python Compiler
3. Tables that translate to Pandas dataframes. We support at most one table per sheet, and the tables must be contiguous. If the formulas in a column are consistent, we will try to translate the column as a single pandas statement.
We do not support pivot tables or complex formulas; when we fail to translate these, we generate TODO statements. We also don't support graphs or macros - you won't see these reflected in the output at all currently.
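The consistent-column rule can be illustrated with a hypothetical example (not Pyoneer's actual output): if every cell in an Excel column applies the same formula (C2 = A2 * B2, C3 = A3 * B3, ...), the whole column collapses into one vectorized pandas statement.

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [10, 20, 30]})

# Per-cell formulas "Cn = An * Bn" become a single statement:
df["C"] = df["A"] * df["B"]
print(df["C"].tolist())  # [10, 40, 90]
```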
*Why we built this:*
We did YCS20 and built an open source tool called [Mito](https://trymito.io). It's been a good journey since then - we've scaled revenue and grown to over [2k Github stars](https://github.com/mito-ds/mito). But fundamentally, Mito is a tool that's useful for Excel users who want to start writing Python code more effectively.
We wanted to take another stab at the Excel -> Python pain point in a way that was more developer focused - one that helps developers who have to translate Excel files into Python do it much more quickly. Hence, Pyoneer!
I’ll be in the comments today if you’ve got feedback, criticism, questions, or comments.
-
The Design Philosophy of Great Tables (Software Package)
2. The report you're sending out for display is _expected_ in an Excel format. The two main reasons for this are organizational momentum, and that you want to let the receiver conduct additional ad-hoc analysis (Excel is best for this in almost every org).
The way we've sliced this problem space is by improving the interfaces that users can use to export formatting to Excel. You can see some of our (open-core) code here [2]. TL;DR: Mito gives you an interface in Jupyter that looks like a spreadsheet, where you can apply formatting like Excel's (number formatting, conditional formatting, color formatting), and then Mito automatically generates code that exports this formatting to an Excel file. This is one of our more compelling enterprise features for decision makers who work with non-expert Python programmers - getting formatting into Excel is a big hassle.
[1] https://trymito.io
[2] https://github.com/mito-ds/mito/blob/dev/mitosheet/mitosheet...
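The general mechanism (a generic pandas/openpyxl sketch, not Mito's generated code) is to write the dataframe with `ExcelWriter` and then apply formats to the underlying workbook cells, so that formatting chosen in the UI survives in the `.xlsx` file:

```python
import pandas as pd

df = pd.DataFrame({"revenue": [1234.5, 6789.0]})

with pd.ExcelWriter("report.xlsx", engine="openpyxl") as writer:
    df.to_excel(writer, sheet_name="Report", index=False)
    sheet = writer.sheets["Report"]
    # Apply a currency number format to the data cells of column A
    # (row 1 holds the header).
    for row in range(2, len(df) + 2):
        sheet.cell(row=row, column=1).number_format = "$#,##0.00"
```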
- What codegen is (actually) good for
-
Pandas AI – The Future of Data Analysis
I think the biggest area for growth for LLM based tools for data analysis is around helping users _understand what edits they actually made_.
I'm a co-founder of a non-AI data code-gen tool for data analysis -- but we also have a basic version of an LLM integration. The problem we see with tooling like Pandas AI (in practice! with real users at enterprises!) is that users make an edit like "remove NaN values" and then get a new dataframe -- but they have no way of checking if the edited dataframe is actually what they want. Maybe the LLM removed NaN values. Maybe it just deleted some random rows!
The key here: how can users build an understanding of how their data changed, and confirm that the changes made by the LLM are the changes they wanted. In other words, recon!
We've been experimenting more with this recon step in the AI flow (you can see the final PR here: https://github.com/mito-ds/monorepo/pull/751). It takes a similar approach to the top comment (passing a subset of the data to the LLM), and then really focuses the UI on "what changes were made." There's a lot of opportunity for growth here, I think!
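A minimal version of such a recon step might look like this (an illustrative sketch, not Mito's implementation): diff the dataframe before and after the LLM edit and summarize what actually changed.

```python
import pandas as pd

def recon(before: pd.DataFrame, after: pd.DataFrame) -> dict:
    """Summarize structural changes between two dataframes so the user
    can verify an LLM edit did what was asked."""
    return {
        "rows_removed": len(before) - len(after),
        "columns_added": sorted(set(after.columns) - set(before.columns)),
        "columns_removed": sorted(set(before.columns) - set(after.columns)),
    }

before = pd.DataFrame({"x": [1.0, None, 3.0]})
after = before.dropna()      # the requested edit: "remove NaN values"
print(recon(before, after))  # {'rows_removed': 1, 'columns_added': [], 'columns_removed': []}
```

A real recon UI would go further (cell-level diffs, changed dtypes, sample rows), but even this summary lets a user catch an edit that silently dropped the wrong rows.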
Any/all feedback appreciated :)
-
The hand-picked selection of the best Python libraries and tools of 2022
Mito — spreadsheet inside notebooks
- I made an open source spreadsheet that turns your edits into Python
-
I made a tool that turns Excel into Python
You can see the open source code here.
-
I made a Spreadsheet for Python beginners that writes Python for you
Here is the Github again.
-
Learn Python through your Spreadsheet Skills
Mito is an open source Python package that lets the user open an interactive spreadsheet inside their Python environment. Each edit made in the spreadsheet generates the equivalent Python code.
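The edit-to-code idea can be sketched with a toy generator (hypothetical code, not Mito's engine): each recorded spreadsheet edit is replayed as an equivalent pandas statement.

```python
def edit_to_code(edit: dict) -> str:
    """Translate one recorded spreadsheet edit into a pandas statement."""
    if edit["type"] == "set_formula":
        return f'df[{edit["column"]!r}] = {edit["formula"]}'
    if edit["type"] == "filter":
        return f'df = df[{edit["condition"]}]'
    raise ValueError(f"unsupported edit type: {edit['type']}")

edits = [
    {"type": "set_formula", "column": "total",
     "formula": 'df["price"] * df["qty"]'},
    {"type": "filter", "condition": 'df["total"] > 100'},
]
for e in edits:
    print(edit_to_code(e))
```

Keeping the edit log as data (rather than emitting code immediately) is what makes undo and replay straightforward in tools like this.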
- A Spreadsheet for Data Science that Writes Python for Every Edit
What are some alternatives?
Preql - An interpreted relational query language that compiles to SQL.
qgrid - An interactive grid for sorting, filtering, and editing DataFrames in Jupyter notebooks
Optimus - :truck: Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark
Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai
opaleye
dtale - Visualizer for pandas data structures
spyql - Query data on the command line with SQL-like SELECTs powered by Python expressions
mathesar - Web application providing an intuitive user experience to databases.
rel8 - Hey! Hey! Can u rel8?
lux - Automatically visualize your pandas dataframe via a single print! 📊 💡
fquery - A graph query engine
appsmith - Platform to build admin panels, internal tools, and dashboards. Integrates with 25+ databases and any API.