prosto VS mito

Compare prosto vs mito and see what their differences are.

prosto

Prosto is a data processing toolkit radically changing how data is processed by heavily relying on functions and operations with functions - an alternative to map-reduce and join-groupby (by asavinov)
              prosto             mito
Mentions      9                  18
Stars         89                 2,215
Growth        -                  3.1%
Activity      3.6                10.0
Last commit   over 2 years ago   9 days ago
Language      Python             Python
License       MIT License        GNU General Public License v3.0 or later
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

prosto

Posts with mentions or reviews of prosto. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-27.
  • Show HN: PRQL 0.2 – Releasing a better SQL
    16 projects | news.ycombinator.com | 27 Jun 2022
    > Joins are what makes relational modeling interesting!

    The join is the central part of RM: it is difficult to express with other methods and requires high expertise in non-trivial use cases. One alternative, in which multiple tables are analyzed without joins, is proposed in the concept-oriented model [1], which relies on two equally important modeling constructs: sets (as in RM) and functions. In particular, it is implemented in the Prosto data processing toolkit [2] and its Column-SQL language. The idea is that links between tables are used instead of joins. A link is formally a function from one set to another set.

    [1] Joins vs. Links or Relational Join Considered Harmful https://www.researchgate.net/publication/301764816_Joins_vs_...

    [2] https://github.com/asavinov/prosto data processing toolkit radically changing how data is processed by heavily relying on functions and operations with functions - an alternative to map-reduce and join-groupby
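    As a minimal illustration of the link-column idea (plain pandas with made-up table names; this is not Prosto's actual API), a key column can be treated as a mapping into another table, so attributes are pulled in as new columns without materializing a join:

    ```python
    import pandas as pd

    # Dimension table indexed by key, and a fact table referencing it.
    products = pd.DataFrame({"price": [2.0, 3.0]}, index=["p1", "p2"])
    sales = pd.DataFrame({"product_id": ["p1", "p2", "p1"], "quantity": [5, 2, 7]})

    # The "link": product_id acts as a function from sales rows to product rows,
    # so product attributes become new columns of sales instead of a joined table.
    sales["price"] = sales["product_id"].map(products["price"])
    sales["amount"] = sales["price"] * sales["quantity"]
    ```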

  • Excel 2.0 – Is there a better visual data model than a grid of cells?
    5 projects | news.ycombinator.com | 31 Mar 2022
    One idea is to use columns instead of cells. Each column has a definition in terms of other columns which might also be defined in terms of other columns. If you change value(s) in some source column then these changes will propagate through the graph of these column definitions. Some fragments of this general idea were implemented in different systems, for example, Power BI or Airtable.

    This approach was formalized in the concept-oriented model of data which relies on two basic elements: mathematical functions and mathematical sets. In contrast, most traditional data models rely on only sets. Functions are implemented as columns. The main difficulty in any formalization is how to deal with columns in multiple tables.

    This approach was implemented in the Prosto data processing toolkit: https://github.com/asavinov/prosto
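    As a rough sketch of this column-dependency idea in plain Python and pandas (hypothetical names; not how Power BI, Airtable, or Prosto actually implement it):

    ```python
    import pandas as pd

    df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

    # Each derived column is defined in terms of other columns,
    # forming a small dependency graph: c depends on a and b, d depends on c.
    definitions = {
        "c": lambda t: t["a"] + t["b"],
        "d": lambda t: t["c"] * 2,
    }

    def recompute(table, defs):
        # Evaluate definitions in dependency order; a real system would derive
        # the order from the graph and recompute only stale columns.
        for col, fn in defs.items():
            table[col] = fn(table)
        return table

    df = recompute(df, definitions)
    df.loc[0, "a"] = 100             # change a source value...
    df = recompute(df, definitions)  # ...and the derived columns follow
    ```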

  • Show HN: Query any kind of data with SQL powered by Python
    6 projects | news.ycombinator.com | 25 Jan 2022
    Having Python expressions within a declarative language is a really good idea because we can combine the low-level logic of computing values with the high-level logic of set processing.

    A similar approach is implemented in the Prosto data processing toolkit:

    https://github.com/asavinov/prosto

    Although Prosto is viewed as an alternative to map-reduce by relying on functions, it also supports Python user-defined functions in its Column-SQL.

  • No-Code Self-Service BI/Data Analytics Tool
    1 project | news.ycombinator.com | 13 Nov 2021
    Most of the self-service or no-code BI, ETL, and data wrangling tools I am aware of (like Airtable, Fieldbook, RowShare, Power BI, etc.) were thought of as a replacement for Excel: working with tables should be as easy as working with spreadsheets. This problem can be solved when defining columns within one table: ``ColumnA=ColumnB+ColumnC, ColumnD=ColumnA*ColumnE`` - we get a *graph of column computations* similar to the graph of cell dependencies in spreadsheets.

    Yet, the main problem is in working with multiple tables: how can we define a column in one table in terms of columns in other tables? For example: ``Table1::ColumnA=FUNCTION(Table2::ColumnB, Table3::ColumnC)``. Different systems have provided different answers to this question, but all of them are highly specific and rather limited.

    Why is it difficult to define new columns in terms of columns in other tables? The short answer is that working with columns is not the relational approach: the relational model works with sets (rows of tables), not with columns.

    One generic approach to working with columns in multiple tables is provided in the concept-oriented model of data, which treats mathematical functions as first-class elements of the model. Previously it was implemented in a data wrangling tool called Data Commander. But then I decided to implement this model in the *Prosto* data processing toolkit, which is an alternative to map-reduce and SQL:

    https://github.com/asavinov/prosto

    It defines data transformations as operations with columns in multiple tables. Since we use mathematical functions, no joins and no groupby operations are needed, which makes data transformations significantly simpler and more natural.

    Moreover, now it provides *Column-SQL* which makes it even easier to define new columns in terms of other columns:

    https://github.com/asavinov/prosto/blob/master/notebooks/col...
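    As a rough pandas analogue of such a cross-table column definition (illustrative names only; this is not Column-SQL syntax), an aggregate column on one table can be defined from a column of another table:

    ```python
    import pandas as pd

    customers = pd.DataFrame({"name": ["Ann", "Bob"]}, index=["c1", "c2"])
    orders = pd.DataFrame({"customer_id": ["c1", "c2", "c1"],
                           "amount": [10.0, 5.0, 7.5]})

    # Table1::ColumnA = FUNCTION(Table2::ColumnB): pandas still groups under the
    # hood, but the user-visible result is a new column, not a new table.
    customers["total_amount"] = (
        orders.groupby("customer_id")["amount"].sum()
              .reindex(customers.index, fill_value=0.0)
    )
    ```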

  • Show HN: Hamilton, a Microframework for Creating Dataframes
    6 projects | news.ycombinator.com | 8 Nov 2021
    Hamilton is more similar to the Prosto data processing toolkit which also relies on column operations defined via Python functions:

    https://github.com/asavinov/prosto

    However, Prosto allows for data processing via column operations over many tables (implemented as pandas data frames) by providing column-oriented equivalents for joins and groupby (hence it has no joins and no groupbys, which are known to be quite difficult and to require high expertise).

    Prosto also provides Column-SQL which might be simpler and more natural in many use cases.

    The whole approach is based on the concept-oriented model of data which makes functions first-class elements of the model as opposed to having only sets in the relational model.

  • Against SQL
    8 projects | news.ycombinator.com | 10 Jul 2021
    One alternative to SQL (and to that type of thinking) is Column-SQL [1], which is based on a new data model. This model relies on two equally important constructs: sets (tables) and functions (columns). It is opposed to relational algebra, which is based only on sets and set operations. One benefit of Column-SQL is that it does not use joins and group-by for connectivity and aggregation, respectively, which are known to be quite difficult to understand and error-prone in use. Instead, many typical data processing patterns are implemented by defining new columns: link columns instead of join, and aggregate columns instead of group-by.

    More details about "Why functions and column-orientation" (as opposed to sets) can be found in [2]. In short, the problems with set-orientation and SQL arise because producing sets is not what we frequently need - we need new columns, not new tables - and so applying set operations is a kind of workaround for the absence of column operations.

    This approach is implemented in the Prosto data processing toolkit [0], and Column-SQL [1] is a syntactic way to define its operations.

    [0] https://github.com/asavinov/prosto Prosto is a data processing toolkit - an alternative to map-reduce and join-groupby

    [1] https://prosto.readthedocs.io/en/latest/text/column-sql.html Column-SQL (work in progress)

    [2] https://prosto.readthedocs.io/en/latest/text/why.html Why functions and column-orientation?

  • Functions matter – an alternative to SQL and map-reduce for data processing
    1 project | /r/datascience | 19 May 2021
  • NoSQL Data Modeling Techniques
    1 project | news.ycombinator.com | 10 Apr 2021
    > This is closer to the way that humans perceive the world — mapping between whatever aspect of external reality you are interested in and the data model is an order of magnitude easier than with relational databases.

    One approach to modeling data based on mappings (mathematical functions) is the concept-oriented model [1] implemented in [2]. Its main feature is that it gets rid of joins, groupby and map-reduce by manipulating data using operations with functions (mappings).

    > Everything is pre-joined — you don’t have to disassemble objects into normalised tables and reassemble them with joins.

    One old, related general idea is to assume the existence of a universal relation. Such an approach is referred to as the universal relation model (URM) [3, 4].

    [1] A. Savinov, Concept-oriented model: Modeling and processing data using functions, Eprint: arXiv:1911.07225 [cs.DB], 2019 https://www.researchgate.net/publication/337336089_Concept-o...

    [2] https://github.com/asavinov/prosto Prosto Data Processing Toolkit: No join-groupby, No map-reduce

    [3] https://en.wikipedia.org/wiki/Universal_relation_assumption

    [4] R. Fagin, A.O. Mendelzon and J.D. Ullman, A Simplified Universal Relation Assumption and Its Properties. ACM Trans. Database Syst., 7(3), 343-360 (1982).

  • Feature Processing in Go
    3 projects | news.ycombinator.com | 21 Dec 2020
    (Currently, it is not actively developed and the focus has moved to a similar project - https://github.com/asavinov/prosto - which is also focused on data preprocessing and feature engineering.)

mito

Posts with mentions or reviews of mito. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-04.
  • The Design Philosophy of Great Tables (Software Package)
    7 projects | news.ycombinator.com | 4 Apr 2024
    2. The report you're sending out for display is _expected_ in an Excel format. The two main reasons for this are just organizational momentum, or that you want to let the receiver conduct additional ad-hoc analysis (Excel is best for this in almost every org).

    The way we've sliced this problem space is by improving the interfaces that users can use to export formatting to Excel. You can see some of our (open-core) code here [2]. TL;DR: Mito gives you an interface in Jupyter that looks like a spreadsheet, where you can apply formatting like in Excel (number formatting, conditional formatting, color formatting) - and then Mito automatically generates code that exports this formatting to an Excel file. This is one of our more compelling enterprise features for decision makers that work with non-expert Python programmers - getting formatting into Excel is a big hassle.

    [1] https://trymito.io

    [2] https://github.com/mito-ds/mito/blob/dev/mitosheet/mitosheet...
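    For context, a bare-bones version of this kind of export in plain pandas + XlsxWriter (the file and column names are made up, and Mito's generated code is richer; this only shows the general pattern of attaching formatting on export):

    ```python
    import pandas as pd

    df = pd.DataFrame({"region": ["EU", "US", "APAC"],
                       "revenue": [120000.5, 98000.25, 143500.0]})

    with pd.ExcelWriter("report.xlsx", engine="xlsxwriter") as writer:
        df.to_excel(writer, sheet_name="Report", index=False)
        book, sheet = writer.book, writer.sheets["Report"]
        money = book.add_format({"num_format": "#,##0.00"})
        sheet.set_column("B:B", 15, money)                             # number formatting
        sheet.conditional_format("B2:B4", {"type": "3_color_scale"})   # color formatting
    ```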

  • What codegen is (actually) good for
    2 projects | news.ycombinator.com | 28 Sep 2023
    3. So you do want to do code-gen, does it make sense to do it in a chat interface, or can we do better?

    As a Figma user, I'd answer these in the following way:

    > Why is it necessary to generate code in the first place?

    Because mockups aren't your production website, and your production website is written in code. But maybe this is just for now?

    I'm sure some high-up PM at Figma has this as their goal - mockup the website in Figma, it generates the code for a website (you don't see this code!), and then you can click deploy _so easily_. Who wants to bet that hosting services like Vercel etc reach out to Figma once a week to try and pitch them...

    In the meantime, while we have websites that don't fit neatly inside Figma constraints, while developers are easier to hire than good designers (in my experience), while no-code tools are continually thought of as limiting and a bad long-term solution -- Figma code export is good.

    > Why is just writing the code by hand not the best solution?

    For the majority of us full-stack devs who have written >0 CSS but are less than masters, I'll leave this as self-evident.

    > So you do want to do code-gen, does it make sense to do it in a chat interface, or can we do better?

    In the case of Figma, if they were a new startup with no existing product and they were trying to "automate UI creation" -- v1 of their interface probably would be a "describe your website" prompt, and then we'll generate the code for it.

    This would probably suck. What if you wanted to easily tweak the output? What if you had trouble describing what you wanted, but you could draw it (ok, OpenAI vision might help on this one)? What if you had experience with existing design tools you could use to augment the AI? A chat interface is not the best interface for design work.

    ChatGPT-style code-generation is like v0.1. GitHub Copilot is an example of the next step - it's not just a chat interface, it's something a bit more integrated into an environment that makes sense in the context of the work you're doing. For design work, a canvas (literally! [2]) like Figma is well-suited as an environment for code-gen that can augment (and maybe one day replace) the programmers working on frontend. For tabular data work, we think a spreadsheet is the interface where users want to be, and the interface it makes sense to bring code-gen to.

    Any thoughts appreciated!

    [1] https://trymito.io, https://github.com/mito-ds/mito

  • Pandas AI – The Future of Data Analysis
    7 projects | news.ycombinator.com | 17 May 2023
    I think the biggest area for growth for LLM based tools for data analysis is around helping users _understand what edits they actually made_.

    I'm a co-founder of a non-AI data code-gen tool for data analysis -- but we also have a basic version of an LLM integration. The problem we see with tooling like Pandas AI (in practice! with real users at enterprises!) is that users make an edit like "remove NaN values" and then get a new dataframe -- but they have no way of checking if the edited dataframe is actually what they want. Maybe the LLM removed NaN values. Maybe it just deleted some random rows!

    The key here: how can users build an understanding of how their data changed, and confirm that the changes made by the LLM are the changes they wanted. In other words, recon!

    We've been experimenting more with this recon step in the AI flow (you can see the final PR here: https://github.com/mito-ds/monorepo/pull/751). It takes a similar approach to the top comment (passing a subset of the data to the LLM), and then really focuses in the UI around "what changes were made." There's a lot of opportunity for growth here, I think!

    Any/all feedback appreciated :)
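    As a toy sketch of the kind of recon meant here (a hypothetical helper, not Mito's implementation), one can summarize how a dataframe changed so the user can verify an LLM-made edit:

    ```python
    import pandas as pd

    def recon(before: pd.DataFrame, after: pd.DataFrame) -> dict:
        """Summarize what an edit actually changed."""
        return {
            "rows_removed": len(before.index.difference(after.index)),
            "rows_added": len(after.index.difference(before.index)),
            "columns_removed": sorted(set(before.columns) - set(after.columns)),
            "columns_added": sorted(set(after.columns) - set(before.columns)),
            "nan_before": int(before.isna().sum().sum()),
            "nan_after": int(after.isna().sum().sum()),
        }

    # e.g. after asking an assistant to "remove NaN values":
    # print(recon(df, edited_df))
    ```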

  • The hand-picked selection of the best Python libraries and tools of 2022
    11 projects | /r/Python | 26 Dec 2022
    Mito — spreadsheet inside notebooks
  • I made an open source spreadsheet that turns your edits into Python
    1 project | /r/programming | 26 Aug 2022
  • I made a tool that turns Excel into Python
    1 project | /r/excel | 19 Aug 2022
    You can see the open source code here.
  • I made a Spreadsheet for Python beginners that writes Python for you
    1 project | /r/learnpython | 18 Aug 2022
    Here is the Github again.
  • Learn Python through your Spreadsheet Skills
    1 project | /r/Python | 29 Jun 2022
    Mito is an open source Python package that allows the user to call an interactive spreadsheet into their Python environment. Each edit made in the spreadsheet generates the equivalent Python.
  • A Spreadsheet for Data Science that Writes Python for Every Edit
    1 project | /r/datascience | 28 Jun 2022
  • Mito lets you write Python by editing a spreadsheet
    1 project | /r/excel | 16 Jun 2022
    Mito is an open source Python tool that allows you to call a spreadsheet into your Python environment. Each edit you make in the spreadsheet generates the equivalent Python for you. This allows users to access Python with the spreadsheet skills they already have. Here is the Github
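    As a minimal usage sketch (assuming the documented mitosheet.sheet entry point and a hypothetical CSV file), calling Mito from a notebook looks roughly like this:

    ```python
    # In a Jupyter notebook, after `pip install mitosheet`:
    import pandas as pd
    import mitosheet

    df = pd.read_csv("sales.csv")   # hypothetical input file
    mitosheet.sheet(df)             # opens the spreadsheet; edits generate Python below the cell
    ```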

What are some alternatives?

When comparing prosto and mito you can also consider the following projects:

Preql - An interpreted relational query language that compiles to SQL.

qgrid - An interactive grid for sorting, filtering, and editing DataFrames in Jupyter notebooks

rel8 - Hey! Hey! Can u rel8?

Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai

opaleye

appsmith - Platform to build admin panels, internal tools, and dashboards. Integrates with 25+ databases and any API.

hamilton - A scalable general purpose micro-framework for defining dataflows. THIS REPOSITORY HAS BEEN MOVED TO www.github.com/dagworks-inc/hamilton

dtale - Visualizer for pandas data structures

Optimus - :truck: Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark

budibase - Budibase is an open-source low code platform that helps you build internal tools in minutes 🚀

cape-dataframes - Privacy transformations on Spark and Pandas dataframes backed by a simple policy language.

lux - Automatically visualize your pandas dataframe via a single print! 📊 💡