delta VS LakeSoul

Compare delta vs LakeSoul and see what their differences are.

delta

An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive, as well as APIs (by delta-io)

LakeSoul

LakeSoul is an end-to-end, real-time and cloud-native Lakehouse framework with fast data ingestion, concurrent updates, and incremental data analytics on cloud storage for both BI and AI applications. (by lakesoul-io)
                delta                LakeSoul
Mentions        69                   21
Stars           6,874                2,301
Growth          2.2%                 1.7%
Activity        9.8                  9.3
Last commit     6 days ago           11 days ago
Language        Scala                Java
License         Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

delta

Posts with mentions or reviews of delta. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-19.
  • Delta Lake vs. Parquet: A Comparison
    2 projects | news.ycombinator.com | 19 Jan 2024
    Delta is pretty great; it lets you do upserts into tables in Databricks much more easily than without it (see the upsert sketch after this list).

    I think the website is here: https://delta.io

  • Understanding Parquet, Iceberg and Data Lakehouses
    4 projects | news.ycombinator.com | 29 Dec 2023
    I often hear references to Apache Iceberg and Delta Lake as if they’re two peas in the Open Table Formats pod. Yet…

    Here’s the Apache Iceberg table format specification:

    https://iceberg.apache.org/spec/

    As they like to say in patent law, anyone “skilled in the art” of database systems could use this to build and query Iceberg tables without too much difficulty.

    This is nominally the Delta Lake equivalent:

    https://github.com/delta-io/delta/blob/master/PROTOCOL.md

    I defy anyone to even scope out what level of effort would be required to fully implement the current spec, let alone what would be involved in keeping up to date as this beast evolves.

    Frankly, the Delta Lake spec reads like a reverse engineering of whatever implementation tradeoffs Databricks is making as they race to build out a lakehouse for every Fortune 1000 company burned by Hadoop (which is to say, most of them).

    My point is that I’ve yet to be convinced that buying into Delta Lake is actually buying into an open ecosystem. Would appreciate any reassurance on this front!

  • Getting Started with Flink SQL, Apache Iceberg and DynamoDB Catalog
    4 projects | dev.to | 18 Dec 2023
    Apache Iceberg is one of the three main lakehouse table formats; the other two are Apache Hudi and Delta Lake.
  • [D] Is there other better data format for LLM to generate structured data?
    1 project | /r/MachineLearning | 10 Dec 2023
    The Apache Spark / Databricks community prefers Apache Parquet or the Linux Foundation's delta.io over JSON.
  • Delta vs Iceberg: make love not war
    1 project | /r/MicrosoftFabric | 30 Jun 2023
    Delta 3.0 extends an olive branch. https://github.com/delta-io/delta/releases/tag/v3.0.0rc1
  • Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML
    4 projects | news.ycombinator.com | 26 Jun 2023
    Databricks provides JupyterLab-like notebooks for analysis and ETL pipelines using Spark through PySpark, Spark SQL, or Scala. I think R is supported as well, but it doesn't interop with their newer features as well as Python and SQL do. It interfaces with cloud storage backends like S3 and offers some improvements to the Parquet format for data querying that allow for updating, ordering and merging through https://delta.io . They integrate pretty seamlessly with other data visualisation tooling if you want to use it for that, but their built-in graphs are fine for most cases. They also have ML-on-rails type features through menus and models if I recall, but I typically don't use it for that. I've typically used it for ETL or ELT type workflows for data that's too big or isn't stored in a database.
  • The "Big Three's" Data Storage Offerings
    2 projects | /r/dataengineering | 15 Jun 2023
    Structured, Semi-structured and Unstructured can be stored in one single format, a lakehouse storage format like Delta, Iceberg or Hudi (assuming those don't require low-latency SLAs like subsecond).
  • Ideas/Suggestions around setting up a data pipeline from scratch
    3 projects | /r/dataengineering | 9 Jun 2023
    As the data source, what I have is a gRPC stream. I get data in protobuf encoded format from it. This is a fixed part in the overall system, there is no other way to extract the data. We plan to ingest this data in delta lake, but before we do that there are a few problems.
  • Medallion/lakehouse architecture data modelling
    1 project | /r/dataengineering | 3 Jun 2023
    Take a look at Delta Lake (https://delta.io); it enables a lot of database-like actions on files.
  • CSV or Parquet File Format
    3 projects | /r/Python | 1 Jun 2023
    I prefer parquet (or delta) for larger datasets, and CSV for very small datasets or ones that will later be used/edited in Excel or Google Sheets.
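
Several posts above mention how Delta makes upserts easier. As a rough illustration only, here is a minimal Scala sketch of a MERGE-based upsert using the Delta Lake Spark API; the paths, schema, and join key (id) are hypothetical, and the session configuration assumes the delta-spark package is on the classpath.

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

object DeltaUpsertSketch {
  def main(args: Array[String]): Unit = {
    // Spark session with the Delta Lake SQL extensions enabled
    val spark = SparkSession.builder()
      .appName("delta-upsert-sketch")
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()

    // Hypothetical target Delta table and a batch of incoming updates
    val target  = DeltaTable.forPath(spark, "/tmp/delta/events")
    val updates = spark.read.parquet("/tmp/staging/events_updates")

    // Upsert: update rows whose id already exists, insert the rest
    target.as("t")
      .merge(updates.as("u"), "t.id = u.id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()

    spark.stop()
  }
}
```

The same operation can also be written as a MERGE INTO SQL statement; the builder API is shown here because it maps directly onto the upsert behaviour described in the posts.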

LakeSoul

Posts with mentions or reviews of LakeSoul. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-12-28.
  • Open Source first Anniversary Star 1.2K! Review on the anniversary of LakeSoul, the unique open-source Lakehouse
    2 projects | dev.to | 28 Dec 2022
    Review code reference: https://github.com/meta-soul/LakeSoul/pull/115
  • The best Open-source lakehouse project, LakeSoul 2.0, supports snapshot, rollback, Flink, and Hive interconnection
    1 project | dev.to | 8 Jul 2022
    In LakeSoul 2.0, metadata management and database interaction are fully implemented on the PostgreSQL (PG) protocol, for the reasons discussed at https://github.com/meta-soul/LakeSoul/issues/23. On the one hand, Cassandra does not support single-table multi-partition transactions; on the other hand, a Cassandra cluster has higher maintenance costs, while the PostgreSQL protocol is widely used in enterprises and has lower maintenance costs. You need to configure the PG parameters; for details, see https://github.com/meta-soul/LakeSoul/wiki/02.-QuickStart
  • A New One-stop AI development and production platform, AlphaIDE
    2 projects | dev.to | 15 Jun 2022
    I’ve posted about LakeSoul, an open-source framework for unified streaming and batch table storage, and MetaSpore, an open-source platform for machine learning.
  • Build a real-time machine learning sample library using the best open-source project about big data and data lakehouse, LakeSoul
    1 project | /r/datascience | 9 Jun 2022
    2.4 Data Backfill: Since LakeSoul supports upsert of any range-partitioned data, backfilling historical data is no different from a streaming write. When the data to be inserted is ready, Spark performs an upsert to update the historical data. LakeSoul automatically recognizes schema changes and updates the tables' metadata to implement schema evolution. LakeSoul provides the complete storage functionality of a data warehouse table, and each historical partition can be queried and updated. Compared with Flink's window-join scheme, it avoids the problem of invisible intermediate state and enables fast bulk updates and traceability of historical data.
    1 project | dev.to | 6 May 2022
    The previous article, "The design concept of the best open-source project about big data and data lakehouse", introduced the design concept and some of the implementation principles of LakeSoul, an open-source, unified streaming and batch table storage framework. LakeSoul was originally designed to solve various problems that are difficult to address in traditional Hive data warehouse scenarios, including upsert updates, Merge on Read, and concurrent writes. This article demonstrates the core capabilities of LakeSoul using a typical application scenario: building a real-time machine learning sample library.
  • Solved a practical business problem when using Hudi: LakeSoul supports null field non-override semantics
    1 project | dev.to | 29 May 2022
    Recently, the LakeSoul R&D team helped users solve a practical business problem they hit while using Hudi. Here is a summary and record. The business process is that the upstream system extracts the original data from an online DB table into JSON format and writes it into Kafka. The downstream system uses Spark to read the messages from Kafka; the data is updated and aggregated using Hudi and sent to a downstream database for analysis.
  • What is the Lakehouse, the latest Direction of Big Data Architecture?
    2 projects | dev.to | 14 May 2022
    LakeSoul
  • Design concept of a best opensource project about big data and data lakehouse
    1 project | dev.to | 16 Apr 2022
    LakeSoul is a unified streaming and batch table storage framework developed by DMetaSoul, with many design optimizations for the new trends in big data architecture. This article explains the core concepts and design principles of the open-source LakeSoul project in detail.
  • Data engine engineers interview for help
    1 project | /r/learnprogramming | 9 Apr 2022
    Maybe you can use some of this code with a dataset over the next two days and compare the products, to show the interviewer that you know a lot about the projects. Interviewers like candidates who can easily tell the difference between different products. Perhaps take a look at LakeSoul, which is similar to Iceberg, Hudi, etc.; its GitHub repo has a comparison of open-source data lake projects and how to use them. You can also check out Iceberg's and Hudi's websites, which have detailed tutorials.
  • Details of 4 best opensource projects about big data you should try out(Ⅰ)
    2 projects | dev.to | 7 Apr 2022
    1. Introduction: LakeSoul is a unified streaming and batch table storage framework built on the Apache Spark engine. It offers highly extensible metadata management, ACID transactions, efficient and flexible upsert operations, schema evolution, and unified streaming/batch processing. LakeSoul specifically optimizes row- and column-level incremental updates, highly concurrent writes, and batch scan reads for data on cloud data lake storage. Its cloud-native separation of storage and compute makes deployment very simple while supporting huge data volumes at very low cost. Through an LSM-tree design, LakeSoul delivers high write throughput in hash-partitioned primary-key upsert scenarios, reaching 30 MB/s/core on object storage systems such as S3. A highly optimized Merge on Read implementation also ensures read performance. LakeSoul manages metadata through Cassandra to achieve high metadata scalability. LakeSoul's main features are as follows:
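
To make the upsert and hash-partitioning capabilities described above more concrete, here is a minimal Scala sketch modelled on the LakeSoulTable API from the project's documentation. The table path, column names, and option names (rangePartitions, hashPartitions, hashBucketNum) are assumptions taken from the docs and may differ between LakeSoul versions; the LakeSoul-specific Spark session extensions and metadata (Cassandra/PostgreSQL) configuration are omitted for brevity.

```scala
import com.dmetasoul.lakesoul.tables.LakeSoulTable  // assumed package, per LakeSoul docs
import org.apache.spark.sql.SparkSession

object LakeSoulUpsertSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("lakesoul-upsert-sketch")
      .getOrCreate()
    import spark.implicits._

    // Illustrative table path; could equally be an S3/OSS URI
    val tablePath = "/tmp/lakesoul/user_events"

    // Initial write: range partition by date, hash partition by the primary key `id`
    // (option names follow the LakeSoul docs and are assumptions here)
    Seq(("2024-01-01", 1, "click"), ("2024-01-01", 2, "view"))
      .toDF("date", "id", "event")
      .write
      .mode("append")
      .format("lakesoul")
      .option("rangePartitions", "date")
      .option("hashPartitions", "id")
      .option("hashBucketNum", "2")
      .save(tablePath)

    // Backfill / incremental update: upsert rows keyed on the hash partition column
    val updates = Seq(("2024-01-01", 2, "purchase")).toDF("date", "id", "event")
    LakeSoulTable.forPath(tablePath).upsert(updates)

    spark.stop()
  }
}
```

This is the pattern the backfill post above describes: once the corrected data is ready, a single upsert rewrites the affected historical partitions without a separate backfill pipeline.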

What are some alternatives?

When comparing delta and LakeSoul you can also consider the following projects:

dvc - 🦉 ML Experiments and Data Management with Git

MetaSpore - A unified end-to-end machine intelligence platform

Apache Cassandra - Mirror of Apache Cassandra

iceberg - Apache Iceberg

lakeFS - Data version control for your data lake | Git for data

hudi - Upserts, Deletes And Incremental Processing on Big Data.

delta-sharing - An open protocol for secure data sharing

delta-rs - A native Rust library for Delta Lake, with bindings into Python

starrocks - StarRocks, a Linux Foundation project, is a next-generation sub-second MPP OLAP database for full analytics scenarios, including multi-dimensional analytics, real-time analytics, and ad-hoc queries. InfoWorld’s 2023 BOSSIE Award for best open source software.

nussknacker - Low-code tool for automating actions on real time data | Stream processing for the users.