amazon-s3-find-and-forget VS data-toolset

Compare amazon-s3-find-and-forget vs data-toolset and see what their differences are.

amazon-s3-find-and-forget

Amazon S3 Find and Forget is a solution to handle data erasure requests from data lakes stored on Amazon S3, for example, pursuant to the European General Data Protection Regulation (GDPR) (by awslabs)
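
To make the description concrete, here is a minimal sketch of the kind of per-object erasure such a solution automates: find the rows belonging to a data subject in a Parquet object and rewrite the object without them. This is only an illustration using boto3 and pyarrow; the bucket, key, and column names are assumptions, and it is not the project's actual API or architecture.

```python
# Illustrative sketch only -- not the amazon-s3-find-and-forget API.
# Download a Parquet object, drop the data subject's rows, and write it back.
import io

import boto3
import pyarrow.compute as pc
import pyarrow.parquet as pq

s3 = boto3.client("s3")
bucket, key = "my-data-lake", "events/part-0000.parquet"  # assumed names
match_id = "user-1234"                                     # ID to "forget"

# Read the object into an Arrow table.
obj = s3.get_object(Bucket=bucket, Key=key)
table = pq.read_table(io.BytesIO(obj["Body"].read()))

# Keep only the rows that do NOT belong to the data subject.
mask = pc.not_equal(table["user_id"], match_id)
kept = table.filter(mask)

# Rewrite the object in place without the erased rows.
buf = io.BytesIO()
pq.write_table(kept, buf)
s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue())
```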

data-toolset

Upgrade from avro-tools and parquet-tools jars to a more user-friendly Python package. (by luminousmen)
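
For context, those jars are typically used for quick schema and content inspections of Parquet and Avro files. The sketch below shows that kind of inspection with plain pyarrow and fastavro; it does not assume data-toolset's own commands, and the file names are illustrative.

```python
# The kind of inspection avro-tools / parquet-tools (and data-toolset) target,
# shown with pyarrow and fastavro rather than data-toolset's own CLI.
import fastavro
import pyarrow.parquet as pq

# Parquet: schema and row count (roughly what `parquet-tools meta` prints).
pf = pq.ParquetFile("events.parquet")
print(pf.schema_arrow)
print("rows:", pf.metadata.num_rows)

# Avro: writer schema and the first few records.
with open("events.avro", "rb") as f:
    reader = fastavro.reader(f)
    print(reader.writer_schema)
    for i, record in enumerate(reader):
        print(record)
        if i == 4:
            break
```
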
              amazon-s3-find-and-forget   data-toolset
Mentions      3                           1
Stars         232                         1
Growth        0.9%                        -
Activity      7.3                         6.8
Last commit   8 days ago                  about 2 months ago
Language      Python                      Python
License       Apache License 2.0          MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

amazon-s3-find-and-forget

Posts with mentions or reviews of amazon-s3-find-and-forget. We have used some of these posts to build our list of alternatives and similar projects.
  • Deleting particular data from S3 External Tables
    1 project | /r/dataengineering | 31 Oct 2022
    Take a look at this: https://github.com/awslabs/amazon-s3-find-and-forget. We use it for GDPR compliance; it will open a file, delete a row, and pack it back. It modifies the file, so watch out if you are using Glue job bookmarks. Because you are using external tables, the manifest file will also have to be updated with the proper length for the new, updated file (see the sketch after this list). If you have hundreds of tables and thousands of files and need to do this on a regular basis, this is the scalable solution; if you only have a few files, honestly I would do it manually.
  • Update S3 Files
    1 project | /r/aws | 27 Jan 2022
    Have a look at S3 Find and Forget
  • How to handle GDPR requests for data stored in S3?
    1 project | /r/dataengineering | 22 Nov 2021
    S3 Find and Forget is probably worth looking into, even if just to get ideas on how to implement a similar solution for yourself.
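
As referenced in the first post above, rewriting a file in place changes its size, so any external-table manifest that records file lengths has to be refreshed. The sketch below assumes a Redshift Spectrum-style manifest layout ({"entries": [{"url": ..., "meta": {"content_length": ...}}]}); that layout and the bucket and key names are assumptions, not part of the amazon-s3-find-and-forget API.

```python
# Hedged sketch: refresh content_length entries in an external-table manifest
# after the underlying S3 files have been rewritten. Manifest layout assumed.
import json

import boto3

s3 = boto3.client("s3")
manifest_bucket = "my-data-lake"                 # assumed
manifest_key = "manifests/events.manifest"       # assumed

obj = s3.get_object(Bucket=manifest_bucket, Key=manifest_key)
manifest = json.loads(obj["Body"].read())

for entry in manifest["entries"]:
    # Entry URLs look like s3://bucket/key in this manifest style.
    bucket, key = entry["url"].removeprefix("s3://").split("/", 1)
    head = s3.head_object(Bucket=bucket, Key=key)
    entry["meta"]["content_length"] = head["ContentLength"]

s3.put_object(
    Bucket=manifest_bucket,
    Key=manifest_key,
    Body=json.dumps(manifest).encode("utf-8"),
)
```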

data-toolset

Posts with mentions or reviews of data-toolset. We have used some of these posts to build our list of alternatives and similar projects.

What are some alternatives?

When comparing amazon-s3-find-and-forget and data-toolset you can also consider the following projects:

DataEngineeringProject - Example end to end data engineering project.

prql-query - Query and transform data with PRQL

isp-data-pollution - ISP Data Pollution to Protect Private Browsing History with Obfuscation

dbd - dbd is a database prototyping tool that enables data analysts and engineers to quickly load and transform data in SQL databases.

awesome-aws - A curated list of awesome Amazon Web Services (AWS) libraries, open source repos, guides, blogs, and other resources. Featuring the Fiery Meter of AWSome.

rill - Rill is a tool for effortlessly transforming data sets into powerful, opinionated dashboards using SQL. BI-as-code.

s3-credentials - A tool for creating credentials for accessing S3 buckets

petastorm - Petastorm library enables single machine or distributed training and evaluation of deep learning models from datasets in Apache Parquet format. It supports ML frameworks such as Tensorflow, Pytorch, and PySpark and can be used from pure Python code.

pystore - Fast data store for Pandas time-series data

DataProfiler - What's in your data? Extract schema, statistics and entities from datasets