Mountpoint – file client for S3 written in Rust, from AWS

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • mountpoint-s3

    A simple, high-throughput file client for mounting an Amazon S3 bucket as a local file system.

    > Truncation will not be supported.

    The sequential requirement for writes is the part I've been mulling over, wondering whether it's actually necessary given S3's API. Last year I discovered that S3 can do transactional I/O via multipart upload[2] operations combined with the CopyObject[3] operation. This should, in theory, allow for out-of-order writes, re-use of existing partial objects, and file appends (see the sketch after the link below).

    [1] https://github.com/awslabs/mountpoint-s3/blob/main/doc/SEMAN...
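
    To make the append idea concrete, here is a minimal Python sketch using boto3's multipart upload and UploadPartCopy. The bucket and key names are placeholders, and a copied part must be at least 5 MiB unless it is the final part; this illustrates the S3 technique, not how mountpoint-s3 works.

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "my-bucket", "my-object"  # placeholder names

    # Start a multipart upload that will replace the object under the same key.
    upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

    # Part 1: server-side copy of the existing object's bytes (no download).
    # Copied parts must be >= 5 MiB unless they are the final part.
    copied = s3.upload_part_copy(
        Bucket=bucket, Key=key, UploadId=upload_id, PartNumber=1,
        CopySource={"Bucket": bucket, "Key": key},
    )

    # Part 2: the data to append.
    appended = s3.upload_part(
        Bucket=bucket, Key=key, UploadId=upload_id, PartNumber=2,
        Body=b"new data appended to the end",
    )

    # Completing the upload atomically publishes the new, longer object.
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": [
            {"PartNumber": 1, "ETag": copied["CopyPartResult"]["ETag"]},
            {"PartNumber": 2, "ETag": appended["ETag"]},
        ]},
    )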

  • rclone

    "rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Yandex Files

    I started out with davfs2, but a) it was very slow at uploading for some reason, b) there was no way to explicitly sync it other than waiting a minute for some internal timer or unmounting it, and c) it implements writes by staging them in a cache directory under /var/cache, which is just yet another redundant copy of data I already have.

    I use `rclone`. Currently rclone doesn't support the SHA1 checksums that Fastmail Files implements. I have a PR for that: https://github.com/rclone/rclone/pull/6839

  • aws-java-nio-spi-for-s3

    A Java NIO.2 service provider for Amazon S3

    There's a similar project under awslabs for using S3 as a FileSystem within the Java JVM: https://github.com/awslabs/aws-java-nio-spi-for-s3

  • s3fs-fuse

    FUSE-based file system backed by Amazon S3

    JungleDisk was backup software I used ~2008 that allowed mounting S3. They were bought by Rackspace and the product wasn't updated. Seems to be called/part of Cyberfortress now.

    Later I used Panic's Transmit Disk but they removed the feature.

    Recently I'd been looking at s3fs-fuse to use with gocryptfs but haven't actually installed it yet!

    https://github.com/s3fs-fuse/s3fs-fuse

    https://github.com/rfjakob/gocryptfs

  • gocryptfs

    Encrypted overlay filesystem written in Go

  • seekable-s3-stream

    Code library that uses S3's API to provide an efficient random-access (seekable) Stream implementation for use in code where efficient network I/O is paramount.

    I think you’re spot on: using multipart uploads, different sections of the ultimate object can be created out of order. Unfortunately, though, that’s subject to part-size restrictions: every part except the last must be at least 5 MiB.

    I’m a little disappointed that this library (which is supposed to be “read optimized”) doesn’t take advantage of S3 Range requests to optimize reads after a seek. The simple example is a zip file in S3 for which you want only the listing of files from the central directory record at the end. As far as I can tell, this library reads the entire zip to get that. I have some experience with this[1][2]; a sketch of the ranged-read approach follows the link below.

    [1] https://github.com/mlhpdx/seekable-s3-stream
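
    For what it's worth, the ranged-read trick is easy to sketch in Python with boto3: wrap ranged GETs behind a seekable file object and zipfile will fetch only the small pieces it needs to find the end-of-central-directory record and the listing. Bucket and key names are placeholders; this illustrates the technique rather than this library's implementation.

    import io
    import zipfile

    import boto3

    class S3SeekableReader(io.RawIOBase):
        """Read-only, seekable file object backed by S3 ranged GETs."""

        def __init__(self, bucket, key, client=None):
            self._s3 = client or boto3.client("s3")
            self._bucket, self._key = bucket, key
            self._size = self._s3.head_object(Bucket=bucket, Key=key)["ContentLength"]
            self._pos = 0

        def readable(self):
            return True

        def seekable(self):
            return True

        def tell(self):
            return self._pos

        def seek(self, offset, whence=io.SEEK_SET):
            if whence == io.SEEK_SET:
                self._pos = offset
            elif whence == io.SEEK_CUR:
                self._pos += offset
            elif whence == io.SEEK_END:
                self._pos = self._size + offset
            return self._pos

        def read(self, size=-1):
            if size is None or size < 0:
                size = self._size - self._pos
            if size <= 0 or self._pos >= self._size:
                return b""
            end = min(self._pos + size, self._size) - 1
            # Each read is one ranged GET, so only the requested bytes cross the network.
            body = self._s3.get_object(
                Bucket=self._bucket, Key=self._key,
                Range=f"bytes={self._pos}-{end}",
            )["Body"].read()
            self._pos += len(body)
            return body

    # List a zip's entries without downloading the whole object: zipfile seeks to
    # the central directory at the end and reads only those ranges.
    with zipfile.ZipFile(S3SeekableReader("my-bucket", "archive.zip")) as zf:
        print(zf.namelist())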

  • s3-upload-stream

    Code library that provides a Stream implementation that makes working with uploads to S3 easier where the size of the content isn't known a priori. It holds only partial content in memory (works with large objects), and is compatible with code libraries that work with output streams.
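
    As a rough Python analogue of that approach (boto3, placeholder names), buffering incoming chunks into multipart-upload parts keeps memory bounded by roughly one part even when the total size is unknown up front; this is only a sketch, not this library's API.

    import boto3

    def stream_to_s3(bucket, key, chunks, part_size=8 * 1024 * 1024):
        """Upload an iterable of byte chunks of unknown total size."""
        s3 = boto3.client("s3")
        upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]
        parts, buf, part_no = [], bytearray(), 1
        try:
            def flush(body):
                nonlocal part_no
                resp = s3.upload_part(Bucket=bucket, Key=key, UploadId=upload_id,
                                      PartNumber=part_no, Body=body)
                parts.append({"PartNumber": part_no, "ETag": resp["ETag"]})
                part_no += 1

            for chunk in chunks:
                buf.extend(chunk)
                # Flush full parts as soon as they are buffered; memory stays ~part_size.
                while len(buf) >= part_size:
                    flush(bytes(buf[:part_size]))
                    del buf[:part_size]
            # Final part; only the last part may be smaller than 5 MiB.
            if buf or not parts:
                flush(bytes(buf))
            s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id,
                                         MultipartUpload={"Parts": parts})
        except Exception:
            s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
            raise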

  • go-nfs-client

    NFSv4 client written in Go

    That depends on what you consider "fast". EFS (the "serverless" NFS) has sub-millisecond operation latency. S3 is more in the 10-20ms range for most operations, with occasional spikes.

    BTW, if you need a pure Go client for NFSv4 (including AWS EFS), feel free to check my: https://github.com/Cyberax/go-nfs-client

  • s4cmd

    Super S3 command line tool

  • nfs-win

    NFS for Windows

    WinFsp (FUSE for Windows) has an NFS driver: https://github.com/winfsp/nfs-win

  • azure-storage-fuse

    A virtual file system adapter for Azure Blob storage

  • PosixSyncFS

    PosixSyncFS is a set of Bash scripts that allow users to create a real POSIX filesystem and sync it to a remote storage bucket for backup and recovery purposes.

    Upon reading this idea I created https://github.com/lrvl/PosixSyncFS - feel free to comment

