verneuil vs s3fs
| | verneuil | s3fs |
|---|---|---|
| Mentions | 5 | 7 |
| Stars | 392 | 813 |
| Growth | 1.5% | 2.5% |
| Activity | 6.7 | 8.0 |
| Latest commit | 2 months ago | 15 days ago |
| Language | C | Python |
| License | MIT License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
verneuil
- Show HN: Query SQLite files stored in S3
- Embedded database with VFS support?
It'd be process-wide. If you want an example, you can check out the one using a VFS here; there's an explicit passing of the VFS and an implicit usage of it. https://github.com/backtrace-labs/verneuil/blob/main/examples/rusqlite_integration.rs
- LiteFS a FUSE-based file system for replicating SQLite
- A database for 2022 · Tailscale
It doesn't even have to be a WAL-based system. Backtrace Labs has a SQLite virtual file system (VFS) called Verneuil that works similarly, but with the rollback journal instead of the WAL.
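The rollback-journal/WAL distinction is just SQLite's `journal_mode` setting. A minimal sketch with Python's stdlib `sqlite3` module (which is unrelated to Verneuil itself) shows the two modes a VFS like this might observe:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# A freshly created file database defaults to the rollback journal
# ("delete" mode): changes are journalled to demo.db-journal and the
# main file is rewritten in place on commit.
default_mode = conn.execute("PRAGMA journal_mode").fetchone()[0]

# WAL mode instead appends changes to demo.db-wal; WAL-based
# replication systems tail that file, whereas Verneuil watches the
# rollback journal path.
wal_mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

print(default_mode, wal_mode)  # delete wal
```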
- Ask HN: P2P Databases?
https://github.com/backtrace-labs/verneuil/ is one way to address the diffing / read replica part of the problem. I believe it's compatible with gossiping: most of the data is in small content-addressed chunks, with small manifests that tell clients what chunks to fetch and how to reassemble them to recreate a sqlite database. There's already client-side caching to persistent storage, and chunks can be fetched on demand.
Sharing replication data P2P, while retaining the simplicity of a single authoritative writer per database, is explicitly part of the project's long-term goals!
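The chunk-and-manifest scheme described above can be sketched in a few lines of stdlib Python. The names and chunk size here are hypothetical illustrations, not Verneuil's actual wire format:

```python
import hashlib

def chunk_id(data: bytes) -> str:
    # Content-addressed: a chunk's name is the hash of its bytes, so
    # identical chunks deduplicate and any peer can verify integrity.
    return hashlib.sha256(data).hexdigest()

def split(db_image: bytes, chunk_size: int = 4096):
    """Split a database image into chunks plus a manifest naming them."""
    store, manifest = {}, []
    for off in range(0, len(db_image), chunk_size):
        piece = db_image[off:off + chunk_size]
        cid = chunk_id(piece)
        store[cid] = piece    # in reality, uploaded to object storage
        manifest.append(cid)  # the small manifest lists chunks in order
    return store, manifest

def reassemble(store, manifest) -> bytes:
    # A reader fetches the manifest, then only the chunks it lacks,
    # verifying each one against its content address.
    parts = []
    for cid in manifest:
        piece = store[cid]
        assert chunk_id(piece) == cid
        parts.append(piece)
    return b"".join(parts)

image = bytes(range(256)) * 100
store, manifest = split(image)
assert reassemble(store, manifest) == image
assert len(store) < len(manifest)  # repeated chunks stored only once
```

Because chunks are verified by hash, it doesn't matter which peer served them, which is why this layout composes naturally with P2P gossip.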
s3fs
- Read files from s3 using Pandas/s3fs or AWS Data Wrangler?
- Gcsfuse: A user-space file system for interacting with Google Cloud Storage
- What's the best python client for AWS automation these days?
- https://github.com/fsspec/s3fs (used by `pandas`, wraps aiobotocore)
- High-level, file-system like interface for S3 with AsyncIO support to replace/extend `boto3`
- Show HN: Query SQLite files stored in S3
- Getting 403 return code from head_object even with s3:ListBucket permission
I'm using Python's s3fs library to check whether a particular file exists in S3 with s3fs.S3FileSystem().exists(path), but I'm getting a Forbidden exception. From the stack trace, I can see it fails when calling S3's head_object method. The documentation for the head_object method says:
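A likely cause: HeadObject is authorized against the object-level s3:GetObject permission, so s3:ListBucket alone is not enough, and without ListBucket S3 also masks a missing key as 403 rather than 404. The hypothetical helper below (not part of s3fs) just captures how those status codes map to an existence answer:

```python
def object_exists(status: int) -> bool:
    """Interpret an S3 HeadObject HTTP status (hypothetical helper).

    HeadObject requires s3:GetObject on the object; s3:ListBucket on
    the bucket only upgrades "missing key" from 403 to a clean 404.
    A 403 is therefore ambiguous (no permission, or hidden missing
    key), which is why exists() surfaces it instead of returning False.
    """
    if status == 200:
        return True
    if status == 404:
        return False
    if status == 403:
        raise PermissionError(
            "HeadObject forbidden: grant s3:GetObject on the object "
            "(and s3:ListBucket on the bucket for a clean 404)"
        )
    raise RuntimeError(f"unexpected HeadObject status {status}")

assert object_exists(200) is True
assert object_exists(404) is False
```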
What are some alternatives?
litefs - FUSE-based file system for replicating SQLite databases across a cluster of machines
goofys - a high-performance, POSIX-ish Amazon S3 file system written in Go
go-ds-crdt - A distributed go-datastore implementation using Merkle-CRDTs.
rclone - "rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Azure Blob, Azure Files, Yandex Files
dqlite - Embeddable, replicated and fault-tolerant SQL engine.
s3www - Serve static files from any S3 compatible object storage services (Let's Encrypt ready)
WCDB - WCDB is a cross-platform database framework developed by WeChat.
aws-sdk-go-v2 - AWS SDK for the Go programming language.
bb-remote-execution - Tools for Buildbarn to allow remote execution of build actions
django-s3file - A lightweight file upload input for Django and Amazon S3
s3sqlite - Query SQLite files in S3 using s3fs
s3-proxy - S3 Reverse Proxy with GET, PUT and DELETE methods and authentication (OpenID Connect and Basic Auth)