bonsaidb
litestream
| | bonsaidb | litestream |
|---|---|---|
| Mentions | 25 | 157 |
| Stars | 917 | 9,152 |
| Stars growth | 1.4% | - |
| Activity | 0.0 | 0.0 |
| Latest commit | 2 days ago | 11 days ago |
| Language | Rust | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bonsaidb
-
Some key-value storage engines in Rust
What about https://github.com/khonsulabs/bonsaidb? Progress seems to have stalled since last summer, but it's a very cool project.
-
Is there demand for a management system for embedded storage engines like RocksDB? I plan to build one in Rust, as the language is becoming the core of many popular databases, but I wonder if there's demand. I can't find any similar project, even in other languages.
There is Nebari, which is the KV part of BonsaiDB. I've used both successfully (and Nebari is currently in production).
-
Is `inlining` a function essentially the same thing as writing a macro?
In BonsaiDb, I define entire test suites as macros. This crate has a common trait that has multiple implementations in different crates. Each implementation needs to be tested thoroughly. For cargo test to be able to work in each crate independently, I needed to have the #[test]-annotated functions in the crate being built. By using a macro, I can define the functions in one location and invoke the macro in each crate to import the test suite into that crate.
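The pattern described above can be sketched in a few lines. This is a minimal illustration, not BonsaiDb's actual code: the trait, the in-memory store, and the macro name are all hypothetical stand-ins for the real multi-crate setup.

```rust
// Hypothetical sketch of the macro-based shared test suite described above.
// `KeyValueStore`, `MemoryStore`, and `kv_test_suite!` are illustrative
// names, not BonsaiDb's API.

/// A common trait implemented by several storage backends.
pub trait KeyValueStore {
    fn set(&mut self, key: &str, value: &str);
    fn get(&self, key: &str) -> Option<String>;
}

/// An in-memory implementation used to demonstrate the macro.
#[derive(Default)]
pub struct MemoryStore(std::collections::HashMap<String, String>);

impl KeyValueStore for MemoryStore {
    fn set(&mut self, key: &str, value: &str) {
        self.0.insert(key.to_string(), value.to_string());
    }
    fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
}

/// Defines the shared test suite once. Each implementing crate invokes
/// this macro to stamp out its own `#[test]`-annotated functions, so
/// `cargo test` works in each crate independently.
#[macro_export]
macro_rules! kv_test_suite {
    ($store:ty) => {
        #[test]
        fn set_then_get() {
            let mut store = <$store>::default();
            store.set("greeting", "hello");
            assert_eq!(store.get("greeting").as_deref(), Some("hello"));
        }

        #[test]
        fn missing_key_is_none() {
            let store = <$store>::default();
            assert_eq!(store.get("absent"), None);
        }
    };
}

// In each backend crate, one invocation imports the whole suite:
kv_test_suite!(MemoryStore);
```

Because the macro expands inside the invoking crate, the generated `#[test]` functions belong to that crate's compilation unit, which is exactly what `cargo test` needs.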
-
What's everyone working on this week (12/2022)?
I'm finishing up a large refactor of BonsaiDb which will add support for using BonsaiDb in non-async code.
-
What's everyone working on this week (10/2022)?
I'm working on a major refactoring of BonsaiDb, aiming to improve the design of several interrelated features. While it started by aiming to enable a non-async interface for BonsaiDb, I realized mid-refactor that another major refactor would be better to do simultaneously rather than separately. Thank goodness that refactoring in Rust is such a wonderful experience!
-
Announcing BonsaiDb v0.1.0: A Rust NoSQL database that grows with you
For collections, we haven't addressed migrations yet. It's one of the higher priority things on my mind, however, so it probably will be in the next release.
It depends on what you mean by "support graphs". If you mean the ability to put a GraphQL interface in front of it, yes, that is already possible in a limited fashion, although there are no first-class relationship types yet.
Replication is not implemented yet.
The README has a link to the code coverage report. There is a common test suite that is run across all mechanisms through which database access is offered, and there are additional crate-specific tests as needed.
-
What's everyone working on this week (5/2022)?
I'm trying to release the first alpha of BonsaiDb. I'm wrapping up replacing OPAQUE with Argon2, in an effort to make upgrading less likely to cause issues in the future (given that OPAQUE is still a draft protocol). I still love OPAQUE and will bring it back in the future.
litestream
-
Why you should probably be using SQLite
One possible strategy is to have one directory/file per customer, i.e. one SQLite file each. But then, as a user logs in, you first have to look up which database they should be connected to,
OR somehow derive it from the user ID/username, keeping all the customer databases in a single directory/disk and constantly "litestreaming" them to S3.
Because each user is isolated, they'll be writing to their own database. But migrations would be a pain: they would have to be rolled out to each database separately.
One upside is that you can give users the ability to take their data with them at any time. It is just a single file.
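The "derive it from the user ID" variant above can be sketched in a few lines. The directory layout and naming scheme here are assumptions for illustration, not taken from the original post:

```rust
// Hypothetical sketch of the per-customer-database layout described above:
// derive the SQLite file path deterministically from the user ID, so no
// lookup table is needed at login time.

use std::path::PathBuf;

/// All tenant databases live under one directory, which a tool like
/// Litestream can replicate to S3 as a whole. (Path is an assumption.)
const DATA_DIR: &str = "/var/lib/app/tenants";

/// Derive the database path for a tenant directly from their user ID.
fn tenant_db_path(user_id: u64) -> PathBuf {
    PathBuf::from(DATA_DIR).join(format!("tenant-{user_id}.db"))
}
```

For example, `tenant_db_path(42)` yields `/var/lib/app/tenants/tenant-42.db`, so the connection target is computable from the session alone.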
-
Monitor your Websites and Apps using Uptime Kuma
```dockerfile
# Builder image
FROM docker.io/alpine as BUILDER
RUN apk add --no-cache curl jq tar
RUN export LITESTREAM_VERSION=$(curl --silent https://api.github.com/repos/benbjohnson/litestream/releases/latest | jq -r .tag_name) \
    && curl -L https://github.com/benbjohnson/litestream/releases/download/${LITESTREAM_VERSION}/litestream-${LITESTREAM_VERSION}-linux-amd64.tar.gz -o litestream.tar.gz \
    && tar xzvf litestream.tar.gz

# Main image
FROM docker.io/louislam/uptime-kuma as KUMA
ARG UPTIME_KUMA_PORT=3001
WORKDIR /app
RUN mkdir -p /app/data
COPY --from=BUILDER /litestream /usr/local/bin/litestream
COPY litestream.yml /etc/litestream.yml
COPY run.sh /usr/local/bin/run.sh
EXPOSE ${UPTIME_KUMA_PORT}
CMD [ "/usr/local/bin/run.sh" ]
```
Uptime Kuma uses a local SQLite database to store account data, configuration for services to monitor, notification settings, and more. To make sure our data survives redeploys, we bundle Uptime Kuma with Litestream, a project that implements streaming replication of SQLite databases to a remote object storage provider. Effectively, this lets us treat the local SQLite database as if it were securely stored in a remote database.
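The Dockerfile above copies a `litestream.yml` into the image. A minimal configuration for this setup might look like the following; the database filename, bucket name, and path are assumptions for illustration:

```yaml
# Hypothetical litestream.yml for the container above. Replace the database
# path and the S3 bucket/path with your own; credentials are expected via
# the usual AWS environment variables.
dbs:
  - path: /app/data/kuma.db
    replicas:
      - url: s3://my-backup-bucket/uptime-kuma
```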
-
Backup Grafana SQLite with Litestream using s6-overlay in a container app
```dockerfile
FROM docker.io/grafana/grafana-oss:9.5.12-ubuntu

# Set USER to root, escalating privileges to install litestream and s6-overlay
USER root
RUN apt-get -qq update && \
    apt-get -qq install -y xz-utils && \
    rm -rf /var/lib/apt/lists/*

# https://github.com/benbjohnson/litestream-s6-example/blob/main/Dockerfile
# Download the static build of Litestream directly into the path & make it executable.
ADD https://github.com/benbjohnson/litestream/releases/download/v0.3.11/litestream-v0.3.11-linux-amd64.tar.gz /tmp/litestream.tar.gz
RUN tar -C / -xvzf /tmp/litestream.tar.gz

ARG S6_OVERLAY_VERSION="3.1.5.0"
# Download the s6-overlay for process supervision.
ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-noarch.tar.xz /tmp
RUN tar -C / -Jxpf /tmp/s6-overlay-noarch.tar.xz
ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-x86_64.tar.xz /tmp
RUN tar -C / -Jxpf /tmp/s6-overlay-x86_64.tar.xz

# Copy s6 init & service definitions.
COPY etc/s6-overlay /etc/s6-overlay
# Copy Litestream configuration file.
COPY etc/litestream.yml /etc/litestream.yml

# The kill grace time is set to zero because our app handles shutdown through SIGTERM.
ENV S6_KILL_GRACETIME=0
# Sync disks is enabled so that data is properly flushed.
ENV S6_SYNC_DISKS=1

# Reset USER to 472 to drop the escalated privileges
USER 472

# Run the s6 init process on entry.
ENTRYPOINT [ "/init" ]
```
Litestream is a game changer: it backs up all your changes to S3-compatible cloud storage at one-second intervals using the WAL, without interacting with the DB itself, so it avoids corruption and doesn't impact performance.
-
An Introduction to LiteStack for Ruby on Rails
Recently, though, it has attracted a lot of experimentation and extensions. One of the most popular is Litestream, which streams changes to an S3-compatible bucket. This means you get a replica of your production database at a very low price point and can recover from failure at any time.
-
Show HN: My Single-File Python Script I Used to Replace Splunk in My Startup
Not only that, but with https://litestream.io/ things become even more interesting.
I'm currently using this for a small application to easily backup databases in docker containers.
-
Fly.io Postgres cluster went down for 3 days, no word from them about it
-
Mycelite: SQLite extension to synchronize changes across SQLite instances
Be interested to hear a comparison between this lib and litestream/litefs, which seem to be actively developed by fly.io for a similar use case
-
The Stupid Programmer Manifesto
I mean, not really. The hard work was done by benbjohnson who is now working on https://litestream.io/ and https://fly.io/
I put a relatively thin layer on top of it.
Now, to address your point more directly: I'm too stupid to figure out configuration, but not too stupid to figure out code. Code gets compiled and type checked. You can have tests, etc. Tractability for code is much higher than configuration.
With configuration, you have to be really smart and keep many moving parts in your head.
With code, you can be a bit dumb and lean heavily on the tooling.
What are some alternatives?
rqlite - The lightweight, distributed relational database built on SQLite
pocketbase - Open Source realtime backend in 1 file
realtime - Broadcast, Presence, and Postgres Changes via WebSockets
k8s-mediaserver-operator - Repository for k8s Mediaserver Operator project
sqlcipher - SQLCipher is a standalone fork of SQLite that adds 256 bit AES encryption of database files and other security features.
flyctl - Command line tools for fly.io services
datasette - An open source multi-tool for exploring and publishing data
PostgreSQL - Mirror of the official PostgreSQL GIT repository. Note that this is just a *mirror* - we don't work with pull requests on github. To contribute, please see https://wiki.postgresql.org/wiki/Submitting_a_Patch
litefs - FUSE-based file system for replicating SQLite databases across a cluster of machines
sql.js - A javascript library to run SQLite on the web.
dqlite - Embeddable, replicated and fault tolerant SQL engine.
Bedrock - Rock solid distributed database specializing in active/active automatic failover and WAN replication