| | noms | sqltorrent |
|---|---|---|
| Mentions | 11 | 5 |
| Stars | 7,502 | 269 |
| Growth | - | 1.1% |
| Activity | 1.9 | 0.0 |
| Last commit | over 2 years ago | about 8 years ago |
| Language | Go | C |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
noms
-
How Dolt Stores Table Data
This is from 2022. It is based on Noms [1], which is no longer maintained (they forked it).
I think the Noms doc linked from this article [2] is clearer than the article itself. That said, I still can't wrap my head around how this entire thing works, to be honest. I hope they write a peer-reviewed paper to serve the audience better.
[1] https://github.com/attic-labs/
[2] https://github.com/attic-labs/noms/blob/master/doc/intro.md#...
-
I was wrong. CRDTs are the future
I am. But I know very little about CRDTs, lol, so we'll see how that goes. I'm interested in converting some immutable, local-first data warehouse tooling I enjoy to a CRDT version. Previously it was more... Git-like: basically just Git, with data structures massively inspired by Noms [1].
The thing I've found most interesting is that it appears [2] CRDT backends need to expose CRDT-flavored types to users. Which is to say, the way I'm writing this combines the notion of a type, say `[i32]`, with how you want merges to work. CRDTs work great, but based on my amateur-hour research on the subject, I don't think you can write a single CRDT merge strategy for a data type like `[i32]` and have it always be correct. Applications need to provide enough context about what merge behavior makes sense for a given data type.
So yeah, I agree with you. I'm interested in making a database-like thing backed by CRDTs, but I've also seen very few general-purpose CRDT implementations. It feels like I'm breaking "new ground" while having no idea what I'm doing and no intention of becoming an actual researcher here. I'm just making apps I enjoy, heh.
[1]: https://github.com/attic-labs/noms
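The comment's point that the type alone doesn't determine the merge can be illustrated with a minimal sketch (illustrative names, not from any CRDT library): two different merge semantics over the same underlying integer data, both valid, with different results.

```python
# Two CRDT "flavors" over integer collections. The application, not the
# type, decides which merge semantics apply.

def merge_g_set(a: set, b: set) -> set:
    """Grow-only set: merge is set union (commutative, idempotent)."""
    return a | b

def merge_lww(a: tuple, b: tuple) -> tuple:
    """Last-writer-wins register: (timestamp, value) pairs; the value
    with the newer timestamp replaces the other entirely."""
    return a if a[0] >= b[0] else b

# Two replicas diverge from {1, 2}: one adds 3, the other adds 4.
left, right = {1, 2, 3}, {1, 2, 4}
print(merge_g_set(left, right))  # union keeps both replicas' additions

# The same divergence as LWW registers: the newer write wins outright,
# and the other replica's addition is lost.
lww_left, lww_right = (10.0, [1, 2, 3]), (11.5, [1, 2, 4])
print(merge_lww(lww_left, lww_right))
```

Both merges are deterministic and order-independent, but they encode different intents, which is why the merge strategy has to be chosen per application, not derived from the data type.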
- Building a decentralized database
-
Picking low-hanging memory usage bugs of an open source database
Most of the changes are in the noms package, which used to live in a separate repo (https://github.com/attic-labs/noms) but has since been absorbed into Dolt.
-
Downsides of Offline First
Not much more to say, other than that Noms (https://github.com/attic-labs/noms) was my favorite project for a while, until the acquisition; its engineers are now the ones behind Replicache (https://replicache.dev/).
I think this is going to be the next "Realm" that works everywhere.
- calling Format() on a time struct in a golang program changes the default Location's timezone information in the rest of the program
-
Steps to build a Database System from scratch?
The storage layer based on Noms: https://github.com/attic-labs/noms
- Noms: The versioned, forkable, syncable database
-
Dolt is Git for Data: a SQL database that you can fork, clone, branch, merge
Noms might be what you’re looking for (https://github.com/attic-labs/noms). Dolt is actually a fork of Noms.
-
CondensationDB: Build secure and collaborative apps [open-source]
People that are interested in a similar feature set should check out https://github.com/attic-labs/noms and the SQL fork of Noms, https://github.com/dolthub/dolt
sqltorrent
-
BTFS (BitTorrent Filesystem)
Or even better, store the data as a SQLite file with a full-text-search index. Then you can full-text search the torrent on demand: https://github.com/bittorrent/sqltorrent
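The "full-text-searchable SQLite file" idea can be sketched with SQLite's FTS5 extension (assuming a SQLite build with FTS5 enabled, which the standard Python `sqlite3` module usually has): the index is built once into the database file, so a later `MATCH` query only needs to touch index pages rather than scan every document.

```python
import sqlite3

# In the torrent scenario this file would live on disk and be seeded;
# in-memory here just to keep the sketch self-contained.
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
con.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [("sqltorrent", "a torrent-backed VFS for SQLite"),
     ("noms", "a versioned, forkable, syncable database")],
)

# MATCH consults the FTS index rather than scanning every row.
rows = con.execute(
    "SELECT title FROM docs WHERE docs MATCH 'torrent'"
).fetchall()
print(rows)
```

Combined with sqltorrent's VFS, only the index pages the query touches would need to be fetched from the swarm.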
- SQLite BitTorrent Vfs
-
How to circumvent Sci-Hub ISP block
"There was that project some guy posted a while back that used a combination of sqlite and partial downloads to enable searches on a database before it was downloaded all the way."
https://github.com/bittorrent/sqltorrent
- Hosting SQLite databases on GitHub Pages (or any static file hoster)
-
Distributed search engines using BitTorrent and SQLite
Interesting question. I looked at the source code to understand that.
When you open a SQLite database and run a query, SQLite knows where in the file to look. It asks the underlying filesystem to provide N bytes starting from an offset using a C function, then repeats the same operation on different portions of the file, does its computation, and everybody is happy.
The software relies on sqltorrent, which is a custom VFS for SQLite. That means the SQLite function that reads data from a file in the filesystem is replaced by a custom one. The custom code computes which torrent block(s) should get the highest priority, by dividing the offset and the number of bytes SQLite wants to read by the size of the torrent blocks. It is just a division.
See: https://github.com/bittorrent/sqltorrent/blob/master/sqltorr...
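The "it is just a division" step can be sketched as follows (illustrative names, not sqltorrent's actual C API): map the byte range SQLite requests onto torrent piece indices by integer division, so those pieces can be bumped to top download priority.

```python
def pieces_for_read(offset: int, nbytes: int, piece_size: int) -> range:
    """Return the torrent piece indices covering [offset, offset + nbytes)."""
    first = offset // piece_size
    last = (offset + nbytes - 1) // piece_size  # a read may span pieces
    return range(first, last + 1)

# A 4 KiB SQLite page read at byte 1,000,000 of a torrent with 256 KiB
# pieces lands entirely in piece 3.
print(list(pieces_for_read(1_000_000, 4096, 256 * 1024)))  # [3]
```

The VFS read hook would then raise the priority of those pieces and block until they arrive, which is why queries can run before the whole file has downloaded.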
What are some alternatives?
rqlite - The lightweight, distributed relational database built on SQLite.
sql.js-httpvfs - Hosting read-only SQLite databases on static file hosters like Github Pages
dat - Go Postgres Data Access Toolkit
torrent-net - Distributed search engines using BitTorrent and SQLite
dolt - Dolt – Git for Data
ipfs - Peer-to-peer hypermedia protocol
sql-migrate - SQL schema migration tool for Go.
datasette - An open source multi-tool for exploring and publishing data
skeema - Declarative pure-SQL schema management for MySQL and MariaDB
IPSQL - InterPlanetary SQL
cockroach - CockroachDB - the open source, cloud-native distributed SQL database.
apsw - Another Python SQLite wrapper