| | prisma-engines | litestream |
|---|---|---|
| Mentions | 10 | 167 |
| Stars | 1,117 | 10,063 |
| Growth | 3.5% | - |
| Activity | 9.7 | 7.5 |
| Last commit | 3 days ago | about 1 month ago |
| Language | Rust | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
prisma-engines
-
We migrated to SQL. Our biggest learning? Don't use Prisma
This is a very strange comment section. And this article is insanely poorly written.
> Last week, we completed a migration that switched our underlying database from MongoDB to Postgres.
Okay cool, but why? MongoDB is a very capable and fast database.
> It was a shock finding out that Prisma needs almost a “db” engine layer of its own. Read more about it here: https://www.prisma.io/docs/concepts/components/prisma-engine...
If you did any research on Prisma rather than diving in head-first, you'd realize this is a core part of why Prisma exists.
> we discovered that at a low level, Prisma was fetching data from both tables and then combining the result in its “Rust” engine. This was a path for an absolute trash performance.
Can you confirm this is actually the case? Can you show some benchmarks re: this claim? Or are you just assuming this is the case?
-
Prisma laying off 28% staff
If you wish to auto-generate migrations, there are declarative schema change tools available for most relational databases. I'm the creator of Skeema [1] which provides them for MySQL, but there are options for other DBs too [2][3][4].
Prisma's migration system actually partially copied Skeema's design, while giving credit in a rather odd fashion which really rubbed me the wrong way: "The workflow of working with temporary databases and introspecting it to determine differences between schemas seems to be pretty common, this is for example what skeema does." [5]
While I doubt I was the first person to ever use that technique, I absolutely didn't copy it from anywhere, and it was never "pretty common". I'm not aware of any other older schema change systems that work this way.
[1] https://www.skeema.io
[2] https://github.com/djrobstep/migra
[3] https://github.com/k0kubun/sqldef
[4] https://david.rothlis.net/declarative-schema-migration-for-s...
[5] https://github.com/prisma/prisma-engines/blob/6be410e/migrat...
-
Maintenance of popular ORMs (explanation inside)
If you're serious about your review then you shouldn't ignore the fact that Prisma has a big blob of Rust code at its core, where other ORMs use standard database adapters from NPM. As someone who has maintained database adapters for other languages, let me tell you that the maintenance burden of that is quite significant. Especially if they ever want to support more advanced database features. If the company behind Prisma ever runs out of money, the project is probably toast.
- Show HN: WunderBase – Serverless OSS Database on Top of SQLite, Firecracker
-
If Prisma's query engine is compiled by Rust, why don't I need Rust to compile it?
prisma generate generates the code for the Prisma client. The code generated for the client is all JavaScript, which calls into the “Prisma Engine” Rust native Node module to perform database operations. As others here have said, the Prisma Engine is pre-compiled by rustc via CI and gets downloaded to your machine as a pre-built binary by npm, so there’s no need for you to build it yourself by running the Rust compiler locally.
-
Alternatives to SQLAlchemy for your project - Prisma case
Note: you may notice that it downloads some binaries when you first invoke this command. This is normal; it fetches the Prisma CLI and the engines used by Prisma. 😁
-
I went about learning Rust
We solved this with flat vectors, sharing index values in cheap walker objects. It is much nicer to work with than Arc/Weak pointers.
Code here: https://github.com/prisma/prisma-engines/tree/main/libs%2Fda...
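The flat-vector/index-walker idea can be sketched outside Rust as well; here is a minimal Python illustration (names like `Arena` and `Walker` are my own, not taken from the prisma-engines code):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Sketch of the pattern described above: all nodes live in one flat
# vector, and relationships are stored as indices into that vector.
# "Walkers" are cheap objects carrying only an index, instead of
# reference-counted (Arc/Weak) pointers into a node graph.

@dataclass
class Node:
    name: str
    parent: Optional[int] = None                        # index of parent, if any
    children: List[int] = field(default_factory=list)   # indices of children

class Arena:
    def __init__(self) -> None:
        self.nodes: List[Node] = []

    def add(self, name: str, parent: Optional[int] = None) -> int:
        idx = len(self.nodes)
        self.nodes.append(Node(name, parent))
        if parent is not None:
            self.nodes[parent].children.append(idx)
        return idx

    def walker(self, idx: int) -> "Walker":
        return Walker(self, idx)

@dataclass
class Walker:
    arena: Arena
    idx: int  # the only state a walker carries

    def name(self) -> str:
        return self.arena.nodes[self.idx].name

    def parent(self) -> Optional["Walker"]:
        p = self.arena.nodes[self.idx].parent
        return None if p is None else Walker(self.arena, p)

arena = Arena()
root = arena.add("schema")
model = arena.add("User", parent=root)
w = arena.walker(model)
print(w.name(), "->", w.parent().name())  # User -> schema
```

Because walkers are plain (arena, index) pairs, they are trivially copyable and never create ownership cycles, which is the property that makes the pattern attractive in Rust.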
-
Show HN: Prisma Python – A fully typed ORM for Python
Because Prisma Python currently interfaces with the Rust engine over HTTP (I am looking into changing this) and the Rust engines can be found here:
https://github.com/prisma/prisma-engines
-
MariaDB to go public at $672M valuation
Thanks! I know of a couple Postgres tools that work in a declarative fashion: migra [1] and sqldef [2].
Migra is Postgres-specific. Its model is similar to Skeema's, in that the desired-state CREATEs are run in a temporary location and then introspected, to build an in-memory understanding of the desired state which can be diff'ed against the current actual state. (This approach was also borrowed by Prisma Migrate [3]). In this manner, the tool doesn't need a SQL parser, instead relying on the real DBMS to guarantee the CREATE is interpreted correctly with your exact DBMS version/flavor/settings.
In contrast, sqldef supports multiple databases, including Postgres and MySQL (among others). Unlike other tools, it uses a SQL parser-based approach to build its in-memory understanding of the desired state. As a DB professional, personally this approach scares me a bit, given the amount of nonstandard stuff in each DBMS's SQL dialect. But I'm inherently biased on this topic. And I will note sqldef's author is a core Ruby committer and JIT author, and is extremely skilled at parsers.
[1] https://databaseci.com/docs/migra
[2] https://github.com/k0kubun/sqldef
[3] https://github.com/prisma/prisma-engines/blob/main/migration...
-
Prisma 2 - When Can I Use it Alone and When Should I add Graphql
Prisma 2 is a program, written in Rust, that exposes a GraphQL API on top of your database of choice. Here's a link to the "engine": https://github.com/prisma/prisma-engines
litestream
-
Ask HN: SQLite in Production?
I have not, but I keep meaning to collate everything I've learned into a set of useful defaults just to remind myself what settings I should be enabling and why.
Regarding Litestream, I learned pretty much all I know from their documentation: https://litestream.io/
-
How (and why) to run SQLite in production
This presentation is focused on the use-case of vertically scaling a single server and driving everything through that app server, which is running SQLite embedded within your application process.
This is the sweet-spot for SQLite applications, but there have been explorations and advances to running SQLite across a network of app servers. LiteFS (https://fly.io/docs/litefs/), the sibling to Litestream for backups (https://litestream.io), is aimed at precisely this use-case. Similarly, Turso (https://turso.tech) is a new-ish managed database company for running SQLite in a more traditional client-server distribution.
-
SQLite3 Replication: A Wizard's Guide🧙🏽
This post intends to help you set up replication for SQLite using Litestream.
-
Ask HN: "Time travel" into a SQLite database using the WAL files?
I've been messing around with litestream. It is so cool. And, I either found a bug in the -timestamp switch or don't understand it correctly.
What I want to do is time travel into my sqlite database. I'm trying to do some forensics on why my web service returned the wrong data during a production event. Unfortunately, after the event, someone deleted records from the database and I'm unsure what the data looked like and am having trouble recreating the production issue.
Litestream has this great switch: -timestamp. If you use it (AFAICT) you can time travel into your database and go back to the database state at that moment. However, it does not seem to work as I expect it to:
https://github.com/benbjohnson/litestream/issues/564
I have the entirety of the sqlite database from the production event as well. Is there a way I could cycle through the WAL files and restore the database to the point in time before the records I need were deleted?
Will someone take sqlite and compile it into the browser using WASM so I can drag a sqlite database and WAL files into it and then using a timeline slider see all the states of the database over time? :)
-
Ask HN: Are you using SQLite and Litestream in production?
We're using SQLite in production very heavily with millions of databases and fairly high operations throughput.
But we did run into some scariness around trying to use Litestream that put me off it for the time being. Litestream is really cool but it is also very much a cool hack and the risk of database corruption issues feels very real.
The scariness I ran into was related to this issue https://github.com/benbjohnson/litestream/issues/510
-
Pocketbase: Open-source back end in 1 file
Litestream is a library that allows you to easily create backups. You can probably just do analytic queries on the backup data and reduce load on your server.
https://litestream.io/
- Litestream – Disaster recovery and continuous replication for SQLite
- Litestream: Replicated SQLite with no main and little cost
-
Why you should probably be using SQLite
One possible strategy is to have one directory/file per customer which is one SQLite file. But then as the user logs in, you have to look up first what database they should be connected to.
OR somehow derive it from the user ID/username. Keeping all the customer databases in a single directory/disk and then constantly "lite streaming" to S3.
Because each user is isolated, they'll be writing to their own database. But migrations would be a pain. They will have to be rolled out to each database separately.
One upside is, you can give users the ability to take their data with them, any time. It is just a single file.
[0]. https://litestream.io/
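The lookup step ("derive it from the user ID/username") can be sketched as follows; the directory layout and the sanitization rule here are illustrative choices, not prescribed by any tool:

```python
import re
import sqlite3
from pathlib import Path

# Hypothetical sketch of per-customer database routing: derive each
# SQLite file path from the username and keep all databases under one
# directory, which a tool like Litestream can then replicate to S3.

DB_DIR = Path("customer_dbs")

def db_path(username: str) -> Path:
    # Sanitize so a crafted username can't escape the directory.
    safe = re.sub(r"[^a-zA-Z0-9_-]", "_", username)
    return DB_DIR / f"{safe}.db"

def connect(username: str) -> sqlite3.Connection:
    DB_DIR.mkdir(exist_ok=True)
    conn = sqlite3.connect(db_path(username))
    # Per-database schema: migrations must run against every file.
    conn.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")
    return conn

conn = connect("alice@example.com")
conn.execute("INSERT INTO notes VALUES ('hello')")
print(db_path("alice@example.com").name)  # alice_example_com.db
```

The same `connect` helper is also where a migration runner would hook in, since, as noted above, schema changes have to be applied to each customer's file separately.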
-
Monitor your Websites and Apps using Uptime Kuma
Uptime Kuma uses a local SQLite database to store account data, configuration for services to monitor, notification settings, and more. To make sure that our data is available across redeploys, we will bundle Uptime Kuma with Litestream, a project that implements streaming replication for SQLite databases to a remote object storage provider. Effectively, this allows us to treat the local SQLite database as if it were securely stored in a remote database.
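A setup along these lines boils down to a small Litestream config; the database path and bucket name below are placeholders:

```yaml
# Hypothetical litestream.yml: continuously replicate Uptime Kuma's
# SQLite database to an S3 bucket (path and bucket are placeholders).
dbs:
  - path: /app/data/kuma.db
    replicas:
      - url: s3://my-backup-bucket/uptime-kuma
```

On container start, a `litestream restore` of that path recovers the latest replicated state before Uptime Kuma boots, which is what makes the data survive redeploys.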
What are some alternatives?
litefs - FUSE-based file system for replicating SQLite databases across a cluster of machines
rqlite - The lightweight, distributed relational database built on SQLite.
migra - Like diff but for PostgreSQL schemas
pocketbase - Open Source realtime backend in 1 file
sqldef - Idempotent schema management for MySQL, PostgreSQL, and more
realtime - Broadcast, Presence, and Postgres Changes via WebSockets
gopy - gopy generates a CPython extension module from a go package.
k8s-mediaserver-operator - Repository for k8s Mediaserver Operator project
prisma-client-rust - Type-safe database access for Rust
sqlcipher - SQLCipher is a standalone fork of SQLite that adds 256 bit AES encryption of database files and other security features.
flyctl - Command line tools for fly.io services