> Have a look at https://prql-lang.org/. They got this right.
Yes, they absolutely got this right. But I agree with the author of TFA: I don't want another query language; I just want this specific change to SQL, and maybe a few others. PRQL is yet another query language.
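For anyone skimming: if the "specific change" is PRQL's from-first, pipelined ordering, the contrast looks roughly like this (PRQL syntax quoted from memory of their docs, table and columns invented, so treat as a sketch):

```
# PRQL: each line is a pipeline step
from employees
filter country == "USA"
select {name, salary}
```

```sql
-- Equivalent SQL: clause order doesn't match evaluation order
SELECT name, salary
FROM employees
WHERE country = 'USA';
```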
You might like kakoune (https://github.com/mawww/kakoune), which does exactly that: first you select the range (which can even be disjoint, e.g. all words matching a regex), then you operate on it. By default, the selected range is the character under cursor, and multiple cursors work out of the box.
It also generally follows the Unix philosophy, e.g. by using shell script, pipes, and built-in Unix utilities to do complex operations, rather than inventing a new language (vimscript) for it.
(Not affiliated with the creator, but kakoune has been my daily driver for years now.)
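To give a concrete feel of the model, a typical interaction goes something like this (keystrokes recalled from memory, so double-check against the kakoune docs):

```
%             select the whole buffer
s foo <ret>   keep one selection (and cursor) per match of "foo"
c bar <esc>   replace every selection with "bar"
| sort <ret>  pipe each selection through the Unix sort utility
```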
I felt the same way when Malloy[0] launched. It has some interesting features, but I couldn't see myself ever using it. Nothing makes a big enough difference to spend the time to learn it.
Would love to hear from anybody who's using it regularly.
0 - https://www.malloydata.dev/
I feel exactly the same way about ActiveRecord ORMs. That’s why I created PluSQL:
https://github.com/iaindooley/PluSQL
Helix[1] is another editor that heavily borrows from kakoune’s “selection then action” paradigm. The editor is very good, but is still in heavy development.
[1] https://helix-editor.com
1. Read the source. It's very efficient: https://github.com/postgres/postgres/blob/a14e75eb0b6a73821e...
2. You don't have to do deserialization in the application layer. If all you're using JSON for is converting to OOP objects, just deserialize in the db -- which, again, is trivial.
3a. This is wrong on many counts. If you want efficient passing of JSON, use JSONB, which is a binary encoding of the JSON as a tree structure. It will not include the structural characters.
3b. Bandwidth is also cheap.
4. Don't optimize without profiling. A few extra CPU cycles are not going to make or break your scaling journey; you'll most likely run into larger problems before that happens.
5. You can get "non-uniform" tuples by using UNIONs and a smart flagging system that points to tuple schemas -- rather than using JSON; the difference is entirely economical.
6. If you're in a low-latency environment and the CPU cycles are absolutely critical, write your own extensions to handle what you're trying to do, instead of twisting Postgres into doing your bidding.
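To make point 2 concrete, Postgres can do the deserialization itself with `jsonb_to_record`; a minimal sketch (the `orders` table and its `payload` column are invented for illustration):

```sql
-- Deserialize inside the database: expand a jsonb document into a
-- typed row, so the application never has to parse JSON text.
SELECT r.id, r.customer, r.total
FROM orders,
     jsonb_to_record(orders.payload)
       AS r(id int, customer text, total numeric);
```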
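And a sketch of what the flagging scheme from point 5 might look like (schema invented for illustration):

```sql
-- A discriminator column tells the client which tuple schema each
-- row follows, giving "non-uniform" results without resorting to JSON.
SELECT 'invoice' AS kind, id, total::text AS detail FROM invoices
UNION ALL
SELECT 'note'    AS kind, id, body        AS detail FROM notes;
```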
I tried showing that 10 years ago: https://github.com/ngs-doo/revenj/ but it just resulted in confusion.
I was kind of assuming you had those things for a YAML-based frontend, and just wanted to implement SQL support.
I can see that if your YAML solution doesn’t have a way to express GROUP BY, so the backend doesn’t support it, then of course that’ll be extra work, but then that’s IMO a different feature.
SQL itself is a tiny language - a parser that transforms it into your YAML-based AST really would be pretty small. Here’s the one I made many years ago: https://github.com/google/dotty/blob/master/efilter/parsers/...
It’s not the best quality code, and it doesn’t implement SQL92, but we did run it in production.
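To make the "pretty small" claim concrete, here's a throwaway sketch in Python (not the linked efilter code, and nowhere near SQL-92): it parses a toy SELECT-only subset into a plain dict, i.e. the kind of AST a YAML-based frontend could consume.

```python
import re

# Matches: SELECT <cols> FROM <table> [WHERE <col> = <val>] [;]
# The cols group is lazy so it stops at the FROM keyword.
SELECT_RE = re.compile(
    r"SELECT\s+(?P<cols>.+?)\s+FROM\s+(?P<table>\w+)"
    r"(?:\s+WHERE\s+(?P<col>\w+)\s*=\s*(?P<val>[^;\s]+))?\s*;?\s*$",
    re.IGNORECASE,
)

def parse_select(sql: str) -> dict:
    """Parse a tiny SELECT subset into a dict AST."""
    m = SELECT_RE.match(sql.strip())
    if not m:
        raise ValueError(f"unsupported statement: {sql!r}")
    ast = {
        "select": [c.strip() for c in m.group("cols").split(",")],
        "from": m.group("table"),
    }
    if m.group("col"):
        ast["where"] = {"eq": [m.group("col"), m.group("val")]}
    return ast

print(parse_select("SELECT a, b FROM t WHERE x = 1"))
```

A real parser needs a proper tokenizer and grammar, but the shape is the same: text in, nested structure out, and the result serializes straight to YAML.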