simonwillisonblog vs rum

| | simonwillisonblog | rum |
|---|---|---|
| Mentions | 28 | 11 |
| Stars | 163 | 693 |
| Growth | - | 0.7% |
| Activity | 8.1 | 4.0 |
| Latest commit | about 23 hours ago | 4 months ago |
| Language | JavaScript | C |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
simonwillisonblog
- Sandboxing Python with Win32 App Isolation
-
AI for Web Devs: Addressing Bugs, Security, & Reliability
Simon Willison has pointed out several examples of prompt injection attacks and why it may never be a solved problem:
-
Where Have All the Websites Gone?
I want more people to have link blogs.
I have one in the sidebar of https://simonwillison.net/ which I've been running since November 2003. You can search through all 6,836 links here: https://simonwillison.net/search/?type=blogmark
I can post things to it with a bookmarklet. It has an Atom feed.
It's such a low-friction way of publishing. A lot of https://daringfireball.net works like this too. I also like https://waxy.org/ and https://kottke.org/ for this.
I'd love to see more of these.
- Ask HN: Is it feasible to train my own LLM?
-
Moving Away from Substack
My approach is to publish to my own blog at https://simonwillison.net and then copy and paste content from that into a Substack newsletter at https://simonw.substack.com a few times a month.
It's been working really well.
Substack doesn't have an API, but it does support copy and paste, so I built myself a tool that assembles my blog content into rich text I can copy and paste straight into the Substack editor.
I wrote about how that works here: https://simonwillison.net/2023/Apr/4/substack-observable/
-
Building a Blog in Django
Hah, yeah securing something like WordPress can be a challenge, especially if you're running a bunch of plugins.
My blog is a pretty straightforward Django setup without many other dependencies, so it presents a much smaller attack surface: https://github.com/simonw/simonwillisonblog
-
Show HN: Superfunctions – AI prompt templates as an API
That specific prompt is just an example, and it's pretty bad; it was the shortest and simplest prompt I could come up with that would be easily understood.
You can set response content-types (text, html, json, etc...). If you use json you'll get pretty good results, because I have some logic that attempts to pick out JSON or JSON5 objects from the text output. I don't yet have logic to support JSON arrays, but I'm hoping to add that soon.
But client-side validation is still needed for applications with untrusted input. I don't attempt to solve prompt injection. I saw a lot of interesting posts on this topic from this blog: https://simonwillison.net/. I need to find some time to read more about it.
Try this one instead; it should be better.
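The "pick out JSON objects from the text output" step described above could be sketched roughly like this; the function name and approach are my own illustration, not the project's actual code:

```python
import json

def extract_first_json_object(text: str):
    """Scan free-form LLM output and return the first parseable JSON object.

    json.JSONDecoder.raw_decode parses one value starting at a given
    position and ignores any trailing text, which makes it well suited
    to fishing a JSON payload out of surrounding chatter.
    """
    decoder = json.JSONDecoder()
    # Try every '{' as a potential start of an object.
    for start in (i for i, ch in enumerate(text) if ch == "{"):
        try:
            obj, _end = decoder.raw_decode(text, start)
        except json.JSONDecodeError:
            continue
        if isinstance(obj, dict):
            return obj
    return None

# Example: model output with chatter around the JSON payload.
reply = 'Sure! Here is the result: {"name": "Ada", "score": 42} Hope that helps.'
print(extract_first_json_object(reply))
```

Supporting JSON arrays would just mean also trying `[` as a start character and accepting `list` results.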
-
Stopping at 90%
I've started to consider "commit to writing about it" as the price I have to pay for giving in to the lure of another project. It's one of the main reasons I publish so much content on https://simonwillison.net/ and https://til.simonwillison.net
A project with a published write-up unlocks so much more value than one you complete without giving others a chance to understand what you built.
I've maintained internal blogs (sometimes just a Slack channel or Confluence area) at previous employers for this purpose too.
-
Stanford A.I. Courses
I think you are asking specifically about practical LLM engineering and not the underlying science.
Honestly this is all moving so fast you can do well by reading the news, following a few reddits/substacks, and skimming the prompt engineering papers as they come out every week (!).
https://www.latent.space/p/ai-engineer provides an early manifesto for this nascent layer of the stack.
Zvi writes a good roundup (though he is concerned mostly with alignment so skip if you don’t like that angle): https://thezvi.substack.com/p/ai-18-the-great-debate-debates
Simon W has some good writeups too: https://simonwillison.net/
I strongly recommend playing with the OpenAI APIs and working with LangChain in a Colab notebook to get a feel for how these all fit together. The tools here are also incredibly simple and easy to understand (very new), so it's worth looking at, say, https://github.com/minimaxir/simpleaichat/tree/main/simpleai... or https://github.com/smol-ai/developer and digging into the prompts: what goes in system vs assistant roles, how you guide the LLM, etc.
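As a rough illustration of the "system vs assistant roles" point: chat-style LLM APIs take a list of role-tagged messages shaped like the sketch below (the content is made up; only the role names are the convention used by OpenAI-style APIs):

```python
# "system" sets behavior, "user" carries the request, and prior
# "assistant" turns feed the model its own earlier replies as context.
messages = [
    {"role": "system", "content": "You are a terse SQL tutor."},
    {"role": "user", "content": "What does a GIN index do?"},
    {"role": "assistant", "content": "It maps each token to the rows containing it."},
    {"role": "user", "content": "And RUM?"},
]

# A new conversational turn is just another dict appended to the list.
messages.append({"role": "user", "content": "How do they differ?"})
```

Tools like langchain and simpleaichat are, at bottom, conveniences for assembling and replaying lists like this one.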
-
Seeking Your Top Recommendations for Resources on ChatGPT and Generative AI
Simon Willison's Weblog
rum
-
Code Search Is Hard
The RUM index has worked well for us on roughly 1 TB of PDFs. It's written by postgrespro, the same folks who wrote core text search and JSON indexing. Not sure why RUM isn't in core; we've had no problems.
https://github.com/postgrespro/rum
-
Is it worth using Postgres' builtin full-text search or should I go straight to Elastic?
If you need ranking, and you have the possibility to install PostgreSQL extensions, then you can consider an extension providing RUM indexes: https://github.com/postgrespro/rum. Otherwise, you'll have to use an "external" FTS engine like ElasticSearch.
-
Features I'd Like in PostgreSQL
>Reduce the memory usage of prepared queries
Yes, query plan reuse like every other DB. It still blows me away that PG replans every time unless you explicitly prepare, and even then it's per connection.
Better full-text scoring is another one missing from that list for me: TF-IDF or BM25, please. See: https://github.com/postgrespro/rum
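For reference, the BM25 scoring the comment is asking for (and which Postgres's built-in `ts_rank` does not compute) looks like this; a minimal sketch over a toy in-memory corpus, with the standard k1/b defaults:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each tokenized document in `docs` against `query_terms` with BM25."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n        # average document length
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            # Rarer terms get a higher inverse-document-frequency weight.
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            # Term-frequency saturation plus length normalization.
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(score)
    return scores

docs = [
    "postgres full text search".split(),
    "rum index for postgres ranking".split(),
    "elastic search ranking".split(),
]
scores = bm25_scores(["postgres", "ranking"], docs)
```

The document matching both query terms scores highest, which is exactly the ranking behavior the core FTS machinery can't serve from an index.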
-
Ask HN: Books about full text search
For Postgres, I highly recommend the RUM index over the core FTS. RUM is written by postgrespro, who also wrote core FTS and JSON indexing in PG.
https://github.com/postgrespro/rum
-
Postgres Full Text Search vs. the Rest
My experience with Postgres FTS (I did a comparison with Elastic a couple of years back) is that filtering works fine and is speedy enough, but ranking crumbles when the resulting set is large.
If you have a large-ish data set with lots of similar data (4M addresses and location names was the test case), Postgres FTS just doesn't perform.
There is no index that helps with scoring results. You would have to install an extension like the RUM index (https://github.com/postgrespro/rum) to improve this, which may or may not be an option (often not if you use managed databases).
If you want the best of both worlds, you could investigate this extension (again, often not an option for managed databases): https://github.com/matthewfranglen/postgres-elasticsearch-fd...
Either way, writing something that indexes your Postgres database into Elastic/OpenSearch is a one-time investment that usually pays off in the long run.
-
Postgres Full-Text Search: A Search Engine in a Database
Mandatory mention of the RUM extension (https://github.com/postgrespro/rum) if this caught your eye. Lots of tutorials and conference presentations out there showcasing the advantages in terms of ranking, timestamps...
You might be just fine adding an unindexed tsvector column, since you've already filtered down the results.
The GIN indexes for FTS don't really work in conjunction with other indices, which is why https://github.com/postgrespro/rum exists. Luckily, it sounds like you can use your existing indices to filter and let postgres scan for matches on the tsvector.
- Postgrespro/rum: RUM access method – inverted index with additional information
-
Debugging random slow writes in PostgreSQL
We have been bitten by the same behavior. I gave a talk with a friend about this exact topic (diagnosing GIN pending list updates) at PGCon 2019 in Ottawa[1][2].
What you need to know is that the pending list is merged into the main b-tree during several operations. Only one of them is critical for your insert performance: the merge that happens during an actual insert. Both vacuum and autovacuum (including autovacuum analyze, but not a direct analyze) will merge the pending list, so frequent autovacuums are the first thing you should tune. Merging on insert happens when you exceed gin_pending_list_limit. In all cases it is also worth knowing which memory parameter is used to rebuild the index, as that impacts how long it takes: work_mem (when triggered on insert), autovacuum_work_mem (when triggered during autovacuum) and maintenance_work_mem (when triggered by a call to gin_clean_pending_list()) define how much memory can be used for the rebuild.
What you can do is:
- tune the size of the pending list (like you did)
- make sure vacuum runs frequently
- if you have a bulk-insert-heavy workload (i.e. nightly imports), drop the index and recreate it after inserting the rows (this doesn't always make sense business-wise; it depends on your app)
- disable fastupdate: you pay a higher cost per insert but remove the fluctuation when the merge needs to happen
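The pending-list mechanism described above can be sketched as a toy model (hypothetical Python, purely illustrative: appends stay cheap until the list exceeds its limit, at which point one unlucky insert pays for the whole merge):

```python
class ToyGinIndex:
    """Illustrative model of GIN fastupdate: cheap appends to a pending
    list, with an expensive merge into the main index once the list
    exceeds its limit (the role of gin_pending_list_limit in Postgres)."""

    def __init__(self, pending_limit=4):
        self.main = {}          # token -> set of row ids (the "main b-tree")
        self.pending = []       # unmerged (row_id, tokens) entries
        self.pending_limit = pending_limit
        self.merges = 0

    def insert(self, row_id, tokens):
        self.pending.append((row_id, tokens))  # fast path: just append
        if len(self.pending) > self.pending_limit:
            self.merge_pending()               # slow path: this insert stalls

    def merge_pending(self):
        """What vacuum/autovacuum (or an over-limit insert) does."""
        for row_id, tokens in self.pending:
            for t in tokens:
                self.main.setdefault(t, set()).add(row_id)
        self.pending.clear()
        self.merges += 1

idx = ToyGinIndex(pending_limit=4)
for i in range(10):
    idx.insert(i, ["rum", f"doc{i}"])
```

Tuning autovacuum aggressively amounts to calling `merge_pending()` from the outside often enough that `insert()` rarely crosses the limit itself.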
The first thing was done in the article. However, I believe the author still relies on the list being merged on insert. If vacuums were tuned aggressively along with the limit (vacuums can be tuned per table), the list would be merged outside the path of ongoing inserts.
I also had the pleasure of speaking with one of the main authors of GIN indexes (Oleg Bartunov) during the mentioned PGCon. He gave probably the best solution and told me to "just use RUM indexes". RUM[3] indexes are like GIN indexes, but without the pending list and with faster ranking, faster phrase searches and faster timestamp-based ordering. It is, however, outside the main PostgreSQL release, so it might be hard to get it running if you don't control the extensions that are loaded into your Postgres instance.
[1] - video https://www.youtube.com/watch?v=Brt41xnMZqo&t=1s
[2] - slides https://www.pgcon.org/2019/schedule/attachments/541_Let's%20...
[3] - https://github.com/postgrespro/rum
-
Show HN: Full text search Project Gutenberg (60m paragraphs)
I suggest having a look at https://github.com/postgrespro/rum if you haven't yet. It solves the issue of slow ranking in PostgreSQL FTS.
What are some alternatives?
pg_cjk_parser - Postgres CJK Parser pg_cjk_parser is a fts (full text search) parser derived from the default parser in PostgreSQL 11. When a postgres database uses utf-8 encoding, this parser supports all the features of the default parser while splitting CJK (Chinese, Japanese, Korean) characters into 2-gram tokens. If the database's encoding is not utf-8, the parser behaves just like the default parser.
postgres-elasticsearch-fdw - Postgres to Elastic Search Foreign Data Wrapper
pgvector - Open-source vector similarity search for Postgres
recoll - recoll with webui in a docker container
awesome-personal-blogs - A delightful list of personal tech blogs
zombodb - Making Postgres and Elasticsearch work together like it's 2023
tsv-utils - eBay's TSV Utilities: Command line tools for large, tabular data files. Filtering, statistics, sampling, joins and more.
awesome-ml - Curated list of useful LLM / Analytics / Datascience resources
pg_search - pg_search builds ActiveRecord named scopes that take advantage of PostgreSQL’s full text search
knowledge - Everything I know