aifiles
llm-client
aifiles | llm-client | |
---|---|---|
10 | 7 | |
129 | 478 | |
- | - | |
5.0 | 8.9 | |
9 days ago | 6 days ago | |
TypeScript | TypeScript | |
GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
aifiles
-
Cleaning up my 200GB iCloud with some JavaScript
So yes, those 10TB archives may end up being 5TB if someone spent the time to really comb over, understand, make good decisions, and organize that data. But I have not yet seen anything that can scratch that surface yet, other than perhaps https://github.com/jjuliano/aifiles - but I won't use it until it's local only and has guarantees not to destroy data without explicit permission. An overlay filesystem that shows compression/deduplication with LLM capability like aifiles is probably the best option here.
However, I wouldn't imagine that most people's life data is less than 2TB even with all of this - it's mostly imposed as an artificial constraint by these companies.
-
Dropbox axes 16%
I imagine things like this are underway: https://github.com/jjuliano/aifiles
Honestly I think it's actually a pretty fantastic development if that's the direction. I don't use Dropbox much, nor have I used aifiles (due to sending all of your data to OpenAI) - but the idea of being able to not have to manually and tediously look over terabytes of files to completely reorganize all of your files into a better directory hierarchy, and tag each files with meaningful labels, etc sounds phenomenal.
Obviously there are some implementation details for this to be not awful - for example: 1) only local models for local data, 2) making the changes on e.g. ZFS (to allow rollback) or as some type of optional 'overlay' view to switch back and forth from the original to ai-organized, etc. and 3) having thresholds and logic for what may be considered 'duplicates' to be removed, and how to better compress data
As for the de-duplication and processing: this could be very good for dropbox in that, e.g. if a person wants to completely re-encode all of their image files or video files with AV1, the resulting data could be cut in half or more - which saves Dropbox storage space. After which, neural perceptual hashing could be done on all of the files and a threshold of similarity could do de-duplication on a perceptual basis (for example, keep the bigger size file that is 99% similar to a 2x downsized version, and re-encode it). User preference to keep things like tiff files completely intact or any other lossless encoding of their choosing could be good options as well
There's definitely a strange disparity between the computation cost for deploying a decent model to do this compared to the storage cost - but if a (perhaps even non-LLM!) small model is created to be able to plow through data at fast rates could be deployed it may make sense.
Or perhaps the type of semantic compression that LLMs do are of interest for making a new type of lossy compression algorithm of which Dropbox is interested.
- Digital clutter: Learning to let go and stop hoarding terabytes
- Big media is gearing up for battle with Google and Microsoft over AI chatbots using their articles for training: 'We are actively considering our options'
-
JSTools Weekly — ✨2023#8: TS-Reset: A ‘CSS reset’ For TS, Improving JS Types
aifiles: A CLI that organize and manage your files using AI
- AI Files: organize and manage your files using AI
-
Show HN: AI Files – manage and organize your files with AI
Output from the language model is also being injected into a script that is then executed: https://github.com/jjuliano/aifiles/blob/ef529fd6281eaf8d373...
He argued below that he is not vulnerable to indirect prompt injection attacks (https://github.com/greshake/llm-security), but I think he is wrong.
llm-client
- Show HN: Open-source LLM Proxy (Node.js/TS) multi-LLM, tracing, caching, memory
- LLMClient: Open Source LLM Proxy for Logging, Debugging and Long Term Memory
- A library to use OpenAI & other LLMs in your apps. Focused on function calling and reasoning.
-
JSTools Weekly — ✨2023#8: TS-Reset: A ‘CSS reset’ For TS, Improving JS Types
minds: MindsJS - Build AI powered workflows easily
-
[P] A Prompt Engineering Library in JS works with LLMs OpenAI and Cohere.
MindJS Library https://github.com/dosco/minds
- Minds - A Typescript library to build LLM (AI) powered workflows (OpenAI & Cohere)
- [P] Minds - A JS library to build LLM powered backends and workflows (OpenAI & Cohere)
What are some alternatives?
chatgpt-api - Node.js client for the official ChatGPT API. 🔥
ts-async-kit - the easiest API to deal with promises in Typescript. Currently, ↩️ Retrying 🏃♂️ looping & 😴 sleeping
picorpc - A tiny RPC library and spec, inspired by JSON-RPC 2.0 and tRPC.
suspense - Utilities for working with React Suspense
garph - Fullstack GraphQL Framework for TypeScript
ts-reset - A 'CSS reset' for TypeScript, improving types for common JavaScript API's
axflow - The TypeScript framework for AI development
concurrent.js - Non-blocking Concurrent Computation for JavaScript RTEs (Web Browsers, Node.js & Deno & Bun)
bling - 💍 Framework agnostic transpilation utilities for client/server RPCs, env isolation, islands, module splitting, and more.
nuxt-scheduler - Create scheduled jobs with human readable time settings