-
For my expense sharing app [1], I added receipt scanning in a few minutes and a few lines of code by using GPT-4 with Vision. I'm aware that LLMs are often a solution looking for a problem, but there are some situations where a bit of magic is just great :)
It is a Next.js application that calls OpenAI's API from a plain API route; a rough sketch of the kind of vision call involved follows the link below.
[1] https://spliit.app
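A minimal sketch of that kind of vision call, shown in Python for brevity (the actual app is Next.js/TypeScript; the model name, prompt, and extracted fields are placeholder assumptions):

    # Hypothetical sketch, not the app's actual code: send a receipt photo to a
    # vision-capable chat model and ask for structured fields.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def scan_receipt(image_path: str) -> str:
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode()
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Extract merchant, date, and total amount from this receipt as JSON."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    print(scan_receipt("receipt.jpg"))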
-
I've created just-tell-me [1], which summarizes YouTube videos with ChatGPT. It's built with Deno in TypeScript and deployed with Deno Deploy. It's open source, and you can run it from the CLI as well [2].
[1] https://just-tell-me.deno.dev/
[2] https://github.com/franekmagiera/just-tell-me
-
We've built a prompting, synthetic data generation, and training library called DataDreamer: https://github.com/datadreamer-dev/DataDreamer
-
I don't like selling. I wanted a way to practice cold calling in a realistic way. I set up a phone number you can call and talk to an AI that simulates sales calls.
I ended up using it for more general purpose things because being able to have a hands-free phone call with an AI turned out to be pretty useful.
It's offline now, but here's the code with all the stack and deployment info: https://github.com/kevingduck/ChatGPT-phone/
-
We've made a lot of data tooling things based on LLMs, and are in the process of rebranding and launching our main product.
1. sketch (in-notebook AI for pandas): https://github.com/approximatelabs/sketch
2. datadm (open-source "chat with data", with support for open-source LLMs): https://github.com/approximatelabs/datadm
3. Our main product: julyp. https://julyp.com/ (currently under very active rebrand and cleanup) -- a "chat with data" style app with a lot of specialized features. I'm also streaming myself using it (and sometimes building it) every weekday on Twitch to solve misc data problems (https://www.twitch.tv/bluecoconut)
For your next question, about the stack and deployment:
-
We're using all sorts of different stacks and tooling. We made our own tooling at one point (https://github.com/approximatelabs/lambdaprompt/), but have more recently switched to making the raw API requests ourselves and writing out the logic in the product. For our main product, the code just lives in our Next.js app and deploys on Vercel.
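A minimal sketch of what "making the raw API requests ourselves" can look like against the OpenAI chat completions endpoint (not the product's actual code; model and messages are placeholders):

    # Sketch of a direct chat-completions call with the requests library.
    # Model and messages are placeholders.
    import os
    import requests

    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [
                {"role": "system", "content": "You are a data analysis assistant."},
                {"role": "user", "content": "Suggest three checks to run on a new dataframe."},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])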
-
I wrote gait, an LLM-powered CLI that sits on top of git and translates natural-language commands into git commands. It's open source: https://github.com/jordanful/gait
I also wrote PromptPrompt, a free and extremely lightweight prompt management system that hosts and serves prompts from CDNs for rapid retrieval (plus version history): https://promptprompt.io
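A toy sketch of the general idea behind gait, i.e. asking a model to translate a natural-language request into a single git command and confirming before running it (illustrative only, not gait's actual implementation; the model name is a placeholder):

    # Toy sketch: translate a natural-language request into one git command,
    # show it, and only run it after confirmation. Not gait's actual code.
    import subprocess
    from openai import OpenAI

    client = OpenAI()

    def nl_to_git(request: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system",
                 "content": "Translate the user's request into exactly one git command. "
                            "Reply with the command only, no explanation."},
                {"role": "user", "content": request},
            ],
        )
        return resp.choices[0].message.content.strip()

    command = nl_to_git("undo my last commit but keep the changes staged")
    print(command)
    if input("Run it? [y/N] ").lower() == "y":
        subprocess.run(command, shell=True, check=False)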
-
A Chrome extension to ask about selected text with a right click: https://github.com/SMUsamaShah/LookupChatGPT
A Chrome extension to show a processed video overlay on YouTube that highlights motion.
A script to show stories moving up and down the HN front page. This one took just one prompt.
-
A Twitter filter to take back control of your social media feed from recommendation engines. Put in natural-language instructions like "Only show tweets about machine learning, artificial intelligence, and large language models. Hide everything else" and it will filter out all the tweets you tell it to.
It runs on a local LLM, because even GPT-3 costs would have added up quickly.
It currently requires CUDA and uses a 10.7B model, but if anyone wants to try a smaller one and report results, let me know on GitHub and I can give some help.
https://github.com/thomasj02/AiFilter
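A rough sketch of the core filtering step, i.e. asking a local instruct model to label each tweet against the user's instructions (model name and prompt are placeholders, not necessarily what AiFilter uses):

    # Sketch of the filtering step with a local instruct model via Hugging Face
    # transformers. Model and prompt are placeholders.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder local model
        device_map="auto",  # requires accelerate; uses the GPU if available
    )

    INSTRUCTIONS = ("Only show tweets about machine learning, artificial "
                    "intelligence, and large language models. Hide everything else.")

    def keep_tweet(tweet: str) -> bool:
        prompt = (f"Filtering rule: {INSTRUCTIONS}\n"
                  f"Tweet: {tweet}\n"
                  "Answer with exactly one word, KEEP or HIDE:")
        out = generator(prompt, max_new_tokens=3, do_sample=False)[0]["generated_text"]
        return "KEEP" in out[len(prompt):].upper()

    print(keep_tweet("New paper on mixture-of-experts scaling laws"))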
-
sql-ai-prompt-generator
Utility to generate ChatGPT prompts for SQL writing, offering table structure snapshots and sample row data from Postgres and sqlite databases.
Some little projects I've been playing around with:
- https://github.com/iloveitaly/sql-ai-prompt-generator generate a ChatGPT prompt with example data for a SQLite or Postgres DB (a rough sketch of the idea follows this list)
- https://github.com/iloveitaly/conventional-notes-summarizati... summarize notes (originally for summarizing raw user interview notes)
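A minimal sketch of the idea behind the SQL prompt generator above, i.e. dumping the schema plus a few sample rows into a prompt (SQLite only here; the prompt wording is illustrative, not the tool's actual output):

    # Sketch of the idea: dump schema and a few sample rows into a prompt so
    # ChatGPT can write queries against the real structure. Illustrative only,
    # not the tool's actual output format.
    import sqlite3

    def build_prompt(db_path: str, rows_per_table: int = 3) -> str:
        conn = sqlite3.connect(db_path)
        parts = ["Write SQL against the following SQLite schema."]
        tables = conn.execute(
            "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
        for name, ddl in tables:
            parts.append(ddl + ";")
            sample = conn.execute(f"SELECT * FROM {name} LIMIT {rows_per_table}").fetchall()
            parts.append(f"-- sample rows from {name}: {sample}")
        conn.close()
        return "\n".join(parts)

    print(build_prompt("app.db"))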
-
I am currently building an automatic book generator for Rust source code, in which the LLM writes descriptions of the code in a whole Rust project. It will be a bot that connects to the website, generates the descriptions, downloads them, and creates the book. It is very early in the project, three days in, but it's going well.
https://github.com/pramatias/documentdf
-
coppermind
Instruction based LLM contextual memory manager to power custom AI personalities and chatbots
I have two main projects with LLMs that are public at the moment.
The more notable one was experimenting with LLMs as high-level task planners for robots (https://hlfshell.ai/posts/llm-task-planner/).
The other is a Golang-based AI assistant, like everyone else is building. It worked over text and had some neat memory features. This was more of a first pass at learning about LLM applications. (https://github.com/hlfshell/coppermind)
I plan to revisit LLMs as context-enriched planners for robot task planning soon.
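A toy sketch of the high-level task planner idea, i.e. constraining a model to a fixed action set and asking for a structured plan (illustrative only, not the code from the linked post; the model and action set are assumptions):

    # Toy sketch: constrain the model to a fixed action vocabulary and ask for
    # a JSON plan. Illustrative only, not the code from the linked post.
    import json
    from openai import OpenAI

    client = OpenAI()
    ACTIONS = ["navigate_to(location)", "pick_up(object)", "place(object, location)"]

    def plan(goal: str) -> list:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system",
                 "content": "You are a robot task planner. Using only these actions: "
                            + ", ".join(ACTIONS)
                            + ', return a JSON object {"steps": [{"action": ..., "args": [...]}]}.'},
                {"role": "user", "content": goal},
            ],
            response_format={"type": "json_object"},
        )
        return json.loads(resp.choices[0].message.content)["steps"]

    print(plan("Fetch the mug from the kitchen and bring it to the desk"))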
-
flask-socketio-llm-completions
Chatroom app where messages are sent to GPT, Claude, Mistral, Together, Groq AI and streamed to the frontend.
https://github.com/russellballestrini/flask-socketio-llm-com...
This project is a chatroom application that allows users to join different chat rooms, send messages, and interact with multiple language models in real-time. The backend is built with Flask and Flask-SocketIO for real-time web communication, while the frontend uses HTML, CSS, and JavaScript to provide an interactive user interface.
demo here supports communication with `vllm/openchat`:
* http://home.foxhop.net:5001
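A minimal sketch of the streaming pattern described above, i.e. forwarding completion chunks to the browser over Socket.IO as they arrive (not the project's actual code; the model is a placeholder):

    # Minimal sketch of the streaming pattern: forward each streamed completion
    # chunk to the browser over Socket.IO. Not the project's actual code.
    from flask import Flask
    from flask_socketio import SocketIO, emit
    from openai import OpenAI

    app = Flask(__name__)
    socketio = SocketIO(app)
    client = OpenAI()

    @socketio.on("user_message")
    def handle_message(data):
        stream = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; the app supports several backends
            messages=[{"role": "user", "content": data["text"]}],
            stream=True,
        )
        for chunk in stream:
            delta = chunk.choices[0].delta.content or ""
            if delta:
                emit("assistant_token", {"token": delta})

    if __name__ == "__main__":
        socketio.run(app, port=5001)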
-
I'm making two LLMs negotiate the exchange of a product. Price is the main issue, but I'm trying to make them negotiate other issues too, in order to avoid the pure "bargaining" case.
I've tried several models and GPT-4 is currently the one that performs best, but open-source LLMs like Mixtral and Mixtral-Nous are quite capable too.
https://github.com/mfalcon/negotia
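A toy sketch of the two-model negotiation loop, alternating turns between a buyer and a seller persona (personas, prices, and the fixed round count are illustrative, not negotia's actual logic):

    # Toy sketch of two models negotiating by taking turns. Personas, prices,
    # and the fixed round count are illustrative, not negotia's actual logic.
    from openai import OpenAI

    client = OpenAI()

    def reply(persona: str, transcript: list) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "system", "content": persona},
                      {"role": "user",
                       "content": "\n".join(transcript) or "Start the negotiation."}],
        )
        return resp.choices[0].message.content

    seller = "You are selling a used laptop. Target price 800, never go below 650."
    buyer = "You are buying a used laptop. Budget 700; also negotiate warranty and delivery."

    transcript = []
    for turn in range(6):  # fixed number of rounds for the sketch
        speaker, persona = ("SELLER", seller) if turn % 2 == 0 else ("BUYER", buyer)
        message = reply(persona, transcript)
        transcript.append(f"{speaker}: {message}")
        print(transcript[-1])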
-
I built a couple of things, but the most useful is probably allalt[1], which describes images and generates alt text for visually impaired users using GPT-4V. Next I want to add the option to use local LLMs via ollama[2], but I'm still trying to decide on the UX for that.
There's also Moss[3], a GPT that acts as a senior, inquisitive, and clever Go pair programmer. I use it almost daily to help me code and it has been a huge help productivity-wise.
[1] https://git.sr.ht/~jamesponddotco/allalt
[2] https://ollama.ai/
[3] https://git.sr.ht/~jamesponddotco/moss
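One way the planned ollama option could look, assuming a multimodal model such as llava behind ollama's local HTTP API (a hypothetical sketch of a feature still being designed, not allalt's code):

    # Hypothetical sketch of the planned local option, not allalt's code: ask a
    # multimodal model such as llava for alt text via ollama's local HTTP API.
    import base64
    import requests

    def local_alt_text(image_path: str) -> str:
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode()
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llava",  # any multimodal model pulled into ollama
                "prompt": "Describe this image as concise alt text for a screen reader.",
                "images": [image_b64],
                "stream": False,
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(local_alt_text("photo.jpg"))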
-
gophersignal
Gopher Signal uses smart technology to quickly summarize important points from HackerNews.com articles. https://gophersignal.com
Built this little tool to summarize Hacker News articles using Hugging Face models. https://gophersignal.com
It doesn't do a ton, but it's kinda cool. Feel free to fix or add anything: https://github.com/k-zehnder/gophersignal
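A minimal sketch of summarizing an article with a Hugging Face pipeline (the model and generation settings are placeholders, not necessarily what gophersignal uses):

    # Sketch of summarizing an article with a Hugging Face pipeline. Model and
    # generation settings are placeholders.
    from transformers import pipeline

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    def summarize(article_text: str) -> str:
        result = summarizer(article_text, max_length=130, min_length=30,
                            do_sample=False, truncation=True)
        return result[0]["summary_text"]

    print(summarize("Long Hacker News article text goes here..."))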
-
I've been learning about RAG using LlamaIndex, and wrote a small CLI tool to ingest folders of my documents and run RAG queries through a gauntlet of models (CodeLlama 70B, Phind, Mixtral, Gemini, GPT-4, etc.) as a batch process, then consolidate the responses. It is mostly boilerplate, but comparing the available models is fun, and the RAG part kind of works.
https://github.com/StuartRiffle/ragtag-tiger
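A minimal LlamaIndex sketch of the ingest-then-query loop, using recent llama-index package layouts (the directory, question, and default OpenAI-backed models are assumptions; ragtag-tiger adds batching across models and response consolidation on top of this):

    # Minimal LlamaIndex sketch of the ingest-then-query loop. The directory,
    # question, and default OpenAI-backed models are assumptions.
    from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

    documents = SimpleDirectoryReader("./my_documents").load_data()
    index = VectorStoreIndex.from_documents(documents)  # embeds and indexes the docs

    query_engine = index.as_query_engine()  # uses the globally configured LLM
    response = query_engine.query("What deadlines are mentioned in these notes?")
    print(response)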
-
I was working on this stuff before it was cool, so in the sense of precursors to LLMs (and sometimes still supporting LLMs) I've built many things:
1. Games you can play with word2vec or related models (which could be drop-in replaced with a sentence transformer). It's crazy that this is 5 years old now: https://github.com/Hellisotherpeople/Language-games
2. "Constrained Text Generation Studio" - a research project I wrote when I was trying to solve LLMs' inability to follow syntactic, phonetic, or semantic constraints (a toy sketch of the constrained-decoding idea follows this list): https://github.com/Hellisotherpeople/Constrained-Text-Genera...
3. DebateKG - a bunch of "semantic knowledge graphs" built on my pet debate-evidence dataset (LLM-backed embedding indexes synchronized with a graph DB and a SQL DB via txtai). It can create compelling policy debate cases: https://github.com/Hellisotherpeople/DebateKG
4. My failed attempt at a good extractive summarizer. My life's work is dedicated to one day solving the problems I tried to fix with this project: https://github.com/Hellisotherpeople/CX_DB8
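A toy sketch of the constrained-decoding idea behind item 2, i.e. masking out any token that would violate a hard constraint before picking the next one (the lipogram constraint and GPT-2 model are placeholders, not CTGS's actual code):

    # Toy sketch of hard-constrained decoding in the spirit of CTGS, not its
    # actual code: greedily pick the most likely token that does not contain
    # the letter "e". GPT-2 is a placeholder model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Precompute which token ids would violate the constraint.
    banned = torch.tensor(
        ["e" in tok.decode([i]).lower() for i in range(len(tok))], dtype=torch.bool
    )

    ids = tok("My cat sat", return_tensors="pt").input_ids
    for _ in range(20):
        logits = model(ids).logits[0, -1]
        logits[banned] = float("-inf")  # mask out tokens that break the constraint
        next_id = torch.argmax(logits).reshape(1, 1)
        ids = torch.cat([ids, next_id], dim=1)

    print(tok.decode(ids[0]))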
-
summaryfeeds
Repo for Summary Feeds, a website that shows summaries of AI-focused videos like a daily news feed.
-
I started working on a Rust-based agent host with the goal of running locally. It has Rhai scripting built in, which is what the agent function calling is based on. It's very rough at the moment, and also on hold for me because I need to do more dirt-cheap Upwork projects to scrape by this month.
I think what will be really powerful is a registry of plugins and agents that can be easily installed into the system, sort of like WordPress in that way, and also similar to an open-source GPT store.
https://github.com/runvnc/agenthost
I believe there are several variations of this type of idea out there.
-
codevideo-backend-engine
Create shockingly realistic automated software videos! The backend / CLI tool from CodeVideo to create videos.
I'm building a way to automate the creation of software video lessons and courses, putting it all under the name 'CodeVideo'. The tool leverages OpenAI's Whisper, as well as GPT-3.5 or GPT-4 for help with generating steps (not yet in the repo; everything is a work in progress). The AI-focused tool is here:
https://github.com/codevideo/codevideo-ai
My goal is definitely NOT to generate the course content itself, but just to take the effort out of recording and editing these courses. The goal is to eventually get from written book- or article-style writing to the steps that generate the video, as close to one shot as possible.
I also leverage ElevenLabs' voice cloning (technically not an LLM, but an impressive ML model nonetheless).
For anyone more curious: I'm wondering if what I'm trying to do is in general a closed problem, i.e. generating step-by-step instructions to write functional code (including modifications, refactoring, or whatever else you might do in an actual software course), or if this truly is something that can't be automated. Any resources on the characteristics of coding itself would be awesome. What I'm trying to say is that, at the end of the day, code in an editor is a state machine: certain characters in a certain order produce certain results. I would love it if anyone had more information about the meta of programming itself. Abstract syntax trees and the work there come to mind, but I'm not even sure of the question I'm asking or trying to clarify at this point.
-
emerging-trajectories
Open source framework for using LLMs to forecast political, economic, and social events.
LLM agents to forecast geopolitical and economic events.
- Site: https://emergingtrajectories.com/
- GitHub repo: https://github.com/wgryc/emerging-trajectories
I've helped a number of companies build various sorts of LLM-powered apps (chatbots mainly) and found it interesting but not incredibly inspiring. The above is my attempt to build something no one else is working on.
It's been a lot of fun. Not sure if it'll be a "thing" ever, but I enjoy it.
-
At https://openadapt.ai/ we are using LLMs to automate repetitive tasks in GUI interfaces. Think robotic process automation, but via learning from demonstration rather than no-code scripting.
The stack is mostly Python running locally, calling the OpenAI API (although we have plans to support offline models).
For better visual understanding, we use a custom fork of Set-of-Mark prompting (https://github.com/microsoft/SoM) deployed to EC2 (see https://github.com/OpenAdaptAI/SoM/pull/3).
-
gpt_jailbreak_status
This is a repository that aims to provide updates on the status of jailbreaking the OpenAI GPT language model.
-
script-toolbox
This repository contains a collection of scripts and tools that I have written to solve various problems that I have come across.
-
comic-translate
Desktop app for automatically translating comics - BDs, Manga, Manhwa, Fumetti and more in a variety of formats (Image, Pdf, Epub, cbr, cbz, etc) and in multiple languages.
-
Code is available here https://github.com/kredar/data_analytics/tree/master/career_....
-
I've built an open-source ChatGPT UI designed for team collaboration.
Github Link: https://github.com/joiahq/joia
Benefits vs the original:
-
AutoRAG: https://github.com/Marker-Inc-Korea/AutoRAG
Since it is a Python library, we deploy it to PyPI. But for my own use, I run it on an H100 Linux server with the PyTorch Docker image and CUDA.
-
AI-Assisted Open Source Communication App for Autism - https://github.com/RonanOD/OpenAAC
It's a Flutter app (currently in beta on the Google Play store) that uses OpenAI embeddings with a Postgres pgvector DB hosted in Supabase. Any poor matches go to DALL-E 3 for image generation.
Our charity (I am vice-chair of the board) is hoping to use it as part of our program: https://learningo.org/app/
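A rough sketch of the matching flow described above, i.e. embed the phrase, nearest-neighbour search in pgvector, and fall back to image generation on a poor match (table name, columns, and the distance threshold are assumptions, not OpenAAC's actual code):

    # Rough sketch of the matching flow: embed the phrase, nearest-neighbour
    # search in pgvector, fall back to image generation on a poor match.
    # Table name, columns, and the 0.5 threshold are assumptions.
    from openai import OpenAI
    import psycopg2

    client = OpenAI()
    conn = psycopg2.connect("postgresql://user:pass@host/db")  # e.g. a Supabase instance

    def find_symbol(phrase: str) -> str:
        vec = client.embeddings.create(
            model="text-embedding-3-small", input=phrase
        ).data[0].embedding
        vec_literal = "[" + ",".join(str(x) for x in vec) + "]"
        with conn.cursor() as cur:
            cur.execute(
                "SELECT image_url, embedding <=> %s::vector AS distance "
                "FROM symbols ORDER BY distance LIMIT 1",
                (vec_literal,),
            )
            image_url, distance = cur.fetchone()
        if distance > 0.5:  # assumed cutoff for a "poor match"
            image_url = client.images.generate(
                model="dall-e-3", prompt=f"Simple pictogram for: {phrase}"
            ).data[0].url
        return image_url

    print(find_symbol("I want to go outside"))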