AiFilter vs CX_DB8

| | AiFilter | CX_DB8 |
| --- | --- | --- |
| Mentions | 2 | 4 |
| Stars | 50 | 222 |
| Growth | - | - |
| Activity | 5.9 | 0.0 |
| Latest commit | 3 months ago | over 1 year ago |
| Language | JavaScript | Python |
| License | - | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AiFilter
-
Ask HN: What have you built with LLMs?
A Twitter filter to take back control of your social media feed from recommendation engines. Put in natural language instructions like "Only show tweets about machine learning, artificial intelligence, and large language models. Hide everything else" and it will filter out all the tweets that you tell it to.
It runs on a local LLM, because even GPT-3 API costs would have added up quickly.
It currently requires CUDA and uses a 10.7B model, but if anyone wants to try a smaller one and report results, let me know on GitHub and I can help.
https://github.com/thomasj02/AiFilter
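The per-tweet filtering loop described above can be sketched roughly like this. This is a minimal sketch, not AiFilter's actual code: the prompt wording is an assumption, and `ask_local_llm` is a toy stand-in for a real request to a local model server.

```python
def build_prompt(instruction: str, tweet: str) -> str:
    """Assemble a yes/no classification prompt for a local LLM."""
    return (
        f"Instruction: {instruction}\n"
        f"Tweet: {tweet}\n"
        "Answer 'yes' if the tweet should be shown, otherwise 'no'."
    )

def ask_local_llm(prompt: str) -> str:
    """Stand-in for an HTTP call to a local model server.

    A toy heuristic keys on the Tweet line so the sketch runs
    without a GPU or a model; a real filter would parse the
    model's yes/no answer instead.
    """
    tweet_line = next(l for l in prompt.splitlines() if l.startswith("Tweet: "))
    return "yes" if "machine learning" in tweet_line.lower() else "no"

def filter_feed(instruction: str, tweets: list[str]) -> list[str]:
    """Keep only the tweets the classifier answers 'yes' for."""
    return [t for t in tweets
            if ask_local_llm(build_prompt(instruction, t)) == "yes"]

feed = [
    "New machine learning optimizer benchmarks",
    "My lunch was great",
]
kept = filter_feed("Only show tweets about machine learning.", feed)
```

Classifying one tweet per prompt keeps each request small, which matters when every decision is a full local-LLM inference.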
-
Show HN: AI-Powered Twitter Filter
While exploring new applications for local LLMs, I built a Chrome extension that filters your Twitter feed based on natural language instructions.
For instance, you can instruct it to "Hide all tweets, except for tweets about machine learning (ML), artificial intelligence (AI) and large language models (LLMs)."
I've tested it and got good results with a 10B parameter model, but I suspect a high-quality small model like Phi-2 might work almost as well.
It's open source and available at https://github.com/thomasj02/AiFilter
Video demo: https://www.youtube.com/watch?v=CligVVTC5io
CX_DB8
-
Ask HN: What have you built with LLMs?
I was working on this stuff before it was cool, so with the precursors to LLMs (and sometimes still in support of LLMs) I've built many things:
1. Games you can play with word2vec or related models (could be drop in replaced with sentence transformer). It's crazy that this is 5 years old now: https://github.com/Hellisotherpeople/Language-games
2. "Constrained Text Generation Studio" - A research project I wrote when I was trying to solve LLM's inability to follow syntactic, phonetic, or semantic constraints: https://github.com/Hellisotherpeople/Constrained-Text-Genera...
3. DebateKG - A set of "Semantic Knowledge Graphs" built on my pet debate evidence dataset (LLM-backed embedding indexes synchronized with a graph DB and a SQL DB via txtai). It can create compelling policy debate cases: https://github.com/Hellisotherpeople/DebateKG
4. My failed attempt at a good extractive summarizer. My life work is dedicated to one day solving the problems I tried to fix with this project: https://github.com/Hellisotherpeople/CX_DB8
-
How critical theory is radicalizing high school debate
I really missed out on this thread despite being likely one of the most important folks to post on it (I turned my time in Policy Debate into an NLP career - see DebateSum: https://huggingface.co/datasets/Hellisotherpeople/DebateSum and CX_DB8: https://github.com/Hellisotherpeople/CX_DB8)
For those who are interested in the intersection of AI and debate evidence, there's a lot more work being done right now. We have a follow-up dataset to DebateSum, called OpenCaseList, on its way to a conference paper: https://huggingface.co/datasets/Yusuf5/OpenCaselist It is basically DebateSum but 40x better in every way, and is also likely the largest and highest-quality argument mining dataset ever gathered.
Fun anecdote: when I tried to introduce automatic extractive summarization tools to the debate community, parent/judge/teacher groups were FLIPPING out about it. They were not happy at the idea of automatic debating or computer-assisted debating systems.
-
Copy is all you need
This has deep connections with my attempt to implement an effective queryable word-level grammatically correct extractive text summarizer (AKA: The way most people actually summarize documents) - https://github.com/Hellisotherpeople/CX_DB8
I will try to implement this with the necessary changes to actually make this work properly, where instead of generating a new answer, it simply highlights the most likely text spans.
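The span-highlighting idea above can be sketched in a few lines. This is a crude lexical stand-in, not CX_DB8's method: the function names are hypothetical, and a real extractive summarizer would score spans with embeddings rather than word overlap.

```python
import re
from collections import Counter

def overlap_score(span: str, query_terms: set[str]) -> int:
    """Count how many query-term occurrences appear in a span."""
    words = Counter(re.findall(r"[a-z']+", span.lower()))
    return sum(words[t] for t in query_terms)

def highlight(document: str, query: str, top_k: int = 1) -> list[str]:
    """Split a document into sentence spans and return the top_k
    spans most lexically similar to the query, instead of
    generating new text."""
    spans = re.split(r"(?<=[.!?])\s+", document.strip())
    terms = set(re.findall(r"[a-z']+", query.lower()))
    return sorted(spans, key=lambda s: overlap_score(s, terms), reverse=True)[:top_k]
```

Because the output is copied verbatim from the source, the summary is grammatically correct by construction whenever the source is, which is the main appeal of extractive over generative summarization.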
-
Haystack 1.0 – open-source NLP framework to build NLProc back end applications
Is there any path forward to make Haystack do word-level extractive summarization? e.g. like this: https://github.com/Hellisotherpeople/CX_DB8
or like this: https://huggingface.co/spaces/Hellisotherpeople/Unsupervised...
I am trying to find anything better than these two for this task. I feel like Haystack could be an option - but I am not sure.
What are some alternatives?
Language-games - Dead simple games made with word vectors.
newscatcher - Programmatically collect normalized news from (almost) any website.
data-analytics - Welcome to the Data-Analytics repository
reddit-thread-summarizer - A Reddit thread summarizer is a tool that generates a summary of the main points or themes discussed in a Reddit thread
frogbase - Transform audio-visual content into navigable knowledge.
CNNMRF - code for paper "Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis"
haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
gpt_jailbreak_status - This is a repository that aims to provide updates on the status of jailbreaking the OpenAI GPT language model.
joia - A ChatGPT alternative designed for team collaboration. Lightweight, privacy-friendly and open source.
LookupChatGPT - A chrome extension which looks up selected text via ChatGPT using your custom prompts
SoM - Set-of-Mark Prompting for LMMs