datadm vs CX_DB8

| | datadm | CX_DB8 |
|---|---|---|
| Mentions | 7 | 4 |
| Stars | 369 | 222 |
| Growth | 3.3% | - |
| Activity | 7.3 | 0.0 |
| Latest commit | 8 months ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
datadm
-
Ask HN: What have you built with LLMs?
We've made a lot of data tooling things based on LLMs, and are in the process of rebranding and launching our main product.
1. sketch (in notebook, ai for pandas) https://github.com/approximatelabs/sketch
2. datadm (open source, "chat with data", with support for open-source LLMs): https://github.com/approximatelabs/datadm
3. Our main product: julyp. https://julyp.com/ (currently under very active rebrand and cleanup) -- a "chat with data" style app with a lot of specialized features. I'm also streaming myself using it (and sometimes building it) every weekday on Twitch to solve misc data problems (https://www.twitch.tv/bluecoconut)
For your next question, about the stack and deploy:
-
A LLM+OLAP Solution
From building a few variations of data chatbots over the past year, I've found that my favorite / most fun-to-use ones tend to be more "chain-of-thought" and conversational, rather than "retrieval-augmented" in style.
Less about one-shotting the answer, and more about showing its work and, if it errors, letting it self-correct. Latency goes up, but the quality of the entire conversation goes up too, and it feels like it builds more trust with the user. Key steps are asking it to "check its work" and watching it work through new code, etc. (I open-sourced one version of this, https://github.com/approximatelabs/datadm, which can be run entirely locally / privately.)
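The generate-run-self-correct loop described above can be sketched roughly as follows. This is a minimal illustration, not datadm's actual implementation; `ask_llm` is a hypothetical stand-in for whatever model call you use (GPT-4, a local StarChat instance, etc.), stubbed here with a canned response so the loop is runnable:

```python
import traceback

def ask_llm(prompt):
    # Hypothetical stand-in for a real model call. Returns canned
    # code here so the self-correction loop below can actually run.
    return "result = sum(range(10))"

def run_with_self_correction(task, max_attempts=3):
    """Generate code for `task`, execute it, and on failure feed the
    traceback back to the model so it can try to correct itself."""
    prompt = f"Write Python code for: {task}"
    for attempt in range(max_attempts):
        code = ask_llm(prompt)
        scope = {}
        try:
            exec(code, scope)           # run the generated code
            return scope.get("result")  # convention: answer stored in `result`
        except Exception:
            # Show the model its own code and error, then loop again.
            prompt = (f"This code failed:\n{code}\n"
                      f"Traceback:\n{traceback.format_exc()}\n"
                      f"Please fix it.")
    return None
```

The key design point is that the traceback goes back into the prompt, so each attempt is conditioned on the previous failure rather than starting from scratch.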
From their article: I'm surprised they got something working well by going through an intermediate DSL -- that's moving even further away from the source material the LLMs are trained on, so it's an entirely new thing to either teach or assume is part of in-context learning.
All that said, interesting: I'll definitely have to try out tencentmusic/supersonic and see how it feels myself.
-
How to Use AI to Do Stuff: An Opinionated Guide
Pretty good examples and simple explanations. I didn't realize Claude 2 was so good at working with PDFs natively. I wonder if they're doing anything special, or if it's just due to the larger context length they have?
Also, biased opinion on my part: I'm especially interested in watching how these things affect data science and data literacy as a whole. Code interpreter is a game changer in my opinion, the most powerful tool that somehow isn't getting as much press as I think it deserves. I released an open-source code interpreter for data (https://github.com/approximatelabs/datadm), and even though I know how to code and use Jupyter daily, I still find myself doing analysis with it instead.
All in all, it does seem like the way different models and agents are gaining "specialization" skills is actually good for the user (rather than just using a single jack-of-all-trades super chat model). Even though GPT-4 takes the language-model crown, there's still specialization that matters and improves quality for different tasks, as discussed here.
I wonder if in 2-5 years we'll all use "a single" AI chat interface for everything, or whether every specialization will continue to "win at its own vertical" and we'll just have AI embedded inside of every app.
- Show HN: Self-hostable open-source code interpreter with open-model support
- DataDM – Search and analyze datasets with LLMs
-
Microsoft Bringing OpenAI’s GPT-4 AI Model to US Government Agencies
I completely agree that greatly increasing data accessibility is a huge unlock and value add.
A package I open-sourced recently might be useful for use cases like this: https://github.com/approximatelabs/datadm. It's essentially a ChatGPT-style code interpreter, specifically designed to work with data, that can be run entirely on open models (e.g. StarChat). True local-mode operation.
-
I made a tool for talking with your data via LLMs: DataDM. It's an open-source code interpreter you can use today: it supports running with GPT-4 as well as local models, for keeping your data completely private.
Here's the GitHub repo: https://github.com/approximatelabs/datadm
CX_DB8
-
Ask HN: What have you built with LLMs?
I was working on this stuff before it was cool, so in the sense of precursors to LLMs (and things that sometimes still support LLMs), I've built many things:
1. Games you can play with word2vec or related models (could be drop in replaced with sentence transformer). It's crazy that this is 5 years old now: https://github.com/Hellisotherpeople/Language-games
2. "Constrained Text Generation Studio" - A research project I wrote when I was trying to solve LLMs' inability to follow syntactic, phonetic, or semantic constraints: https://github.com/Hellisotherpeople/Constrained-Text-Genera...
3. DebateKG - A bunch of "semantic knowledge graphs" built on my pet debate-evidence dataset (LLM-backed embedding indexes synchronized with a graph DB and a SQL DB via txtai). It can create compelling policy-debate cases: https://github.com/Hellisotherpeople/DebateKG
4. My failed attempt at a good extractive summarizer. My life's work is dedicated to one day solving the problems I tried to fix with this project: https://github.com/Hellisotherpeople/CX_DB8
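The constraint-following problem in item 2 can be illustrated with a tiny sketch (my own toy example, not code from CTGS): filter a set of candidate next words so that only those satisfying a hard constraint survive, here a lipogram that bans the letter "e". CTGS applies this kind of filtering to a real model's next-token distribution before sampling, so a violating token can never be emitted:

```python
def filter_candidates(candidates, banned_letter="e"):
    """Keep only candidate words satisfying a hard lexical constraint.
    A real constrained decoder would mask out the corresponding token
    logits in the model's output distribution instead of a word list."""
    return [w for w in candidates if banned_letter not in w.lower()]

# Candidate next words a model might propose, ranked by probability.
candidates = ["the", "a", "quick", "brown", "jumped", "ran"]
allowed = filter_candidates(candidates)  # drops "the" and "jumped"
```

Because the filter is applied at every generation step, the constraint is guaranteed rather than merely encouraged by the prompt.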
-
How critical theory is radicalizing high school debate
I really missed out on this thread despite being likely one of the most important folks to post on it (I turned my time in Policy Debate into an NLP career - see DebateSum: https://huggingface.co/datasets/Hellisotherpeople/DebateSum and CX_DB8: https://github.com/Hellisotherpeople/CX_DB8)
For those who are interested in the intersection of AI and Debate Evidence, there's a lot more work being done right now. We have a follow-up dataset to DebateSum on its way to a paper at some conference called OpenCaseList: https://huggingface.co/datasets/Yusuf5/OpenCaselist which is basically DebateSum but 40x better in every way. This is also likely the largest and best quality argument mining dataset ever gathered.
Fun anecdote: when I tried to introduce automatic extractive-summarization tools to the debate community, parent/judge/teacher groups were FLIPPING out about it. They were not happy at the idea of automatic debating or computer-assisted debating systems.
-
Copy is all you need
This has deep connections with my attempt to implement an effective, queryable, word-level, grammatically correct extractive text summarizer (AKA the way most people actually summarize documents): https://github.com/Hellisotherpeople/CX_DB8
I will try to implement this with the changes necessary to actually make it work properly, where instead of generating a new answer, it simply highlights the most likely text spans.
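The highlight-instead-of-generate idea can be sketched with a toy query-driven extractive summarizer (my own simplified illustration, not CX_DB8's method): score each document span by its word overlap with the query and return the top-scoring spans verbatim. A real system would score with embeddings and work at the word level rather than the sentence level:

```python
import re
from collections import Counter

def extractive_summary(document, query, top_k=2):
    """Toy extractive summarizer: instead of generating new text,
    select the document spans (here, sentences) sharing the most
    vocabulary with the query, and return them unmodified."""
    query_words = set(re.findall(r"\w+", query.lower()))
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())

    def score(sentence):
        words = Counter(re.findall(r"\w+", sentence.lower()))
        return sum(n for w, n in words.items() if w in query_words)

    ranked = sorted(sentences, key=score, reverse=True)
    chosen = set(ranked[:top_k])
    # Emit the chosen spans in their original document order.
    return [s for s in sentences if s in chosen]
```

Because the output is copied directly from the source, the summary is grammatically faithful to the original by construction, which is exactly what generation-based summarizers struggle to guarantee.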
-
Haystack 1.0 – open-source NLP framework to build NLProc back end applications
Is there any path forward to make Haystack do word-level extractive summarization? e.g. like this: https://github.com/Hellisotherpeople/CX_DB8
or like this: https://huggingface.co/spaces/Hellisotherpeople/Unsupervised...
I am trying to find anything better than these two for this task. I feel like Haystack could be an option - but I am not sure.
What are some alternatives?
ClickBench - ClickBench: a Benchmark For Analytical Databases
newscatcher - Programmatically collect normalized news from (almost) any website.
gpt_jailbreak_status - This is a repository that aims to provide updates on the status of jailbreaking the OpenAI GPT language model.
reddit-thread-summarizer - A Reddit thread summarizer is a tool that generates a summary of the main points or themes discussed in a Reddit thread
data-analytics - Welcome to the Data-Analytics repository
frogbase - Transform audio-visual content into navigable knowledge.
flask-socketio-llm-com
CNNMRF - code for paper "Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis"
ibis - the portable Python dataframe library
haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
coppermind - Instruction based LLM contextual memory manager to power custom AI personalities and chatbots