| | datadm | ollama |
|---|---|---|
| Mentions | 7 | 209 |
| Stars | 369 | 66,540 |
| Growth | 3.3% | 21.5% |
| Activity | 7.3 | 9.9 |
| Latest commit | 8 months ago | about 11 hours ago |
| Language | Python | Go |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
datadm
-
Ask HN: What have you built with LLMs?
We've made a lot of data tooling things based on LLMs, and are in the process of rebranding and launching our main product.
1. sketch (in notebook, ai for pandas) https://github.com/approximatelabs/sketch
2. datadm (open source, "chat with data", with support for open-source LLMs): https://github.com/approximatelabs/datadm
3. Our main product: julyp. https://julyp.com/ (currently under very active rebrand and cleanup) -- but a "chat with data" style app, with a lot of specialized features. I'm also streaming me using it (and sometimes building it) every weekday on twitch to solve misc data problems (https://www.twitch.tv/bluecoconut)
For your next question, about the stack and deploy:
-
A LLM+OLAP Solution
From making a few variations on data chatbots over the past year, I've found that my favorite / most fun to use ones tend to be more "chain-of-thought" and conversational rather than "retrieval-augmented" in style.
Less about one-shotting the answer, and more about showing its work and, if it errors, letting it self-correct. Latency goes up, but the quality of the entire conversation also goes up, and it feels like it builds more trust with the user. Key steps are asking it to "check its work" and watching it work through new code, etc. (I open-sourced one version of this: https://github.com/approximatelabs/datadm, which can be run entirely locally / privately.)
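The "show its work, self-correct on error" loop described above can be sketched in a few lines. Everything here is a hypothetical illustration, not datadm's actual implementation: `ask_llm` is a stand-in for any chat-completion call, and the convention that the answer lands in a `result` variable is an assumption of this sketch.

```python
import traceback

def run_with_self_correction(ask_llm, question, max_rounds=3):
    """Ask for code, run it, and feed tracebacks back until it works."""
    conversation = [{"role": "user", "content": question}]
    for _ in range(max_rounds):
        code = ask_llm(conversation)  # model proposes code for this step
        conversation.append({"role": "assistant", "content": code})
        scope = {}
        try:
            exec(code, scope)            # "show its work": actually run it
            return scope.get("result")   # sketch convention: answer in `result`
        except Exception:
            # Self-correction: the model sees its own traceback next round.
            conversation.append(
                {"role": "user", "content": "Traceback:\n" + traceback.format_exc()}
            )
    raise RuntimeError("model did not converge on working code")
```

The trade-off in the comment falls out directly: each failed round adds a model call and an execution (latency), but the user watches every attempt and repair in the conversation.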
From their article: I'm surprised they got something working well by going through an intermediate DSL -- that's moving even further away from the source material that the LLMs are trained on, so it's an entirely new thing to either teach or assume is part of the in-context learning.
All that said, it's interesting: I'll definitely have to try out tencentmusic/supersonic and see how it feels myself.
-
How to Use AI to Do Stuff: An Opinionated Guide
Pretty good examples and simple explanations. I didn't realize Claude 2 was so good at working with PDFs natively. I wonder if they're doing anything special, or is this just due to the larger context length they have?
Also, biased opinion on my part: I'm especially interested in watching how these things affect data science and data literacy as a whole. Code interpreter is a game changer in my opinion, the most powerful tool that somehow isn't getting as much press as I think it deserves. I released an open-source code interpreter for data (https://github.com/approximatelabs/datadm) and even though I know how to code and use Jupyter daily, I still find myself doing analysis with it instead.
All in all, it does seem like the way different models and agents are gaining "specialization" skills is actually good for the user (rather than just using a single jack-of-all-trades super chat model). Even though GPT-4 takes the language-model crown, there's still specialization that matters and improves quality for different tasks, as discussed here.
I wonder if in 2-5 years we'll all use "a single" AI chat interface for everything, or whether every specialization continues to "win at its own vertical" and we just have AI embedded inside of every app.
- Show HN: Self-hostable open-source code interpreter with open-model support
- DataDM – Search and analyze datasets with LLMs
-
Microsoft Bringing OpenAI’s GPT-4 AI Model to US Government Agencies
I completely agree that greatly increasing data accessibility is a huge unlock and value add.
A package I open-sourced recently might be useful for use cases like this: https://github.com/approximatelabs/datadm. It's essentially a ChatGPT code interpreter, specifically designed to work with data, that can be run entirely on open models (e.g. StarChat). True local-mode operation.
-
I made a tool for talking with your data via LLMs: DataDM, an open-source code interpreter you can use today. It supports running with GPT-4 as well as local models for keeping your data completely private.
Here's the GitHub repo: https://github.com/approximatelabs/datadm
ollama
- Ollama v0.1.34 Is Out
-
Ask HN: What do you use local LLMs for?
- Basic internet search (I can start the ollama CLI faster than I can start a browser - https://ollama.com)
- Formatting/changing text
- Troubleshooting code, esp. new frameworks/libs
- Recipes
- Data entry
- Organizing thoughts: High-level lists, comparison, classification, synonyms, jargon & nomenclature
- Learning esp. by analogy and example
RAG for:
- Website assistants (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Game NPCs (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Discord/Slack/forum bots (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Character-driven storytelling and creating art in a specific style for video game loading screens, background images, avatars, website art, etc. (https://github.com/bennyschmidt/ragdoll-studio/tree/master/r...)
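The RAG uses listed above all share one retrieval step: find the documents most relevant to the query and stuff them into the prompt. A toy sketch, with the loud caveat that real systems (including ragdoll-studio, presumably) use embedding models for similarity; plain word overlap stands in here purely for illustration:

```python
def retrieve(query, documents, k=1):
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(retrieve(query, documents, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

For a game NPC, `documents` would be lore snippets; for a Discord bot, past messages or FAQ entries; the prompt-building step is the same either way.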
- FLaNK-AIM Weekly 06 May 2024
-
Introducing Jan
Jan goes a step further by integrating with other local engines like LM Studio and Ollama.
- Ollama v0.1.33
-
Hindi-Language AI Chatbot for Enterprises Using Qdrant, MLFlow, and LangChain
```shell
# install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# get the llama3 model
ollama pull llama3

# install MLFlow
pip install mlflow
```
-
Create an AI prototyping environment using Jupyter Lab IDE with Typescript, LangChain.js and Ollama for rapid AI prototyping
Ollama for running LLMs locally
-
Setup Llama 3 using Ollama and Open-WebUI
```shell
curl -fsSL https://ollama.com/install.sh | sh
```
-
Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
Streaming is not a problem (it's just a simple flag: https://github.com/wiktor-k/llama-chat/blob/main/index.ts#L2...) but I've never used voice input.
The examples show image input though: https://github.com/ollama/ollama/blob/main/docs/api.md#reque...
Maybe you can file an issue here: https://github.com/ollama/ollama/issues
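The two features discussed above map to two fields on Ollama's `POST /api/generate` endpoint: the `stream` flag and a base64-encoded `images` list (per the docs/api.md link). A minimal sketch of building such a request; the model name `llava` and the image bytes are just placeholders:

```python
import base64
import json

def build_generate_request(model, prompt, image_bytes=None, stream=False):
    """Build the JSON body for Ollama's POST /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": stream}
    if image_bytes is not None:
        # Multimodal models take images as base64 strings.
        payload["images"] = [base64.b64encode(image_bytes).decode("ascii")]
    return json.dumps(payload)

# To actually send it (assumes a local ollama server on the default port):
# urllib.request.urlopen("http://localhost:11434/api/generate",
#                        data=build_generate_request("llava", "describe this",
#                                                    image_bytes=png).encode())
```

With `stream=True` the server returns newline-delimited JSON chunks instead of one response object, which is the "simple flag" the comment refers to.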
-
I Said Goodbye to ChatGPT and Hello to Llama 3 on Open WebUI - You Should Too
I'm a huge fan of open-source models, especially the newly released Llama 3. Because of the performance of both the large 70B Llama 3 model and the smaller, self-hostable 8B Llama 3, I've actually cancelled my ChatGPT subscription in favor of Open WebUI, a self-hostable ChatGPT-like UI that lets you use Ollama and other AI providers while keeping your chat history, prompts, and other data local on any computer you control.
What are some alternatives?
ClickBench - ClickBench: a Benchmark For Analytical Databases
llama.cpp - LLM inference in C/C++
gpt_jailbreak_status - This is a repository that aims to provide updates on the status of jailbreaking the OpenAI GPT language model.
gpt4all - gpt4all: run open-source LLMs anywhere
data-analytics - Welcome to the Data-Analytics repository
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
flask-socketio-llm-com
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
ibis - the portable Python dataframe library
llama - Inference code for Llama models
coppermind - Instruction based LLM contextual memory manager to power custom AI personalities and chatbots
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It can generate text, audio, video, and images, and has voice-cloning capabilities.