| | emerging-trajectories | datadm |
|---|---|---|
| Mentions | 6 | 7 |
| Stars | 57 | 369 |
| Growth | - | 3.3% |
| Activity | 9.1 | 7.3 |
| Latest commit | 14 days ago | 8 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
emerging-trajectories
-
Large language models (e.g., ChatGPT) as research assistants
I think LLMs can do a lot more than people assume, but they need to be given the proper frameworks.
When was the last time a researcher, economist, etc. was given 10,000 papers and simply told "do some original work"? That's not how it works. Daniel (the author) provides some good examples where _streamlined_ work can happen, but again, this is pretty basic stuff.
To push this further, though, imagine LLMs that fill in frameworks... A few steps here: (1) do a lit review, (2) fill in the framework, (3) discuss what might be missing, and maybe even try and fill in the missing information.
I'm doing something like this with politics and economics (see: https://emergingtrajectories.com/) and it generally works well. I think with a ton more engineering, curation of knowledge bases, etc., one can get these LLMs to actually find some new "nuggets" of information.
Admittedly, it's very hard, but I think there's something there.
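The lit-review / fill-framework / find-gaps loop described above can be sketched roughly as below. This is a hypothetical illustration, not the Emerging Trajectories code: `ask_llm` is a stub standing in for any real model call, and the framework slots are made up.

```python
# Sketch of the lit-review -> fill-framework -> gap-check loop.
# `ask_llm` is a stand-in for a real model call (OpenAI, a local LLM, etc.).

def ask_llm(prompt: str) -> str:
    # Placeholder: in practice, call your LLM provider here.
    return f"[LLM response to: {prompt[:40]}...]"

def fill_framework(papers: list[str], framework: dict[str, str]) -> dict[str, str]:
    """Fill each slot of an analytical framework from a set of papers."""
    # Step 1: lit review over the source material.
    review = ask_llm("Summarize the key findings of:\n" + "\n".join(papers))
    # Step 2: fill in each framework slot from the review.
    filled = {}
    for slot, question in framework.items():
        filled[slot] = ask_llm(f"Given this review:\n{review}\nAnswer: {question}")
    # Step 3: ask what the filled framework still misses.
    filled["gaps"] = ask_llm(
        "Given the filled framework below, what is missing?\n" + str(filled)
    )
    return filled

framework = {
    "drivers": "What are the main drivers of the trend?",
    "risks": "What are the key risks?",
}
result = fill_framework(["Paper A", "Paper B"], framework)
```

The interesting engineering is in step 3: a fixed framework gives the model something concrete to fall short of, which is what surfaces the missing pieces.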
-
Ask HN: Is RAG the Future of LLMs?
RAG will have a place in the LLM world, since it's a way to obtain data/facts/info for relevant queries.
Since you asked about alternatives...
(a) "World models" where LLMs structure information into code, structured data, etc. and query those models will likely be a thing. AlphaGeometry uses this[1], and people have tried to abstract this in different ways[2].
(b) Depending on how you define RAG, knowledge graphs could be considered a form of RAG or an alternative to it. Companies like Elemental Cognition[3] are building distinct alternatives to RAG that use such graphs and give LLMs the ability to run queries on them. Another approach is to build "fact databases", where you structure observations about the world into standalone concepts/ideas/observations and reference those[4]. Again, similar to RAG, but not quite RAG as we know it today.
[1] https://deepmind.google/discover/blog/alphageometry-an-olymp...
[2] https://arxiv.org/abs/2306.12672
[3] https://ec.ai/
[4] https://emergingtrajectories.com/
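One way to picture the "fact database" idea in (b): store standalone, sourced observations and hand the relevant ones to the model as context. This is a hypothetical minimal sketch (naive keyword retrieval, made-up facts), not the Emerging Trajectories API:

```python
from dataclasses import dataclass

@dataclass
class Fact:
    """A standalone, sourced observation about the world."""
    text: str
    source: str  # URL the fact was extracted from

class FactDB:
    def __init__(self):
        self.facts: list[Fact] = []

    def add(self, text: str, source: str) -> None:
        self.facts.append(Fact(text, source))

    def search(self, query: str) -> list[Fact]:
        # Naive keyword match; a real system would use embeddings or a graph.
        terms = query.lower().split()
        return [f for f in self.facts if any(t in f.text.lower() for t in terms)]

db = FactDB()
db.add("US CPI rose 3.2% year-over-year in February 2024.", "https://example.com/cpi")
db.add("Global sea surface temperatures hit a record in 2023.", "https://example.com/sst")
hits = db.search("CPI inflation")
# Relevant facts, with citations, become the prompt context for the LLM.
context = "\n".join(f"- {f.text} [{f.source}]" for f in hits)
```

The difference from classic chunk-based RAG is that each record is a self-contained claim with provenance, rather than an arbitrary slice of a document.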
-
Long-form factuality in large language models
For those interested in using search-augmented "reasoning", I implemented something similar in Emerging Trajectories[1], an open source package that forecasts geopolitical and economic events. We extract facts[2] from various websites (Google searches, news articles, RSS feeds) and have the LLM generate a hypothesis on a metric.
We're tracking the forecasts to see how well this does for future events. For example, we're pitting the LLMs against each other to predict March 2024 CPI[3].
[1] https://emergingtrajectories.com/
[2] Sample code: https://github.com/wgryc/emerging-trajectories/blob/main/eme...
[3] https://emergingtrajectories.com/a/statement/28
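Tracking forecasts like the CPI example above boils down to scoring each model's prediction once the actual number lands. A minimal sketch, using plain absolute error for illustration (the forecast values here are made up):

```python
def score_forecasts(forecasts: dict[str, float], actual: float) -> dict[str, float]:
    """Absolute error of each model's point forecast against the actual value."""
    return {model: abs(value - actual) for model, value in forecasts.items()}

# Hypothetical point forecasts for a CPI print:
forecasts = {"model-a": 3.4, "model-b": 3.1}
errors = score_forecasts(forecasts, actual=3.5)
best = min(errors, key=errors.get)  # model with the smallest error
```

Running this over many events is what turns one-off predictions into a track record you can compare models on.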
-
Ask HN: What are some actual use cases of AI Agents?
I'm working on research agents to help with economic, financial, and political research. These agents are open source (see: https://github.com/wgryc/emerging-trajectories).
The use cases are pretty straightforward and low risk:
1. Run a Google web search.
2. Query a news API.
3. Write a document based on the above, while citing sources.
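The three steps above can be sketched as a single pipeline. `web_search`, `news_api`, and `ask_llm` are stand-ins for real integrations (a search API, a news API, an LLM provider); none of this is the actual emerging-trajectories code:

```python
def web_search(query: str) -> list[dict]:
    # Stand-in for a real web search API call.
    return [{"title": "Result for " + query, "url": "https://example.com/1"}]

def news_api(query: str) -> list[dict]:
    # Stand-in for a real news API call.
    return [{"title": "News about " + query, "url": "https://example.com/2"}]

def ask_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return "Draft report citing the sources above."

def research_agent(topic: str) -> str:
    # Steps 1-2: gather sources; step 3: write a cited document from them.
    sources = web_search(topic) + news_api(topic)
    citations = "\n".join(f"[{i + 1}] {s['url']}" for i, s in enumerate(sources))
    draft = ask_llm(f"Write a report on {topic}, citing only:\n{citations}")
    return draft + "\n\nSources:\n" + citations

report = research_agent("July 2024 temperature record")
```

Restricting the model to cite only from the gathered sources is what keeps this low risk: every claim in the writeup can be traced back and checked.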
Here's an example of something written yesterday, where I'm forecasting whether July 2024 will be the hottest on record: https://emergingtrajectories.com/a/forecast/74
This is working well in that the writeups are great and there are some "aha" moments, like the agent finding and referencing the National Snow and Ice Data Center (NSIDC)... Very cool! I wouldn't have thought of it.
Then there's the part where the agent also tells me that the Oregon Department of Transportation has holidays during the summer, which doesn't matter at all.
So, YMMV, as they say... But I am more productive with these agents. I wouldn't publish anything formally without confirming and reviewing the content, though.
-
Ask HN: What have you built with LLMs?
LLM agents to forecast geopolitical and economic events.
- Site: https://emergingtrajectories.com/
- GitHub repo: https://github.com/wgryc/emerging-trajectories
I've helped a number of companies build various sorts of LLM-powered apps (chatbots mainly) and found it interesting but not incredibly inspiring. The above is my attempt to build something no one else is working on.
It's been a lot of fun. Not sure if it'll be a "thing" ever, but I enjoy it.
datadm
-
Ask HN: What have you built with LLMs?
We've made a lot of data tooling things based on LLMs, and are in the process of rebranding and launching our main product.
1. sketch (in notebook, ai for pandas) https://github.com/approximatelabs/sketch
2. datadm (open source, "chat with data", with support for open-source LLMs) https://github.com/approximatelabs/datadm
3. Our main product: julyp. https://julyp.com/ (currently under very active rebrand and cleanup) -- a "chat with data" style app with a lot of specialized features. I'm also streaming myself using it (and sometimes building it) every weekday on Twitch to solve misc data problems (https://www.twitch.tv/bluecoconut)
For your next question, about the stack and deploy:
-
A LLM+OLAP Solution
From making a few variations on data chatbots in the past year, I've found that my favorite / most fun-to-use ones tend to be more "chain-of-thought" and conversational rather than "retrieval-augmented" in style.
It's less about one-shotting the answer and more about showing its work and, if it errors, letting it self-correct. Latency goes up, but the quality of the entire conversation also goes up, and it feels like it builds more trust with the user. Key steps are asking it to "check its work" and watching it work through new code, etc. (I open-sourced one version of this: https://github.com/approximatelabs/datadm, which can be run entirely locally / privately.)
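The "let it self-correct" loop described above can be sketched roughly like this. `fix_code` stands in for the model call that receives the traceback; the toy fixer here is hard-coded purely to keep the sketch self-contained and runnable:

```python
import traceback

def run_with_self_correction(code: str, fix_code, max_tries: int = 3):
    """Execute generated code; on failure, hand the traceback back for a fix."""
    for attempt in range(max_tries):
        scope: dict = {}
        try:
            exec(code, scope)
            return scope.get("result")
        except Exception:
            # Show the model its own error and ask for corrected code.
            code = fix_code(code, traceback.format_exc())
    raise RuntimeError("could not produce working code")

# Stand-in "LLM" that repairs a known bug, so the example runs without a model:
def toy_fixer(code: str, error: str) -> str:
    return code.replace("1 / 0", "1 / 2")

result = run_with_self_correction("result = 1 / 0", toy_fixer)
```

The latency cost mentioned above comes from these extra round trips, but each retry carries the real traceback, which is exactly the context the model needs to converge.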
From their article: I'm surprised they got something working well by going through an intermediate DSL -- that's moving even further away from the source material the LLMs are trained on, so it's an entirely new thing to either teach or assume is part of the in-context learning.
All that said, it's interesting: I'll definitely have to try out tencentmusic/supersonic and see how it feels myself.
-
How to Use AI to Do Stuff: An Opinionated Guide
Pretty good examples and simple explanations. I didn't realize Claude 2 was so good at working with PDFs natively. I wonder if they're doing anything special, or if it's just due to the larger context length they have?
Also, biased opinion on my part: I'm especially interested in watching how these things affect data science and data literacy as a whole. Code interpreter is a game changer in my opinion: the most powerful tool that somehow isn't getting as much press as I think it deserves. I released an open source code interpreter for data (https://github.com/approximatelabs/datadm), and even though I know how to code and use Jupyter daily, I still find myself doing analysis with it instead.
All in all, it does seem like different models and agents gaining "specialization" skills is actually good for the user (rather than just using a single jack-of-all-trades super chat model). Even though GPT-4 takes the language model crown, there's still specialization that matters and improves quality for different tasks, as discussed here.
I wonder if in 2-5 years we'll all use "a single" AI chat interface for everything, or whether every specialization continues to "win at its own vertical" and we just have AI embedded inside of every app.
- Show HN: Self-hostable open-source code interpreter with open-model support
- DataDM – Search and analyze datasets with LLMs
-
Microsoft Bringing OpenAI’s GPT-4 AI Model to US Government Agencies
I completely agree that greatly increasing data accessibility is a huge unlock and value add.
A package I open-sourced recently might be useful for use cases like this: https://github.com/approximatelabs/datadm. It's essentially a ChatGPT code interpreter, specifically designed to work with data, that can be run entirely on open models (e.g. StarChat). True local-mode operation.
-
I made a tool for talking with your data via LLMs: DataDM. An open source code-interpreter you can use today: it supports running with GPT-4 as well as local models for keeping your data completely private
Here's the github repo https://github.com/approximatelabs/datadm
What are some alternatives?
ClickBench - ClickBench: a Benchmark For Analytical Databases
gpt_jailbreak_status - This is a repository that aims to provide updates on the status of jailbreaking the OpenAI GPT language model.
data-analytics - Welcome to the Data-Analytics repository
flask-socketio-llm-com
ibis - the portable Python dataframe library
coppermind - Instruction based LLM contextual memory manager to power custom AI personalities and chatbots
Language-games - Dead simple games made with word vectors.
CX_DB8 - a contextual, biasable, word-or-sentence-or-paragraph extractive summarizer powered by the latest in text embeddings (Bert, Universal Sentence Encoder, Flair)