| | evadb | jsonformer |
|---|---|---|
| Mentions | 27 | 25 |
| Stars | 2,578 | 3,816 |
| Growth | 0.9% | - |
| Activity | 9.5 | 5.4 |
| Last Commit | 16 days ago | 3 months ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
evadb
-
Show HN: Stargazers Reloaded – LLM-Powered Analyses of Your GitHub Community
Hey friends!
We have built an app for getting insights about your favorite GitHub community using large language models.
The app uses LLMs to analyze the GitHub profiles of users who have starred the repository, capturing key details like the topics they are interested in. It takes screenshots of each stargazer's GitHub webpage, extracts text using an OCR model, and then uses LLMs to pull out the insights embedded in that text.
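To make the pipeline concrete, here is a simplified sketch of the screenshot → OCR → LLM step in plain Python (the real app drives this through EvaDB; the file name, model, and prompt below are illustrative assumptions, not the app's actual code):

```python
# Sketch only: OCR a saved profile screenshot, then ask an LLM for insights.
# pytesseract and the OpenAI client stand in for the app's actual pipeline.
from PIL import Image
import pytesseract
from openai import OpenAI

# 1. Extract raw text from a screenshot of a stargazer's GitHub page.
text = pytesseract.image_to_string(Image.open("stargazer_profile.png"))

# 2. Use an LLM to distill the insights buried in the noisy OCR output.
client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "From this GitHub profile text, list the technical "
                   f"topics the user is interested in:\n\n{text}",
    }],
)
print(resp.choices[0].message.content)
```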
This app is inspired by the “original” Stargazers app written by Spencer Kimball (CEO of CockroachDB). While the original app exclusively used the GitHub API, this LLM-powered app built using EvaDB additionally extracts insights from unstructured data obtained from the stargazers’ webpages.
Our analysis of the fast-growing GPT4All community showed that the majority of the stargazers are proficient in Python and JavaScript, and 43% of them are interested in Web Development. Web developers love open-source LLMs!
We found that directly using GPT-4 to generate the “golden” table is super expensive: it costs $60 to process the information of 1,000 stargazers. To maintain accuracy while also reducing cost, we set up an LLM model cascade in a SQL query (running GPT-3.5 before GPT-4), which lowers the cost to $5.50 for analyzing 1,000 GitHub stargazers.
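As a rough illustration, the cascade boils down to a simple routing rule like the Python sketch below (the app expresses this inside a SQL query; the escalation test here is an illustrative guess, not the app's actual logic):

```python
# Sketch of an LLM model cascade: try the cheap model first and escalate
# to the expensive one only when the cheap answer looks unusable.
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def cascade(prompt: str) -> str:
    draft = ask("gpt-3.5-turbo", prompt)       # cheap first pass
    if not draft.strip() or "i don't know" in draft.lower():
        return ask("gpt-4", prompt)            # expensive model for hard cases
    return draft
```

Because most rows never reach GPT-4, the per-stargazer cost falls roughly in proportion to the escalation rate.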
We’ve been working on this app for a month now and are excited to open source it today :)
Some useful links:
* Blog Post - https://medium.com/evadb-blog/stargazers-reloaded-llm-powere...
* GitHub Repository - https://github.com/pchunduri6/stargazers-reloaded/
* EvaDB - https://github.com/georgia-tech-db/evadb
Please let us know what you think!
-
Language Model UXes in 2027
The Discord link seems to be broken. Just a heads up.
The YOLO example on your GitHub page is super interesting. We are finding it easier to get LLMs to write functions with a more constrained function interface in EvaDB. Here is an example of a YOLO function in EvaDB: https://github.com/georgia-tech-db/evadb/blob/staging/evadb/....
Once the function is loaded, it can be used in queries in this way:
SELECT id, Yolo(data) FROM MyVideo; -- 'MyVideo' is a placeholder name for a loaded video table
- EvaDB: Bring AI to your Database System
- Show HN: I wrote a RDBMS (SQLite clone) from scratch in pure Python
-
Gorilla: Large Language Model Connected with APIs
Neat idea, @shishirpatil! We are developing EvaDB [1] for shipping simpler, faster, and cost-effective AI apps. Can you share your thoughts on transforming the output of the Gorilla LLM to functions in EvaDB apps -- like this function that uses the HuggingFace API -- https://evadb.readthedocs.io/en/stable/source/tutorials/07-o...?
[1] https://github.com/georgia-tech-db/eva
- PrivateGPT in SQL
-
Eva AI-Relational Database System
Thanks for checking! Currently, we have a Docker image for deploying EVA [1]. We plan to release a Terraform config soon that will make it easier to deploy EVA DB on an AWS/Azure server with GPUs.
[1] https://github.com/georgia-tech-db/eva/tree/master/docker
-
This week's top indie A.I projects, launches and resources
EVA AI-Relational Database System; build simpler and faster AI-powered apps
- Show HN: EVA – AI-Relational Database System
jsonformer
- Forcing AI to Follow a Specific Answer Pattern Using GBNF Grammar
-
Refact LLM: New 1.6B code model reaches 32% HumanEval and is SOTA for the size
- Tools like jsonformer https://github.com/1rgs/jsonformer are not possible with OpenAI's API.
-
Show HN: LLMs can generate valid JSON 100% of the time
How does this compare in terms of latency, cost, and effectiveness to jsonformer? https://github.com/1rgs/jsonformer
-
Ask HN: Explain how size of input changes ChatGPT performance
You're correct in your interpretation of how the model works w.r.t. returning tokens one at a time. The model returns one token, and the entire context window gets shifted right by one to account for it when generating the next one.
As for model performance at different context sizes, it seems a bit complicated. From what I understand, even if models are tweaked (for example using the SuperHOT RoPE hack or sparse attention) to be able to use longer contexts, they still have to be fine-tuned on inputs of this increased length to actually utilize it, but performance seems to degrade regardless as input length increases.
For your question about fine-tuning models to respond with only "yes" or "no", I recommend looking into how the jsonformer library works: https://github.com/1rgs/jsonformer . Essentially, you still let the model score every candidate token for the next position, and only accept the ones that satisfy certain criteria (such as the token for "yes" and the token for "no").
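A minimal sketch of that token-filtering idea with Hugging Face transformers (GPT-2 is just a small stand-in model here; this captures the spirit of jsonformer, not its actual code):

```python
# Sketch: score the next token as usual, but only let the allowed
# answer tokens compete for the argmax.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Question: Is the sky blue? Answer:"
inputs = tokenizer(prompt, return_tensors="pt")

# " yes" / " no" (with a leading space) are single GPT-2 tokens.
allowed_ids = [tokenizer.encode(" yes")[0], tokenizer.encode(" no")[0]]

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for every next token

masked = torch.full_like(logits, float("-inf"))
masked[allowed_ids] = logits[allowed_ids]    # keep only the allowed tokens
print(tokenizer.decode(int(masked.argmax())).strip())  # "yes" or "no", nothing else
```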
You can do this with the OpenAI API too, using tiktoken https://twitter.com/AAAzzam/status/1669753722828730378?t=d_W... . Be careful though, as results will differ across different selections of tokens: "YES", "Yes", "yes", etc. are all different tokens, to the best of my knowledge.
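Here is a sketch of that OpenAI-API variant with tiktoken and logit_bias, assuming "yes" and "no" each encode to a single token for this model:

```python
# Sketch: bias generation so the model can effectively only answer yes/no.
import tiktoken
from openai import OpenAI

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
yes_id = enc.encode("yes")[0]   # assumes "yes" is a single token
no_id = enc.encode("no")[0]     # assumes "no" is a single token

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Is the sky blue? Answer yes or no."}],
    max_tokens=1,                           # exactly one token back
    logit_bias={yes_id: 100, no_id: 100},   # +100 all but forces these tokens
)
print(resp.choices[0].message.content)
```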
- A framework to securely use LLMs in companies – Part 1: Overview of Risks
-
LLMs for Schema Augmentation
From here, we just need to continue generating tokens until we get to a closing quote. This approach is borrowed from Jsonformer, which uses a similar technique to induce LLMs to generate structured output. Continuing to do so for each property using Replit's code LLM produces the complete structured output.
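A rough sketch of that generate-until-a-closing-quote loop, using greedy decoding with a small Hugging Face model (this mirrors the idea only; it is not Jsonformer's or Replit's actual implementation):

```python
# Sketch: decode one token at a time and stop at the first '"',
# which closes the JSON string being generated.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def complete_string(prompt: str, max_tokens: int = 40) -> str:
    """Greedy-decode token by token, stopping at the first closing quote."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    pieces = []
    for _ in range(max_tokens):
        with torch.no_grad():
            next_id = model(ids).logits[0, -1].argmax()
        piece = tokenizer.decode(int(next_id))
        if '"' in piece:                      # closing quote: the string is done
            pieces.append(piece.split('"')[0])
            break
        pieces.append(piece)
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    return "".join(pieces)

# e.g. completing the value of one JSON string property:
print(complete_string('{"name": "'))
```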
-
Doesn't a 4090 massively overpower a 3090 for running local LLMs?
https://github.com/1rgs/jsonformer or https://github.com/microsoft/guidance may help get better results, but I ended up with a bit more of a custom solution.
-
“Sam altman won't tell you that GPT-4 has 220B parameters and is 16-way mixture model with 8 sets of weights”
I think function calling is just JSONformer idk: https://github.com/1rgs/jsonformer
- Inference Speed vs. Quality Hacks?
-
Best bet for parseable output?
jsonformer: https://github.com/1rgs/jsonformer
What are some alternatives?
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
emdash - 📚🧙‍♂️ Wisdom indexer — use AI to organize text snippets so you can actually remember & learn from what you read
aider - aider is AI pair programming in your terminal
MindsDB - The platform for customizing AI from enterprise data
clownfish - Constrained Decoding for LLMs against JSON Schema
gpt-json - Structured and typehinted GPT responses in Python
outlines - Structured Text Generation
steampipe - Zero-ETL, infinite possibilities. Live query APIs, code & more with SQL. No DB required.
jikkou - The Open source Resource as Code framework for Apache Kafka