spider
calishot
spider
- An open source DuckDB text to SQL LLM
-
Test adventureworks questions to validate self-service tool?
Hey all - I'm currently working on a self-service, natural language BI tool that aims to go beyond the base "text to sql" of current tools. I've got the bones built, but I'm struggling to develop a suite of test questions (ideally with complex metrics like "What is our profitability" or abstract concepts like "How are our sales doing"). Does anyone know of any lists of questions (and ideally answers on the quantitative questions) for the MS adventureworks database, or any other complex (30+ table) public test databases? I've looked at Spider, but most of the datasets are too small to simulate real-world business datasets, and the questions are more "can you write fancy SQL" and less "can you answer a vague stakeholder question on unknown data".
-
Show HN: Dataherald AI – Natural Language to SQL Engine
Hi HN community. We are excited to open source Dataherald’s natural-language-to-SQL engine today (https://github.com/Dataherald/dataherald). This engine allows you to set up an API from your structured database that can answer questions in plain English.
GPT-4 class LLMs have gotten remarkably good at writing SQL. However, out-of-the-box LLMs and existing frameworks would not work with our own structured data at the necessary quality level. For example, given the question “what was the average rent in Los Angeles in May 2023?” a reasonable human would either assume the question is about Los Angeles, CA or would confirm the state with the question asker in a follow-up. However, an LLM translates this to:
select price from rent_prices where city="Los Angeles" AND month="05" AND year="2023"
This pulls data for Los Angeles, CA and Los Angeles, TX without getting columns to differentiate between the two. You can read more about the challenges of enterprise-level text-to-SQL in this blog post I wrote on the topic: https://medium.com/dataherald/why-enterprise-natural-languag...
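A runnable sketch of the ambiguity described above, using an in-memory SQLite table; the schema and rows are illustrative assumptions, not the actual rent dataset:

```python
import sqlite3

# Hypothetical rent_prices table with two "Los Angeles" rows in different states.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE rent_prices (city TEXT, state TEXT, month TEXT, year TEXT, price REAL)"
)
conn.executemany(
    "INSERT INTO rent_prices VALUES (?, ?, ?, ?, ?)",
    [
        ("Los Angeles", "CA", "05", "2023", 2800.0),
        ("Los Angeles", "TX", "05", "2023", 1100.0),
    ],
)

# The naive LLM-generated query conflates both cities:
naive = conn.execute(
    "SELECT price FROM rent_prices WHERE city='Los Angeles' AND month='05' AND year='2023'"
).fetchall()
print(len(naive))  # 2 rows -- CA and TX mixed together

# Filtering on the state column disambiguates:
fixed = conn.execute(
    "SELECT AVG(price) FROM rent_prices WHERE city='Los Angeles' AND state='CA' "
    "AND month='05' AND year='2023'"
).fetchone()
print(fixed[0])  # 2800.0
```

The fix is trivial once you know the `state` column exists, which is exactly the schema context an out-of-the-box LLM lacks.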
Dataherald comes with “batteries included.” It has best-in-class implementations of core components, including a state-of-the-art NL-to-SQL agent and an LLM-based SQL-accuracy evaluator. The architecture is modular, allowing these components to be easily replaced. It’s easy to set up and use with major data warehouses.
There is a “Context Store” where information (NL-to-SQL examples, schemas, and table descriptions) is stored and used in the LLM prompts, so the engine gets better with usage. And we even made it fast!
This version allows you to easily connect to PG, Databricks, BigQuery or Snowflake and set up an API for semantic interactions with your structured data. You can then add business and data context that are used for few-shot prompting by the engine.
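To illustrate the few-shot prompting idea in general terms, here is a minimal sketch of assembling a prompt from schema text and stored question/SQL pairs; the layout, schema, and example pairs are made up for illustration, not Dataherald's actual prompt format:

```python
# Hypothetical few-shot prompt builder: prepend schema context and stored
# NL-to-SQL example pairs before the user's question.
def build_prompt(schema: str, examples: list[tuple[str, str]], question: str) -> str:
    shots = "\n\n".join(f"Question: {q}\nSQL: {sql}" for q, sql in examples)
    return (
        f"Given the schema:\n{schema}\n\n"
        f"{shots}\n\n"
        f"Question: {question}\nSQL:"
    )

schema = "rent_prices(city, state, month, year, price)"
examples = [
    (
        "average rent in Austin, TX in May 2023",
        "SELECT AVG(price) FROM rent_prices WHERE city='Austin' AND state='TX' "
        "AND month='05' AND year='2023'",
    ),
]
prompt = build_prompt(schema, examples, "average rent in Los Angeles, CA in May 2023")
```

The point of the stored examples is that each answered question becomes a new few-shot exemplar, which is how the engine improves with usage.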
The NL-to-SQL agent in this open source release was developed by our own Mohammadreza Pourreza, whose DIN-SQL algorithm currently tops the Spider (https://yale-lily.github.io/spider) and Bird (https://bird-bench.github.io/) NL-to-SQL benchmarks. In our own internal benchmarking, this agent has outperformed the Langchain SQLAgent by anywhere from 12% to 250% (depending on the provided context), while being only ~15s slower on average.
Needless to say, this is an early release and the codebase is under swift development. We would love for you to try it out and give us your feedback! And if you are interested in contributing, we’d love to hear from you!
-
Thoughts on using GPT tools with databases
This is an active field of research. You might want to look at the main challenge dataset for it : https://yale-lily.github.io/spider. It would be interesting to use ChatGPT's model as a pre-processor and then feed its output in to a more finetuned model like PICARD.
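The two-stage pipeline suggested here can be sketched as follows; both model calls are stubbed placeholders, since the actual ChatGPT and PICARD interfaces are outside this snippet:

```python
# Hypothetical two-stage pipeline: a general LLM rewrites the question into an
# unambiguous form, then a fine-tuned text-to-SQL model translates it.
def preprocess(question: str) -> str:
    # Stand-in for a ChatGPT call that resolves ambiguity / normalizes phrasing.
    return question.replace("LA", "Los Angeles, CA")

def text_to_sql(question: str) -> str:
    # Stand-in for a constrained text-to-SQL decoder such as PICARD.
    return f"-- SQL for: {question}"

sql = text_to_sql(preprocess("average rent in LA in May 2023"))
```

The division of labor is the interesting part: the general model handles vague phrasing, while the fine-tuned model handles schema-faithful SQL generation.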
-
It’s Like GPT-3 but for Code–Fun, Fast, and Full of Flaws
We tried using OpenAI/Davinci for SQL query authoring, but it quickly became obvious that we are still really far from something the business could find value in. The state of the art as described below is nowhere near where we would need it to be:
https://yale-lily.github.io/spider
https://arxiv.org/abs/2109.05093
https://github.com/ElementAI/picard
To be clear, we haven't tried this on actual source code (i.e. procedural concerns), so I feel like this is a slightly different battle.
The biggest challenge I see is that the queries we would need the most assistance with are the same ones that are the rarest to come by in terms of training data. They are also incredibly specific in the edge cases, many times requiring subjective evaluation criteria to produce an acceptable outcome (i.e. recursive query vs 5k lines of unrolled garbage).
-
Ask HN: Fake real-world databases to test SQL queries? SaaS, paid service?
I've been looking for databases with real-world schemas and faker data (e.g. 10,000 entries of fake users) to test my natural language to SQL generative model, as well as the efficiency of the generated queries.
The closest thing I can find is annotated datasets like Spider (https://yale-lily.github.io/spider), but after digging more into it, it's not as real-world-ish as I'd hoped for.
Are there any SaaS offerings, paid services, etc., where I can get access to databases with complex real-world(-ish) schemas, populated with real-world-ish data?
Thanks!
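Generating the fake rows yourself is one fallback. A minimal sketch using only the standard library and an in-memory SQLite database; the schema and name pools are illustrative, and in practice a library like Faker gives far more plausible values:

```python
import random
import sqlite3

# Populate a toy users table with 10,000 fake rows (schema is an assumption).
random.seed(0)  # deterministic for repeatable tests
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, signup_year INTEGER)"
)

first = ["Ada", "Grace", "Alan", "Edsger", "Barbara"]
last = ["Lovelace", "Hopper", "Turing", "Dijkstra", "Liskov"]
rows = [
    (i, f"{random.choice(first)} {random.choice(last)}", random.randint(2015, 2023))
    for i in range(10_000)
]
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 10000
```

This only solves the data-volume half of the question; a realistically messy 30+ table schema is the part no stdlib generator will give you.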
-
Show HN: Describe SQL using natural language, and execute against real data
There are projects out there that do this.
Possibly relevant: https://yale-lily.github.io/spider
I briefly worked on a startup to commercialize this tech, but we decided it wasn't accurate enough to be useful. It was very cool when it actually worked. If you can only produce what you want half the time on simple queries, that doesn't seem very useful to me though.
- Do you see SQL being under threat in any way as a way of querying databases? I know it's possibly a dumb question, but I'm wondering.
- [R] Facebook AI Introduces ‘Neural Databases’, A New Approach Which Enables Machines to Search Unstructured Data and Connect The Fields of Databases and NLP
-
What is the significance of gold files in NLP to SQL datasets like Spider and SParC?
There is very little description available in the Spider dataset research paper, which says that these files are used for value-specific queries. Does that mean gold.sql files should only contain queries with value checks (for example: SELECT * FROM table WHERE student_name = 'Student_A')? If that's the case, there are many instances in gold files without actual values from the dataset (for example: SELECT COUNT(*) FROM table). Thanks.
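One common way gold queries are used, whether or not they contain literal values, is execution-based comparison: run the predicted and gold SQL and compare result sets. A minimal sketch of that idea; note this is an illustration, not Spider's official evaluation script (which also does component-level exact-set matching):

```python
import sqlite3

# Toy database matching the example in the question above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (student_name TEXT)")
conn.executemany("INSERT INTO student VALUES (?)", [("Student_A",), ("Student_B",)])

def same_result(pred_sql: str, gold_sql: str) -> bool:
    # Execute both queries and compare their (order-insensitive) result sets.
    pred = sorted(conn.execute(pred_sql).fetchall())
    gold = sorted(conn.execute(gold_sql).fetchall())
    return pred == gold

# A value-free gold query still works as a reference under this scheme:
ok = same_result("SELECT COUNT(*) FROM student", "SELECT COUNT(student_name) FROM student")
print(ok)  # True
```

Under execution comparison, gold files don't need literal values at all; the values matter mainly when the evaluator checks predicted WHERE-clause constants against the gold ones.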
calishot
-
CALISHOT 2022-01: Find ebooks among 373 Calibre sites this month
Other resources will be added to it soon: a wiki, tips, the datasets, the original calibres, and some news about related tools like calisuck and calishot, which are now merging into a single new project to be released soon.
-
CALISHOT 2021-06: Find ebooks among 383 Calibre sites
If you want to build the db with your own list of servers, here is the Python project on GitHub, with the commands to run against your own list.
-
Need help with an OD indexer that I am writing in Python
This way you can also evolve your application to become async. Since you're using requests rather than aiohttp, may I suggest using gevent with a pool of parallel requests (not too many, ~10). You can look at this file as an example.
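The pooled-parallelism pattern suggested here can be sketched with the stdlib's `ThreadPoolExecutor` in place of gevent (same idea: bound concurrency at around 10 workers); `fetch()` is a stub standing in for a real `requests.get` call so the example runs without network access:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> str:
    # Placeholder for requests.get(url).text in a real crawler.
    return f"<html>{url}</html>"

# Hypothetical list of open-directory URLs to index.
urls = [f"http://example.com/dir/{i}" for i in range(25)]

# Cap concurrency at ~10 so the target server isn't hammered.
with ThreadPoolExecutor(max_workers=10) as pool:
    pages = list(pool.map(fetch, urls))  # preserves input order

print(len(pages))  # 25
```

gevent's `Pool.map` gives the same bounded-concurrency shape with green threads instead of OS threads; either way, the cap is the part that matters for a polite crawler.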
What are some alternatives?
lux - Automatically visualize your pandas dataframe via a single print! 📊 💡
open-directory-downloader - A NodeJS wrapper around KoalaBear84/OpenDirectoryDownloader
LLMStack - No-code platform to build LLM Agents, workflows and applications with your data
demeter - Demeter is a tool for scraping the calibre web ui
DiskCache - Python disk-backed cache (Django-compatible). Faster than Redis and Memcached. Pure-Python.
odcrawler-scanner - A reddit bot that scans ODs over at /r/OpenDirectories and submits the results to the ODCrawler discovery server
sqlcoder - SoTA LLM for converting natural language questions to SQL queries
spider - spider is an OD crawler that crawls through opendirectories and indexes the urls
lux - 👾 Fast and simple video download library and CLI tool written in Go
webextension-polyfill-ts - This is a TypeScript ready "wrapper" for the WebExtension browser API Polyfill by Mozilla
ODmovieindexer - Extract and index movie information of movies found in open directories posted on r/opendirectories.