- picard: PICARD, Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models. A ServiceNow Research project started at Element AI.
- spider: scripts and baselines for Spider, the Yale complex and cross-domain semantic parsing and text-to-SQL challenge
- webextension-polyfill-ts: a TypeScript-ready wrapper for Mozilla's WebExtension browser API polyfill
We tried using OpenAI/Davinci for SQL query authoring, but it quickly became obvious that we are still really far from something the business could find value in. The state of the art as described below is nowhere near where we would need it to be:
https://yale-lily.github.io/spider
https://arxiv.org/abs/2109.05093
https://github.com/ElementAI/picard
To be clear, we haven't tried this on actual source code (i.e. procedural concerns), so I feel like this is a slightly different battle.
The biggest challenge I see is that the queries we would need the most assistance with are exactly the ones that are rarest in terms of training data. They are also incredibly specific in their edge cases, often requiring subjective evaluation criteria to produce an acceptable outcome (e.g. a recursive query vs. 5k lines of unrolled garbage).
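To illustrate the "recursive query vs. unrolled" point: a single recursive CTE walks a hierarchy of arbitrary depth, where the unrolled alternative needs one self-join per level. A minimal sketch using Python's sqlite3, with a hypothetical employees table invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
    INSERT INTO employees VALUES
        (1, 'Ada', NULL),    -- root of the hierarchy
        (2, 'Grace', 1),     -- reports to Ada
        (3, 'Alan', 2);      -- reports to Grace
""")

# Recursive CTE: start from the root, then repeatedly join in each
# person's direct reports until the hierarchy is exhausted.
rows = conn.execute("""
    WITH RECURSIVE reports(id, name) AS (
        SELECT id, name FROM employees WHERE manager_id IS NULL
        UNION ALL
        SELECT e.id, e.name
        FROM employees e JOIN reports r ON e.manager_id = r.id
    )
    SELECT name FROM reports
""").fetchall()
print([name for (name,) in rows])  # ['Ada', 'Grace', 'Alan']
```

The unrolled version of this query would need a separate `JOIN` for every level of depth, which is exactly where hand-unrolled SQL balloons into thousands of lines.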
I've written extensions before, and Mozilla maintains a very good polyfill [0] that makes it quite easy to write extensions for all browsers. It does get a bit trickier if you also want to incorporate TypeScript [1] or React, however.
[0] https://github.com/mozilla/webextension-polyfill
[1] https://github.com/Lusito/webextension-polyfill-ts