tool-use-benchmark vs instructor

| | tool-use-benchmark | instructor |
|---|---|---|
| Mentions | 1 | 19 |
| Stars | 4 | 5,417 |
| Growth | - | - |
| Activity | 5.9 | 9.8 |
| Latest commit | about 1 month ago | 6 days ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tool-use-benchmark
-
Anthropic's Haiku Beats GPT-4 Turbo in Tool Use
No fine-tuning. It looks like he's testing raw model capabilities with a simple prompt. Repo: https://github.com/parea-ai/tool-use-benchmark
instructor
- Instructor: Structured Outputs for LLMs
-
Anthropic's Haiku Beats GPT-4 Turbo in Tool Use
Ah yes. Have you tried out instructor [0] or Guidance [1]?
[0]: https://github.com/jxnl/instructor/
- Instructor: Structured Data Like JSON from Large Language Models
-
Show HN: Fructose, LLM calls as strongly typed functions
Good stuff. How does this compare to Instructor? I’ve been using it extensively:
https://jxnl.github.io/instructor/
-
Show HN: Ellipsis – Automatic pull request reviews
It's super cool! Check out how the Instructor repo uses it to keep various parts of their docs in sync: https://github.com/jxnl/instructor/blob/main/ellipsis.yaml
-
Pushing ChatGPT's Structured Data Support to Its Limits
I've been using the instructor[1] library recently and have found the abstractions simple and extremely helpful for getting great structured outputs from LLMs with pydantic.
[1] https://github.com/jxnl/instructor/tree/main
-
Efficiently using python in GPTs
Maybe try using Jason Liu’s instructor package (https://github.com/jxnl/instructor) to structure the outputs with pydantic? It’s explained in his presentation from the AI Engineer Summit (https://youtu.be/yj-wSRJwrrc).
-
Ask HN: Cheapest way to run local LLMs?
One of the most powerful ways to integrate LLMs with existing systems is constrained generation. Libraries such as outlines[1] and instructor[2] allow structural specification of the expected outputs as regex patterns, simple types, jsonschema or pydantic models.
These outputs often consume significantly fewer tokens than chat or text completion.
[1] https://github.com/outlines-dev/outlines
[2] https://github.com/jxnl/instructor
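The comment above can be illustrated with a minimal stdlib-only sketch: the caller specifies the expected output structure up front (here an illustrative regex over a compact JSON object), then validates the model's raw completion against it. Note this only checks output after the fact; libraries like outlines enforce the pattern during generation at the token level, and instructor validates against pydantic models with retries. The completion string below is simulated, not a real model call.

```python
import json
import re

# Illustrative constraint: a bare JSON object with "city" and "temp_c" keys.
# A compact shape like this typically costs far fewer tokens than a chatty
# free-text reply.
PATTERN = re.compile(r'\{"city": "[A-Za-z ]+", "temp_c": -?\d+\}')

def parse_weather(raw: str) -> dict:
    """Reject any completion that does not match the expected structure,
    then parse the JSON payload."""
    if not PATTERN.fullmatch(raw):
        raise ValueError(f"completion does not match expected structure: {raw!r}")
    return json.loads(raw)

# Simulated model completion standing in for an LLM call.
completion = '{"city": "Oslo", "temp_c": -3}'
result = parse_weather(completion)
print(result["city"], result["temp_c"])
```

With a real constrained-generation library, the regex (or a jsonschema/pydantic model) is handed to the generator itself, so an invalid completion can never be produced in the first place.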
- OpenAI Function Calls for Humans
-
Unbounded Books: Search by ~Vibes
The best GPT-wrapper you’ll see today?
...but this one hasn't raised oodles of cash.
Mike (creator) here, excited to hear what HN-folks think. Anything to add/improve?
Had fun building, extra s/out to Railway, NextJS, and https://github.com/jxnl/instructor
Check it out: https://www.unboundedbooks.com/
What are some alternatives?
langchainjs - 🦜🔗 Build context-aware reasoning applications 🦜🔗
simpleaichat - Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
chatgpt-localfiles - Make local files accessible to ChatGPT
PythonGPT - PythonGPT writes and indexes code to implement dynamic code execution using generative models. Younger sibling of DoctorGPT.
httpx - A next generation HTTP client for Python. 🦋
outlines - Structured Text Generation