|             | gpu_poor     | instructor  |
|-------------|--------------|-------------|
| Mentions    | 3            | 17          |
| Stars       | 646          | 5,417       |
| Growth      | -            | -           |
| Activity    | 8.3          | 9.8         |
| Last commit | 6 months ago | 2 days ago  |
| Language    | JavaScript   | Python      |
| License     | -            | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpu_poor
- Ask HN: Cheapest way to run local LLMs?
Here's a simple calculator for LLM inference requirements: https://rahulschand.github.io/gpu_poor/
- How many token/s can I get? A simple GitHub tool to see the token/s you can get for an LLM
- Show HN: Can your LLM run this?
instructor
- Instructor: Structured Outputs for LLMs
- Anthropic's Haiku Beats GPT-4 Turbo in Tool Use
Ah yes. Have you tried out instructor [0] or Guidance [1]?
[0]: https://github.com/jxnl/instructor/
- Instructor: Structured Data Like JSON from Large Language Models
- Show HN: Fructose, LLM calls as strongly typed functions
Good stuff. How does this compare to Instructor? I’ve been using this extensively
https://jxnl.github.io/instructor/
- Show HN: Ellipsis – Automatic pull request reviews
It's super cool! Check out how the Instructor repo uses it to keep various parts of their docs in sync: https://github.com/jxnl/instructor/blob/main/ellipsis.yaml
- Pushing ChatGPT's Structured Data Support to Its Limits
I've been using the instructor[1] library recently and have found the abstractions simple and extremely helpful for getting great structured outputs from LLMs with pydantic.
[1] https://github.com/jxnl/instructor/tree/main
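The pattern that comment describes, a pydantic model acting as the typed contract for an LLM's output, can be sketched as follows. The `Recipe` schema and the JSON string standing in for a model response are both hypothetical, invented for illustration:

```python
from pydantic import BaseModel, Field

class Recipe(BaseModel):
    """Hypothetical schema for the structured output we want from an LLM."""
    name: str
    servings: int = Field(gt=0)
    ingredients: list[str]

# Stand-in for the raw JSON text an LLM might return when prompted
llm_output = '{"name": "Pancakes", "servings": 4, "ingredients": ["flour", "eggs", "milk"]}'

# pydantic parses and validates in one step; an out-of-range or
# missing field would raise a ValidationError instead of passing silently
recipe = Recipe.model_validate_json(llm_output)
print(recipe.servings)  # 4
```

Libraries like instructor build on exactly this: the pydantic model doubles as the schema sent to the LLM and as the validator for what comes back.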
- Efficiently using python in GPTs
Maybe try using Jason Liu's instructor package (https://github.com/jxnl/instructor) to structure the outputs with pydantic? It's explained in his presentation from the AI Engineer Summit (https://youtu.be/yj-wSRJwrrc)
- Ask HN: Cheapest way to run local LLMs?
One of the most powerful ways to integrate LLMs with existing systems is constrained generation. Libraries such as outlines [1] and instructor [2] let you specify the structure of the expected output as regex patterns, simple types, JSON Schema, or pydantic models.
These outputs often consume significantly fewer tokens than chat or text completion.
[1] https://github.com/outlines-dev/outlines
[2] https://github.com/jxnl/instructor
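A toy sketch of the token-masking idea behind constrained generation (an illustration of the concept only, not how outlines or instructor actually implement it): at each decoding step, filter the vocabulary to the tokens whose continuation still matches the target pattern. The vocabulary and pattern here are invented:

```python
import re

def allowed_next_tokens(prefix: str, vocab: list[str], pattern: str) -> list[str]:
    """Toy constrained-decoding step: keep only vocabulary tokens whose
    continuation still fully matches the pattern. This shortcut only works
    for patterns where every prefix of a match also matches (e.g. [0-9]*);
    real libraries track the regex automaton's state instead."""
    regex = re.compile(pattern)
    return [tok for tok in vocab if regex.fullmatch(prefix + tok)]

# Partial output so far is "3"; only digit continuations survive the mask
vocab = ["7", "42", "cat", "-1", " "]
print(allowed_next_tokens("3", vocab, r"[0-9]*"))  # ['7', '42']
```

Because invalid tokens are masked out before sampling, the model never wastes tokens on prose scaffolding, which is why these outputs tend to be cheaper than free-form chat completions.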
- OpenAI Function Calls for Humans
- Unbounded Books: Search by ~Vibes
The best GPT-wrapper you’ll see today?
...but this one hasn't raised oodles of cash.
Mike (creator) here, excited to hear what HN-folks think. Anything to add/improve?
Had fun building, extra s/out to Railway, NextJS, and https://github.com/jxnl/instructor
Check it out: https://www.unboundedbooks.com/
What are some alternatives?
LLamaStack - ASP.NET Core Web, WebApi & WPF implementations for LLama.cpp & LLamaSharp
langchainjs - 🦜🔗 Build context-aware reasoning applications 🦜🔗
chatd - Chat with your documents using local AI
simpleaichat - Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
llama.net - .NET wrapper for LLaMA.cpp for LLaMA language model inference on CPU. 🦙
chatgpt-localfiles - Make local files accessible to ChatGPT
chitchat - A simple LLM chat front-end that makes it easy to find, download, and mess around with models on your local machine.
PythonGPT - PythonGPT writes and indexes code to implement dynamic code execution using generative models. Younger sibling of DoctorGPT.
Pacha - A TUI (text user interface) frontend for llama.cpp, written in JavaScript with the blessed library, providing a simple way to run inference with local language models.
httpx - A next generation HTTP client for Python. 🦋
code-llama-for-vscode - Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
outlines - Structured Text Generation