| | simpleaichat | instructor |
|---|---|---|
| Mentions | 22 | 14 |
| Stars | 3,386 | 5,292 |
| Growth | - | - |
| Activity | 8.7 | 9.8 |
| Last commit | 4 months ago | about 5 hours ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
simpleaichat
- Efficient Coding Assistant with Simpleaichat
-
Please Don't Ask If an Open Source Project Is Dead
I checked both of the issues mentioned; people have been respectful and shown empathy for the author's situation:
https://github.com/minimaxir/simpleaichat/issues/91
https://github.com/minimaxir/simpleaichat/issues/92
-
We Built an AI-Powered Magic the Gathering Card Generator
ChatGPT's June update added support for "function calling", which in practice is structured data I/O marketed very poorly: https://openai.com/blog/function-calling-and-other-api-updat...
Here's an example of using structured data for better output control (lightly leveraging my Python package to reduce LoC: https://github.com/minimaxir/simpleaichat/blob/main/examples... )
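The core of the "structured data I/O" idea is that a pydantic model doubles as both the schema sent to the API and the validator for the reply. A minimal sketch, assuming a hypothetical `MTGCard` model (not the actual schema from the linked example):

```python
from pydantic import BaseModel, Field

# Hypothetical schema for a generated card; field descriptions double as
# instructions to the model about what each slot should contain.
class MTGCard(BaseModel):
    name: str = Field(description="Card name")
    mana_cost: str = Field(description="Mana cost, e.g. '2RR'")
    card_type: str = Field(description="Card type line")
    rules_text: str = Field(description="Rules text of the card")

# The JSON Schema derived from the model is what gets sent to the API
# as the function/tool definition.
schema = MTGCard.schema()

# The model's reply is then validated back into a typed object rather
# than parsed by hand from free text.
card = MTGCard(name="Lightning Sprite", mana_cost="1R",
               card_type="Creature - Elemental", rules_text="Haste")
```

Because the reply is validated against the schema, malformed output fails loudly instead of silently producing a broken card.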
-
LangChain Agent Simulation – Multi-Player Dungeons and Dragons
So what are the alternatives to LangChain that the HN crowd uses?
I see two contenders:
https://github.com/minimaxir/simpleaichat/tree/main/simpleai...
https://github.com/griptape-ai/griptape
There is also the llm command line utility that has a very thin underlying library, but which might grow eventually:
-
Custom Instructions for ChatGPT
A fun note is that even with system prompt engineering it may not give the most efficient solution: ChatGPT still outputs the average case.
I experimented with it, and doing two passes (generate code, then "make it more efficient") works best, combined with system prompt engineering to reduce the amount of code output: https://github.com/minimaxir/simpleaichat/blob/main/examples...
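The two-pass approach described above can be sketched generically. Here `complete` stands in for any chat-completion call; the function names and prompts are illustrative, not simpleaichat's API:

```python
def two_pass_code(task, complete):
    """Generate code, then ask the model to tighten it in a second pass.

    `complete(system, prompt)` is any function returning the model's text
    reply; the system prompt biases the model toward terse, code-only output.
    """
    system = "You are a coding assistant. Reply with code only, no prose."
    draft = complete(system, task)
    improved = complete(system, f"Make this code more efficient:\n{draft}")
    return improved

# Stub model for illustration: reports which stage it was called at.
def fake_complete(system, prompt):
    return "optimized" if prompt.startswith("Make this") else "draft"

result = two_pass_code("is_palindrome(s)", fake_complete)
```

The second pass works because the model now critiques a concrete draft instead of reasoning about efficiency in the abstract.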
-
The Problem with LangChain
I played around with simpleaichat for a few minutes just now, and I really like it. Unlike LangChain, I can understand what it does in minutes, and its primitives look fairly powerful. It seems like a nice wrapper and will probably replace the `openai` library for me.
I'm especially looking forward to playing with the structured data models bit: https://github.com/minimaxir/simpleaichat/blob/main/examples...
Well done, Max!
-
How is Langchain's dev experience? Any alternatives?
https://github.com/minimaxir/simpleaichat bills itself as a simpler alternative to langchain. I have not tried it, but it looks interesting.
-
Stanford A.I. Courses
I think you are asking specifically about practical LLM engineering and not the underlying science.
Honestly this is all moving so fast you can do well by reading the news, following a few reddits/substacks, and skimming the prompt engineering papers as they come out every week (!).
https://www.latent.space/p/ai-engineer provides an early manifesto for this nascent layer of the stack.
Zvi writes a good roundup (though he is concerned mostly with alignment so skip if you don’t like that angle): https://thezvi.substack.com/p/ai-18-the-great-debate-debates
Simon W has some good writeups too: https://simonwillison.net/
I strongly recommend playing with the OpenAI APIs and working with langchain in a Colab notebook to get a feel for how these all fit together. The tools here are also incredibly simple and easy to understand (very new), so look at, say, https://github.com/minimaxir/simpleaichat/tree/main/simpleai... or https://github.com/smol-ai/developer and dig into the prompts: what goes in system vs assistant roles, how you guide the LLM, etc.
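On the "what goes in system vs assistant roles" point: the chat API's message format is just a list of role-tagged dicts, which is worth internalizing before reaching for any wrapper. A minimal sketch (stdlib only; the prompts are illustrative):

```python
# Chat-completion APIs take a list of messages, each tagged with a role.
# The system message sets behavior; user/assistant messages form the
# conversation history the model continues from.
messages = [
    {"role": "system", "content": "You are a terse Python expert."},
    {"role": "user", "content": "Reverse a string."},
    {"role": "assistant", "content": "s[::-1]"},
    {"role": "user", "content": "Now reverse a list in place."},
]

def history_as_text(messages):
    """Flatten the message list for inspection or logging."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)
```

Most of what libraries like simpleaichat do is assemble and trim exactly this list before each API call.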
-
Where is the engineering part in "prompt engineer"?
This notebook from the repo I linked to is a concise example, and the reason you would want to optimize prompts.
- Show HN: Python package for interfacing with ChatGPT with minimized complexity
instructor
- Instructor: Structured Outputs for LLMs
-
Anthropic's Haiku Beats GPT-4 Turbo in Tool Use
Ah yes. Have you tried out instructor [0] or Guidance [1]?
[0]: https://github.com/jxnl/instructor/
[1]: https://github.com/guidance-ai/guidance
- Instructor: Structured Data Like JSON from Large Language Models
-
Show HN: Fructose, LLM calls as strongly typed functions
Good stuff. How does this compare to Instructor? I've been using Instructor extensively:
https://jxnl.github.io/instructor/
-
Show HN: Ellipsis – Automatic pull request reviews
It's super cool! Check out how the Instructor repo uses it to keep various parts of their docs in sync: https://github.com/jxnl/instructor/blob/main/ellipsis.yaml
-
Pushing ChatGPT's Structured Data Support to Its Limits
I've been using the instructor[1] library recently and have found the abstractions simple and extremely helpful for getting great structured outputs from LLMs with pydantic.
1 https://github.com/jxnl/instructor/tree/main
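Roughly, instructor works by patching the OpenAI client so that calls accept a `response_model` and return a validated pydantic instance. The `UserDetail` model below is illustrative, and the client call is shown as a comment since it needs an API key:

```python
from pydantic import BaseModel

class UserDetail(BaseModel):
    name: str
    age: int

# With instructor, the patched client returns a UserDetail directly
# instead of raw JSON, e.g.:
#
#   import instructor
#   from openai import OpenAI
#   client = instructor.patch(OpenAI())
#   user = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       response_model=UserDetail,
#       messages=[{"role": "user", "content": "Jason is 25 years old"}],
#   )
#
# Validation is plain pydantic, so a malformed reply raises a
# ValidationError and can be retried:
user = UserDetail(name="Jason", age=25)
```

The abstraction stays thin because everything downstream of the API call is ordinary pydantic.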
-
Efficiently using python in GPTs
Maybe try using Jason Liu's instructor package (https://github.com/jxnl/instructor) to structure the outputs with pydantic? It's explained in his presentation from the AI Engineer Summit (https://youtu.be/yj-wSRJwrrc)
-
Ask HN: Cheapest way to run local LLMs?
One of the most powerful ways to integrate LLMs with existing systems is constrained generation. Libraries such as outlines[1] and instructor[2] allow structural specification of the expected outputs as regex patterns, simple types, JSON Schema, or pydantic models.
These outputs often consume significantly fewer tokens than chat or text completion.
[1] https://github.com/outlines-dev/outlines
[2] https://github.com/jxnl/instructor
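A toy illustration of the regex flavor of this idea, using only the standard library to check that a reply matches the expected shape (real constrained decoding in outlines restricts tokens during generation rather than validating afterward, but the contract is the same):

```python
import re

# Expected shape: an ISO date and nothing else. A constrained generator
# would only ever emit strings matching this pattern; here we just validate.
DATE_PATTERN = re.compile(r"\d{4}-\d{2}-\d{2}")

def accept(reply: str) -> bool:
    return DATE_PATTERN.fullmatch(reply) is not None

# The token savings come from the bare structured answer being far
# shorter than a conversational one:
structured = "2023-06-13"
chatty = "Sure! The date you are looking for is June 13th, 2023. Let me know if you need anything else."
```

The shorter, shape-guaranteed output is also what makes these replies safe to feed directly into existing systems.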
- OpenAI Function Calls for Humans
-
Unbounded Books: Search by ~Vibes
The best GPT-wrapper you'll see today?
...but this one hasn't raised oodles of cash.
Mike (creator) here, excited to hear what HN folks think. Anything to add/improve?
Had fun building it; extra shout-out to Railway, NextJS, and https://github.com/jxnl/instructor
Check it out: https://www.unboundedbooks.com/
What are some alternatives?
lmql - A language for constraint-guided and efficient LLM programming.
langchainjs - 🦜🔗 Build context-aware reasoning applications 🦜🔗
langroid - Harness LLMs with Multi-Agent Programming
chatgpt-localfiles - Make local files accessible to ChatGPT
guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]
PythonGPT - PythonGPT writes and indexes code to implement dynamic code execution using generative models. Younger sibling of DoctorGPT.
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
httpx - A next generation HTTP client for Python. 🦋
gchain - Composable LLM Application framework inspired by langchain
outlines - Structured Text Generation
transynthetical-engine - Applied methods of analytical augmentation to build tools using large-language models.
griptape - Modular Python framework for AI agents and workflows with chain-of-thought reasoning, tools, and memory.