simpleaichat vs transynthetical-engine

| | simpleaichat | transynthetical-engine |
|---|---|---|
| Mentions | 22 | 6 |
| Stars | 3,386 | 26 |
| Growth | - | - |
| Activity | 8.7 | 6.2 |
| Latest commit | 4 months ago | about 1 year ago |
| Language | Python | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
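The exact weighting scheme isn't published, but a recency-weighted score along these lines would behave as described — recent commits count for more than old ones. The half-life value here is a made-up assumption, not the site's actual parameter:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit count: a commit from today counts as 1.0,
    a commit from `half_life_days` ago counts as 0.5, and so on."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A burst of recent commits outweighs the same number of year-old ones.
recent = activity_score([1, 2, 3, 5])
stale = activity_score([200, 250, 300, 365])
```

Under any such decay curve, four commits last week score far higher than four commits last year, which matches the "recent commits have higher weight" description.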
simpleaichat
- Efficient Coding Assistant with Simpleaichat
-
Please Don't Ask If an Open Source Project Is Dead
I checked both of the issues mentioned; people have been respectful and shown empathy for the author's situation.
https://github.com/minimaxir/simpleaichat/issues/91
https://github.com/minimaxir/simpleaichat/issues/92
-
We Built an AI-Powered Magic the Gathering Card Generator
ChatGPT's June update added support for "function calling", which in practice is structured data I/O marketed very poorly: https://openai.com/blog/function-calling-and-other-api-updat...
Here's an example of using structured data for better output control (lightly leveraging my Python package to reduce LoC: https://github.com/minimaxir/simpleaichat/blob/main/examples... )
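The gist of the technique is that you describe a "function" with a JSON Schema and the model returns its arguments as JSON matching that schema. The sketch below is illustrative, not the linked example: the `record_card` name, its fields, and the parser are all assumptions, and the "response" is simulated rather than fetched from the API.

```python
import json

# A hand-written JSON Schema for the "function" the model is asked to call.
# The function name and fields here are hypothetical.
card_schema = {
    "name": "record_card",
    "description": "Record a generated Magic card.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "mana_cost": {"type": "string"},
            "rules_text": {"type": "string"},
        },
        "required": ["name", "mana_cost", "rules_text"],
    },
}

def parse_function_args(arguments_json, schema=card_schema):
    """Parse the model's function-call arguments string and check that
    every required field is present before using it."""
    args = json.loads(arguments_json)
    missing = [f for f in schema["parameters"]["required"] if f not in args]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return args

# Simulated model response:
args = parse_function_args(
    '{"name": "Storm Crow", "mana_cost": "1U", "rules_text": "Flying"}'
)
```

Validating the returned JSON against the schema is what gives you the output control: a malformed generation fails loudly instead of leaking free-form prose into your pipeline.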
-
LangChain Agent Simulation – Multi-Player Dungeons and Dragons
So what are the alternatives to LangChain that the HN crowd uses?
I see two contenders:
https://github.com/minimaxir/simpleaichat/tree/main/simpleai...
https://github.com/griptape-ai/griptape
There is also the llm command line utility that has a very thin underlying library, but which might grow eventually:
-
Custom Instructions for ChatGPT
A fun note is that even with system prompt engineering it may not give the most efficient solution: ChatGPT still outputs the average case.
I tested around it and doing two passes (generate code and "make it more efficient") works best, with system prompt engineering to result in less code output: https://github.com/minimaxir/simpleaichat/blob/main/examples...
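The two-pass flow can be sketched in a few lines. The `chat(system, messages)` callable below is a stand-in assumption for any chat-completion call (simpleaichat's or the raw `openai` client), and the demo uses a canned fake model rather than a real API:

```python
SYSTEM = "You are a coding assistant. Respond only with code, no commentary."

def two_pass(prompt, chat):
    """Pass 1: generate a draft. Pass 2: feed the draft back and ask
    for an efficiency rewrite, keeping the terse system prompt."""
    draft = chat(SYSTEM, [{"role": "user", "content": prompt}])
    improved = chat(SYSTEM, [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": draft},
        {"role": "user", "content": "Make the code more efficient."},
    ])
    return improved

# Demo with a canned fake model (returns "v1" for the first pass,
# "v2" once the draft is in the conversation history):
def fake_chat(system, messages):
    return "v2" if len(messages) > 1 else "v1"

result = two_pass("Reverse a string.", fake_chat)
```

The second pass sees its own draft as an assistant turn, so the "make it more efficient" request operates on concrete code rather than regenerating from scratch.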
-
The Problem with LangChain
I played around with simpleaichat for a few minutes just now, and I really like it. Unlike LangChain, I can understand what it does in minutes, and it looks like its primitives are fairly powerful. It looks like it's going to replace the `openai` library for me, it seems like a nice wrapper.
I'm especially looking forward to playing with the structured data models bit: https://github.com/minimaxir/simpleaichat/blob/main/examples...
Well done, Max!
-
How is Langchain's dev experience? Any alternatives?
https://github.com/minimaxir/simpleaichat bills itself as a simpler alternative to langchain. I have not tried it, but it looks interesting.
-
Stanford A.I. Courses
I think you are asking specifically about practical LLM engineering and not the underlying science.
Honestly this is all moving so fast you can do well by reading the news, following a few reddits/substacks, and skimming the prompt engineering papers as they come out every week (!).
https://www.latent.space/p/ai-engineer provides an early manifesto for this nascent layer of the stack.
Zvi writes a good roundup (though he is concerned mostly with alignment so skip if you don’t like that angle): https://thezvi.substack.com/p/ai-18-the-great-debate-debates
Simon W has some good writeups too: https://simonwillison.net/
I strongly recommend playing with the OpenAI APIs and working with langchain in a Colab notebook to get a feel for how these all fit together. Also, the tools here are incredibly simple and easy to understand (very new), so look at, say, https://github.com/minimaxir/simpleaichat/tree/main/simpleai... or https://github.com/smol-ai/developer and dig into the prompts, what goes in system vs assistant roles, how you guide the LLM, etc.
-
Where is the engineering part in "prompt engineer"?
This notebook from the repo I linked to is a concise example, and the reason you would want to optimize prompts.
- Show HN: Python package for interfacing with ChatGPT with minimized complexity
transynthetical-engine
-
Native JSON Output from GPT-4
Here’s an approach to return just JavaScript:
https://github.com/williamcotton/transynthetical-engine
The key is the addition of few-shot exemplars.
-
The Dual LLM pattern for building AI assistants that can resist prompt injection
I think the two-layer approach is worthwhile if only for limiting tokens!
Here’s an example of what I mean:
https://github.com/williamcotton/transynthetical-engine#brow...
Keeping all of the generated code out of the main discourse between the user and the LLM, and instead using that main “thread” just to orchestrate instructions to write code, allows for more back-and-forth.
It’s a good technique in general!
I’m still too paranoid to execute instructions via email without a very limited set of abilities!
-
Prompt Engineering vs. Blind Prompting
Here is an example of some prompt engineering in order to build augmentations for factual question-and-answer as well as building web applications:
https://github.com/williamcotton/transynthetical-engine
-
Ask HN: People who were laid off or quit recently, how are you doing?
Hey Simon! I've been digging your writings on LLMs lately.
I've been having some decent luck with some of the approaches that I've discussed in the following articles and projects:
From Prompt Alchemy to Prompt Engineering: An Introduction to Analytic Augmentation: https://github.com/williamcotton/empirical-philosophy/blob/m...
https://www.williamcotton.com/articles/writing-web-applicati...
https://github.com/williamcotton/transynthetical-engine
I'd love to hear your thoughts on the matter!
-
We need to tell people ChatGPT will lie to them, not debate linguistics
Sure you can. The easiest way is to go to https://chat.openai.com/chat and paste in a Wikipedia article.
There are more involved methods, like this: https://github.com/williamcotton/transynthetical-engine/blob...
-
ChatGPT-Linux-Assistant
Parsel : A (De-)compositional Framework for Algorithmic Reasoning with Language Models
https://arxiv.org/abs/2212.10561
Here's a notebook with an introduction:
https://github.com/ezelikman/parsel/blob/main/parsel.ipynb
And here's a GUI interface the author has been developing:
http://zelikman.me/parsel/interface.html
I've been working on an augmented large language model that, given these few-shot exemplars, can build the fully functional ToDo app below:
https://github.com/williamcotton/transynthetical-engine/tree...
https://www.williamcotton.com/articles/junie-browser-builder...
All of this is still very rough around the edges, prone to errors of various kinds, and generally not ready for prime time, but anyone is welcome to play around with what is there!
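The generate-then-evaluate loop at the heart of such an engine can be sketched as running model output in a deliberately stripped-down namespace. Note the repo itself is TypeScript and evaluates JavaScript; this is a Python analogue for illustration only, and `exec`-based restriction is not a real security boundary:

```python
def run_generated(code, allowed=None):
    """Execute model-generated code with a stripped-down set of
    builtins. This illustrates the orchestrate-then-evaluate loop;
    it is NOT a sandbox suitable for genuinely hostile code."""
    namespace = {"__builtins__": dict(allowed or {})}
    exec(code, namespace)
    return namespace

# Simulated model output, run with only the two builtins it needs:
ns = run_generated("result = sum(range(10))",
                   allowed={"sum": sum, "range": range})
```

The engine's value comes from the loop around this call — catching errors, feeding them back to the model, and retrying — which is exactly where the "rough around the edges" failure modes show up.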
What are some alternatives?
lmql - A language for constraint-guided and efficient LLM programming.
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
langroid - Harness LLMs with Multi-Agent Programming
chrono - A natural language date parser in JavaScript
guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]
chatgpt-linux-assistant - An ai assistant in your CLI. But it knows what's on your system and can help you get things done.
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
geppetto - Your personal assistant with ChatGPT and Linux superpowers, ready for any task!
gchain - Composable LLM Application framework inspired by langchain
openai-cookbook - Examples and guides for using the OpenAI API
griptape - Modular Python framework for AI agents and workflows with chain-of-thought reasoning, tools, and memory.
empirical-philosophy - A collection of empirical experiments using large language models and other neural network architectures to test the usefulness of metaphysical constructs.