guidance
babyagi
|  | guidance | babyagi |
|---|---|---|
| Mentions | 23 | 33 |
| Stars | 17,246 | 19,186 |
| Stars growth | 5.1% | - |
| Activity | 9.8 | 5.5 |
| Latest commit | 6 days ago | 7 days ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
guidance
- Anthropic's Haiku Beats GPT-4 Turbo in Tool Use
[1]: https://github.com/guidance-ai/guidance/tree/main
- Show HN: Prompts as (WASM) Programs
> The most obvious usage of this is forcing a model to output valid JSON
Isn't this something that Outlines [0], Guidance [1] and others [2] already solve much more elegantly? (A minimal Outlines sketch follows after these links.)
0. https://github.com/outlines-dev/outlines
1. https://github.com/guidance-ai/guidance
2. https://github.com/sgl-project/sglang
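For context, this is roughly what constrained JSON generation looks like with Outlines; the model name and the `Car` schema here are illustrative assumptions, not something from the thread:

```python
from pydantic import BaseModel
import outlines

class Car(BaseModel):
    color: str
    year: int

# Assumption: any transformers-compatible model; swap in whatever you run locally.
model = outlines.models.transformers("mistralai/Mistral-7B-v0.1")

# Decoding is constrained so the output always parses into a Car instance.
generator = outlines.generate.json(model, Car)
car = generator("Describe a car as JSON: ")
print(car.color, car.year)
```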
- Show HN: Fructose, LLM calls as strongly typed functions
- LiteLlama-460M-1T has 460M parameters trained with 1T tokens
Or combine it with something like llama.cpp's grammar support or Microsoft's guidance-ai[0] (which I prefer), which would allow adding some ReAct-style prompting and external tools; a minimal grammar sketch follows below. As others have mentioned, instruct tuning would help too.
[0] https://github.com/guidance-ai/guidance
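To make the llama.cpp grammar route concrete, here is a minimal sketch using the llama-cpp-python bindings; the model path and the yes/no grammar are assumptions for illustration:

```python
from llama_cpp import Llama, LlamaGrammar

# A tiny GBNF grammar that only allows "yes" or "no" as the completion.
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

llm = Llama(model_path="model.gguf")  # assumption: any local GGUF model
out = llm("Is the sky blue? Answer: ", grammar=grammar, max_tokens=4)
print(out["choices"][0]["text"])  # constrained to "yes" or "no"
```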
- Forcing AI to Follow a Specific Answer Pattern Using GBNF Grammar
- Prompting LLMs to constrain output
I have been experimenting with guidance and LMQL. It's a bit too early to give any well-formed opinions, but I really do like the idea of constraining LLM output.
- Guidance is back 🥳
- New: LangChain templates – fastest way to build a production-ready LLM app
- Is supervised learning dead for computer vision?
Thanks for your comment.
I did not know about "Betteridge's law of headlines", quite interesting. Thanks for sharing :)
You raise some interesting points.
1) Safety: It is true that LVMs and LLMs have unknown biases and could potentially create unsafe content. However, this is not unique to them; Google, for example, had the same problem with their supervised learning model (https://www.theverge.com/2018/1/12/16882408/google-racist-go....). It all depends on the original data. I believe we need systems on top of our models to ensure safety, and it is also possible to restrict the output domain of our models (https://github.com/guidance-ai/guidance); a minimal sketch follows after this list. Instead of allowing our LVMs to output any words, we could restrict them to answering only "red, green, blue..." when giving the color of a car.
2) Cost: You are right that, right now, LVMs are quite expensive to run. As you said, they are a great way to go to market faster, but they cannot run on low-cost hardware for the moment. However, they could help with training those smaller models. Indeed, we see in the NLP domain that a lot of smaller models are trained on data created with GPT models. You can still distill the knowledge of your LVMs into a custom smaller model that can run on embedded devices. The advantage is that you can use your LVMs to generate data when it is scarce, and use them as a fallback when the smaller model on the device is uncertain of the answer.
3) Labelling data: I don't think labelling data is necessarily cheap. First, you have to collect the data, which, depending on the frequency of your events, could take months of monitoring if you want to build a large-scale dataset. Second, not all labelling is cheap: I worked at a semiconductor company where labelled data was scarce because it required expert knowledge and could only be produced by experienced employees. Indeed, not all labelling can be done externally.
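As a concrete illustration of the output-restriction idea from point 1, a minimal sketch with the guidance library; the model path is an assumption, and any backend guidance supports would work the same way:

```python
from guidance import models, select

lm = models.LlamaCpp("model.gguf")  # assumption: any local GGUF model

# Restrict the answer to a fixed set of colors instead of free-form text.
lm += "The color of the car is " + select(["red", "green", "blue"], name="color")
print(lm["color"])  # guaranteed to be one of: red, green, blue
```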
However, both approaches are indeed complementary, and I think the systems that work best will rely on both.
Thanks again for the thought-provoking discussion. I hope this answers some of the concerns you raised.
- Show HN: Elelem – TypeScript LLMs with tracing, retries, and type safety
I've had a bit of trouble getting function calling to work with cases that aren't just extracting some data from the input. The format was correct, but it was harder to get the correct data when the task wasn't a simple extraction.
Hopefully OpenAI and others will offer something like https://github.com/guidance-ai/guidance at some point to guarantee overall output structure.
Failed validations will retry, but from what I've seen JSONSchema + generated JSON examples are decently reliable in practice for gpt-3.5-turbo and extremely reliable on gpt-4.
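That validate-and-retry pattern is roughly the following, sketched in Python rather than Elelem's TypeScript; `call_llm` is a hypothetical stand-in for whatever chat-completion call you actually make:

```python
import json
import jsonschema

SCHEMA = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an OpenAI (or other) chat-completion call."""
    raise NotImplementedError

def structured_call(prompt: str, retries: int = 3) -> dict:
    """Ask for JSON matching SCHEMA, validate it, and retry on failure."""
    request = f"{prompt}\nRespond only with JSON matching this schema:\n{json.dumps(SCHEMA)}"
    for _ in range(retries):
        raw = call_llm(request)
        try:
            data = json.loads(raw)
            jsonschema.validate(data, SCHEMA)
            return data
        except (json.JSONDecodeError, jsonschema.ValidationError):
            continue  # malformed or schema-invalid output: ask again
    raise ValueError("model never produced schema-valid JSON")
```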
babyagi
- AGI has, in some sense, been achieved: Tell me why I am wrong
Define agency. Does AutoGPT or BabyAGI fit the definition?
- Overview: AI Assembly Architectures
BabyAGI: github.com/yoheinakajima/babyagi
- List of Awesome AI Agents like AutoGPT and BabyAGI / Many open-source Agents with code included!
In my opinion the most interesting Agents:
- Auto-GPT. Github: https://github.com/Significant-Gravitas/Auto-GPT
- BabyAGI. Github: https://github.com/yoheinakajima/babyagi
- Voyager. Github: https://github.com/MineDojo/Voyager / Paper: https://arxiv.org/abs/2305.16291

I would also add:
- ChemCrow: Augmenting large-language models with chemistry tools. Github: https://github.com/ur-whitelab/chemcrow-public/ / Paper: https://arxiv.org/abs/2304.05376
- Weaviate as Vector Database in BabyAGI
- BabyAGI
- What innovations/discoveries have come out because/since the release of LLMs since the gain of popularity in the last 5ish months?
People have also been trying to build multi-agent and task-planning systems. MS Research in Asia seems to produce decent results with TaskMatrix and HuggingGPT. Similar things have been tried in the form of Auto-GPT and BabyAGI, but both projects set their goals so high that they may not achieve them at all, and they are likely to see a complete rework when multi-modal solutions become widespread.
- Palantir in the world of Generative AI
Joke's on you, /u/ILoveThisPlace is actually just a bot responding using the BabyAGI script, we've all been had!
- autogpt-like framework?
BabyAGI: AI-Powered Task Management for OpenAI + Pinecone or Llama.cpp
- What's with the fear?
Yes, we haven't seen anything like that yet. But we do see people trying to build these things (AutoGPT, babyagi, ChaosGPT, etc.) today, and with the last few years of advancement in LLMs they now have the fundamental building blocks to succeed in the near term (say, the next 5 years) rather than in some imaginary far future.
- Could an AI learn things or discover things humans have not been able to understand or not discovered yet?
You should check out some of the projects that combine LangChain with LLMs to automate this process like BabyAGI (https://github.com/yoheinakajima/babyagi) and AutoGPT (https://github.com/Significant-Gravitas/Auto-GPT). They were originally designed around ChatGPT models but have expanded to include llamacpp as an alternative. These provide your language models with the ability to save long term memory, a goal-oriented task list and extra functionality like surfing the web and, in some cases, creating and modifying files on disk.
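The core loop behind these agents is simpler than it sounds; below is a minimal sketch of a BabyAGI-style task loop, where `llm` is a hypothetical completion helper and the objective is purely illustrative (the real projects add a vector store, reprioritization, and tools on top of this):

```python
from collections import deque

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat/completion API call."""
    raise NotImplementedError

objective = "Write a short market report on e-bikes"  # illustrative goal
tasks = deque(["Draft an initial task list"])
memory = []  # the real projects use a vector store (e.g. Pinecone) here

while tasks:
    task = tasks.popleft()
    # Execute the task with the objective and recent results as context.
    result = llm(f"Objective: {objective}\nTask: {task}\nContext: {memory[-3:]}")
    memory.append((task, result))
    # Ask the model for follow-up tasks and queue them, one per line.
    new = llm(f"Objective: {objective}\nLast result: {result}\nList follow-up tasks, one per line:")
    tasks.extend(t.strip() for t in new.splitlines() if t.strip())
```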
What are some alternatives?
lmql - A language for constraint-guided and efficient LLM programming.
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
langchain - 🦜🔗 Build context-aware reasoning applications
JARVIS - JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
AgentGPT - 🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
outlines - Structured Text Generation
AGiXT - AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.