| | ChatDev | dspy |
|---|---|---|
| Mentions | 10 | 26 |
| Stars | 24,077 | 12,858 |
| Growth | 4.4% | 18.6% |
| Activity | 9.4 | 9.9 |
| Latest commit | 5 days ago | 3 days ago |
| Language | Shell | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ChatDev
-
AutoGen: Enable Next-Gen Large Language Model Applications
Check https://github.com/OpenBMB/ChatDev out, they simulate personas in a company and build products by simulating the interactions.
-
[task] Looking for someone with experience in chatgpt
Hey, I have been playing with ChatDev for a long time now, and I am looking for customization of the company config file. I am pretty busy, so I can't do it on my own time. I have already customized it for two tasks, but I am looking to do that for more than one task. I can pay $5-20 depending on the customization length and how much time it takes you.
-
I will pay you to customize ChatDev for me
Hey, I have been playing with ChatDev for a long time now,
-
AI hype is built on high test scores. Those tests are flawed
Exactly and things are actually getting crazy now. For some reason this hasn't reached the frontpage on HN yet: https://github.com/OpenBMB/ChatDev
Making your own "internal family system" of AIs is making this exponential (and frightening): like an ensemble on top of the ensemble, with specific "mindsets" that, with shared memory, can build and do stuff continuously.
I remember a couple of comments here on HN, when the hype began, about how some dude had figured out how to actually make an AGI - can't find it now, but it was something about having multiple AIs discoursing with a shared memory - and now it seems to be happening.
-
[D] Are there examples of AI-written projects actively maintained/worked on by humans?
I've seen a rising trend in these "ChatDev"-like projects (https://arxiv.org/abs/2307.07924, their github is https://github.com/OpenBMB/ChatDev) that can "assemble a team of AIs" to build a complete project, such as a snake game. The possible projects it can build are rather limited, and the "team" is really a chat dialogue. But it does in fact generate runnable code with some underlying system design philosophy.
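The role-play pattern those projects describe can be boiled down to agents taking turns over a shared transcript until the task is marked done. A toy sketch, not ChatDev's actual architecture: the `ceo` and `programmer` functions below return canned strings purely so the loop runs without an LLM behind each turn.

```python
# Toy sketch of multi-agent role-play: two "agents" alternate turns in a
# shared transcript until one of them signals completion. In a real system
# each turn would be an LLM call conditioned on the transcript so far.

def ceo(transcript):
    # Canned stand-in for an LLM playing the "CEO" persona.
    return "CEO: Build a snake game in Python."

def programmer(transcript):
    # Canned stand-in for an LLM playing the "programmer" persona.
    return "Programmer: Here is snake.py. <TASK_DONE>"

transcript = []
agents = [ceo, programmer]
turn = 0
while True:
    message = agents[turn % 2](transcript)
    transcript.append(message)
    if "<TASK_DONE>" in message:
        break
    turn += 1

print(len(transcript))  # 2
```

The "team" really is just this dialogue loop plus personas in the prompts, which is why the output looks like a chat log with code attached.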
- FLaNK Stack Weekly 2 October 2023
- Show HN: ChatDev – Simulating a software company with LLMs
- Communicative Agents for Software Development
-
FLaNK Stack Weekly for 12 September 2023
https://github.com/OpenBMB/ChatDev
dspy
-
OpenAI and Microsoft Azure to deprecate GPT-4 32K
My naive answer: turn away from Silicon Valley modernity with its unicorns and runways and “”marketing””, and embrace the boring stuffy academics! https://dspy-docs.vercel.app/
-
Show HN: Route your prompts to the best LLM
I agree this is an interesting direction, I think this is on the roadmap for DSPy [https://github.com/stanfordnlp/dspy], but right now they mainly focus on optimizing the in-context examples.
-
Thoughts on DSPy
- Python - https://github.com/stanfordnlp/dspy
-
Computer Vision Meetup: Develop a Legal Search Application from Scratch using Milvus and DSPy!
Legal practitioners often need to find specific cases and clauses across thousands of dense documents. While traditional keyword-based search techniques are useful, they fail to fully capture the semantic content of queries and case files. Vector search engines and large language models provide an intriguing alternative. In this talk, I will show you how to build a legal search application using the DSPy framework and the Milvus vector search engine.
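The retrieval step that talk describes can be sketched without Milvus or DSPy: embed the documents, embed the query, and rank by similarity. Everything below is a stand-in; the word-count `embed` stub replaces a real embedding model, and the in-memory list replaces the vector database.

```python
# Minimal vector-search sketch: a toy in-memory index standing in for
# Milvus, and a keyword-count "embedding" standing in for a real model.
from math import sqrt

def embed(text):
    # Hypothetical stand-in: a real system would call an embedding model.
    vocab = ["contract", "breach", "damages", "patent", "appeal"]
    return [text.lower().count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "The court awarded damages for breach of contract.",
    "The patent appeal was dismissed.",
]
index = [(doc, embed(doc)) for doc in documents]

def search(query, k=1):
    # Rank every indexed document by similarity to the query vector.
    q = embed(query)
    ranked = sorted(index, key=lambda d: cosine(q, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(search("breach of contract damages"))
```

A real pipeline swaps in learned embeddings and an approximate-nearest-neighbor index, but the query flow is the same.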
-
Pydantic Logfire
I’ve observed that Pydantic - which we’ve used for years in our API stack - has become very popular in LLM applications, for its type-adjacent features. It serves as a foundational technology for prompting libraries like [DSPy](https://github.com/stanfordnlp/dspy) which are abstracting “up the stack” of LLM apps. (some opinions there)
Operating AI apps reveals a big challenge, in that debugging probabilistic code paths requires more than the usual introspective abilities, and in an environment where function calls can have very real monetary impact we have to be able to see what’s happening in the runtime. See LangChain’s hosted solution (can’t recall the name) that allows an operator to see prompts and responses “on the wire”. (It just occurred to me that Langchain and Pydantic have a lot in common here, in approach.)
Having a coupling between Pydantic - which is *just about* the data layer itself - and an observability tool seems very interesting to me, and having this come from the folks who built it does not seem unreasonable. WRT open source and monetization, I would be lying if I said I wasn’t a little worried - given the recent few months - but I am choosing to see this in a positive light, given this team’s “believability weight” (to overuse Dalio) and history of delivering solid and really useful tooling.
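The "type-adjacent" value Pydantic brings to LLM apps is easy to show in a stdlib-only sketch: parse a model's JSON reply into a typed container and fail loudly on schema drift. Pydantic automates this (coercion, nested models, rich error messages); the dataclass and field names below are purely illustrative.

```python
# Sketch of typed validation for LLM output: reject replies that do not
# match the expected schema instead of passing malformed data downstream.
import json
from dataclasses import dataclass

@dataclass
class Extraction:
    title: str
    year: int

def parse_reply(raw: str) -> Extraction:
    data = json.loads(raw)
    if not isinstance(data.get("title"), str) or not isinstance(data.get("year"), int):
        raise ValueError(f"reply does not match schema: {data}")
    return Extraction(title=data["title"], year=data["year"])

print(parse_reply('{"title": "Annual Report", "year": 2023}'))
```

An observability layer like Logfire then has a natural hook: every validation failure is a concrete, loggable event on a probabilistic code path.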
- Ask HN: Most efficient way to fine-tune an LLM in 2024?
-
Princeton group open sources "SWE-agent", with 12.3% fix rate for GitHub issues
DSPy is the best tool for optimizing prompts [0]: https://github.com/stanfordnlp/dspy
Think of it as a meta-prompt optimizer: it uses an LLM to optimize your prompts, to optimize your LLM.
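That "optimize your prompts" idea can be illustrated with a toy loop; this is not DSPy's API, and `fake_llm`, the dev set, and the candidate prompts are all made up for the sketch. The pattern is: propose candidate prompts, score each against labelled examples, keep the winner.

```python
# Toy prompt optimization: score candidate prompts on a small labelled
# dev set and keep the best one. DSPy's optimizers do a far more
# sophisticated version of this (e.g. bootstrapping few-shot demos).

def fake_llm(prompt, question):
    # Stub model: answers correctly only when the prompt asks for brevity.
    if "one word" in prompt:
        return {"2+2": "4", "capital of France": "Paris"}.get(question, "?")
    return "I am not sure, let me think about it..."

dev_set = [("2+2", "4"), ("capital of France", "Paris")]
candidates = [
    "Answer the question.",
    "Answer the question in one word.",
]

def score(prompt):
    # Exact-match accuracy of this prompt over the dev set.
    return sum(fake_llm(prompt, q) == a for q, a in dev_set)

best = max(candidates, key=score)
print(best)  # Answer the question in one word.
```

In the real thing, the candidates themselves are generated and refined by an LLM, which is what makes it "meta": a model optimizing the instructions given to a model.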
-
Winner of the SF Mistral AI Hackathon: Automated Test Driven Prompting
Isn’t this just a very naive implementation of what DSPy does?
https://github.com/stanfordnlp/dspy
I don’t understand what is exceptional here.
-
Show HN: Fructose, LLM calls as strongly typed functions
Have you done any comparison with DSPy ? (https://github.com/stanfordnlp/dspy)
Feels very similar to DSPy, except you don't have optimizations yet. But I like your API and the programming model you are enforcing through this.
-
AI Prompt Engineering Is Dead
I'm interested in hearing if anyone has used DSPy (https://github.com/stanfordnlp/dspy) just for prompt optimization for GPT-3.5 or GPT-4. Was it worth the effort and much better than manual prompt iteration? Was the optimized prompt some weird incantation? Any other insights?
What are some alternatives?
FLaNK-HuggingFace-BLOOM-LLM - https://huggingface.co/bigscience/bloom into NiFi
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
minum - A minimalist Java web framework built from scratch
MLflow - Open source platform for the machine learning lifecycle
LLM-Finetuning-Hub - Toolkit for fine-tuning, ablating and unit-testing open-source LLMs. [Moved to: https://github.com/georgian-io/LLM-Finetuning-Toolkit]
open-interpreter - A natural language interface for computers
RecipeUI - Discover, test, and share APIs in seconds
playground - Play with neural networks!
kafkaflow - Apache Kafka .NET Framework to create applications simple to use and extend.
FastMJPG - FastMJPG is a command line tool for capturing, sending, receiving, rendering, piping, and recording MJPG video with extremely low latency. It is optimized for running on constrained hardware and battery powered devices.
initializr - A quickstart generator for Spring projects
prompt-engine-py - A utility library for creating and maintaining prompts for Large Language Models