| | chat | dify |
|---|---|---|
| Mentions | 1 | 14 |
| Stars | 3 | 33,181 |
| Growth | - | 22.7% |
| Activity | 8.2 | 9.9 |
| Latest commit | 12 days ago | 4 days ago |
| Language | TypeScript | TypeScript |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
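The exact formula behind the activity score isn't given, but one simple way to realize "recent commits have higher weight than older ones" is exponential time decay. This is a hypothetical sketch (the function name, half-life, and weighting are assumptions, not the site's actual metric):

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, half_life_days=30.0, now=None):
    """Toy activity metric: each commit contributes 2**(-age/half_life),
    so a commit from today counts fully and one a month old counts half."""
    now = now or datetime.now(timezone.utc)
    return sum(2 ** (-(now - d).days / half_life_days) for d in commit_dates)

# Two projects with the same commit count but different recency:
now = datetime(2024, 5, 1, tzinfo=timezone.utc)
recent = [now - timedelta(days=d) for d in (1, 2, 3)]
stale = [now - timedelta(days=d) for d in (90, 120, 150)]
assert activity_score(recent, now=now) > activity_score(stale, now=now)
```

With this kind of decay, a burst of commits three months ago contributes far less than the same burst last week, which matches the described behavior.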
chat
-
GPT4 for Free
Hi guys, we are allowing free access to GPT-4 through our chat while we stress test it. We would appreciate it if you used it for whatever you want.
You can see its open source code here in case you are curious: https://github.com/embedelite/chat.
dify
-
What We've Learned from a Year of Building with LLMs
Perhaps this would be of use? https://github.com/langgenius/dify/ I use it for quick workflows and it's pretty intuitive.
-
Ask HN: LLM workflows to avoid copying and pasting from the web interfaces?
This visual IDE for LLM pipelines was posted recently: https://github.com/langgenius/dify
See if it helps.
- FLaNK AI Weekly for 29 April 2024
-
Dify, a visual workflow to build/test LLM applications
> https://github.com/langgenius/dify/blob/main/LICENSE
everyone is apparently a license pioneer
- Dify, an end-to-end, visualized workflow to build/test LLM applications
-
GreptimeAI + Xinference - Efficient Deployment and Monitoring of Your LLM Applications
Xorbits Inference (Xinference) is an open-source platform that streamlines the operation and integration of a wide array of AI models. With Xinference, you can run inference using any open-source LLMs, embedding models, and multimodal models, either in the cloud or on your own premises, and build robust AI-driven applications. It provides a RESTful API compatible with the OpenAI API, a Python SDK, a CLI, and a WebUI. It also integrates with third-party developer tools like LangChain, LlamaIndex, and Dify, facilitating model integration and development.
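Because the server speaks the OpenAI chat-completions format, any OpenAI-style client code can talk to it. A minimal sketch, assuming a locally running OpenAI-compatible endpoint (the URL, port, and model name below are placeholders; adjust them to your deployment):

```python
import json
import urllib.request

# Hypothetical local endpoint -- replace with your server's address.
API_URL = "http://localhost:9997/v1/chat/completions"

def build_chat_request(model, prompt):
    """Build an OpenAI-format chat completion payload, which is what an
    OpenAI-compatible server such as Xinference expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(model, prompt):
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same payload shape works against any OpenAI-format server, which is what makes tools like LangChain and Dify easy to point at a self-hosted model.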
-
Which LLM framework(s) do you use in production and why?
If you are looking to develop Q&A or chat-based apps, check out https://dify.ai. Do a quick check and see if it fits your requirements. You can integrate it with your app using the APIs it provides.
-
New Discoveries in No-Code AI App Building with ChatGPT
As an AI newbie, I used to find coding apps from scratch an absolute nightmare! The learning curve was steep as a ski slope, debugging took endless hours, and developing even a simple AI app nearly drove me insane! But since discovering Dify, it has totally revolutionized my life by enabling app development without any coding skills!
- FLaNK Stack Weekly for 14 Aug 2023
- Interesting LLMOps Tools Dify.ai
What are some alternatives?
langchain-llm-katas - An open-source project designed to help you improve your AI engineering skills using LLMs and the langchain library
litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
chainlit - Build Conversational AI in minutes ⚡️
duet-gpt - A conversational semi-autonomous developer assistant. AI pair programming without the copypasta.
IncognitoPilot - An AI code interpreter for sensitive data, powered by GPT-4 or Code Llama / Llama 2.
jdbc-connector-for-apache-kafka - Aiven's JDBC Sink and Source Connectors for Apache Kafka®
kudu - Mirror of Apache Kudu
flatdraw - A simple canvas drawing web app with responsive UI. Made with TypeScript, React, and Next.js.
symmetric-ds - SymmetricDS is database replication and file synchronization software that is platform independent, web enabled, and database agnostic. It is designed to make bi-directional data replication fast, easy, and resilient. It scales to a large number of nodes and works in near real-time across WAN and LAN networks.
GeniA - Your Engineering Gen AI Team member 🧬🤖💻
project-euler-llm - A comparison of Project Euler performance by different LLMs
llama2.c - Inference Llama 2 in one file of pure C