ialacol
dify
| | ialacol | dify |
|---|---|---|
| Mentions | 4 | 12 |
| Stars | 138 | 25,645 |
| Growth | - | 29.8% |
| Activity | 8.9 | 9.9 |
| Latest commit | 3 months ago | about 5 hours ago |
| Language | Python | TypeScript |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ialacol
-
Cloud Native Workflow for *Private* AI Apps
```yaml
# This is the configuration file for DevSpace
#
# devspace use namespace private-ai # suggested: use a namespace instead of the default namespace
# devspace deploy                   # deploy the skeleton of the app and the dependencies (ialacol)
# devspace dev                      # start syncing files to the container
# devspace purge                    # to clean up
version: v2beta1
deployments:
  # These are the manifests for our private app deployment.
  # The app will be in "sleep mode" after `devspace deploy`, and start when we begin
  # syncing files to the container with `devspace dev`.
  private-ai-app:
    helm:
      chart:
        # We are deploying the so-called Component Chart: https://devspace.sh/component-chart/docs
        name: component-chart
        repo: https://charts.devspace.sh
      values:
        containers:
          - image: ghcr.io/loft-sh/devspace-containers/python:3-alpine
            command:
              - "sleep"
            args:
              - "99999"
        service:
          ports:
            - port: 8000
        labels:
          app.kubernetes.io/name: private-ai-app
  ialacol:
    helm:
      # The backend for the AI app; we are using ialacol: https://github.com/chenhunghan/ialacol/
      chart:
        name: ialacol
        repo: https://chenhunghan.github.io/ialacol
      # Overriding values.yaml of the ialacol Helm chart
      values:
        replicas: 1
        deployment:
          image: quay.io/chenhunghan/ialacol:latest
          env:
            # We are using MPT-30B, which is the most sophisticated model at the moment.
            # If you want to start with something small but mighty, try orca-mini:
            # DEFAULT_MODEL_HG_REPO_ID: TheBloke/orca_mini_3B-GGML
            # DEFAULT_MODEL_FILE: orca-mini-3b.ggmlv3.q4_0.bin
            # MPT-30B
            DEFAULT_MODEL_HG_REPO_ID: TheBloke/mpt-30B-GGML
            DEFAULT_MODEL_FILE: mpt-30b.ggmlv0.q4_1.bin
            DEFAULT_MODEL_META: ""
        # Request more resources if needed
        resources: {}
        # PVC for storing the cache
        cache:
          persistence:
            size: 5Gi
            accessModes:
              - ReadWriteOnce
            storageClass: ~
        cacheMountPath: /app/cache
        # PVC for storing the models
        model:
          persistence:
            size: 20Gi
            accessModes:
              - ReadWriteOnce
            storageClass: ~
        modelMountPath: /app/models
        service:
          type: ClusterIP
          port: 8000
          annotations: {}
        # You might want to use the following to select a node with more CPU and memory;
        # for MPT-30B, we need at least 32GB of memory.
        nodeSelector: {}
        tolerations: []
        affinity: {}
```
-
Offline AI 🤖 on GitHub Actions 🙅♂️💰
You might be wondering why running Kubernetes is necessary for this project. This article was actually created during the development of a testing CI for the OSS project ialacol. The goal was to have a basic smoke test that verifies the Helm charts and ensures the endpoint returns a 200 status code. You can find the full source of the testing CI YAML here.
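The smoke test described above can be sketched in a few lines of Python. This is an illustrative sketch, not the project's actual CI code: the `/v1/models` path and the base URL are assumptions based on ialacol exposing an OpenAI-compatible API on port 8000.

```python
import urllib.request

def smoke_test(base_url: str, timeout: float = 10.0) -> bool:
    """Return True if the endpoint answers HTTP 200 on /v1/models,
    e.g. after a `kubectl port-forward` to the ialacol service."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, or timeout: the deployment is not healthy.
        return False

# Example: smoke_test("http://127.0.0.1:8000")
```

In CI, the same check is typically retried in a loop, since the model download can take several minutes before the endpoint becomes healthy.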
-
Containerized AI before Apocalypse 🐳🤖
We are deploying a Helm release orca-mini-3b using the Helm chart ialacol.
- Deploy private AI to cluster
dify
- FLaNK AI Weekly for 29 April 2024
-
Dify, a visual workflow to build/test LLM applications
> https://github.com/langgenius/dify/blob/main/LICENSE
everyone is apparently a license pioneer
- Dify, an end-to-end, visualized workflow to build/test LLM applications
-
GreptimeAI + Xinference - Efficient Deployment and Monitoring of Your LLM Applications
Xorbits Inference (Xinference) is an open-source platform that streamlines the operation and integration of a wide array of AI models. With Xinference, you can run inference with any open-source LLM, embedding model, or multimodal model, either in the cloud or on your own premises, and build robust AI-driven applications. It provides a RESTful API compatible with the OpenAI API, a Python SDK, a CLI, and a WebUI. It also integrates with third-party developer tools such as LangChain, LlamaIndex, and Dify, facilitating model integration and development.
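Because the API is OpenAI-compatible, a request body built for OpenAI works unchanged against a local Xinference (or ialacol) endpoint. A minimal sketch, where the endpoint URL and model name are assumptions for illustration:

```python
import json

# Hypothetical local Xinference endpoint; any OpenAI-compatible server
# accepts the same chat-completions request shape.
ENDPOINT = "http://localhost:9997/v1/chat/completions"

def build_chat_body(model: str, prompt: str) -> str:
    """Serialize an OpenAI-style chat-completions request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

print(build_chat_body("llama-2-chat", "Hello!"))
```

POSTing this body to the endpoint (with `Content-Type: application/json`) is all that tools like LangChain or Dify do under the hood when pointed at a self-hosted backend.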
-
Which LLM framework(s) do you use in production and why?
If you are looking to develop Q&A or chat-based apps, check out https://dify.ai. Do a quick check and see if it fits your requirements. You can integrate it with your app using the APIs it provides.
-
New Discoveries in No-Code AI App Building with ChatGPT
As an AI newbie, I used to find coding apps from scratch an absolute nightmare! The learning curve was steep as a ski slope, debugging took endless hours, and developing even a simple AI app nearly drove me insane! But since discovering Dify, it has totally revolutionized my life by enabling app development without any coding skills!
- FLaNK Stack Weekly for 14 Aug 2023
- Interesting LLMOps Tools Dify.ai
- Dify.ai – Simply create and operate AI-native apps based on GPT-4
- langgenius/dify: One API for plugins and datasets, one interface for prompt engineering and visual operation, all for creating powerful AI applications.
What are some alternatives?
langstream - LangStream. Event-Driven Developer Platform for Building and Running LLM AI Apps. Powered by Kubernetes and Kafka.
langchain-llm-katas - An open-source project designed to help you improve your AI engineering skills with LLMs and the langchain library
Pontus - Open Source Privacy Layer
litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
chainlit - Build Conversational AI in minutes ⚡️
duet-gpt - A conversational semi-autonomous developer assistant. AI pair programming without the copypasta.
IncognitoPilot - An AI code interpreter for sensitive data, powered by GPT-4 or Code Llama / Llama 2.
jdbc-connector-for-apache-kafka - Aiven's JDBC Sink and Source Connectors for Apache Kafka®
kudu - Mirror of Apache Kudu
flatdraw - A simple canvas drawing web app with responsive UI. Made with TypeScript, React, and Next.js.
GeniA - Your Engineering Gen AI Team member 🧬🤖💻
symmetric-ds - SymmetricDS is database replication and file synchronization software that is platform independent, web enabled, and database agnostic. It is designed to make bi-directional data replication fast, easy, and resilient. It scales to a large number of nodes and works in near real-time across WAN and LAN networks.