ialacol
Pontus
| | ialacol | Pontus |
|---|---|---|
| Mentions | 4 | 1 |
| Stars | 138 | 25 |
| Growth | - | - |
| Activity | 8.9 | 10.0 |
| Latest commit | 3 months ago | 6 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ialacol
-
Cloud Native Workflow for *Private* AI Apps
```yaml
# This is the configuration file for DevSpace
#
# devspace use namespace private-ai  # suggested: use a namespace instead of the default namespace
# devspace deploy                    # deploy the skeleton of the app and the dependencies (ialacol)
# devspace dev                       # start syncing files to the container
# devspace purge                     # to clean up
version: v2beta1
deployments:
  # This is the manifest for our private app deployment.
  # The app will be in "sleep mode" after `devspace deploy`, and start when we
  # begin syncing files to the container with `devspace dev`.
  private-ai-app:
    helm:
      chart:
        # We are deploying the so-called Component Chart: https://devspace.sh/component-chart/docs
        name: component-chart
        repo: https://charts.devspace.sh
      values:
        containers:
          - image: ghcr.io/loft-sh/devspace-containers/python:3-alpine
            command:
              - "sleep"
            args:
              - "99999"
        service:
          ports:
            - port: 8000
        labels:
          app.kubernetes.io/name: private-ai-app
  ialacol:
    helm:
      # The backend for the AI app; we are using ialacol https://github.com/chenhunghan/ialacol/
      chart:
        name: ialacol
        repo: https://chenhunghan.github.io/ialacol
      # Overriding values.yaml of the ialacol Helm chart
      values:
        replicas: 1
        deployment:
          image: quay.io/chenhunghan/ialacol:latest
          env:
            # We are using MPT-30B, the most sophisticated model at the moment.
            # If you want to start with something small but mighty, try orca-mini:
            # DEFAULT_MODEL_HG_REPO_ID: TheBloke/orca_mini_3B-GGML
            # DEFAULT_MODEL_FILE: orca-mini-3b.ggmlv3.q4_0.bin
            # MPT-30B
            DEFAULT_MODEL_HG_REPO_ID: TheBloke/mpt-30B-GGML
            DEFAULT_MODEL_FILE: mpt-30b.ggmlv0.q4_1.bin
            DEFAULT_MODEL_META: ""
        # Request more resources if needed
        resources: {}
        # PVC for storing the cache
        cache:
          persistence:
            size: 5Gi
            accessModes:
              - ReadWriteOnce
            storageClass: ~
        cacheMountPath: /app/cache
        # PVC for storing the models
        model:
          persistence:
            size: 20Gi
            accessModes:
              - ReadWriteOnce
            storageClass: ~
        modelMountPath: /app/models
        service:
          type: ClusterIP
          port: 8000
          annotations: {}
        # You might want to use the following to select a node with more CPU and memory;
        # for MPT-30B, we need at least 32GB of memory
        nodeSelector: {}
        tolerations: []
        affinity: {}
```
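Once the release above is running, the app container can talk to the backend over the service port 8000. ialacol exposes an OpenAI-compatible HTTP API, so a chat completion is a POST to `/v1/chat/completions`. A minimal sketch, assuming `kubectl port-forward svc/ialacol 8000:8000` (the `build_chat_request` helper and `BASE_URL` are illustrative, not part of ialacol):

```python
import json
import urllib.request

# Assumes a port-forward to the ialacol service, e.g.:
#   kubectl port-forward svc/ialacol 8000:8000
BASE_URL = "http://localhost:8000"

def build_chat_request(prompt, model="mpt-30b.ggmlv0.q4_1.bin"):
    """Return (url, body) for an OpenAI-style chat completion request.

    The default model name matches DEFAULT_MODEL_FILE in the values above.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{BASE_URL}/v1/chat/completions", json.dumps(body).encode("utf-8")

def chat(prompt):
    """Send the request and return the decoded JSON response."""
    url, body = build_chat_request(prompt)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the API shape follows OpenAI's, the official OpenAI client libraries can also be pointed at this endpoint by overriding their base URL.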
-
Offline AI on GitHub Actions
You might be wondering why running Kubernetes is necessary for this project. This article was actually created during the development of a testing CI for the OSS project ialacol. The goal was to have a basic smoke test that verifies the Helm charts and ensures the endpoint returns a 200 status code. You can find the full source of the testing CI YAML here.
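The smoke check described above boils down to polling the endpoint until it answers with HTTP 200. A minimal sketch of that logic (the function name, path, and timeout values are assumptions, not taken from the actual CI workflow):

```python
import time
import urllib.error
import urllib.request

def wait_for_200(url, timeout_s=60.0, interval_s=1.0):
    """Poll GET `url` until it returns HTTP 200.

    Returns True on the first 200 response, False if the deadline passes.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet; retry after a short sleep
        time.sleep(interval_s)
    return False
```

In a CI job this would run after the Helm install, against the port-forwarded service, failing the workflow if it returns False.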
-
Containerized AI before Apocalypse
We are deploying a Helm release `orca-mini-3b` using the Helm chart ialacol.
- Deploy private AI to cluster
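The same deployment can be done with plain Helm. A minimal sketch, assuming the chart repository URL shown earlier, a release name of `orca-mini-3b`, and a local `values.yaml` override file (a placeholder for model settings such as `DEFAULT_MODEL_HG_REPO_ID`):

```shell
# Add the ialacol chart repository and refresh the local index.
helm repo add ialacol https://chenhunghan.github.io/ialacol
helm repo update

# Install a release named orca-mini-3b; requires a configured kubectl
# context, and values.yaml is your (hypothetical) override file.
helm install orca-mini-3b ialacol/ialacol -f values.yaml
```

After the pods are ready, `kubectl port-forward` can expose the service locally for testing.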
Pontus
What are some alternatives?
langstream - LangStream. Event-Driven Developer Platform for Building and Running LLM AI Apps. Powered by Kubernetes and Kafka.
INSIGHT - INSIGHT is an autonomous AI that can do medical research!
dify - Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
summarizepaper - An AI-powered arXiv paper summarization website with a virtual assistant for answering questions.
searchGPT - Grounded search engine (i.e. with source reference) based on LLM / ChatGPT / OpenAI API. It supports web search, file content search etc.
llmflows - LLMFlows - Simple, Explicit and Transparent LLM Apps
IncognitoPilot - An AI code interpreter for sensitive data, powered by GPT-4 or Code Llama / Llama 2.
oss-fuzz-gen - LLM powered fuzzing via OSS-Fuzz.