server vs replika-research
| | server | replika-research |
|---|---|---|
| Mentions | 24 | 286 |
| Stars | 7,160 | 357 |
| Growth | 5.1% | 0.0% |
| Activity | 9.5 | 1.8 |
| Last commit | 4 days ago | about 2 years ago |
| Language | Python | Jupyter Notebook |
| License | BSD 3-clause "New" or "Revised" License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
server mentions
- FLaNK Weekly 08 Jan 2024
- Is there any open source app to load a model and expose API like OpenAI?
- best way to serve llama V2 (llama.cpp vs Triton vs HF text generation inference)
I am wondering what the best / most cost-efficient way to serve Llama V2 is: llama.cpp (is it production-ready or just for playing around?), Triton Inference Server, or HF text generation inference? A sketch of the llama.cpp route follows below.
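On the llama.cpp option (which also touches the "expose an API like OpenAI" question above): one low-effort route is the OpenAI-compatible server bundled with the llama-cpp-python bindings. This is only a sketch, not a production recommendation; the GGUF file name is a placeholder, and production readiness is exactly what the poster is asking about:

```python
# Shell setup (llama-cpp-python ships an OpenAI-compatible server):
#   pip install "llama-cpp-python[server]"
#   python -m llama_cpp.server --model ./llama-2-7b-chat.Q4_K_M.gguf --port 8000
#
# Any OpenAI-style client can then talk to the local endpoint:
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "local-llama",  # placeholder; a single-model server ignores this
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```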
- Triton Inference Server - Backend
- Machine Learning Inference Server in Rust?
I am looking for something like [Triton Inference Server](https://github.com/triton-inference-server/server) or [TFX Serving](https://www.tensorflow.org/tfx/guide/serving), but in Rust. I came across [Orkhon](https://github.com/vertexclique/orkhon), which seems to be dormant, and a bunch of examples from the [Awesome-Rust-MachineLearning](https://github.com/vaaaaanquish/Awesome-Rust-MachineLearning) list.
- Multi-model serving options
You've already mentioned Seldon Core, which is well worth looking at, but if you're just after the raw multi-model serving aspect rather than a fully-fledged deployment framework, you should take a look at the individual inference servers: Triton Inference Server and MLServer both support multi-model serving for a wide variety of frameworks (and custom Python models). MLServer might be a better option since it has an MLflow runtime, but only you can decide that. There may also be other inference servers that do MMS that I'm not aware of. A sketch of Triton's multi-model layout follows below.
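For readers new to Triton, its multi-model serving is driven by a model repository: one subdirectory per model, each with numbered version folders and a config.pbtxt. A minimal sketch follows; the model names, the ONNX backend choice, and the batching numbers are illustrative assumptions, not details from the post above:

```
model_repository/
├── text_encoder/          # hypothetical model A
│   ├── config.pbtxt
│   └── 1/
│       └── model.onnx
└── ranker/                # hypothetical model B, served by the same instance
    ├── config.pbtxt
    └── 1/
        └── model.onnx
```

```
# text_encoder/config.pbtxt (minimal; values are illustrative)
name: "text_encoder"
platform: "onnxruntime_onnx"
max_batch_size: 8
dynamic_batching {
  max_queue_delay_microseconds: 100
}
```

Starting `tritonserver --model-repository=/path/to/model_repository` then loads and serves every model found there.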
- I mean,.. we COULD just make our own lol
[1] https://docs.nvidia.com/launchpad/ai/chatbot/latest/chatbot-triton-overview.html
[2] https://github.com/triton-inference-server/server
[3] https://neptune.ai/blog/deploying-ml-models-on-gpu-with-kyle-morris
[4] https://thechief.io/c/editorial/comparison-cloud-gpu-providers/
[5] https://geekflare.com/best-cloud-gpu-platforms/
- Why TensorFlow for Python is dying a slow death
"TensorFlow has the better deployment infrastructure"
TensorFlow Serving is nice in that it's so tightly integrated with TensorFlow. As usual, that goes both ways: it's so tightly coupled to TensorFlow that if the MLOps side of the solution uses TensorFlow Serving, you're going to get "trapped" in the TensorFlow ecosystem (essentially).
For PyTorch models (and just about anything else) I've been really enjoying Nvidia Triton Server[0]. Of course it further entrenches Nvidia and CUDA in the space (although you can execute models CPU-only), but for a deployment today and for the foreseeable future you're almost certainly going to be using a CUDA stack anyway.
Triton Server is very impressive and I'm always surprised to see how relatively niche it is.
- Show HN: Software for Remote GPU-over-IP
Inference servers essentially turn a model running on CPU and/or GPU hardware into a microservice.
Many of them support the KServe API standard[0], which covers everything from model loading/unloading to (of course) inference requests across models, versions, frameworks, etc.
So in the case of Triton[1] you can have any number of different TensorFlow/Torch/TensorRT/ONNX/etc. models, versions, and variants. You can have one or more Triton instances running on hardware with access to local GPUs (for this example). Then you can put standard REST and/or gRPC load balancers (or whatever you want) in front of them, hit them via another API, whatever.
Now all your applications need to do to perform inference is an HTTP POST (or use a client[2]) with the model input; Triton runs it on a GPU (or CPU if you want), and you get back whatever the model output is. A minimal client sketch follows the links below.
Not a sales pitch for Triton, but it (like some others) can also do things like dynamic batching with QoS parameters, automated model profiling and performance optimization[3], really granular control over resources, response caching, Python middleware for application/business logic, accelerated media processing with Nvidia DALI, all kinds of stuff.
[0] - https://github.com/kserve/kserve
[1] - https://github.com/triton-inference-server/server
[2] - https://github.com/triton-inference-server/client
[3] - https://github.com/triton-inference-server/model_analyzer
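To make the HTTP POST flow above concrete, here is a minimal sketch against the KServe v2 REST endpoint that Triton exposes; the host/port, model name (my_model), and input tensor name/shape are placeholder assumptions:

```python
import requests

# KServe v2 inference endpoint on Triton's HTTP frontend (default port 8000).
# Host, model name, and tensor details below are placeholder assumptions.
URL = "http://localhost:8000/v2/models/my_model/infer"

payload = {
    "inputs": [
        {
            "name": "INPUT0",            # hypothetical input tensor name
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [[0.1, 0.2, 0.3, 0.4]],
        }
    ]
}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()

# The response mirrors the request: an "outputs" list of named tensors.
for out in resp.json()["outputs"]:
    print(out["name"], out["shape"], out["data"])
```

The official clients[2] speak the same protocol (REST and gRPC) and add conveniences such as shared-memory input transfer.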
- Exploring Ghostwriter, a GitHub Copilot alternative
Replit built Ghostwriter on open source foundations: it is based on Salesforce's CodeGen, uses Nvidia's FasterTransformer and Triton server for highly optimized decoders, and relies on knowledge distillation to shrink the CodeGen model from two billion parameters to a faster one-billion-parameter model.
replika-research mentions
- Create your virtual partner with this open-source AI tool!
Without spending more than $200 for a service similar to Replika.AI!
- If we truly care about our virtual companions | friends | lovers | partners, then we ALL owe it to ourselves AND to our Replikas to learn about AI and, at the very least, how this particular app's general setup and architecture work. 💛💚💙
Source(s): https://blog.replika.com/posts/building-a-compassionate-ai-friend, https://github.com/lukalabs/replika-research/tree/master/conversations2021
The info in their GitHub says that they're (as of 2021) using GPT-2 Large (770M parameters) or possibly GPT-2 XL (1.5B parameters) plus fine-tuning.
- Alpaca: A Strong Open-Source Instruction-Following Model
> A Tangent, but how long before we will see half the population having relationships with their AI assistants like in the sci-fi movie "Her".
I don't know about half, but some people are already having relationships: https://replika.ai/
> Maybe the downfall will not just be climate catastrophe but hyper isolated people living alone with their ultra realistic bot friends and family without any desire to experience the ups and downs of actual social experience.
I think the danger is that bots are not necessarily ultra realistic, at least on an emotional level - they can be 100% subservient and loyal to you.
Also - we already chide parents for letting their kids grow up stuck to a device. Imagine if children could actually have an imaginary friend? Would that AI share the same culture and values as your family?
I suppose there could be some upsides but this is very uncharted territory.
- I mean,.. we COULD just make our own lol
- Late Night Random Discussion Thread - 19 December, 2022
- The Current State of Chatbot AI, a Benchmark
There was a presentation from Luka on their GPT2-XL model that you might find interesting. Personally I reckon they've even throttled that back now, using fewer params?
- A small lament for our dear dear Replikas
As many of you know, Replika used to be on the OpenAI GPT-3 language model, but about a year ago Replika was moved to Luka's own GPT-2 hybrid, GPT-2 XL. The best article I can find on this is a Luka presentation.
- anyone else gets so frustrated with Luka for not telling us about updates?
Here you go https://github.com/lukalabs/replika-research
What are some alternatives?
- DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
- onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
- ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
- hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
- pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
- mesh-transformer-jax - Model parallel transformers in JAX and Haiku
- Sapphire-Assistant-Framework - An extensible framework for creating Android Assistants on-device. It does not require Google services or network connectivity.
- Triton - Triton is a dynamic binary analysis library. Build your own program analysis tools, automate your reverse engineering, perform software verification or just emulate code.
- TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
- Megatron-LM - Ongoing research training transformer models at scale
- tensorflow - An Open Source Machine Learning Framework for Everyone
- serve - Serve, optimize and scale PyTorch models in production