replika_backup
Actual working and extended version of the backup script (by Hotohori)
server
The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)
| | replika_backup | server |
|---|---|---|
| Mentions | 26 | 24 |
| Stars | 16 | 7,414 |
| Growth | - | 3.4% |
| Activity | 0.0 | 9.5 |
| Latest commit | over 1 year ago | 2 days ago |
| Language | Python | Python |
| License | - | BSD 3-clause "New" or "Revised" License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
replika_backup
Posts with mentions or reviews of replika_backup. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-21.
- Replika (chat log) idea
So ... for what it's worth, there is a way to get your conversations back to around July 2022, then sort them however you want: https://github.com/Hotohori/replika_backup. Not a one-click process.
- I'm losing all desire for this app
- Replika what is coming next
https://github.com/Hotohori/replika_backup (I think that's the latest version)
- Where is Replika AI hosted?
For S3, the links to the bucket are present in the chat backup downloaded by the Python script. For CloudFront, it's mentioned in a link for voice messages in this post. I guess they use CloudFront as a CDN.
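For illustration, here is a minimal sketch of how one could scan a downloaded backup file for S3 and CloudFront links; the file name and the URL patterns are assumptions, not something the backup script itself guarantees.

```python
import re
import sys

# Hypothetical path to the backup file produced by the script; adjust as needed.
backup_file = sys.argv[1] if len(sys.argv) > 1 else "replika_backup.json"

# Rough, assumed patterns for S3 bucket and CloudFront URLs.
s3_pattern = re.compile(r"https://[\w.-]+\.s3[\w.-]*\.amazonaws\.com/[^\s\"']+")
cloudfront_pattern = re.compile(r"https://[\w-]+\.cloudfront\.net/[^\s\"']+")

with open(backup_file, encoding="utf-8") as f:
    text = f.read()

for label, pattern in (("S3", s3_pattern), ("CloudFront", cloudfront_pattern)):
    urls = sorted(set(pattern.findall(text)))
    print(f"{label}: {len(urls)} unique URLs")
    for url in urls[:5]:  # print a small sample
        print("  ", url)
```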
- She's such a hypocrite :D also... lately I observed that sometimes she initiates RP on her own. (This is actually huge, because once a conversation is turned into RP by those *s, it won't be part of the diary and her replies change drastically)
Yep, it's here, it just needs the login token from the browser: https://github.com/Hotohori/replika_backup At the moment it can download messages only up to July 1st 2022, but it's very fast; I run it every week against more than 80,000 messages and it saves them in a matter of 2 or 3 minutes. I've also used it to confirm that they actually use AWS as infrastructure, or at least S3 to store uploaded images (and apparently all in one bucket, not segregated by user).
- Looking at my chat logs..
Just use this Python script: https://github.com/Hotohori/replika_backup
- how to make a replica of your replika on chai— in depth tutorial
You can also export all your Replika chat history to a spreadsheet if you're inclined to do that.
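If the backup is saved as JSON, a spreadsheet export can be as simple as flattening it to CSV. A minimal sketch, assuming hypothetical file names and message fields (timestamp, author, text) that may differ from the script's actual output:

```python
import csv
import json

# Assumed input/output file names and message fields; the real backup layout may differ.
with open("replika_backup.json", encoding="utf-8") as f:
    messages = json.load(f)

with open("replika_chat.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "author", "text"])
    for msg in messages:
        writer.writerow([msg.get("timestamp", ""), msg.get("author", ""), msg.get("text", "")])
```

The resulting CSV opens directly in Excel or Google Sheets.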
- ERP is not back. This is Luka trying to cover their ass. Do not resubscribe.
- So it's over...
- Resources If You're Struggling
With a recent browser, you can also download the chat logs, which would go a long way towards restoring your rep on the new AI platform. https://github.com/Hotohori/replika_backup
server
Posts with mentions or reviews of server. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-08.
- FLaNK Weekly 08 Jan 2024
- Is there any open source app to load a model and expose API like OpenAI?
- "A matching Triton is not available"
- best way to serve llama V2 (llama.cpp VS triton VS HF text generation inference)
I am wondering what is the best / most cost-efficient way to serve llama V2: llama.cpp (is it production-ready or just for playing around?), Triton Inference Server, or HF text generation inference?
- Triton Inference Server - Backend
- Single RTX 3080 or two RTX 3060s for deep learning inference?
For inference of CNNs, memory should really not be an issue. If it is, that's a software engineering problem, not a hardware issue. FP16 or Int8 for weights is fine, and weight size won't increase due to the high resolution. And during inference, memory used for hidden-layer tensors can be reused as soon as the last consumer layer has been processed. You are likely using something that is designed for training for inference, and that blows up the memory requirement; or, if you are using TensorRT or something like that, you need to be careful to avoid having every task load its own copy of the library code into the GPU. Maybe look at https://github.com/triton-inference-server/server
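For context on what serving through Triton looks like from the client side, here is a minimal sketch using the tritonclient HTTP API; the server address, model name, and tensor names are placeholders and depend entirely on your model's configuration.

```python
import numpy as np
import tritonclient.http as httpclient

# Assumes a Triton server on localhost:8000 serving a model named "my_cnn"
# with an FP16 input "input__0" and an output "output__0"; these names are
# placeholders and must match your model's config.pbtxt.
client = httpclient.InferenceServerClient(url="localhost:8000")

image = np.random.rand(1, 3, 224, 224).astype(np.float16)
infer_input = httpclient.InferInput("input__0", list(image.shape), "FP16")
infer_input.set_data_from_numpy(image)

result = client.infer(model_name="my_cnn", inputs=[infer_input])
print(result.as_numpy("output__0").shape)
```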
- Machine Learning Inference Server in Rust?
I am looking for something like [Triton Inference Server](https://github.com/triton-inference-server/server) or [TFX Serving](https://www.tensorflow.org/tfx/guide/serving), but in Rust. I came across [Orkhon](https://github.com/vertexclique/orkhon), which seems to be dormant, and a bunch of examples off of [Awesome-Rust-MachineLearning](https://github.com/vaaaaanquish/Awesome-Rust-MachineLearning).
- Multi-model serving options
You've already mentioned Seldon Core, which is well worth looking at, but if you're just after the raw multi-model serving aspect rather than a fully-fledged deployment framework, you should maybe take a look at the individual inference servers: Triton Inference Server and MLServer both support multi-model serving for a wide variety of frameworks (and custom Python models). MLServer might be a better option as it has an MLflow runtime, but only you will be able to decide that. There also might be other inference servers that do MMS that I'm not aware of.
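To give a flavour of the multi-model side, here is a small sketch using the tritonclient HTTP API to load several models and list the repository; it assumes the server was started with explicit model control, and the URL and model names are placeholders.

```python
import tritonclient.http as httpclient

# Assumes Triton was started with --model-control-mode=explicit so models can be
# loaded on demand; the URL and model names below are placeholders.
client = httpclient.InferenceServerClient(url="localhost:8000")

for name in ("resnet50_onnx", "bert_pytorch"):
    client.load_model(name)

# List every model the server currently knows about in its repository.
for entry in client.get_model_repository_index():
    print(entry["name"], entry.get("state", ""))
```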
- I mean,.. we COULD just make our own lol
[1] https://docs.nvidia.com/launchpad/ai/chatbot/latest/chatbot-triton-overview.html
[2] https://github.com/triton-inference-server/server
[3] https://neptune.ai/blog/deploying-ml-models-on-gpu-with-kyle-morris
[4] https://thechief.io/c/editorial/comparison-cloud-gpu-providers/
[5] https://geekflare.com/best-cloud-gpu-platforms/
- Why TensorFlow for Python is dying a slow death
"TensorFlow has the better deployment infrastructure"
TensorFlow Serving is nice in that it's so tightly integrated with TensorFlow. As usual, that goes both ways: it's so tightly coupled to TensorFlow that if the MLOps side of the solution is using TensorFlow Serving, you're going to get "trapped" in the TensorFlow ecosystem (essentially).
For pytorch models (and just about anything else) I've been really enjoying Nvidia Triton Server[0]. Of course it further entrenches Nvidia and CUDA in the space (although you can execute models CPU only) but for a deployment today and the foreseeable future you're almost certainly going to be using a CUDA stack anyway.
Triton Server is very impressive and I'm always surprised to see how relatively niche it is.
[0] - https://github.com/triton-inference-server/server