colab-ssh
Connect to Google Colab using SSH (by WassimBenzarti)
mixtral-offloading
Run Mixtral-8x7B models in Colab or consumer desktops (by dvmazur)
|  | colab-ssh | mixtral-offloading |
| --- | --- | --- |
| Mentions | 1 | 3 |
| Stars | 945 | 2,235 |
| Growth | - | - |
| Activity | 1.8 | 8.7 |
| Last commit | almost 2 years ago | 24 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
colab-ssh
Posts with mentions or reviews of colab-ssh. We have used some of these posts to build our list of alternatives and similar projects.
- Google Has Banned the Training of Deepfakes in Colab
So I really dislike Jupyter, and I've tried using this[0] before to SSH into Colab and do work in a terminal setup.

You have to be careful to back up your code, since your SSH session goes away before your "kernel" (or whatever your Colab session is called) does. You wouldn't have this worry if you were just using the web interface, since the code is always saved.
[0] https://github.com/WassimBenzarti/colab-ssh
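For reference, the repo's README documents a setup along these lines (a minimal sketch: the `launch_ssh_cloudflared` entry point and its `password` argument are taken from the README, so verify against the current version before relying on it):

```python
# Run inside a Colab notebook cell.
# The leading "!" is Colab/Jupyter shell syntax, not Python.
!pip install colab_ssh --upgrade

from colab_ssh import launch_ssh_cloudflared

# Opens an SSH endpoint tunneled through Cloudflare; the cell
# output shows the hostname to add to your local ~/.ssh/config.
launch_ssh_cloudflared(password="choose-a-strong-password")
```

As the comment above warns, the Colab VM's filesystem is ephemeral, so push your work to a remote (e.g. `git push`) regularly rather than trusting the session to stay up.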
mixtral-offloading
Posts with mentions or reviews of mixtral-offloading. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-27.
- DBRX: A New Open LLM
Waiting for Mixed Quantization with HQQ and MoE Offloading [1]. With that I was able to run Mixtral-8x7B on my 10 GB VRAM RTX 3080... This should work for DBRX and should shave off a ton of the VRAM requirement. [A hedged sketch of the quantize-and-offload idea follows this list.]
1. https://github.com/dvmazur/mixtral-offloading?tab=readme-ov-...
- Mixtral in Colab: Run Mixtral-8x7B models in Colab or consumer desktops
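The linked repo achieves this with mixed HQQ quantization plus on-demand offloading of MoE experts between GPU and CPU; its exact API lives in its demo notebook. As an illustration of the general quantize-and-offload idea only (standard Hugging Face transformers calls, not the repo's custom expert cache), a sketch might look like:

```python
# Sketch of the generic idea: 4-bit quantization + automatic
# CPU offload via transformers/accelerate. This is NOT the
# mixtral-offloading API; that project additionally caches MoE
# experts on the GPU on demand, which is what makes 10 GB of
# VRAM workable. Whether 4-bit weights can actually spill to
# CPU depends on your bitsandbytes/accelerate versions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize weights to 4 bits
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place what fits on the GPU, spill the rest
)

inputs = tokenizer("Explain MoE offloading in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```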
What are some alternatives?
When comparing colab-ssh and mixtral-offloading you can also consider the following projects:
vscode-theme-afterglow-remastered - Afterglow remastered theme for Visual Studio Code
lightning-mlflow-hf - Use QLoRA to tune LLM in PyTorch-Lightning w/ Huggingface + MLflow
secure-wireguard-implementation - A guide on implementing a secure Wireguard server on OVH (or any other Debian VPS) with DNSCrypt, Port Knocking & an SSH-Honeypot