ubicloud vs vllm

| | ubicloud | vllm |
|---|---|---|
| Mentions | 16 | 32 |
| Stars | 3,146 | 20,742 |
| Growth | 3.5% | 10.5% |
| Activity | 9.9 | 9.9 |
| Last commit | 1 day ago | 7 days ago |
| Language | Ruby | Python |
| License | GNU Affero General Public License v3.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ubicloud
- FLaNK AI for 11 March 2024
- Show HN: Open-source x64 and Arm GitHub runners. Reduces GitHub Actions bill 10x
The docs still say the Elastic license is used, but looking at https://github.com/ubicloud/ubicloud/blob/main/LICENSE it looks like the project might have switched to the GNU Affero General Public License v3.0 in the last day.
- GitHub - ubicloud/ubicloud: Open, free, and portable cloud. Elastic compute, block storage (non replicated), and virtual networking services in public alpha.
- Ask HN: How does your company balance test coverage and deploy speed?
At Ubicloud, we mandate 100% line and branch coverage on every PR (https://github.com/ubicloud/ubicloud). We also have an E2E test suite that we run periodically and with every commit. We don't feel that our tests slow us down; they actually make us faster, since we have higher trust in each payload and can safely skip many manual checks that would otherwise be needed.
- Ubicloud – open, free and portable cloud
> Taken from here: https://ubicloud.com/
Am I the only one getting a certificate error browsing there?
- Ask HN: Thoughts about Elastic V2, SSPL, or mixed software licenses?
Link to our project: https://github.com/ubicloud/ubicloud
We’re choosing Elastic V2 for three reasons: (1) We’re planning to monetize through a managed service and we’d like the license to support that, (2) Later if we change our mind, we think it’s easier on our users if we go from a restrictive license to a more permissive one, and (3) The Elastic V2 license is much simpler than its cousin, Server Side Public License (SSPL).
That said, Elastic V2 is a new license and doesn't seem to be as popular as SSPL. Also, some projects out there mix and match multiple licenses in their repo to be able to call themselves open source.
Any insights / feedback on Elastic V2 or software licenses in general?
- Attribute-Based Access Control (ABAC) Implementation in 130 Lines of Code
vllm
- Best LLM Inference Engines and Servers to Deploy LLMs in Production
GitHub repository: https://github.com/vllm-project/vllm
- AI leaderboards are no longer useful. It's time to switch to Pareto curves
I guess the root cause of my claim is that OpenAI won't tell us whether or not GPT-3.5 is an MoE model, and I assumed it wasn't. Since GPT-3.5 is clearly nondeterministic at temp=0, I believed the nondeterminism was due to FPU stuff, and this effect was amplified with GPT-4's MoE. But if GPT-3.5 is also MoE then that's just wrong.
What makes this especially tricky is that small models are truly 100% deterministic at temp=0 because the relative likelihoods are too coarse for FPU issues to be a factor. I had thought 3.5 was big enough that some of its token probabilities were too fine-grained for the FPU. But that's probably wrong.
On the other hand, it's not just GPT: there are currently floating-point difficulties in vLLM that significantly affect the determinism of any model run on it: https://github.com/vllm-project/vllm/issues/966 Note that a suggested fix is upcasting to float32 (a rough numeric sketch of the effect follows below). So it's possible that GPT-3.5 is using an especially low-precision float and introducing nondeterminism by saving money on compute costs.
Sadly I do not have the money[1] to actually run a test to falsify any of this. It seems like this would be a good little research project.
[1] Or the time, or the motivation :) But this stuff is expensive.
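Not vLLM's actual kernels, just a minimal NumPy sketch of the effect the comment above describes: reducing the same numbers in a different order gives slightly different answers at low precision, and when two token logits are nearly tied that gap can flip the argmax even at temp=0. The array size and values here are arbitrary.

```python
# Minimal sketch (not vLLM code): reduction order matters more at low precision.
import numpy as np

rng = np.random.default_rng(0)
vals = rng.standard_normal(4096).astype(np.float16)   # pretend these are partial logits
perm = rng.permutation(vals.size)                      # a second, shuffled reduction order

# Same numbers, two reduction orders, as a GPU kernel with nondeterministic
# scheduling might produce.
fp16_gap = float(vals.sum()) - float(vals[perm].sum())

# The fix suggested in vllm issue #966: upcast to float32 before reducing.
fp32_gap = float(vals.astype(np.float32).sum()) - float(vals[perm].astype(np.float32).sum())

print(f"float16 reduction gap: {fp16_gap}")   # typically nonzero
print(f"float32 reduction gap: {fp32_gap}")   # typically far smaller, often zero
```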
- Mistral AI Launches New 8x22B MoE Model
The easiest is to use vllm (https://github.com/vllm-project/vllm) to run it on a couple of A100s (rough sketch below), and you can benchmark it using this library (https://github.com/EleutherAI/lm-evaluation-harness)
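A rough sketch of that workflow using vLLM's offline Python API; the model id, GPU count, and prompt are placeholders, and the lm-evaluation-harness invocation is only suggested in the comments since its exact flags vary by version.

```python
# Hedged sketch: run Mixtral with vLLM sharded across several GPUs.
# Model id and tensor_parallel_size are assumptions for illustration.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",  # assumed Hugging Face repo id
    tensor_parallel_size=8,                         # shard weights across 8 GPUs
)

params = SamplingParams(temperature=0.0, max_tokens=200)
outputs = llm.generate(["Explain mixture-of-experts routing in one paragraph."], params)
print(outputs[0].outputs[0].text)

# For benchmarking, lm-evaluation-harness ships a vLLM backend; roughly:
#   lm_eval --model vllm --model_args pretrained=mistralai/Mixtral-8x22B-Instruct-v0.1 --tasks hellaswag
# (exact flags depend on the harness version).
```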
- FLaNK AI for 11 March 2024
- Show HN: We got fine-tuning Mistral-7B to not suck
Great question! Scheduling workloads onto GPUs in a way that utilises VRAM efficiently was quite the challenge.
What we found is that the I/O latency of loading model weights into VRAM will kill responsiveness if you don't "re-use" sessions (i.e. keep the model weights loaded and run multiple inference sessions over the same loaded weights).
Obviously projects like https://github.com/vllm-project/vllm exist (the sketch after this comment shows the session-reuse idea with plain vLLM), but we needed to build out a scheduler that can run a fleet of GPUs for a matrix of text/image vs inference/finetune sessions.
disclaimer: I work on Helix
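This isn't Helix's scheduler, just a minimal sketch, using plain vLLM, of the session-reuse point above: pay the weight-loading cost once, then run many inference calls against the already-resident weights. The model id is a placeholder.

```python
# Minimal sketch of session re-use with vLLM (model id is a placeholder).
from vllm import LLM, SamplingParams

# Slow part: weights are read from disk and loaded into VRAM exactly once.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
params = SamplingParams(max_tokens=128)

# Fast part: each request reuses the resident weights, with no reload in between.
for prompt in ["First request", "Second request", "Third request"]:
    out = llm.generate([prompt], params)
    print(out[0].outputs[0].text)
```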
- Mistral CEO confirms 'leak' of new open source AI model nearing GPT4 performance
FYI, vLLM also just added experimental multi-LoRA support: https://github.com/vllm-project/vllm/releases/tag/v0.3.0 (rough usage sketch below).
Also check out the new prefix caching; I see huge potential for batch-processing purposes there!
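A hedged sketch of what the multi-LoRA feature looks like from the offline API, based on the vLLM 0.3.0 release linked above; the base model, adapter names, and paths are placeholders, and the import path may differ in other versions.

```python
# Hedged sketch of vLLM multi-LoRA (base model id and adapter paths are placeholders).
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# enable_lora tells the engine to serve LoRA adapters on top of the base model.
llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)
params = SamplingParams(max_tokens=64)

# Different requests can target different adapters over the same resident base weights.
sql_out = llm.generate(
    ["Translate to SQL: count users per country"], params,
    lora_request=LoRARequest("sql_adapter", 1, "/path/to/sql_lora"),
)
chat_out = llm.generate(
    ["Summarize this support ticket: ..."], params,
    lora_request=LoRARequest("chat_adapter", 2, "/path/to/chat_lora"),
)
print(sql_out[0].outputs[0].text)
print(chat_out[0].outputs[0].text)
```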
- VLLM Sacrifices Accuracy for Speed
- Easy, fast, and cheap LLM serving for everyone
- vllm
- Mixtral Expert Parallelism
What are some alternatives?
manageiq - ManageIQ Open-Source Management Platform
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
fog-azure-rm - Fog for Azure Resource Manager
CTranslate2 - Fast inference engine for Transformer models
cloudfront-signer - Ruby gem for signing AWS CloudFront private content URLs and streaming paths.
lmdeploy - LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
AWS SDK for Ruby - The official AWS SDK for Ruby.
Llama-2-Onnx
forem - For empowering community 🌱
tritony - Tiny configuration for Triton Inference Server
homebrew-portable-ruby - 🚗 Versions of Ruby that can be installed and run from anywhere on the filesystem.
faster-whisper - Faster Whisper transcription with CTranslate2