vllm vs AdaptiveCpp

| | vllm | AdaptiveCpp |
|---|---|---|
| Mentions | 31 | 19 |
| Stars | 19,344 | 1,046 |
| Growth | 12.6% | 2.8% |
| Activity | 9.9 | 9.7 |
| Last commit | 1 day ago | 5 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vllm
-
AI leaderboards are no longer useful. It's time to switch to Pareto curves
I guess the root cause of my claim is that OpenAI won't tell us whether or not GPT-3.5 is an MoE model, and I assumed it wasn't. Since GPT-3.5 is clearly nondeterministic at temp=0, I believed the nondeterminism was due to FPU stuff, and this effect was amplified with GPT-4's MoE. But if GPT-3.5 is also MoE then that's just wrong.
What makes this especially tricky is that small models are truly 100% deterministic at temp=0 because the relative likelihoods are too coarse for FPU issues to be a factor. I had thought 3.5 was big enough that some of its token probabilities were too fine-grained for the FPU. But that's probably wrong.
On the other hand, it's not just GPT, there are currently floating-point difficulties in vllm which significantly affect the determinism of any model run on it: https://github.com/vllm-project/vllm/issues/966 Note that a suggested fix is upcasting to float32. So it's possible that GPT-3.5 is using an especially low-precision float and introducing nondeterminism by saving money on compute costs.
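To make the floating-point argument concrete, here's a toy C++ sketch (mine, not from the thread): the same values summed in a different order give a different result in low precision, while carrying the running sum in a wider type (the intuition behind the upcasting fix suggested in that issue) makes the result far less order-sensitive.

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Sum with a float accumulator, forward order.
float sum_forward(const std::vector<float>& v) {
    float s = 0.0f;
    for (float x : v) s += x;
    return s;
}

// Same values, same accumulator type, reversed order.
float sum_reverse(const std::vector<float>& v) {
    float s = 0.0f;
    for (auto it = v.rbegin(); it != v.rend(); ++it) s += *it;
    return s;
}

// "Upcast" accumulation: carry the running sum in double.
double sum_upcast(const std::vector<float>& v) {
    double s = 0.0;
    for (float x : v) s += x;
    return s;
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> d(-1.0f, 1.0f);
    std::vector<float> v(1'000'000);
    for (auto& x : v) x = d(rng);

    // The two float sums typically differ in the low bits, because
    // floating-point addition is not associative...
    std::printf("forward: %.9g\nreverse: %.9g\n",
                sum_forward(v), sum_reverse(v));
    // ...while the wider accumulator is far less sensitive to order.
    std::printf("upcast:  %.17g\n", sum_upcast(v));
}
```

In an LLM, batching and kernel scheduling change the reduction order between runs, which can be enough to occasionally flip which token has the highest probability even at temp=0.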
Sadly I do not have the money[1] to actually run a test to falsify any of this. It seems like this would be a good little research project.
[1] Or the time, or the motivation :) But this stuff is expensive.
-
Mistral AI Launches New 8x22B MoE Model
The easiest is to use vllm (https://github.com/vllm-project/vllm) to run it on a couple of A100s, and you can benchmark it using this library (https://github.com/EleutherAI/lm-evaluation-harness)
- FLaNK AI for 11 March 2024
-
Show HN: We got fine-tuning Mistral-7B to not suck
Great question! Scheduling workloads onto GPUs so that VRAM is utilised efficiently was quite the challenge.
What we found was that the IO latency of loading model weights into VRAM will kill responsiveness if you don't "re-use" sessions (i.e. keep the model weights loaded and run multiple inference sessions over the same loaded weights).
Obviously projects like https://github.com/vllm-project/vllm exist but we needed to build out a scheduler that can run a fleet of GPUs for a matrix of text/image vs inference/finetune sessions.
disclaimer: I work on Helix
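A toy sketch of the session re-use idea described above (mine, not Helix's actual scheduler): key loaded models by id and route repeat requests to weights that are already resident, so the load latency is paid only once.

```cpp
#include <map>
#include <memory>
#include <string>

// Stand-in for a model whose weights are resident in VRAM (hypothetical type).
struct LoadedModel {
    explicit LoadedModel(const std::string& /*id*/) { /* slow: read weights, copy to VRAM */ }
    void infer(const std::string& /*prompt*/) { /* run one inference session */ }
};

class SessionCache {
    std::map<std::string, std::shared_ptr<LoadedModel>> resident_;
public:
    // Returns a loaded model, paying the IO cost only on first use.
    std::shared_ptr<LoadedModel> get(const std::string& model_id) {
        if (auto it = resident_.find(model_id); it != resident_.end())
            return it->second;                            // fast path: re-use
        auto m = std::make_shared<LoadedModel>(model_id); // slow path: load
        resident_.emplace(model_id, m);
        return m;
    }
};

int main() {
    SessionCache cache;
    cache.get("mistral-7b")->infer("first request pays the load cost");
    cache.get("mistral-7b")->infer("second request re-uses the weights");
}
```

A real scheduler additionally has to evict models when VRAM fills and decide which GPU each session lands on; this shows only the re-use fast path.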
-
Mistral CEO confirms 'leak' of new open source AI model nearing GPT4 performance
FYI, vLLM also just added experimental multi-LoRA support: https://github.com/vllm-project/vllm/releases/tag/v0.3.0
Also check out the new prefix caching; I see huge potential for batch processing there!
- VLLM Sacrifices Accuracy for Speed
- Easy, fast, and cheap LLM serving for everyone
- vllm
- Mixtral Expert Parallelism
- Mixtral 8x7B Support
AdaptiveCpp
-
What Every Developer Should Know About GPU Computing
Sapphire Rapids is a CPU.
AMD's primary focus for a GPU software ecosystem these days seems to be implementing CUDA with s/cuda/hip, so AMD directly supports and encourages running GPU software written in CUDA on AMD GPUs.
The only implementation of SYCL on AMD GPUs that I can find is a hobby project that apparently is not allowed to use either the 'hip' or 'sycl' names. https://github.com/AdaptiveCpp/AdaptiveCpp
-
AMD May Get Across the CUDA Moat
Not natively, but AdaptiveCpp (previously hipSYCL, then OpenSYCL) has a single-source, single-compiler-pass design, where they basically store LLVM IR as an intermediate representation.
https://github.com/AdaptiveCpp/AdaptiveCpp/blob/develop/doc/...
The performance penalty was within a few percent, at least according to the paper (figures 9 and 10)
-
Offloading standard C++ PSTL to Intel, NVIDIA and AMD GPUs with AdaptiveCpp
AdaptiveCpp (formerly known as hipSYCL) is an independent, open source, clang-based heterogeneous C++ compiler project. I thought some of you might be interested in knowing that we recently added support for offloading standard C++ parallel STL algorithms to GPUs from all major vendors. E.g.:
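A minimal sketch of the sort of code this covers: plain ISO C++ with a parallel execution policy, no SYCL or vendor API in sight. (The `acpp --acpp-stdpar` invocation is my reading of AdaptiveCpp's stdpar docs, so treat it as an assumption.)

```cpp
// SAXPY written as a standard C++ parallel algorithm. Compiled with
// something like `acpp --acpp-stdpar -O3 -o saxpy saxpy.cpp`, AdaptiveCpp
// can offload this to an Intel, NVIDIA or AMD GPU.
#include <algorithm>
#include <cstdio>
#include <execution>
#include <vector>

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);

    // The par_unseq policy is what makes the algorithm eligible for offload.
    std::transform(std::execution::par_unseq,
                   x.begin(), x.end(), y.begin(), y.begin(),
                   [](float xi, float yi) { return 2.0f * xi + yi; });

    std::printf("y[0] = %g\n", y[0]); // expect 4
}
```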
-
AMD's HIPRT Working Its Way To Blender With ~25% Faster Rendering
In fact AdaptiveCpp was initially called hipSYCL because it was based on AMD's ROCm/HIP. AMD had hipSYCL code running on the Frontier supercomputer at least four years ago and continues to support it.
-
hipSYCL can now generate a binary that runs on any Intel/NVIDIA/AMD GPU - in a single compiler pass. It is now the first single-pass SYCL compiler, and the first with unified code representation across backends.
Apple Silicon support through Metal is something that is actively discussed in hipSYCL. See https://github.com/illuhad/hipSYCL/issues/864 https://github.com/illuhad/hipSYCL/issues/460 (loooong discussion)
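For context, this is what a complete SYCL program looks like; the single-pass claim is that one compile of code like this (roughly `syclcc --hipsycl-targets=generic vadd.cpp`; the generic target name is my assumption from the docs) yields a binary usable on Intel, NVIDIA and AMD GPUs.

```cpp
// Minimal SYCL vector add.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    sycl::queue q; // the runtime picks a device when the binary runs

    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
    {
        // Buffers hand the data to the runtime for the scope below.
        sycl::buffer<float> ba(a.data(), sycl::range<1>{a.size()});
        sycl::buffer<float> bb(b.data(), sycl::range<1>{b.size()});
        sycl::buffer<float> bc(c.data(), sycl::range<1>{c.size()});

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(ba, h, sycl::read_only);
            sycl::accessor B(bb, h, sycl::read_only);
            sycl::accessor C(bc, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(1024),
                           [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    } // results synchronize back into c here

    std::cout << "c[0] = " << c[0] << "\n"; // expect 3
}
```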
-
Bringing Nvidia® and AMD support to oneAPI
But really, the DPC++ part of oneAPI (which is many APIs) is just SYCL + extensions, and there are several other SYCL implementations which have already featured CUDA and HIP (AMD) support for a long time. The most popular and widely used is hipSYCL, which we've been using in an HPC context on NV hardware for over 4 years now.
-
Intel oneAPI 2023 Released - AMD & NVIDIA Plugins Available
Unfortunately, the AMD and Nvidia plugins are proprietary. AMD users are probably better served with hipSYCL, if they somehow find an application using SYCL...
-
There is framework for everything.
Also, you might want to take a look at an implementation like hipSYCL :)
-
The Next Platform: "Intel Takes The SYCL To Nvidia's CUDA With Migration Tool"
Yup. SYCL is the future: https://github.com/illuhad/hipSYCL
-
Phoronix: "Intel's Vulkan Linux Driver Adds Experimental Mesh Shader Support For DG2/Alchemist"
ROCm is completely independent from these. It's a compute stack containing an OpenCL implementation for Radeon GPUs, plus a CUDA-like language called HIP which can be compiled either to device code for Radeon GPUs or to PTX to work with Nvidia GPUs. However, some researchers also created hipSYCL, which allows SYCL to run atop HIP; you can think of it like DXVK: the program targets the DirectX/SYCL API, and DXVK/hipSYCL translates it to Vulkan/HIP (with one difference: DXVK does the conversion at runtime, while hipSYCL does it at compile time).
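To make the "CUDA-like" point concrete, here is a minimal HIP kernel (my sketch, not from the comment). The source is structured exactly like CUDA, and hipcc compiles it to Radeon device code or, with the NVIDIA backend, to PTX.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// A CUDA-style kernel: same __global__ qualifier and thread-index model.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    hipMalloc(&x, n * sizeof(float));
    hipMalloc(&y, n * sizeof(float));
    hipMemset(x, 0, n * sizeof(float));
    hipMemset(y, 0, n * sizeof(float));

    // hipLaunchKernelGGL is the portable spelling of CUDA's
    // <<<grid, block>>> launch syntax.
    hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0,
                       n, 2.0f, x, y);
    hipDeviceSynchronize();

    hipFree(x);
    hipFree(y);
    std::printf("done\n");
}
```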
What are some alternatives?
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
CTranslate2 - Fast inference engine for Transformer models
HIP-CPU - An implementation of HIP that works on CPUs, across OSes.
lmdeploy - LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
triSYCL - Generic system-wide modern C++ for heterogeneous platforms with SYCL from Khronos Group
Llama-2-Onnx
HIP - HIP: C++ Heterogeneous-Compute Interface for Portability
tritony - Tiny configuration for Triton Inference Server
cuda-api-wrappers - Thin C++-flavored header-only wrappers for core CUDA APIs: Runtime, Driver, NVRTC, NVTX.
faster-whisper - Faster Whisper transcription with CTranslate2
cuda_memtest - Fork of CUDA GPU memtest :eyeglasses: