TinyLlama vs AdaptiveCpp

| | TinyLlama | AdaptiveCpp |
|---|---|---|
| Mentions | 14 | 19 |
| Stars | 6,818 | 1,042 |
| Growth | - | 2.4% |
| Activity | 8.7 | 9.7 |
| Latest commit | 18 days ago | 3 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
TinyLlama
-
What are LLMs? An intro into AI, models, tokens, parameters, weights, quantization and more
Small models: Less than ~1B parameters. TinyLlama and tinydolphin are examples of small models.
- FLaNK Stack Weekly 22 January 2024
-
TinyLlama: An Open-Source Small Language Model
GitHub repo with links to the checkpoints: https://github.com/jzhang38/TinyLlama
-
NLP Research in the Era of LLMs
> While LLM projects typically require an exorbitant amount of resources, it is important to remind ourselves that research does not need to assemble full-fledged massively expensive systems in order to have impact.
Check out TinyLlama: https://github.com/jzhang38/TinyLlama
Four research students from the Singapore University of Technology and Design are pretraining a 1.1B Llama model on 3 trillion tokens using a handful of A100s.
They're also providing the source code, training data, and fine-tuned checkpoints for anyone to run.
-
TinyLlama - Any news?
The first one was that the minimum learning rate was mistakenly set to the same value as the maximum learning rate in cosine decay, so the learning rate wasn't decreasing. This was discovered relatively early during training and discussed in this issue: https://github.com/jzhang38/TinyLlama/issues/27
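To see why that mattered, here is a minimal sketch of a standard cosine decay schedule (the parameter values below are made up for illustration). With the minimum learning rate mistakenly set equal to the maximum, the cosine term is multiplied by zero and the learning rate never decreases:

```cpp
#include <cmath>
#include <cstdio>

// Standard cosine decay from max_lr down to min_lr over T steps:
//   lr(t) = min_lr + 0.5 * (max_lr - min_lr) * (1 + cos(pi * t / T))
double cosine_lr(double t, double T, double max_lr, double min_lr) {
    const double pi = std::acos(-1.0);
    return min_lr + 0.5 * (max_lr - min_lr) * (1.0 + std::cos(pi * t / T));
}

int main() {
    // Intended schedule: halfway through training, lr has visibly decayed.
    std::printf("intended lr(T/2) = %g\n", cosine_lr(500, 1000, 4e-4, 4e-5));
    // The bug: min_lr == max_lr collapses the schedule to a constant.
    std::printf("buggy    lr(T/2) = %g\n", cosine_lr(500, 1000, 4e-4, 4e-4));
}
```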
-
Llamafile lets you distribute and run LLMs with a single file
Which smaller model gives good output and works best with this? I am looking to run this on lower-end systems.
I wonder if someone has already tried https://github.com/jzhang38/TinyLlama, could save me some time :)
- FLaNK Stack Weekly for 20 Nov 2023
- New 1.5T token checkpoint of TinyLLaMa got released!
-
What Every Developer Should Know About GPU Computing
I thought I'd share something with my experience with HPC that applies to many areas, especially in the rise of GPUs.
The main bottleneck isn't compute, it is memory. If you go to talks you're gonna see lots of figures like this one[0] (typically also showing disk speeds, which are crazy small).
Compute is increasing so fast that at this point we finish our operations far faster than it takes to save those simulations to disk, or even to create the visualizations and write them out. There's a lot of research going into this, with things like in situ computing (asynchronous operations, often pushing data to a different machine, but requiring infrastructure like flash buffers; see ADIOS[1] as an example).
What I'm getting at here is that we're at a point where we have to think about that IO bottleneck, even for non-high-performance systems. I work in ML now, which we typically think of as compute bound, but in the generative space there are still many places where IO is the bottleneck. This can be loading batches into memory, writing results to disk, or communication between distributed processes. It's one big reason we typically want to maximize memory usage (large batches).
There's a lot of low-hanging fruit in these areas that isn't going to produce generally publishable work but can have high impact. Just look at things like llama.cpp[2], where in the process they've really decreased the compute time and memory load. There are also projects like TinyLlama[3], which is exploring training a 1B model on limited compute and getting pretty good results. But I'll tell you from personal experience, small models and limited-compute experiments don't make for good papers (my most cited work did this and has never been published; it's gotten many rejections for not competing with models 100x its size, but is also quite popular in the general scientific community that works with limited compute). FWIW, companies working on applications do value these things, but such work also tends to get lost in community noise that's hard to parse. I don't know how we can do better as a community at not getting trapped in these hype cycles, because real engineering has a lot of these aspects too, and they should be (but aren't) really good areas for academics to work in. Scale isn't everything in research, and there are a lot of different problems out there that are extremely important but that many are blind to.
And one final comment: there's lots of code that is used over and over that is not remotely optimized and could be >100x faster. You just have to slow down and write good code. The move-fast-and-break-things method is great for getting moving, but the debt compounds. It's just that this debt is less visible, and there's so much money being wasted on bad code (and LLMs are only going to amplify this; they were trained on bad code, after all).
[0] https://drivenets.com/wp-content/uploads/2023/05/blog-networ...
[1] https://github.com/ornladios/ADIOS2
[2] https://github.com/ggerganov/llama.cpp
[3] https://github.com/jzhang38/TinyLlama
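To make the in situ / asynchronous-IO point above concrete, here is a minimal sketch (file names and sizes are made up) of the basic overlap trick: kick off the write of one batch in the background and immediately start computing the next, so the disk and the processor work at the same time:

```cpp
#include <cstdio>
#include <future>
#include <utility>
#include <vector>

// Stand-in for a simulation/inference step producing one batch of results.
std::vector<double> compute_step(int step) {
    return std::vector<double>(1 << 20, static_cast<double>(step));
}

// Blocking write of one batch (stand-in for a real IO library like ADIOS).
void write_batch(const std::vector<double>& batch, int step) {
    char name[64];
    std::snprintf(name, sizeof(name), "step_%03d.bin", step);
    if (FILE* f = std::fopen(name, "wb")) {
        std::fwrite(batch.data(), sizeof(double), batch.size(), f);
        std::fclose(f);
    }
}

int main() {
    std::future<void> pending;  // previous write, possibly still in flight
    for (int step = 0; step < 8; ++step) {
        std::vector<double> batch = compute_step(step);  // overlaps prior write
        if (pending.valid()) pending.wait();             // don't outrun the disk
        pending = std::async(std::launch::async,
                             [b = std::move(batch), step] { write_batch(b, step); });
    }
    if (pending.valid()) pending.wait();                 // drain the last write
}
```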
-
Mistral 7B Paper on ArXiv
As discussed in the original GPT3 paper (https://twitter.com/gneubig/status/1286731711150280705?s=20)
TinyLlama is trying to do that for 1.1B: https://github.com/jzhang38/TinyLlama
As long as we are not at the capacity limit, we will have a few of these 7B beats 13B (or 7B beats 70B) moments.
AdaptiveCpp
-
What Every Developer Should Know About GPU Computing
Sapphire Rapids is a CPU.
AMD's primary focus for a GPU software ecosystem these days seems to be implementing CUDA with s/cuda/hip, so AMD directly supports and encourages running GPU software written in CUDA on AMD GPUs.
The only implementation for sycl on AMD GPUs that I can find is a hobby project that apparently is not allowed to use either the 'hip' or 'sycl' names. https://github.com/AdaptiveCpp/AdaptiveCpp
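For anyone who hasn't seen the s/cuda/hip pattern in practice, here is a minimal HIP vector-add sketch (error handling omitted): it's essentially the CUDA runtime API with the cuda prefix renamed to hip, which is why CUDA code ports over almost mechanically.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// Identical kernel syntax to CUDA.
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    hipMallocManaged(&a, bytes);   // cf. cudaMallocManaged
    hipMallocManaged(&b, bytes);
    hipMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vec_add<<<(n + 255) / 256, 256>>>(a, b, c, n);  // CUDA-style launch works in HIP
    hipDeviceSynchronize();                          // cf. cudaDeviceSynchronize

    std::printf("c[0] = %f\n", c[0]);                // 3.0
    hipFree(a); hipFree(b); hipFree(c);              // cf. cudaFree
}
```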
-
AMD May Get Across the CUDA Moat
Not natively, but AdaptiveCpp (previously hipSYCL, then Open SYCL) has a single-source, single-compiler-pass mode, where they basically store LLVM IR as an intermediate representation.
https://github.com/AdaptiveCpp/AdaptiveCpp/blob/develop/doc/...
The performance penalty was within a few percent, at least according to the paper (figures 9 and 10).
-
Offloading standard C++ PSTL to Intel, NVIDIA and AMD GPUs with AdaptiveCpp
AdaptiveCpp (formerly known as hipSYCL) is an independent, open source, clang-based heterogeneous C++ compiler project. I thought some of you might be interested in knowing that we recently added support to offload standard C++ parallel STL algorithms to GPUs from all major vendors. E.g.:
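(The excerpt cuts off before the example, so here is a minimal sketch of the kind of code being described: plain ISO C++ parallel algorithms with no vendor API in the source. The offload flag shown in the comment is indicative of AdaptiveCpp's stdpar mode; check the project docs for the exact invocation.)

```cpp
// Plain standard C++ -- no SYCL, CUDA or HIP in the source. Built with
// AdaptiveCpp's stdpar offload (roughly: acpp --acpp-stdpar file.cpp),
// these algorithms can execute on a GPU.
#include <algorithm>
#include <cstdio>
#include <execution>
#include <numeric>
#include <vector>

int main() {
    std::vector<float> x(1 << 24, 1.5f);

    // par_unseq permits unordered, vectorized execution -- the form an
    // offloading implementation can map onto GPU threads.
    std::for_each(std::execution::par_unseq, x.begin(), x.end(),
                  [](float& v) { v = v * 2.0f + 1.0f; });

    float sum = std::reduce(std::execution::par_unseq, x.begin(), x.end(), 0.0f);
    std::printf("sum = %f\n", sum);
}
```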
-
AMD's HIPRT Working Its Way To Blender With ~25% Faster Rendering
In fact this SYCL implementation was initially called hipSYCL because it is based on AMD's ROCm/HIP. AMD had hipSYCL code running on the Frontier supercomputer at least four years ago and continues to support it.
-
hipSYCL can now generate a binary that runs on any Intel/NVIDIA/AMD GPU - in a single compiler pass. It is now the first single-pass SYCL compiler, and the first with unified code representation across backends.
Apple Silicon support through Metal is something that is actively discussed in hipSYCL. See https://github.com/illuhad/hipSYCL/issues/864 https://github.com/illuhad/hipSYCL/issues/460 (loooong discussion)
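For reference, a minimal SYCL sketch of the single-pass idea (the generic target flag below is taken from the project's docs; exact spelling may vary by version): the same source, compiled once, yields one binary that runs on whichever GPU is present at runtime.

```cpp
// Compiled once, e.g.: acpp --acpp-targets=generic vec.cpp
// The resulting binary picks an Intel/NVIDIA/AMD device when it runs.
#include <sycl/sycl.hpp>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v(1024, 1);
    sycl::queue q;  // device selected at runtime, not at compile time
    {
        sycl::buffer<int> buf(v.data(), sycl::range<1>(v.size()));
        q.submit([&](sycl::handler& h) {
            sycl::accessor a(buf, h, sycl::read_write);
            h.parallel_for(sycl::range<1>(v.size()),
                           [=](sycl::id<1> i) { a[i] *= 2; });
        });
    }  // buffer destruction waits for the kernel and writes back to v
    std::printf("v[0] = %d\n", v[0]);  // 2
}
```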
-
Bringing Nvidia® and AMD support to oneAPI
But really, the DPC++ part of oneAPI (which is many APIs) is just SYCL + extensions, and there are several other SYCL implementations which have already featured CUDA and Hip (AMD) support for a long time. The most popular and widely-used is hipSYCL, which we've been using in an HPC context on NV hardware for over 4 years now.
-
Intel oneAPI 2023 Released - AMD & NVIDIA Plugins Available
Unfortunately, the AMD and Nvidia plugins are proprietary. AMD users are probably better served with hipSYCL, if they somehow find an application using SYCL...
-
There is a framework for everything.
Also, you might want to take a look at an implementation like hipSYCL :)
-
The Next Platform: "Intel Takes The SYCL To Nvidia's CUDA With Migration Tool"
Yup. SYCL is the future: https://github.com/illuhad/hipSYCL
-
Phoronix: "Intel's Vulkan Linux Driver Adds Experimental Mesh Shader Support For DG2/Alchemist"
ROCm is completely independent from these. It's a compute stack containing an OpenCL implementation for Radeon GPUs, plus a CUDA-like language called HIP which can be compiled either to device code for Radeon GPUs or to PTX to work with Nvidia GPUs. However, some researchers also created hipSYCL, which allows SYCL to run atop HIP; you can think of it like DXVK: the program uses the DirectX/SYCL API, and DXVK/hipSYCL translates it to Vulkan/HIP (with one difference: DXVK does the conversion at runtime, while hipSYCL does it at compile time).
What are some alternatives?
langchain - 🦜🔗 Build context-aware reasoning applications
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
HIP-CPU - An implementation of HIP that works on CPUs, across OSes.
public - A collection of my courses, lectures, articles and presentations
triSYCL - Generic system-wide modern C++ for heterogeneous platforms with SYCL from Khronos Group
llamafile - Distribute and run LLMs with a single file.
HIP - HIP: C++ Heterogeneous-Compute Interface for Portability
ADIOS2 - Next generation of ADIOS developed in the Exascale Computing Program
cuda-api-wrappers - Thin C++-flavored header-only wrappers for core CUDA APIs: Runtime, Driver, NVRTC, NVTX.
airoboros - Customizable implementation of the self-instruct paper.
cuda_memtest - Fork of CUDA GPU memtest :eyeglasses: