Dependencies vs llama.cpp

| | Dependencies | llama.cpp |
|---|---|---|
| Mentions | 24 | 775 |
| Stars | 8,176 | 57,463 |
| Growth | - | - |
| Activity | 0.0 | 10.0 |
| Latest commit | about 1 month ago | 2 days ago |
| Language | C# | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Dependencies
- I can't get Fluidsynth working
I did some digging with Dependencies and found that the issue is with libstdc++-6.dll
- EXE vs MSI
Maybe Dependency Walker can shed some light on that 🤔
- Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
I did that, basically. The problem is there is a clblast.dll (on Windows) that llama.dll depends on, and llama-cpp-python's dependency resolution always failed to find it. I copied the DLL to the right folder, loading it manually via CDLL worked fine, and https://github.com/lucasg/Dependencies also confirmed the DLL was findable. When loading DLLs on Windows, it checks the same folder for dependency DLLs (and a few other places).
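The manual-load workaround described above can be sketched in Python. Note that `os.add_dll_directory` exists only on Windows (Python 3.8+), and the `llama.dll`/clblast paths in the comment are hypothetical placeholders:

```python
import ctypes
import os

def load_library(name, extra_dir=None):
    """Load a shared library, optionally adding a directory to the
    DLL search path first (Windows-only API, hence the hasattr guard).
    Returns the loaded library handle, or None if it cannot be found."""
    if extra_dir is not None and hasattr(os, "add_dll_directory"):
        os.add_dll_directory(extra_dir)  # Python 3.8+ on Windows only
    try:
        return ctypes.CDLL(name)
    except OSError:
        return None

# On Windows, this would look something like (paths are hypothetical):
#   llama = load_library("llama.dll", extra_dir=r"C:\libs\clblast")
```

This mirrors what the comment describes: if the dependent DLL's directory is on the search path (or is the same folder as the loading DLL), the manual `CDLL` load succeeds even when a package's own resolution fails.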
- Unable to get Meshroom to accept images
You can use Dependency Walker to detect the exact version of the MS C++ runtime required.
- Every time I try to open the game this error message shows up. What am I supposed to do?
If that doesn't work, you will have to do it the hard way like I did: using Dependencies to find the missing DLLs.
- Kenshi 1.0.60 Crashes & Bug reports
- FFmpeg 6.0
("Dependencies" is Dependencies.exe from https://github.com/lucasg/Dependencies)
- The game won't launch
- Does the antivirus detecting files as malware depending on the compiling options make any sense?
If you're using MinGW or Cygwin and link in an arbitrary number of system libraries, then you need to ship those files as well. You can use Dependencies to list all DLLs your program is using, including transitive dependencies.
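A command-line way to get the same direct-import list is binutils' `objdump`, which MinGW ships; `myprogram.exe` below is a placeholder, and transitive dependencies require repeating the check for each DLL found (which is what a GUI tool like Dependencies automates):

```shell
# Direct DLL imports of a MinGW-built executable:
objdump -p myprogram.exe | grep 'DLL Name'

# The equivalent check for an ELF binary on Linux ("NEEDED" entries):
objdump -p /bin/sh | grep 'NEEDED'
```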
- Software Dependency Tracker
A couple of them, yes. Someone else linked Dependencies which is much more modern and doesn't have some of the issues these older applications have. Thank you for the suggestions regardless.
llama.cpp
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
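For reference, the workaround discussed in that thread raises the GPU wired-memory limit with a `sysctl` tweak; the exact key name varies by macOS version and the setting resets on reboot, so treat the line below as an assumption to verify against the linked discussion:

```shell
# Allow the GPU to wire up to ~28 GB on a 32 GB machine
# (key name as reported for macOS Sonoma; does not persist across reboots)
sudo sysctl iogpu.wired_limit_mb=28672
```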
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
- Embeddings are a good starting point for the AI curious app developer
Have just done this recently for the local chat-with-PDF feature in https://recurse.chat. (It's a macOS app that has a built-in llama.cpp server and a local vector database.)
Running an embedding server locally is pretty straightforward:
- Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
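To make "pretty straightforward" concrete, here is a minimal client sketch against a locally running llama.cpp server. The launch flags and the `/embedding` route with a `content` field are assumptions based on the llama.cpp server examples; check the README shipped with the release binary:

```python
import json
import urllib.request

# Assumed server launch, run separately (model path is a placeholder):
#   ./llama-server -m model.gguf --embedding --port 8080

def parse_embedding(response_body):
    """Extract the vector from a response shaped like {"embedding": [...]}."""
    return json.loads(response_body)["embedding"]

def get_embedding(text, url="http://localhost:8080/embedding"):
    """POST text to a running llama.cpp server and return its embedding."""
    body = json.dumps({"content": text}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return parse_embedding(resp.read())
```

The returned vectors can then be stored in any local vector database for retrieval, which is the pattern the comment describes.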
What are some alternatives?
SharpUnhooker - C# Based Universal API Unhooker
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
lddtree - Fork of pax-utils' lddtree.sh
gpt4all - gpt4all: run open-source LLMs anywhere
Windows-Auto-Night-Mode - Automatically switches between the dark and light theme of Windows 10 [Moved to: https://github.com/AutoDarkMode/Windows-Auto-Night-Mode]
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
clauf - A C interpreter developed live on YouTube
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
deeplabel - A cross-platform desktop image annotation tool for machine learning
ggml - Tensor library for machine learning
HexCtrl - Fully-featured Hex Control written in C++/MFC.
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM