1filellm vs llama.cpp

| | 1filellm | llama.cpp |
|---|---|---|
| Mentions | 6 | 777 |
| Stars | 224 | 57,463 |
| Growth | - | - |
| Activity | 9.0 | 10.0 |
| Latest commit | 11 days ago | 6 days ago |
| Language | Python | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
1filellm
- Show HN: FileKitty – Combine and label text files for LLM prompt contexts
I created something similar, https://github.com/jimmc414/1filellm
It converts papers, repositories, PRs, and web docs into one text file for LLM ingestion.
- The lifecycle of a code AI completion
I created a CLI tool that copies a GitHub or local repo into a single text file for LLM ingestion. It only pulls the file types you specify.
https://github.com/jimmc414/1filellm
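The comment above describes the core idea: walk a repository, keep only files with whitelisted extensions, and concatenate them into one labeled text file for an LLM prompt. Below is a minimal, hypothetical sketch of that approach in Python; it is not the actual 1filellm implementation, and the function and parameter names are illustrative only.

```python
from pathlib import Path

def aggregate_repo(repo_dir: str, extensions: set[str], output_file: str) -> None:
    """Concatenate all files with the given extensions into one labeled text file."""
    repo = Path(repo_dir)
    with open(output_file, "w", encoding="utf-8") as out:
        for path in sorted(repo.rglob("*")):
            if path.is_file() and path.suffix in extensions:
                # Label each file with its relative path so the LLM can tell files apart.
                out.write(f"\n--- {path.relative_to(repo)} ---\n")
                out.write(path.read_text(encoding="utf-8", errors="ignore"))

# Example: pull only Python and Markdown files from a local checkout.
aggregate_repo("./my_repo", {".py", ".md"}, "repo_for_llm.txt")
```

For a remote GitHub repository, the same idea applies after cloning it into a temporary directory.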
- Show HN: Command Line Data Aggregation Tool for LLM Ingestion
- Show HN: Simple CLI to aggregate repos, papers and docs for LLM ingestion
- Code Repo-Prep for LLM Ingestion
- Show HN: GPT Repo Loader – load entire code repos into GPT prompts
Nice. I guess we are all thinking the same things. I created something similar today that lets you choose which file extensions to pull and handles Jupyter notebooks by pulling only text and code. I included a script that strips out superfluous characters and stop words and converts everything to lowercase.
It also works if you supply a local folder of source files instead of a GitHub repo.
https://github.com/jimmc414/onefilerepo
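The comment above also mentions keeping only text and code from Jupyter notebooks, plus an optional normalization pass (strip superfluous characters and stop words, lowercase everything). A rough, hypothetical sketch of those two steps, assuming the standard .ipynb JSON layout; the stop-word list is an illustrative subset, not the tool's actual list:

```python
import json
import re

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in"}  # illustrative subset only

def notebook_to_text(ipynb_path: str) -> str:
    """Keep only markdown and code cell sources from a Jupyter notebook."""
    with open(ipynb_path, encoding="utf-8") as f:
        nb = json.load(f)
    parts = []
    for cell in nb.get("cells", []):
        if cell.get("cell_type") in ("markdown", "code"):
            parts.append("".join(cell.get("source", [])))
    return "\n\n".join(parts)

def normalize(text: str) -> str:
    """Lowercase, drop non-alphanumeric characters, and remove stop words."""
    words = re.findall(r"[a-z0-9_]+", text.lower())
    return " ".join(w for w in words if w not in STOP_WORDS)
```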
llama.cpp
- IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile stuff, then looking at llama.cpp (what Ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF on Hugging Face.
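Once the llama.cpp server is running with a GGUF model downloaded from Hugging Face, it can be queried over HTTP. A minimal sketch, assuming the server is listening on its default local port 8080 and exposes its native /completion endpoint:

```python
import json
import urllib.request

# Assumes a llama.cpp server started locally with a GGUF model,
# listening on the default port 8080.
payload = {
    "prompt": "Explain what a GGUF file is in one sentence.",
    "n_predict": 64,
}
req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```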
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
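As a rough worked example of those limits (hypothetical numbers, using the 1/2 and 3/4 fractions quoted above), which bound the size of GGUF model that fits without the kernel workaround:

```python
def metal_limits(total_ram_gb: float) -> tuple[float, float]:
    """Per-process and overall GPU memory caps implied by the fractions above."""
    return total_ram_gb / 2, total_ram_gb * 3 / 4

per_process, overall = metal_limits(32)
print(f"per-process cap: {per_process} GB, overall GPU cap: {overall} GB")
# per-process cap: 16.0 GB, overall GPU cap: 24.0 GB
```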
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
gpt-repository-loader - Convert code repos into an LLM prompt-friendly format. Mostly built by GPT-4.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
gpt4all - gpt4all: run open-source LLMs anywhere
llama_index - LlamaIndex is a data framework for your LLM applications
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
AutoPR - Run AI-powered workflows over your codebase
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
aidev - AI developer: ask GPT-4 to modify an entire folder full of files
ggml - Tensor library for machine learning
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.