askai vs llama.cpp
| | askai | llama.cpp |
|---|---|---|
| Mentions | 1,746 | 766 |
| Stars | 86 | 55,117 |
| Growth | - | - |
| Activity | 10.0 | 9.9 |
| Latest Commit | over 1 year ago | 6 days ago |
| Language | TypeScript | C++ |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
askai
- How to build a custom GPT: Step-by-step tutorial
Go to chat.openai.com and log in
- Chat.openai.com no longer requires login
- Integrating Strapi with ChatGPT and Next.js
In this tutorial, we will learn how to use Strapi, ChatGPT, and Next.js to build an app that displays recipes using AI.
- GPT-4 Turbo with Vision is a step backwards for coding
Maybe I am a bit dim, but how can one choose GPT-4 Turbo? Is this available from https://chat.openai.com/?
- AI Developer Tool Limitations In 2024
With the rise of ChatGPT, Bard/Gemini, GitHub Copilot, Devin, and other AI tools, developers started to fear that AI tooling would replace them. Even though their capabilities are indeed impressive, I don't fear our jobs will go away in 2024.
- Data-driven customer acquisition: Machine Learning applied to Customer Lifetime Value
To illustrate the core concepts of ML and regression analysis, we’ll start with a simple model. ChatGPT (the free version) creates something that works with this prompt:
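The prompt itself is cut off in this excerpt, so as a stand-in, here is a minimal sketch of the kind of single-feature regression model the post describes, with invented numbers (pure Python, no ML libraries):

```python
# Minimal single-feature linear regression for Customer Lifetime Value (CLV).
# All data below is hypothetical; a real model would use historical purchase data.

# (monthly_spend, observed_lifetime_value) pairs for past customers
data = [(20, 240), (35, 460), (50, 610), (65, 820), (80, 990)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Ordinary least squares: slope = cov(x, y) / var(x)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
intercept = mean_y - slope * mean_x

def predict_clv(monthly_spend: float) -> float:
    """Predict lifetime value from monthly spend using the fitted line."""
    return intercept + slope * monthly_spend

print(f"Predicted CLV at $45/month: {predict_clv(45):.0f}")
```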
- From 12th Final Project to an ATM Management System: Leveraging ChatGPT 4 for PDF Analysis
Fast forward to my college years. I found myself at IIIT Delhi, a prestigious tier-1 computer science engineering college. Around the same time, ChatGPT emerged, shaking the world more vigorously than COVID-19. As fate would have it, I gained temporary access to ChatGPT 4, which runs on GPT-4, and curiosity piqued my interest.
- 📊 Obsidian: Nutrition
It is worth mentioning that for my use case, I do not require a high level of precision, so I obtain the values with an AI. I describe the recipe and portions to ChatGPT, and it provides me with a very good estimate of the nutritional information of the meal.
- Exploring the Frontiers of AI: An In-Depth Look at ChatGPT-4
- How to connect ChatGPT to a SQL database for data retrieval and analysis
To be able to work with ChatGPT, head over to chat.openai.com and sign up if you haven't already. If you have already signed up, all you need to do is log in.
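The excerpt stops at signing in, but the pattern the article's title describes (a natural-language question turned into SQL, then executed) can be sketched roughly as follows. This is an illustration, not the article's code: the SQLite schema, prompt wording, and database file are assumptions; the HTTP call targets OpenAI's public chat completions endpoint.

```python
import json, os, sqlite3, urllib.request

# Hypothetical schema; the article's actual database will differ.
SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, created_at TEXT)"

def ask_chatgpt_for_sql(question: str) -> str:
    """Ask the OpenAI chat completions API to translate a question into SQL."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": f"Translate the user's question into a single SQLite "
                            f"query. Schema: {SCHEMA}. Reply with SQL only."},
                {"role": "user", "content": question},
            ],
        }).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"].strip()

conn = sqlite3.connect("shop.db")  # hypothetical database file
sql = ask_chatgpt_for_sql("What were total sales per customer last month?")
print(sql)
print(conn.execute(sql).fetchall())  # in production, validate the SQL before executing
```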
llama.cpp
- Llama.cpp Working on Support for Llama3
- Embeddings are a good starting point for the AI curious app developer
I've just done this recently for the local chat-with-PDF feature in https://recurse.chat (a macOS app with a built-in llama.cpp server and local vector database).
Running an embedding server locally is pretty straightforward:
- Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
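As context for the comment above, querying such a local embedding server can look roughly like this. It assumes the release binary was started with embeddings enabled (e.g. `./server -m model.gguf --embedding`); the endpoint path and response shape may vary between llama.cpp versions:

```python
import json, urllib.request

def embed(text: str, url: str = "http://localhost:8080/embedding") -> list[float]:
    """Request an embedding vector from a locally running llama.cpp server."""
    req = urllib.request.Request(
        url,
        data=json.dumps({"content": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

vec = embed("What does this PDF say about invoices?")
print(len(vec), vec[:4])  # vector dimension depends on the model
```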
- Mixtral 8x22B
- Llama.cpp: Improve CPU prompt eval speed
- Ollama 0.1.32: WizardLM 2, Mixtral 8x22B, macOS CPU/GPU model split
Ah, thanks for this! I can't edit my parent comment that you replied to any longer unfortunately.
As I said, I only compared the contributors graphs [0] and checked for overlaps. But those apparently only go back about a year and list at most 100 contributors, ranked by number of commits.
[0]: https://github.com/ollama/ollama/graphs/contributors and https://github.com/ggerganov/llama.cpp/graphs/contributors
- KodiBot - Local Chatbot App for Desktop
KodiBot is a desktop app that enables users to run their own AI chat assistants locally and offline on Windows, Mac, and Linux. KodiBot is a standalone app and does not require an internet connection or additional dependencies to run local chat assistants. It supports both llama.cpp-compatible models and the OpenAI API.
- Mixture-of-Depths: Dynamically allocating compute in transformers
There are already some implementations out there which attempt to accomplish this!
Here's an example: https://github.com/silphendio/sliced_llama
A gist pertaining to said example: https://gist.github.com/silphendio/535cd9c1821aa1290aa10d587...
Here's a discussion about integrating this capability with ExLlama: https://github.com/turboderp/exllamav2/pull/275
And same as above but for llama.cpp: https://github.com/ggerganov/llama.cpp/issues/4718#issuecomm...
- The lifecycle of a code AI completion
For those who might not be aware of this, there is also an open source project on GitHub called "Twinny" which is an offline Visual Studio Code plugin equivalent to Copilot: https://github.com/rjmacarthy/twinny
It can be used with a number of local model services. Currently, for my setup on an NVIDIA 4090, I'm running both the base and instruct models for deepseek-coder 6.7b, using Q5_K_M-quantized GGUF files (for performance) through the llama.cpp "server", where the base model handles completions and the instruct model handles chat interactions.
llama.cpp: https://github.com/ggerganov/llama.cpp/
deepseek-coder 6.7b base GGUF files: https://huggingface.co/TheBloke/deepseek-coder-6.7B-base-GGU...
deepseek-coder 6.7b instruct GGUF files: https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct...
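A rough sketch of what a completion request against such a setup can look like, using the llama.cpp server's HTTP API. The port, model file name, and parameters here are illustrative assumptions, not the commenter's actual configuration:

```python
import json, urllib.request

# Assumes something like:
#   ./server -m deepseek-coder-6.7b-base.Q5_K_M.gguf --port 8080
# is already running; /completion is the llama.cpp server's completion endpoint.

def complete(prompt: str, n_predict: int = 64) -> str:
    """Request a text completion from a locally running llama.cpp server."""
    req = urllib.request.Request(
        "http://localhost:8080/completion",
        data=json.dumps({"prompt": prompt, "n_predict": n_predict}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["content"]

print(complete("def fizzbuzz(n):"))  # base model continues the code
```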
- More Agents Is All You Need: LLMs performance scales with the number of agents
If I'm reading this correctly, they had to discard the Llama 2 answers and only use answers given by GPT-3.5 to test the hypothesis.
GPT-3.5 answering questions through the OAI API alone is not an acceptable method of testing problem solving ability across a range of temperatures. OpenAI does some blackbox wizardry on their end.
There are many complex and clever sampling techniques, of which temperature is just one (possibly dynamic) component.
One example from the llama.cpp codebase is dynamic temperature sampling
https://github.com/ggerganov/llama.cpp/pull/4972/files
Not sure what you mean by "whole model state", given that there are tens of thousands of possible tokens and the models have billions of parameters in XX,XXX-dimensional space. How many queries across how many sampling methods might you need? Err... how much time? :)
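For readers wondering what the linked PR does: dynamic temperature sampling scales the temperature with the entropy of the next-token distribution, so confident predictions are sampled nearly greedily while uncertain ones get more randomness. A simplified sketch of that entropy-based scaling (parameter names and defaults are illustrative, not the exact llama.cpp implementation):

```python
import math

def dynamic_temperature(probs: list[float],
                        min_temp: float = 0.0,
                        max_temp: float = 2.0,
                        exponent: float = 1.0) -> float:
    """Map the normalized entropy of a token distribution to a temperature."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    max_entropy = math.log(len(probs))  # entropy of the uniform distribution
    norm = entropy / max_entropy if max_entropy > 0 else 0.0
    return min_temp + (max_temp - min_temp) * norm ** exponent

print(dynamic_temperature([0.97, 0.01, 0.01, 0.01]))  # confident -> low temperature
print(dynamic_temperature([0.25, 0.25, 0.25, 0.25]))  # uncertain -> max temperature
```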
- Hosting Your Own AI Chatbot on Android Devices
git clone https://github.com/ggerganov/llama.cpp.git
What are some alternatives?
ChatGPT - 🔮 ChatGPT Desktop Application (Mac, Windows and Linux)
ollama - Get up and running with Llama 2, Mistral, Gemma, and other large language models.
gpt-4chan-model
gpt4all - gpt4all: run open-source LLMs anywhere
openai-cookbook - Examples and guides for using the OpenAI API
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
ai-cli - Get answers for CLI commands from ChatGPT right from your terminal
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
KoboldAI-Client
ggml - Tensor library for machine learning
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM