llm-gpt4all Alternatives
Similar projects and alternatives to llm-gpt4all
- text-generation-webui — A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), and Llama models.
- guidance — (discontinued) A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance] (by microsoft)
- exllama — A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
- simpleaichat — Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
- multi-gpt — A Clojure interface into the GPT API with advanced tools like conversational memory, task management, and more. (by cjbarre)
- llm-api — Fully typed and consistent chat APIs for OpenAI, Anthropic, Groq, and Azure's chat models for browser, edge, and Node environments. (by dzhng)
- bosquet — Tooling to build LLM applications: prompt templating and composition, agents, LLM memory, and other instruments for builders of AI applications.
llm-gpt4all reviews and mentions
- LLM now provides tools for working with embeddings
I'm still iterating on that. Plugins get complete control over the prompts, so they can handle the various weirdnesses of them. Here's some relevant code:
https://github.com/simonw/llm-gpt4all/blob/0046e2bf5d0a9c369...
https://github.com/simonw/llm-mlc/blob/b05eec9ba008e700ecc42...
https://github.com/simonw/llm-llama-cpp/blob/29ee8d239f5cfbf...
I'm not completely happy with this yet. Part of the problem is that different models on the same architecture may have completely different prompting styles.
I expect I'll eventually evolve the plugins to allow them to be configured in an easier and more flexible way. Ideally I'd like you to be able to run new models on existing architectures using an existing plugin.
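The per-model prompting problem described above can be sketched in a few lines. This is a hypothetical illustration, not code from llm-gpt4all: the model names, templates, and the `build_prompt` dispatch function are all made up to show why two models sharing the same underlying architecture can still need completely different prompt formats.

```python
# Hypothetical sketch: two models on the same Llama architecture,
# each expecting a different prompt format. Names and templates
# here are illustrative, not taken from llm-gpt4all.

def llama2_chat_prompt(system: str, user: str) -> str:
    # Llama-2-chat-style models expect [INST] ... [/INST] markers
    # with the system prompt wrapped in <<SYS>> tags.
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def alpaca_prompt(system: str, user: str) -> str:
    # Alpaca-style fine-tunes expect "### Instruction:" headers instead.
    return f"{system}\n\n### Instruction:\n{user}\n\n### Response:\n"

# A plugin can map model names to the template each one needs.
PROMPT_STYLES = {
    "llama-2-7b-chat": llama2_chat_prompt,
    "alpaca-style-llama": alpaca_prompt,
}

def build_prompt(model_name: str, system: str, user: str) -> str:
    # Dispatch on the model name to pick the right template.
    return PROMPT_STYLES[model_name](system, user)
```

Making that mapping configurable, rather than hard-coded per plugin, is essentially what the comment above is proposing.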
- Accessing Llama 2 from the command-line with the LLM-replicate plugin
My LLM tool can be used for both. That's what the plugins are for.
It can talk to OpenAI, PaLM 2 and Llama / other models on Replicate via API, using API keys.
It can run local models on your own machine using these two plugins: https://github.com/simonw/llm-gpt4all and https://github.com/simonw/llm-mpt30b
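As a rough sketch of that local-model workflow using the llm CLI (the model name below is only an example; check the llm-gpt4all README for the models the plugin actually exposes):

```shell
# Install the gpt4all plugin into an existing llm installation
llm install llm-gpt4all

# List the models that installed plugins make available
llm models

# Run a prompt against a local model (name is illustrative;
# the plugin downloads model weights on first use)
llm -m orca-mini-3b-gguf2-q4_0 "Five names for a pet penguin"
```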
- The Problem with LangChain
Yeah I haven't figured out how to have it reuse the models from the desktop GPT4All installation yet, issue here: https://github.com/simonw/llm-gpt4all/issues/5
Stats
simonw/llm-gpt4all is an open source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of llm-gpt4all is Python.