gpt-llama.cpp vs chatbot-ui

| | gpt-llama.cpp | chatbot-ui |
|---|---|---|
| Mentions | 12 | 63 |
| Stars | 587 | 26,308 |
| Growth | - | - |
| Activity | 8.2 | 9.4 |
| Latest commit | 11 months ago | 6 days ago |
| Language | JavaScript | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
gpt-llama.cpp
-
Attempt to run Llama on a remote server with chatbot-ui
hi! I really like the solution https://github.com/keldenl/gpt-llama.cpp, which helps deploy https://github.com/mckaywrigley/chatbot-ui on top of a local model. I'm running this together with Wizard 7B or 13B locally and it works fine, but when I tried to deploy it to a remote server I ran into an error.
-
Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
sounds like you’re asking for exactly this? https://github.com/keldenl/gpt-llama.cpp
- LLaMA and AutoAPI?
-
New big update to GPTNicheFinder: better trends analysis and scoring system, cleaned up UI and verbose in the terminal for people who want to see what is going on and to verify the results
I salute you, good sir. This is an amazing idea. I don't have time, but it would be interesting to use this wrapper, https://github.com/keldenl/gpt-llama.cpp, which simulates a GPT endpoint for a local LLaMA, so basically we could have an amazing tool that's completely free to use. If somebody tests it, please let me know beneath my comment!
-
I built an AI-powered writing tool, an AI co-author
I would gladly buy your product to run with a local model, like Vicuna GGML; also see https://github.com/keldenl/gpt-llama.cpp/
-
Serge... Just works
possible through fastllama in Python, or gpt-llama.cpp, an API wrapper around llama.cpp
-
Embeddings?
https://github.com/keldenl/gpt-llama.cpp supports embeddings, and it even takes in OpenAI-style requests and returns OpenAI-compatible responses!
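As a sketch of what that OpenAI-compatible embeddings call might look like (the base URL and model path are assumptions to adjust for your setup; gpt-llama.cpp's README documents that the model path is passed where an OpenAI key would normally go):

```python
import json
import urllib.request

# Assumptions for illustration: change BASE_URL to wherever your
# gpt-llama.cpp server is listening, and MODEL_PATH to your local
# model file (gpt-llama.cpp reads the path from the auth header).
BASE_URL = "http://localhost:443/v1"
MODEL_PATH = "../llama.cpp/models/your-model.bin"

def build_embedding_request(text: str) -> dict:
    """Build an OpenAI-style /v1/embeddings payload."""
    return {"input": text, "model": "text-embedding-ada-002"}

def get_embedding(text: str) -> list:
    """POST the payload and pull the vector out of the
    OpenAI-compatible response: {"data": [{"embedding": [...]}]}."""
    req = urllib.request.Request(
        f"{BASE_URL}/embeddings",
        data=json.dumps(build_embedding_request(text)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {MODEL_PATH}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["data"][0]["embedding"]
```

Because the request and response shapes mirror OpenAI's, existing OpenAI client code can usually be pointed at the local server just by swapping the base URL.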
-
I built a completely Local AutoGPT with the help of GPT-llama running Vicuna-13B
https://github.com/keldenl/gpt-llama.cpp
- I built a completely local and portable AutoGPT with the help of gpt-llama, running on Vicuna-13B
-
Adding Long-Term Memory to Custom LLMs: Let's Tame Vicuna Together!
There's a (kind of) working Auto-GPT solution that uses Vicuna https://github.com/keldenl/gpt-llama.cpp/blob/master/docs/Auto-GPT-setup-guide.md
chatbot-ui
-
AI programming tools should be added to the Joel Test
One of the first things we did when GPT-4 became available was talk to our Azure rep and get access to the OpenAI models that they'd partnered with Microsoft to host in Azure. Now we have our own private, not-datamined (so they claim, contractually) API endpoint, and we use an OpenAI integration in VS Code[1] to connect to it, allowing anyone in the company to use it to help them code.
I also spun up an internal chat UI[2] to replace ChatGPT so people can feel comfortable discussing proprietary data with the LLM endpoint.
The only thing that would make it more secure would be running inference engines internally, but then I wouldn't have access to models as good, and I'd need a _lot_ of hardware to match the speeds.
[1] - https://marketplace.visualstudio.com/items?itemName=AndrewBu...
[2] - https://github.com/mckaywrigley/chatbot-ui (legacy branch)
-
Ask HN: Has Anyone Trained a personal LLM using their personal notes?
[3] https://github.com/mckaywrigley/chatbot-ui
-
Show HN: I made an app to use local AI as daily driver
Thank you for the work.
Please take this in a nice way: I can't see why I would use this over ChatbotUI+Ollama https://github.com/mckaywrigley/chatbot-ui
The only advantage seems to be having it as a native macOS app, and the only real distinction is maybe fast import and search - I've yet to try that, though.
ChatbotUI (and other similar tools) are cross-platform, customizable, private, and debuggable. I'm easily able to see what it's trying to do.
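For reference, the ChatbotUI setup mentioned above is driven by a `.env.local` file in the legacy chatbot-ui branch; a minimal sketch for pointing it at a local OpenAI-compatible backend such as Ollama or gpt-llama.cpp might look like this (variable names per the project's README; the host and port values are assumptions to match your local server):

```ini
# .env.local (legacy chatbot-ui branch)
OPENAI_API_KEY=sk-placeholder          # a local backend ignores this, but it must be set
OPENAI_API_HOST=http://localhost:8000  # your local OpenAI-compatible server
```

With that in place, `npm run dev` serves the UI against the local model instead of api.openai.com.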
-
ChatGPT for Teams
You can make a privacy request for OpenAI to not train on your data here: https://privacy.openai.com/
Alternatively, you could also use your own UI/API token (API calls aren't trained on). Chatbot UI just got a major update released and has nice things like folders, and chat search: https://github.com/mckaywrigley/chatbot-ui
- Chatbot UI 2.0
- WebUI similar to ChatGPT
-
They made ChatGPT worse at coding for some reason, and it’s caused me to look at alternative AI options
Also, chatbot-ui is great: https://github.com/mckaywrigley/chatbot-ui - it has a UI similar to ChatGPT.
-
Please Don't Ask If an Open Source Project Is Dead
> The comment I screenshotted is passive-aggressive at best, and there's no really good way to ask "is this repo dead" without being passive-aggressive. My day-to-day job that actually pays me a salary wouldn't ever provide a bulleted list of the reasons I suck, let alone a project I develop in my spare time.
There is nothing passive-aggressive about that comment. There is nothing problematic about it at all. Nobody's calling you slurs or making demands. I see one guy who might as well be a Mormon Boy Scout from Canada. "Is this repo dead" is not passive-aggressive, just ineloquent. Fuck my eyes until the jelly leaks out my ears if a courteous and professionally-written question constitutes "applying pressure and being rude" these days.
I don't know what a "bulleted list of the reasons [you] suck" has to do with anything (I don't see where anybody sent you one) but you're coming across as someone who invites people to your garage sale and then brandishes a shotgun and starts screaming when they set foot on your property.
> I’ve never seen any discussions or articles about whether it’s appropriate to ask if an open source repository is dead. Is there an implicit contract to actively maintain any open source software you publish? Are you obligated to provide free support if you hit a certain star amount on GitHub or ask for funding through GitHub Sponsorships/Patreon? After all, most permissive open source code licenses like the MIT License contain some variant of “the software is provided ‘as is’, without warranty of any kind.”
Here's an example of why everyone should ask if an open source project is dead:
https://github.com/mckaywrigley/chatbot-ui/issues
A number of issues complain about it leaking OpenAI keys. Nobody's figured out how, but it'd be nice to know if anybody's working on it, if it's worth submitting a PR, if it should be forked, if it's worth bothering with at all. This code is a massive liability in its current state. Its creator is absent. It warrants questions being asked about its future. Yeah, it's as-is software, but it's not an affront to your mother's virtue when someone asks if your shit still works or if you have plans to fix it.
> I’ve had an existential crisis about my work in open source AI on GitHub, particularly as there has been both increasingly toxic backlash against AI and because the AI industry has been evolving so rapidly that I flat-out don’t have enough bandwidth to keep up
Herein lies the problem? You sound overwhelmed. I've been there myself. I don't know what your year's been like but you genuinely might want to get away from the screen and get some fresh air. This is a good time of year to do it, since things generally slow down at work.
- I need help with getting an API
What are some alternatives?
llama_index - LlamaIndex is a data framework for your LLM applications
BetterChatGPT - An amazing UI for OpenAI's ChatGPT (Website + Windows + MacOS + Linux)
Auto-LLM-Local - Created my own python script similar to AutoGPT where you supply a local llm model like alpaca13b (The main one I use), and the script can access the supplied tools to achieve your objective. Code fully works as far as I can tell. Takes me 5 minutes per chain on my slow laptop.
gpt4all - gpt4all: run open-source LLMs anywhere
long_term_memory - A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.
Flowise - Drag & drop UI to build your customized LLM flow
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
chatgpt-clone - Enhanced ChatGPT Clone: Features OpenAI, Bing, PaLM 2, AI model switching, message search, langchain, Plugins, Multi-User System, Presets, completely open-source for self-hosting. More features in development [Moved to: https://github.com/danny-avila/LibreChat]
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
langchain - 🦜🔗 Build context-aware reasoning applications
turbogpt.ai