semantic-kernel
gpt-llama.cpp
| | semantic-kernel | gpt-llama.cpp |
|---|---|---|
| Mentions | 47 | 12 |
| Stars | 18,111 | 587 |
| Growth | 6.4% | - |
| Activity | 9.9 | 8.2 |
| Latest commit | 6 days ago | 11 months ago |
| Language | C# | JavaScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
semantic-kernel
-
#SemanticKernel – 📎Chat Service demo running Phi-2 LLM locally with #LMStudio
There is an amazing sample showing how to create your own LLM Service class for use with Semantic Kernel. You can view the sample here: https://github.com/microsoft/semantic-kernel/blob/3451a4ebbc9db0d049f48804c12791c681a326cb/dotnet/samples/KernelSyntaxExamples/Example16_CustomLLM.cs
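The idea behind a custom LLM service class is that the kernel talks to one uniform interface while the adapter forwards calls to any local model. A minimal Python sketch of that pattern, assuming everything here is hypothetical and not the actual Semantic Kernel API:

```python
class LocalCompletionService:
    """Hypothetical adapter mirroring the custom-LLM-service pattern:
    callers use a uniform complete() interface; the adapter forwards
    to any local model instead of a hosted API."""

    def __init__(self, generate_fn):
        # generate_fn: any callable mapping a prompt string to
        # completion text (e.g. a local llama.cpp binding).
        self._generate = generate_fn

    def complete(self, prompt: str) -> str:
        return self._generate(prompt)


# Usage with a stub standing in for a real local model:
service = LocalCompletionService(lambda p: "echo: " + p)
print(service.complete("hello"))  # echo: hello
```

Because the model is injected as a callable, the same adapter works for a local GGML model, an HTTP endpoint, or a test stub.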
-
Semantic Tests for SemanticKernel Plugins using skUnit
This week, I had the chance to explore the SemanticKernel code base, particularly the core plugins. SemanticKernel comes equipped with these built-in plugins:
- FLaNK Stack for 04 December 2023
- Semantic Kernel
-
Getting Started with Semantic Kernel and C#
In this article we'll look at the high-level capabilities for building AI orchestration systems in C# with Semantic Kernel, a rapidly maturing open-source AI orchestration framework.
-
Agency: Pure Go LangChain Alternative
I'm using Semantic Kernel (https://github.com/microsoft/semantic-kernel) and it's really nice. Makes building more complex workflows really simple without sacrificing control.
A bunch of examples (https://github.com/microsoft/semantic-kernel/blob/main/dotne...) for how to handle just about anything you need to do with OAI with a lot less boilerplate.
-
New: LangChain templates – fastest way to build a production-ready LLM app
I haven't tried it but there's Microsoft semantic-kernel.
https://github.com/microsoft/semantic-kernel
-
Overview: AI Assembly Architectures
Semantic Kernel github.com/microsoft/semantic-kernel
-
Automated Routing of Tasks to Optimal Models: A PR for Semantic-Kernel
The need for efficient model routing has been a point of discussion in the community. Addressing this, I've submitted a pull request to Semantic-Kernel that introduces an automated multi-model connector.
gpt-llama.cpp
-
Attempt to run Llama on a remote server with chatbot-ui
Hi! I really like https://github.com/keldenl/gpt-llama.cpp, which helps deploy https://github.com/mckaywrigley/chatbot-ui on a local model. I am running this with Wizard 7B or 13B locally and it works fine, but when I tried to deploy it to a remote server I hit an error.
-
Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
sounds like you’re asking for exactly this? https://github.com/keldenl/gpt-llama.cpp
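The appeal of these self-hosted drop-in servers is that an existing OpenAI-style client only needs its base URL changed. A sketch of the request body such a server parses (the model name and route are assumptions based on OpenAI's completion API shape, not verified against gpt-llama.cpp):

```python
import json

def build_completion_request(prompt, model="local-model", max_tokens=64):
    """Build an OpenAI-style /v1/completions body; a self-hosted
    drop-in server parses the same fields as the hosted API."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

payload = build_completion_request("Hello, world")
# This body would be POSTed to the local server's completions route;
# only the client's base URL changes versus the hosted API.
print(json.dumps(payload))
```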
- LLaMA and AutoAPI?
-
New big update to GPTNicheFinder: better trends analysis and scoring system, cleaned-up UI, and verbose terminal output for people who want to see what is going on and verify the results
I salute you, good sir. This is an amazing idea. I don't have time, but it would be interesting to use this wrapper https://github.com/keldenl/gpt-llama.cpp, which simulates a GPT endpoint for a local llama, so we could basically have an amazing tool that is completely free to use. If somebody tests it, please let me know under my comment!
-
I built an AI-powered writing tool, an AI co-author
I would gladly buy your product to run with a local model, like Vicuna GGML; also see https://github.com/keldenl/gpt-llama.cpp/
-
Serge... Just works
Possible through fastllama in Python, or gpt-llama.cpp, an API wrapper around llama.cpp.
-
Embeddings?
https://github.com/keldenl/gpt-llama.cpp supports embeddings, and it even accepts OpenAI-style requests and returns OpenAI-compatible responses!
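Since the server mirrors OpenAI's schema, client code can build and parse the same request and response shapes it would use against the hosted API. A minimal sketch, assuming the server follows OpenAI's embeddings schema exactly (not verified against gpt-llama.cpp itself):

```python
def build_embeddings_request(texts, model="local-model"):
    # OpenAI's embeddings schema takes a string or list of strings
    # under "input"; a compatible local server parses the same body.
    return {"model": model, "input": texts}

def parse_embeddings_response(body):
    # An OpenAI-compatible response carries vectors under
    # data[i]["embedding"], in the same order as the inputs.
    return [item["embedding"] for item in body["data"]]

# Example with a faked compatible response:
fake = {"data": [{"embedding": [0.1, 0.2]}, {"embedding": [0.3, 0.4]}]}
print(parse_embeddings_response(fake))  # [[0.1, 0.2], [0.3, 0.4]]
```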
-
I built a completely local AutoGPT with the help of GPT-llama running Vicuna-13B
https://github.com/keldenl/gpt-llama.cpp
- I built a completely local and portable AutoGPT with the help of gpt-llama, running on Vicuna-13B
-
Adding Long-Term Memory to Custom LLMs: Let's Tame Vicuna Together!
There's a (kind of) working Auto-GPT solution that uses Vicuna https://github.com/keldenl/gpt-llama.cpp/blob/master/docs/Auto-GPT-setup-guide.md
What are some alternatives?
llama_index - LlamaIndex is a data framework for your LLM applications
langchain - 🦜🔗 Build context-aware reasoning applications
Auto-LLM-Local - Created my own Python script, similar to AutoGPT, where you supply a local LLM model like alpaca13b (the main one I use) and the script can access the supplied tools to achieve your objective. The code fully works as far as I can tell. Takes me 5 minutes per chain on my slow laptop.
guidance - A guidance language for controlling large language models.
long_term_memory - A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.
autogen - A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
AGiXT - AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.