uniteai vs ollama

| | uniteai | ollama |
|---|---|---|
| Mentions | 17 | 221 |
| Stars | 224 | 69,806 |
| Growth | - | 10.3% |
| Activity | 8.2 | 9.9 |
| Latest commit | 5 months ago | 5 days ago |
| Language | Python | Go |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
uniteai
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
I recently went through the same with UniteAI, and had to swap ctransformers back out for llama.cpp
-
Best Local LLM Backend Server Library?
I maintain the uniteai project, and have implemented a custom backend for serving transformers-compatible LLMs. (That file's actually a great ultra-light-weight server if transformers satisfies your needs; one clean file).
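That lightweight-server pattern can be sketched roughly like this (hypothetical endpoint and field names, not UniteAI's actual code; the real file calls into HuggingFace `transformers`, stubbed out here so the sketch stays self-contained):

```python
# Minimal sketch of a one-file completion server, stdlib only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Placeholder: a real server would call a transformers pipeline here,
    # e.g. pipeline("text-generation")(prompt, max_new_tokens=...).
    return prompt + " ..."

class CompletionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run generation.
        length = int(self.headers.get("Content-Length", 0))
        req = json.loads(self.rfile.read(length) or b"{}")
        text = generate(req.get("prompt", ""), req.get("max_new_tokens", 64))
        # Reply with a JSON payload containing the completion.
        body = json.dumps({"text": text}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), CompletionHandler).serve_forever()
```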
-
Show HN: SeaGOAT – local, “AI-based” grep for semantic code search
UniteAI brings together speech recognition and document / code search. The major difference is your UI is your preferred text editor.
https://github.com/freckletonj/uniteai
-
Language Model UXes in 2027
In answer to the same question I built UniteAI https://github.com/freckletonj/uniteai
It's local first, and ties many different AIs into one text editor, any arbitrary text editor in fact.
It does speech recognition, which isn't useful for writing code, but is useful for generating natural language LLM prompts and comments.
It does CodeLlama (and any HuggingFace-based language model)
It does ChatGPT
It does Retrieval Augmented Gen, which is where you have a query that searches through eg PDFs, Youtube transcripts, code bases, HTML, local or online files, Arxiv papers, etc. It then surfaces passages relevant to your query, that you can then further use in conjunction with an LLM.
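The retrieval step described above can be sketched in a few lines (a toy illustration, not UniteAI's implementation: real systems use learned embeddings, while bag-of-words keeps this self-contained):

```python
# Toy sketch of retrieval: rank passages by similarity to a query,
# then hand the top hits to an LLM as context.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_passages(query: str, passages: list[str], k: int = 2) -> list[str]:
    # Return the k passages most similar to the query.
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]
```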
I don't know how mainstream LLM-powered software looks, but for devs, I love this format of tying in the best models as they're released into one central repo where they can all play off each others' strengths.
-
Can I get a pointer on Kate LSP Clients? I'm trying to add a brand new one.
I'm working on UniteAI, a project to tie different AI capabilities into the editor, and it has a clean LSP Server.
-
UniteAI, collab with AIs in your text editor by writing alongside each other
*TL;DR*: chat with AI, code with AI, speak to AI (voice-to-text + vice versa), have AI search huge corpora or websites for you, all via an interface of collaborating on a text doc together in the editor you use now.
*Motivation*
I find the last year of AI incredibly heartening. Researchers are still regularly releasing SoTA models in disparate domains. Meta is releasing the powerful Llama family under generous provisions (as is the UAE with Falcon?!). And the Open Source community has poured a tidal wave of interest and effort into building things out of these tools (112k repos on GH mentioning ML!).
Facing this deluge of valuable things that communities are shepherding into the world, I wanted to incorporate them into my workflows, which as a software engineer, means my text editor.
*UniteAI*
So I started *UniteAI* https://github.com/freckletonj/uniteai, an Apache-2.0 licensed tool.
Check out the screencasts: https://github.com/freckletonj/uniteai#some-core-features
This project:
* Ties in to *any editor* via Language Server Protocol. Like collaborating in G-Docs, you collab with whatever AI directly in the document, all of you writing alongside each other concurrently.
* Like Copilot / Cursor, it can write code/text right in your doc.
* It supports *any Locally runnable model* (Llama family, Falcon, Finetunes, the 21k available models on HF, etc.)
* It supports *OpenAI/ChatGPT* via API key.
* *Speech-to-Text*, useful for writing prompts to your LLM
* You can do *Semantic Search* (Retrieval Augmented Generation) on many sources: local files, Arxiv, youtube transcripts, Project Gutenberg books, any online HTML, basically if you give it a URI, it can probably use it.
* You can trigger features easily via [key combos](https://github.com/freckletonj/uniteai#keycombos).
* Written in Python, so, much more generic than writing a bespoke `some_specific_editor` plugin.
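For the curious, the LSP plumbing underneath is just JSON-RPC messages framed with a Content-Length header; a minimal sketch of that framing (illustrative only, not UniteAI's actual code):

```python
# Sketch of LSP wire framing: each JSON-RPC message an editor and
# server exchange over stdio is prefixed with a Content-Length header.
import json

def frame(message: dict) -> bytes:
    body = json.dumps(message).encode("utf-8")
    return f"Content-Length: {len(body)}\r\n\r\n".encode("ascii") + body
```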
*Caveat*
Since it always comes up, *AI is not perfect*. AI is a tool to augment your time, not replace it. It hallucinates, it lies, it bullshits, it writes bad code, it gives dangerous advice.
But it can still do many useful things, and for me it is a *huge force multiplier.*
*You need a Human In The Loop*, which is why it's nice to work together iteratively on a text document, as in this project. You keep it on track.
*Why is this interesting*
These tools play well when used together:
* *Code example:* I can Voice-to-Text a function comment then send that to an LLM to write the function.
* *Code example 2:* I can chit chat about project architecture plans, and strategies, and libraries I should consider.
* *Documentation example:* I can retrieve relevant sections of my city's building code with a natural language query, then send that to an LLM to expound upon.
* *Authorship example*: I can have my story arcs and character dossiers in some markdown file, and use that guidance to contextualize an AI as it works with me for writing a story.
* *Entertainment example*: I told my AI it was a Dungeon Master, then over breakfast with friends, used Voice-to-Text and Text-to-Wizened-Wizard-Voice, and played a hilarious game. I still had to drive all this via a text doc, and handy key combos.
*RFC*
Installation instructions are on the repo: https://github.com/freckletonj/uniteai#quickstart-installing...
This is still nascent, and I welcome all feedback, positive or critical.
We have a community linked on the repo which you're invited to join.
I'd love to chat with people who like this idea, use it, want to see other features, want to contribute their effort, want to file bug reports, etc.
A big part of my motivation in this is to socialize with like-minds, and build something cool.
*Thanks for checking this out!*
- UniteAI: In an editor, self hosted llama, code llama, mic voice transcription, and ai-powered web/document search
- [ UniteAI ]: "your AIs in your editor". I've been busting my butt, and feel like it's finally worth presenting to the world.
-
Show HN: Use Code Llama as Drop-In Replacement for Copilot Chat
[UniteAI](https://github.com/freckletonj/uniteai) I think fits the bill for you.
This is my project, where the goal is to Unite your AI-stack inside your editor (so, Speech-to-text, Local LLMs, Chat GPT, Retrieval Augmented Gen, etc).
It's built atop a Language Server, so, while no one has made an IntelliJ client yet, it's simple to do. I'll help you do it if you make a GH Issue!
-
UniteAI: Your AI-Stack in your Editor
UniteAI (github)
ollama
- Ollama v0.1.34 Is Out
-
Ask HN: What do you use local LLMs for?
- Basic internet search (I start ollama CLI faster than I can start a browser - https://ollama.com)
- Formatting/changing text
- Troubleshooting code, esp. new frameworks/libs
- Recipes
- Data entry
- Organizing thoughts: High-level lists, comparison, classification, synonyms, jargon & nomenclature
- Learning esp. by analogy and example
RAG for:
- Website assistants (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Game NPCs (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Discord/Slack/forum bots (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Character-driven storytelling and creating art in a specific style for video game loading screens, background images, avatars, website art, etc. (https://github.com/bennyschmidt/ragdoll-studio/tree/master/r...)
- FLaNK-AIM Weekly 06 May 2024
-
Introducing Jan
Jan goes a step further by integrating with other local engines like LM Studio and Ollama.
- Ollama v0.1.33
-
Hindi-Language AI Chatbot for Enterprises Using Qdrant, MLFlow, and LangChain
# install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# get the llama3 model
ollama pull llama3
# install MLflow
pip install mlflow
-
Create an AI prototyping environment using Jupyter Lab IDE with Typescript, LangChain.js and Ollama for rapid AI prototyping
Ollama for running LLMs locally
-
Setup Llama 3 using Ollama and Open-WebUI
curl -fsSL https://ollama.com/install.sh | sh
-
Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
Streaming is not a problem (it's just a simple flag: https://github.com/wiktor-k/llama-chat/blob/main/index.ts#L2...) but I've never used voice input.
The examples show image input though: https://github.com/ollama/ollama/blob/main/docs/api.md#reque...
Maybe you can file an issue here: https://github.com/ollama/ollama/issues
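As a hedged sketch of what that streaming flag looks like against Ollama's /api/generate endpoint (host, port, and model name are assumptions; with `"stream": true` the server emits one JSON object per line, each carrying a `"response"` fragment until `"done"` is true):

```python
# Sketch of consuming Ollama's streaming /api/generate endpoint.
import json
import urllib.request

def join_stream(lines) -> str:
    # Pure helper: concatenate the "response" fragments of NDJSON chunks.
    out = []
    for line in lines:
        chunk = json.loads(line)
        out.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(out)

def generate(prompt: str, model: str = "llama3") -> str:
    # Default Ollama host/port assumed; adjust to your setup.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(
            {"model": model, "prompt": prompt, "stream": True}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return join_stream(resp)  # the response body is one JSON per line
```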
-
I Said Goodbye to ChatGPT and Hello to Llama 3 on Open WebUI - You Should Too
I’m a huge fan of open source models, especially the newly released Llama 3. Given the performance of both the large 70B Llama 3 model and the smaller, self-hostable 8B Llama 3, I’ve actually cancelled my ChatGPT subscription in favor of Open WebUI, a self-hostable ChatGPT-like UI that lets you use Ollama and other AI providers while keeping your chat history, prompts, and other data local to any computer you control.
What are some alternatives?
unsloth - Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
llama.cpp - LLM inference in C/C++
chatcraft.org - Developer-oriented ChatGPT clone
gpt4all - gpt4all: run open-source LLMs anywhere
continue - ⏩ Open-source VS Code and JetBrains extensions that enable you to easily create your own modular AI software development system
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
SeaGOAT - local-first semantic code search engine
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
semantic-code-search - Search your codebase with natural language • CLI • No data leaves your computer
llama - Inference code for Llama models
gw2combat - A GW2 combat simulator using entity-component-system design
LocalAI - The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It lets you generate text, audio, video, and images, with voice cloning capabilities.