| | llmware | contour |
|---|---|---|
| Mentions | 9 | 20 |
| Stars | 3,173 | 2,241 |
| Growth | 6.7% | 2.6% |
| Activity | 9.8 | 9.7 |
| Latest commit | 7 days ago | 6 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llmware
-
More Agents Is All You Need: LLMs performance scales with the number of agents
I couldn't agree more. You should check out LLMWare's SLIM agents (https://github.com/llmware-ai/llmware/tree/main/examples/SLI...). It focuses on pretty much exactly this: chaining multiple local LLMs together.
A really good topic that ties in with this is the need for deterministic sampling (I may have the terminology a bit incorrect) depending on what the model is intended for. The LLMWare team did a good two-part video on this here as well (https://www.youtube.com/watch?v=7oMTGhSKuNY)
I think dedicated miniature LLMs are the way forward.
Disclaimer - Not affiliated with them in any way, just think it's a really cool project.
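The "deterministic sampling" the comment gestures at is usually just greedy decoding: instead of drawing a token at random from the model's output distribution, you always take the highest-probability token, so the same prompt yields the same output. A minimal sketch of the difference, using made-up logits rather than a real model:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; lower temperature sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature):
    """Temperature sampling: a random draw from the softmax distribution."""
    probs = softmax(logits, temperature)
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

def greedy_token(logits):
    """Deterministic ("temperature 0") decoding: always take the argmax."""
    return max(range(len(logits)), key=lambda i: logits[i])

# Made-up logits for a 4-token vocabulary.
logits = [1.2, 3.5, 0.4, 2.9]

# Greedy decoding picks the same token every time...
assert all(greedy_token(logits) == 1 for _ in range(100))

# ...while sampling at a high temperature can pick different tokens on each run.
random.seed(0)
drawn = {sample_token(logits, temperature=2.0) for _ in range(100)}
print(sorted(drawn))
```

For tasks like extraction or function calling, where the output feeds another program, the greedy path is usually what you want; the sampling path matters more for open-ended generation.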
- FLaNK Stack Weekly 19 Feb 2024
-
Show HN: LLMWare – Small Specialized Function Calling 1B LLMs for Multi-Step RAG
I've been building upon the LLMWare project - https://github.com/llmware-ai/llmware - for the past 3 months. The ability to run these models locally on standard consumer CPUs, along with the abstraction provided to chop and change between models and different processes, is really cool.
I think these SLIM models are the start of something powerful for automating internal business processes and enhancing the use case of LLMs. Still kinda blows my mind that this is all running on my 3900X and also runs on a bog standard Hetzner server with no GPU.
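The multi-step pattern this comment describes, routing a document through several small specialized models where each does one narrow job, can be sketched without any particular framework. The function names below are hypothetical stand-ins (this is not the llmware API), with trivial rule-based bodies standing in for real SLIM-style models so the sketch runs anywhere:

```python
import re

def classify_topic(text: str) -> str:
    """Hypothetical stand-in for a tiny topic-classification model."""
    return "finance" if "invoice" in text.lower() else "general"

def extract_amounts(text: str) -> list:
    """Hypothetical stand-in for a tiny extraction model pulling dollar amounts."""
    return re.findall(r"\$\d[\d,]*(?:\.\d{2})?", text)

def summarize(text: str, topic: str, amounts: list) -> dict:
    """Hypothetical stand-in for a final step producing structured output."""
    return {"topic": topic, "amounts": amounts, "length": len(text)}

def pipeline(text: str) -> dict:
    # Chain the specialized steps; each stage's output feeds the next.
    topic = classify_topic(text)
    amounts = extract_amounts(text)
    return summarize(text, topic, amounts)

doc = "Invoice #42: total due $1,250.00 by March 1."
print(pipeline(doc))
# {'topic': 'finance', 'amounts': ['$1,250.00'], 'length': 44}
```

The appeal of the small-model version of this is that each stage returns structured, machine-checkable output, so the chain can be validated step by step instead of trusting one large model end to end.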
- Show HN: LLMWare – Integrated Solution for RAG in Finance and Legal
- Llmware.ai – AI Tools for Financial, Legal and Compliance
-
Open Source Advent Fun Wraps Up!
16. LLMWare by Ai Bloks | Github | tutorial
- FLaNK Stack Weekly 16 October 2023
- Strategy for PDF data extraction and Display
contour
-
Neovide – a simple, no-nonsense, cross-platform GUI for Neovim
Another problem is that the cursor moves while the screen buffer is being rendered. The location is only really known once the cursor has settled in the same place for some time, which is unacceptable in terms of latency.
The synchronized output extension could be used to do this, though. https://github.com/contour-terminal/contour/blob/master/docs...
- FLaNK Stack Weekly 16 October 2023
-
Contour: Modern and Fast Terminal Emulator
https://github.com/contour-terminal/contour/issues/382
This apparently does not support the Kitty graphics protocol, just Sixel, which makes it look fairly unattractive to me, personally.
-
Terminal emulators that break from the traditional rendering approach?
contour - https://github.com/contour-terminal/contour (see https://github.com/contour-terminal/contour/issues/100), and other modern, Unicode-focused attempts to update the terminal world
- Contour Terminal – A Modern and Cross-Platform C++ Terminal Emulator
-
Name a program that doesn't get enough love!
contour : a terminal application
-
Is there a way to make Dolphin use a terminal app other than Konsole?
https://github.com/contour-terminal/contour Contour has an implementation for this. See this release: https://github.com/contour-terminal/contour/releases/tag/v0.3.6.240
-
Speeding up incremental Rust compilation with dylibs - Robert Krahn
Now that I'm well rested, I decided to compile some similar terminal emulators with clang++ and rustc to see how big a gap we're looking at. On the C++ side I compiled contour: ~80k lines of C++, and over 200k lines when accounting for dependencies (not counting dynamically linked ones). On the Rust side I used alacritty: ~35k lines of Rust, and over 2 million (!) lines for the whole dependency tree when vendoring all dependencies. Because Rust leans on conditional compilation to keep most software cross-platform, and many of those dependencies aren't Linux-specific, I'm gonna assume it's actually compiling half to two thirds of the dependency lines in this experiment, with the C++ build probably compiling about 3/4 of its dependency tree since I'm not on Windows.
-
What terminal emulator do you use?
I have my eyes on this though.
What are some alternatives?
llm-client-sdk - SDK for using LLM
kitty - Cross-platform, fast, feature-rich, GPU based terminal
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
harfbuzz - HarfBuzz text shaping engine
inference - A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
terminal-unicode-core - Unicode Core specification for Terminal (grapheme clusters, character widths, ...)
openstatus - 🏓 The open-source synthetic & real user monitoring platform 🏓
nchat - Terminal-based Telegram / WhatsApp client for Linux and macOS
megabots - 🤖 State-of-the-art, production ready LLM apps made mega-easy, so you don't have to build them from scratch 🤯 Create a bot, now 🫵
iTerm2 - iTerm2 is a terminal emulator for Mac OS X that does amazing things.
SimplyRetrieve - Lightweight chat AI platform featuring custom knowledge, open-source LLMs, prompt-engineering, retrieval analysis. Highly customizable. For Retrieval-Centric & Retrieval-Augmented Generation.
terminalpp - A C++ library for interacting with ANSI terminal windows.