pyatv VS exllamav2

Compare pyatv vs exllamav2 and see how they differ.

exllamav2

A fast inference library for running LLMs locally on modern consumer-class GPUs (by turboderp)
              pyatv          exllamav2
Mentions      15             17
Stars         823            2,935
Growth        -              -
Activity      8.9            9.8
Last commit   20 days ago    6 days ago
Language      Python         Python
License       MIT License    MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

pyatv

Posts with mentions or reviews of pyatv. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-31.
  • Show HN: Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context
    9 projects | news.ycombinator.com | 31 Oct 2023
    It's definitely not impossible at least.

    Someone is doing it in python here:

    https://pyatv.dev/

    GPT-4 actually sent me here:

    "Here is an example of a C# library that implements the HAP: CSharp.HomeKit (https://github.com/brutella/hkhomekit). You can use this library as a reference or directly use it in your project."

    Which, to no surprise based on my experiences with LLMs for programming, does not exist and doesn't seem to have ever existed.

    I get that they aren't magic, but I guess I am just bad at trying to use LLMs to help in my programming. Apparently all I do are obscure things or something. Or I am just not good enough at prompting. But I feel like that's also a reflection of the weakness of an LLM in that it needs such perfect and specific prompting to get good answers.

  • New Home Architecture Upgrade is Available Again
    2 projects | /r/HomeKit | 27 Mar 2023
    -> https://github.com/postlund/pyatv/issues/1931
  • Is it possible to control AppleTV from a Macbook?
    4 projects | /r/appletv | 9 Feb 2023
    I love Python, but I really wish I didn't have to use it here. The reason I do is that tvOS changed the way network remote control works, and the only library that could do that was / is a fantastic Python library called pyatv: https://pyatv.dev/.
  • pyatv - a client library for Apple TV and AirPlay devices
    1 project | /r/Python | 2 Feb 2023
  • Pyatv: A client library for Apple TV and AirPlay devices
    1 project | news.ycombinator.com | 26 Jan 2023
  • Using AppleTV Play/Pause Status as a Switch for Automations
    1 project | /r/HomeKit | 9 Jan 2023
    1) Install pyatv via the command line. The command pip install pyatv is one way to achieve this. The important thing is that the path needs to be accessible to homebridge; for me, that was /usr/local/bin. Before proceeding, I recommend reading more about the API, as this will be helpful if you want to build additional switches that track power status or other attributes: https://pyatv.dev
  • Turn off Apple TV by HomeKit automation?
    1 project | /r/HomeKit | 11 Sep 2022
    You can use pyatv to do this and if MQTT is your thing you can use this plugin.
  • Any one know a good Apple Plugin?
    2 projects | /r/homebridge | 16 Jun 2022
  • Speed Typing on an Apple TV
    3 projects | /r/apple | 18 May 2022
    The video above is just a silly experiment, but the library that's driving it (pyatv) is super useful. For example, on my Mac I can press Cmd+Shift+R to toggle an Apple TV remote control. Really useful for pausing a video or jumping back a few seconds. The cool thing is that it also works on Windows, Linux, etc. People are investigating adding regular text input to the library as well.
  • Speed Typing on Apple TV
    2 projects | /r/appletv | 18 May 2022
    There's no API provided by Apple. There are three protocols that have been reverse-engineered by different people. The main one I've found is a Python library called pyatv (https://pyatv.dev/) that handles pairing and remote control commands.
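
The posts above walk through installing pyatv and using it for device discovery and remote control. For orientation only, here is a minimal asyncio sketch of that workflow; it assumes pyatv's documented scan/connect/remote_control API and that any required pairing has already been done, so treat the details as approximate rather than authoritative.

    import asyncio
    import pyatv

    async def main():
        loop = asyncio.get_running_loop()

        # Discover Apple TV / AirPlay devices on the local network
        devices = await pyatv.scan(loop, timeout=5)
        if not devices:
            print("No device found")
            return

        # Connect to the first device found (pairing/credentials may be
        # required beforehand, depending on tvOS version and protocol)
        atv = await pyatv.connect(devices[0], loop)
        try:
            playing = await atv.metadata.playing()   # current playback state
            print(playing)
            await atv.remote_control.pause()         # send a remote-control command
        finally:
            atv.close()

    asyncio.run(main())

pyatv also ships a command-line tool (atvremote) that exposes much of the same functionality without writing any code.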

exllamav2

Posts with mentions or reviews of exllamav2. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-08.
  • Running Llama3 Locally
    1 project | news.ycombinator.com | 20 Apr 2024
  • Mixture-of-Depths: Dynamically allocating compute in transformers
    3 projects | news.ycombinator.com | 8 Apr 2024
    There are already some implementations out there which attempt to accomplish this!

    Here's an example: https://github.com/silphendio/sliced_llama

    A gist pertaining to said example: https://gist.github.com/silphendio/535cd9c1821aa1290aa10d587...

    Here's a discussion about integrating this capability with ExLlama: https://github.com/turboderp/exllamav2/pull/275

    And same as above but for llama.cpp: https://github.com/ggerganov/llama.cpp/issues/4718#issuecomm...

  • What do you use to run your models?
    14 projects | /r/LocalLLaMA | 7 Dec 2023
    Sorry, I'm somewhat familiar with this term (I've seen it as a model loader in Oobabooga), but still not following the correlation here. Are you saying I should instead be using this project in lieu of llama.cpp? Or are you saying that there is, perhaps, an exllamav2 "extension" or similar within llama.cpp that I can use?
  • I just started having problems with the colab again. I get errors and it just stops. Help?
    1 project | /r/SillyTavernAI | 5 Dec 2023
    EDIT: I reported the bug to the exllamav2 Github. It's actually already fixed, just not on any current built release.
  • Yi-34B-200K works on a single 3090 with 47K context/4bpw
    1 project | /r/LocalLLaMA | 8 Nov 2023
    Install exllamav2 from git with pip install git+https://github.com/turboderp/exllamav2.git. Make sure you have flash attention 2 as well. (A minimal loading and generation sketch appears after this list of posts.)
  • Tested: ExllamaV2's max context on 24gb with 70B low-bpw & speculative sampling performance
    2 projects | /r/LocalLLaMA | 2 Nov 2023
    Recent releases of exllamav2 bring working FP8 cache support, which I've been very excited to test. This feature doubles the maximum context length you can run with your model, without any visible downsides.
  • Show HN: Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context
    9 projects | news.ycombinator.com | 31 Oct 2023
    Without batching, I was actually thinking that's kind of modest.

    ExllamaV2 will get 48 tokens/s on a 4090, which is much slower/cheaper than an H100:

    https://github.com/turboderp/exllamav2#performance

    I didn't test codellama, but the 3090 TI figures are in the ballpark of my generation speed on a 3090.

  • Guide for Llama2 70b model merging and exllama2 quantization
    2 projects | /r/LocalLLaMA | 24 Oct 2023
    First, you need the convert.py script from turboderp's Exllama2 repo. You can read all about the convert.py arguments here.
  • LLM Falcon 180B Needs 720GB RAM to Run
    1 project | news.ycombinator.com | 24 Sep 2023
    > brute aggressive quantization

    Cutting-edge quantization like ExLlama's EXL2 is far from brute force: https://github.com/turboderp/exllamav2#exl2-quantization

    > The format allows for mixing quantization levels within a model to achieve any average bitrate between 2 and 8 bits per weight. Moreover, it's possible to apply multiple quantization levels to each linear layer, producing something akin to sparse quantization wherein more important weights (columns) are quantized with more bits. The same remapping trick that lets ExLlama work efficiently with act-order models allows this mixing of formats to happen with little to no impact on performance. Parameter selection is done automatically by quantizing each matrix multiple times, measuring the quantization error (with respect to the chosen calibration data) for each of a number of possible settings, per layer. Finally, a combination is chosen that minimizes the maximum quantization error over the entire model while meeting a target average bitrate.

    Llama.cpp is also working on a feature that lets a small model "guess" the output of a big model, which then "checks" it for correctness. This is more of a performance feature, but you could also arrange it to accelerate a big model on a small GPU. (A conceptual sketch of this draft-then-verify idea appears after this list of posts.)

  • 70B Llama 2 at 35tokens/second on 4090
    1 project | /r/patient_hackernews | 14 Sep 2023
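
Several of the exllamav2 posts above revolve around the same workflow: quantize a model to an EXL2 bitrate with convert.py, then load it and generate. As a rough, hedged sketch of that second step, the code below follows the shape of the project's example scripts from around this period; the class and method names (ExLlamaV2Config, ExLlamaV2BaseGenerator, generate_simple, and so on) and the model path are assumptions that may not match the current release.

    from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
    from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

    # Placeholder path to a model already quantized to EXL2 (e.g. via convert.py)
    config = ExLlamaV2Config()
    config.model_dir = "/models/MyModel-4.0bpw-exl2"
    config.prepare()

    model = ExLlamaV2(config)
    cache = ExLlamaV2Cache(model, lazy=True)   # KV cache (posts above mention an FP8 cache option)
    model.load_autosplit(cache)                # split weights across available GPU memory

    tokenizer = ExLlamaV2Tokenizer(config)
    generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

    settings = ExLlamaV2Sampler.Settings()
    settings.temperature = 0.8
    settings.top_p = 0.9

    print(generator.generate_simple("The three laws of robotics are", settings, 200))

On the EXL2 format quoted above: because quantization levels can be mixed per layer, a model with half of its weights at 2.5 bits and half at 3.5 bits averages out to 3.0 bits per weight, which is the kind of target average bitrate the quantizer searches for.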
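
One post above also describes llama.cpp's draft-then-verify feature, commonly called speculative decoding. The sketch below is a conceptual illustration only, not llama.cpp's or ExLlama's implementation: draft_model, target_model and their next_token / next_tokens_all helpers are hypothetical, and the greedy acceptance rule shown is a simplification of the probabilistic rule used in practice.

    def speculative_step(draft_model, target_model, context, k=4):
        """One greedy round of draft-then-verify ("speculative") decoding.

        draft_model / target_model are hypothetical objects with:
          next_token(tokens)      -> greedy next token for the sequence
          next_tokens_all(tokens) -> greedy next token at every position,
                                     computed in a single forward pass
        """
        # 1. The cheap draft model proposes k tokens autoregressively.
        proposal = []
        for _ in range(k):
            proposal.append(draft_model.next_token(context + proposal))

        # 2. The expensive target model scores all proposed positions at once.
        preds = target_model.next_tokens_all(context + proposal)

        # 3. Keep the longest prefix where the target agrees with the draft;
        #    at the first disagreement, substitute the target's own token.
        accepted = []
        for i, tok in enumerate(proposal):
            target_tok = preds[len(context) + i - 1]  # target's choice at this position
            if target_tok == tok:
                accepted.append(tok)
            else:
                accepted.append(target_tok)
                break
        else:
            # Every draft token was accepted; the target's prediction for the
            # next position comes "for free" from the same forward pass.
            accepted.append(preds[len(context) + k - 1])
        return accepted

In the best case a single pass of the large model yields up to k+1 accepted tokens instead of one, which is why a small "guessing" model can speed up a large one without changing its greedy output.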

What are some alternatives?

When comparing pyatv and exllamav2 you can also consider the following projects:

homebridge-apple-tv-remote - Plugin for controlling Apple TVs in homebridge.

llama.cpp - LLM inference in C/C++

starcli - Browse trending GitHub projects from your command line

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI

Home Assistant - Open source home automation that puts local control and privacy first.

SillyTavern - LLM Frontend for Power Users.

homebridge-cmd4 - CMD4 Plugin for Homebridge - Supports ~All Accessory Types & now all Characteristics too

ChatGPT-AutoExpert - 🚀🧠💬 Supercharged Custom Instructions for ChatGPT (non-coding) and ChatGPT Advanced Data Analysis (coding).

homebridge-cmd-television

OmniQuant - [ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.

opsdroid - 🤖 An open source chat-ops bot framework

BlockMerge_Gradient - Merge Transformers language models by use of gradient parameters.