Mdev-GPU vs gvt-linux

Compare Mdev-GPU vs gvt-linux and see how they differ.

Mdev-GPU

A user-configurable utility for GPU vendor drivers enabling the registration of arbitrary mdev types with the VFIO-Mediated Device framework. (by Arc-Compute)
                 Mdev-GPU                               gvt-linux
Mentions         3                                      23
Stars            54                                     494
Growth           -                                      0.2%
Activity         0.0                                    0.0
Latest commit    over 1 year ago                        11 days ago
Language         Haskell                                C
License          GNU General Public License v3.0 only   GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Mdev-GPU

Posts with mentions or reviews of Mdev-GPU. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-08-24.
  • Libvf.io: Add GPU Virtual Machine (GVM) Support
    3 projects | news.ycombinator.com | 24 Aug 2022
    According to https://docs.linux-gvm.org/

    "GVM ... may be used in combination with KVM or other platform hypervisors such as Xen* to provide a complete virtualization solution via both central processing (CPU) and graphics processing (GPU) hardware acceleration."

  • GVM: A GPU Virtual Machine for Iommu-Capable Computers
    15 projects | news.ycombinator.com | 6 Jul 2022
    > OpenMdev.io is meant for developers, not for users.

    Frankly, it isn't meant for developers, either. Almost every page on that site is either woefully incomplete or crib notes from docs/talks, which is fine for a high-level overview but not an API reference developers can use. The sample code is mostly lifted from other places (such as https://github.com/torvalds/linux/blob/master/samples/vfio-m...), so useless that you're better off reading the source (https://openmdev.io/index.php/OpenRM), or it's just links to other people's APIs, which interested devs can find on their own.

    It's fine to collate this, but it's far more like someone's personal aggregator than any kind of reference site.

    > No, it's a Libvirt alternative with convenience functions for VFIO users. Here's the documentation:

    > https://openmdev.io/index.php/LibVF.IO

    I read this before I ever wrote a reply, which you should have guessed, because there was no other way to get any information. None of it tells anyone WHY they should use this instead of the bindings which have 100 developers on them, which have been battle-tested for years, and for which the original author of VFIO wrote exhaustive, excellent manuals on the blog I linked earlier 7.5 years ago.

    What advantages does your system offer?

    > GVM/Mdev-GPU is unrelated to LibVF.IO which I think is where you're getting confused. LibVF.IO does not actually have any integration with GVM/Mdev-GPU so if you're reading that code you're not going to learn how GVM/Mdev-GPU works. We're planning to integrate the two but it's not done yet.

    I'm not confused either about how GVM/mdev-gpu works or about its relation to libvf.io. It's not hard to read between the missing lines of your project roadmap.

    > GVM/Mdev-GPU creates the mediated devices that are exposed in the mdevctl list. Read this code instead: https://github.com/Arc-Compute/Mdev-GPU/

    I did read that. There's nowhere else I could have gotten "Haskell bindings to RMAPI" from. I didn't call it anything else because it doesn't manage any other kind of mediated device and it's a pretty thin shim. There's no real way to suss out what it's doing other than reading the code or the autogenerated module docs, which don't tell developers where to get the values they need to populate it; those only come from reading other API docs (not yours), and if developers are going to do that, they may as well write their own bindings in a language they like better.

    It's not clear from the outset what the advantage is over just submitting a PR to mdevctl to echo into /sys/devices/..../[create|remove], and overall, the README doesn't give any information about it whatsoever, not even `--help` output to show the args and defaults.
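
    (For reference, the sysfs flow referred to here looks roughly like the sketch below. The PCI address and mdev type are hypothetical placeholders; real values come from the mdev_supported_types directories on your own hardware, and it needs root.)

        # Sketch of the "echo into /sys/devices/..../[create|remove]" flow.
        # The parent address and type name below are assumptions.
        import uuid
        from pathlib import Path

        parent = Path("/sys/bus/pci/devices/0000:00:02.0")  # hypothetical parent GPU
        mdev_type = "i915-GVTg_V5_4"                        # hypothetical mdev type
        dev_uuid = str(uuid.uuid4())

        # Creating a mediated device is writing a fresh UUID to the type's
        # "create" node under the parent device...
        (parent / "mdev_supported_types" / mdev_type / "create").write_text(dev_uuid)

        # ...and removing it is writing "1" to the device's "remove" node.
        Path("/sys/bus/mdev/devices", dev_uuid, "remove").write_text("1")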

    > Sure, arcd is a reference Virtual Machine Monitor as it says at the top of this page: https://openmdev.io/index.php/LibVF.IO

    No, it is not. Point blank, it is not. libvirt also isn't. Even qemu isn't for hardware virt, and you're not doing IOMMU operations on emulated CPU calls. kvm is. arcd is a toolkit to manage virtual machines, maybe.

    This does not answer the question at all of "why not libvirt?"

    > It's actually unrelated to GVM. You can use GVM with whatever you want, including Libvirt/Virsh/Virt-Manager because we wanted to support users of those things with GVM rather than requiring that they use LibVF.IO.

    It's unrelated... for now. And there is zero reason to use this instead of libvirt hooks which were written by and are tested by teams which already do this for libvirt (which virsh and virt-manager are just interfaces to anyway).

    Again, I'm not saying this to be critical. There are plenty of libvirt-based projects out there which would welcome a standardized tool they could all use as an entrypoint to this, rather than each re-inventing their own tooling to handle creation, because libvirt strictly does not (and will not, even with modularity) cover this use case. It is unlikely in the extreme that the current state of GVM will work for anyone else's use case, primarily because "give me the UUIDs of existing devices" is already handled by walking /sys, and creating a new device of a given type (or removing it on VM shutdown) is handled in more or less the same way.
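
    (The "walking /sys" part is indeed trivial; a minimal sketch, assuming the standard VFIO mdev sysfs layout:)

        # Sketch: enumerate the UUIDs of existing mediated devices by walking
        # /sys, per the standard VFIO mdev layout. No extra tooling required.
        from pathlib import Path

        mdev_bus = Path("/sys/bus/mdev/devices")
        if mdev_bus.is_dir():
            for dev in sorted(mdev_bus.iterdir()):
                # Entries are named by UUID; "mdev_type" is a symlink back to
                # the type the device was created from.
                print(dev.name, (dev / "mdev_type").resolve().name)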

    GVM isn't built in a way which is usable by any other project. That's ok, but it does nothing to explain the design decision.

    > Well, we do create mediated devices exposed in mdevctl defined by a user config file, so I would say it goes a fair amount beyond Haskell bindings for the RMAPI. I think it's reasonable to describe a GPU mediated device as a virtual GPU given you get a virtual function that represents a scheduling share and virtual BAR space with a share of the device VRAM (partition of the GPU) which you can pass to one or several guests to allow them to run an unmodified guest GPU driver. I can't really think of a better definition for a vGPU. The Mediated Device Internals article pretty much explains the APIs GVM is dealing with - I believe we even link some sample code: https://openmdev.io/index.php/Mediated_Device_Internals

    You create mediated devices for nVidia devices by sending ioctls exposed via RMAPI. This is potato/potato. The explanation you just gave STILL makes it sound as if this is a novel thing done by GVM/mdev-gpu rather than something common, and talking down to someone who is asking informed questions about why you did it this way by linking to internals (when I was physically there for most of those talks and helped write some of the docs) doesn't paint a pretty picture.

    > Your comment seems kind of trollish so I'm not really sure what benefit continuing this thread has. I think most of the stuff you're asking about is more or less documented and spelled out as openly as we're able to. What we're trying to do here is to make this stuff more open and available to people rather than locked away behind binary blobs. More or less everything we do is put into our wiki with very few exceptions. OpenMdev.io is made to be open to our community of folks working on Mediated Device/IO Virtualization functions on various projects, so if you're a developer on this stuff and think anything is lacking you're welcome to contribute or suggest it to us in our IRC or Discord. I'm sure there's always room to improve, and we put a ton of effort into trying to listen to feedback and improve upon things ourselves as well as accept contributions from others.

    NONE OF THE STUFF I'M ASKING ABOUT IS DOCUMENTED OR SPELLED OUT. That's the point. As someone who was a maintainer, engineering leader, etc. on a major open source virtualization platform, and who literally wrote code which does this kind of scheduling/creation across a cluster, I am telling you that your documentation is opaque, misleading, takes credit for things you did not invent, doesn't explain your use case, doesn't explain why you re-invented the wheel, and doesn't explain why there's a gaping "missing middle" between "here are kernel sources/function signatures in drivers" and "here's a tool" (where that "missing middle" is /sys/devices/.../mdev_supported_types[/...] and "echo|uuidgen"), etc.

    This is, or could be, a great start to a unified ecosystem. You are going to have a very hard time building a developer/user ecosystem if you do not provide better documentation and "what these tools do", find a way to talk with other virt developers without condescending to them, present usable interfaces other projects can call which are not "here is YAML/JSON to operate on with exec()", and, most of all, acknowledge the work others have done and the knowledge they have rather than presenting all of this like it's brand new or novel. It could be a great utility. Or it could be something no other project ever uses. That's up to you.

    My comments are not intended to be trollish. They are intended to tell you "as someone who has written very similar code and done very similar things for a long time, the only way to figure out what the hell any of this was supposed to do was to literally read the source and make educated guesses". The average developer/user is not going to have the knowledge base to make those guesses at all, but they may see references to "arcd ..." like it's "developer documentation", go find it, and ask "why the hell is this managing qemu directly instead of libvirt", or "why is no libvirt XML/qemu hook provided"?

    These are real problems for the project. Docs matter, always, for every project. I know yours is new, but these are of unusually low quality for a submission to HN, and doubling down with links to the same inadequate docs, as if everyone you're talking to is a moron, doesn't help your reputation. Additionally: examples. And reach out to others -- Proxmox, oVirt, XCP, OpenStack (Nova). See if you can collaborate. This will mean using (or at least providing) libvirt bindings/XML snippets like everyone else. It will be worth it.

gvt-linux

Posts with mentions or reviews of gvt-linux. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-07-06.
  • N4020 IGPU passthru
    1 project | /r/Proxmox | 10 Sep 2022
    Yeah, I read that too. I also found this: https://github.com/intel/gvt-linux/issues/64
  • 19 August 2022 - Daily Chat Thread
    1 project | /r/indonesia | 19 Aug 2022
  • WAN Show - Ryan Shrout & Tom Petersen talk with Linus about Arc GPU and other hardware
    1 project | /r/hardware | 16 Jul 2022
  • GVM: A GPU Virtual Machine for Iommu-Capable Computers
    15 projects | news.ycombinator.com | 6 Jul 2022
    Intel has already confirmed that GVT-g is essentially dead and not supported on their Iris/Xe or any newer graphics. We can also confirm this via their own driver source:

    https://github.com/intel/gvt-linux/blob/gvt-staging/drivers/...

  • Laptop GPU for Host use in PCI OVMF pass thru? + confusion re using iGPU
    2 projects | /r/VFIO | 16 May 2022
    On Intel iGPUs, there are two methods: GVT-g and GVT-d. GVT-g is basically creating virtual instances of the iGPU for use in VMs, while GVT-d is passing through an entire iGPU to the guest in the same way you would do with a normal GPU.
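
    (A quick way to see whether GVT-g is available on a given iGPU is to look for mdev types under the device. A sketch, assuming the usual 0000:00:02.0 slot for the integrated GPU:)

        # Sketch: list the GVT-g mdev types an Intel iGPU exposes and how many
        # instances of each are still available. 0000:00:02.0 is the usual
        # integrated-GPU slot, but that's an assumption; check lspci.
        from pathlib import Path

        types_dir = Path("/sys/bus/pci/devices/0000:00:02.0/mdev_supported_types")
        for t in sorted(types_dir.iterdir()):
            desc = (t / "description").read_text().strip() if (t / "description").exists() else ""
            avail = (t / "available_instances").read_text().strip()
            print(t.name, avail, desc)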
  • list of gvt-d supported cpu? thx
    1 project | /r/intel | 19 Apr 2022
    From the Intel GVTg_Setup_Guide:
  • Kholia/OSX-KVM: Run macOS on QEMU/KVM
    8 projects | news.ycombinator.com | 2 Dec 2021
    Not really pass through, no. If CONFIG_DRM_I915_GVT is enabled in your kernel, you can use Intel's graphics virtualization system... basically a virtio style virtual device that shares the GPU between VM and host. IMO this is way more convenient than real passthrough, where the device is only available either to the VM or the host. The downside is that you don't get full performance in the VM.

    "Intel GVT-g is a full GPU virtualization solution with mediated pass-through (VFIO mediated device framework based), starting from 5th generation Intel Core(TM) processors with Intel Graphics processors. GVT-g supports both Xen and KVM (a.k.a XenGT & a.k.a KVMGT). A virtual GPU instance is maintained for each VM, with part of performance critical resources directly assigned. The capability of running native graphics driver inside a VM, without hypervisor intervention in performance critical paths, achieves a good balance among performance, feature, and sharing capability."

    https://github.com/intel/gvt-linux/wiki/GVTg_Setup_Guide
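
    (Whether CONFIG_DRM_I915_GVT is enabled can be checked from the running kernel's config. A sketch, assuming either /proc/config.gz or a distro-style /boot/config-* file is present:)

        # Sketch: check the running kernel for CONFIG_DRM_I915_GVT. Assumes
        # either /proc/config.gz (CONFIG_IKCONFIG_PROC) or a
        # /boot/config-$(uname -r) file exists.
        import gzip, platform
        from pathlib import Path

        option = "CONFIG_DRM_I915_GVT"
        proc_cfg = Path("/proc/config.gz")
        if proc_cfg.exists():
            text = gzip.open(proc_cfg, "rt").read()
        else:
            text = Path(f"/boot/config-{platform.release()}").read_text()

        line = next((l for l in text.splitlines() if l.startswith(option + "=")), None)
        print(line or f"{option} not set")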

  • Full passthrough / GVT-d of 11th gen iGPU (Rocket Lake) to Windows 10 guest - logging my attempt.
    6 projects | /r/VFIO | 22 Nov 2021
    Wait a minute. GVT-g with 11th-gen iGPUs upwards does work in a Linux guest? Are you sure about that? See this GitHub issue for reference.
  • Show HN: VGPU and SR-IOV on Consumer GPUs
    7 projects | news.ycombinator.com | 21 Oct 2021
    To be clear, I never said it was dead, only a dead end.

    As for GVT-g and Xe, according to a post in this[0] issue by one of the Intel devs, Rocket Lake (Xe) is not getting support and only does GVT-d.

    Also in the same issue, someone pointed out that Intel themselves have stated as much here[1].

    I hope I am proven wrong in the end and GVT-g comes to the entire Xe and Arc lineup. Intel's communication on this matter has been... lacking.

    0: https://github.com/intel/gvt-linux/issues/190

    1: https://www.intel.com/content/www/us/en/support/articles/000...

  • GVT-D setup
    1 project | /r/intel | 13 Sep 2021
    After days of trial and error I could not get it to work; maybe one of you knows the answer. I'm currently trying to set up GVT-d with KVM on my Dell XPS 13 2-in-1 7390, which has an i7-1065G7. AFAIK GVT-g is not supported, so I gave GVT-d a chance. The virtual machine boots without any errors, but the display stays black. I only found this guide, but couldn't get it to work...
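
    (For context: GVT-d is ordinary VFIO passthrough of the whole iGPU, so the usual driver_override rebind applies. A sketch with a placeholder PCI address, to be run as root; note that any display driven by the iGPU will go dark once it detaches from i915:)

        # Sketch: rebind an iGPU from i915 to vfio-pci for GVT-d-style
        # passthrough. The PCI address is a hypothetical placeholder.
        from pathlib import Path

        gpu = "0000:00:02.0"  # hypothetical iGPU address; check lspci
        dev = Path("/sys/bus/pci/devices") / gpu

        if (dev / "driver").exists():
            (dev / "driver" / "unbind").write_text(gpu)     # detach from i915
        (dev / "driver_override").write_text("vfio-pci")    # steer the next probe
        Path("/sys/bus/pci/drivers_probe").write_text(gpu)  # rebind now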

What are some alternatives?

When comparing Mdev-GPU and gvt-linux you can also consider the following projects:

linux-intel-lts

Single-GPU-Passthrough

LibVF.IO - A vendor neutral GPU multiplexing tool driven by VFIO & YAML.

jellyfin-ffmpeg - FFmpeg for Jellyfin

GVM-user - GVM-user.

i915ovmfPkg - VBIOS for Intel GPU Passthrough

VFIO-Mdev_Samples - Sample code for creating a VFIO Mediated Device. GPLv2 sources mirrored from elixir.bootlin.com with simple makefile changes.

UEFITool - UEFI firmware image viewer and editor

quickemu - Quickly create and run optimised Windows, macOS and Linux virtual machines

vgpu_unlock - Unlock vGPU functionality for consumer grade GPUs.