Whisper vs tensil

Compare Whisper vs tensil and see how they differ.

Whisper

High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model (by Const-me)
                Whisper                      tensil
Mentions        32                           12
Stars           7,182                        319
Growth          -                            0.0%
Activity        6.5                          0.0
Latest commit   7 months ago                 over 1 year ago
Language        C++                          Scala
License         Mozilla Public License 2.0   GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

Whisper

Posts with mentions or reviews of Whisper. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-17.
  • Nvidia Speech and Translation AI Models Set Records for Speed and Accuracy
    1 project | news.ycombinator.com | 18 Apr 2024
    I've been using WhisperDesktop ( https://github.com/Const-me/Whisper ) with great success on a 3090 for fast & accurate transcription of often poor-quality, hours-long, multi-speaker Euro-English audio files. If there's an easy way to compare, I'm certainly going to give this a try.
  • AMD's CDNA 3 Compute Architecture
    7 projects | news.ycombinator.com | 17 Dec 2023
    Why would you want OpenCL? Pretty sure D3D11 compute shaders are gonna be adequate for a Torch backend, and they even work on Linux with Wine: https://github.com/Const-me/Whisper/issues/42 Native Vulkan compute shaders would be even better.

    Why would you want unified address space? At least in my experience, it’s often too slow to be useful. DMA transfers (CopyResource in D3D11, copy command queue in D3D12, transfer queue in VK) are implemented by dedicated hardware inside GPUs, and are way more efficient.

  • Amazon Bedrock Is Now Generally Available
    2 projects | news.ycombinator.com | 28 Sep 2023
    https://github.com/ggerganov/whisper.cpp

    https://github.com/Const-me/Whisper

    I had fun with both of these. They will both do realtime transcription. But you will have to download the training data sets…

  • Why Nvidia Keeps Winning: The Rise of an AI Giant
    3 projects | news.ycombinator.com | 6 Jul 2023
    Gamers don’t care about FP64 performance, and it seems nVidia is using that for market segmentation. The FP64 performance for RTX 4090 is 1.142 TFlops, for RTX 3090 Ti 0.524 TFlops. AMD doesn’t do that: FP64 performance is consistently better there, and has been this way for quite a few years. For example, the figure for the 3090 Ti (a $2000 card from 2022) is similar to the Radeon RX Vega 56, a $400 card from 2017 which can do 0.518 TFlops.

    And another thing: nVidia forbids usage of GeForce cards in data centers, while AMD allows that. I don’t know how specifically they define datacenter, whether it’s enforceable, or whether it’s tested in courts of various jurisdictions. I just don’t want to find out the answers to these questions at the legal expense of my employer. I believe they would prefer not to cut corners like that.

    I think nVidia only beats AMD due to the ecosystem: for GPGPU that’s CUDA (and especially the included first-party libraries like BLAS, FFT, DNN and others), also due to the support in popular libraries like TensorFlow. However, it’s not that hard to ignore the ecosystem, and instead write some compute shaders in HLSL. Here’s a non-trivial open-source project unrelated to CAE, where I managed to do just that with decent results: https://github.com/Const-me/Whisper That software even works on Linux, probably due to Valve’s work on DXVK 2.0 (a compatibility layer which implements D3D11 on top of Vulkan).

  • Ask HN: What is your recommended speech to text/audio transcription tool?
    1 project | news.ycombinator.com | 12 Jun 2023
    Currently, I use a GUI for Whisper AI (https://github.com/Const-me/Whisper) to upload MP3s of interviews to get text transcripts. However, I'm hoping to find another tool that would recognize and split out the text per speaker.

    Does such a thing exist?

  • From audio to text, any advice?
    1 project | /r/Universitaly | 8 Jun 2023
  • Ask HN: Any recommendations for cheap, high-quality transcription software
    2 projects | news.ycombinator.com | 29 May 2023
    I just used Whisper over the weekend to transcribe 5 hours of meetings; it worked nicely and it can be run on a single GPU locally. https://github.com/ggerganov/whisper.cpp

    There are a few wrappers available with a GUI, like https://github.com/Const-me/Whisper

  • Voice recognition software for German
    2 projects | /r/software | 20 May 2023
  • Const-me/Whisper: High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
    1 project | /r/thirdbrain | 15 May 2023
  • I built a massive search engine to find video clips by spoken text
    3 projects | /r/videos | 10 May 2023
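
Two of the comments above describe the technical approach behind Const-me/Whisper: compute shaders written in HLSL and dispatched through D3D11, with explicit staged copies (CopyResource) rather than a unified address space. The sketch below is a generic, minimal illustration of that pattern, not code taken from the Whisper repository: it compiles a trivial HLSL kernel, dispatches it over a structured buffer, then reads the result back through a STAGING buffer, which is the explicit copy path serviced by the GPU's dedicated copy/DMA hardware.

```cpp
// Minimal D3D11 compute sketch: compile an HLSL shader, dispatch it, then
// read the result back via an explicit CopyResource into a staging buffer.
// Build (MSVC): cl /EHsc demo.cpp d3d11.lib d3dcompiler.lib
// Error handling and COM releases are omitted for brevity.
#include <d3d11.h>
#include <d3dcompiler.h>
#include <cstdio>
#include <cstring>

static const char* hlsl = R"(
RWStructuredBuffer<float> buf : register(u0);
[numthreads(64, 1, 1)]
void main(uint3 id : SV_DispatchThreadID) { buf[id.x] *= 2.0f; }
)";

int main()
{
    ID3D11Device* dev = nullptr;
    ID3D11DeviceContext* ctx = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &dev, nullptr, &ctx);

    // Compile and create the compute shader.
    ID3DBlob* code = nullptr;
    D3DCompile(hlsl, strlen(hlsl), nullptr, nullptr, nullptr,
               "main", "cs_5_0", 0, 0, &code, nullptr);
    ID3D11ComputeShader* cs = nullptr;
    dev->CreateComputeShader(code->GetBufferPointer(), code->GetBufferSize(),
                             nullptr, &cs);

    // GPU-resident structured buffer (DEFAULT heap) with initial data.
    const UINT n = 64;
    float data[n];
    for (UINT i = 0; i < n; i++) data[i] = (float)i;
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth = n * sizeof(float);
    bd.Usage = D3D11_USAGE_DEFAULT;
    bd.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
    bd.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    bd.StructureByteStride = sizeof(float);
    D3D11_SUBRESOURCE_DATA init = { data };
    ID3D11Buffer* gpuBuf = nullptr;
    dev->CreateBuffer(&bd, &init, &gpuBuf);

    D3D11_UNORDERED_ACCESS_VIEW_DESC ud = {};
    ud.Format = DXGI_FORMAT_UNKNOWN;
    ud.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
    ud.Buffer.NumElements = n;
    ID3D11UnorderedAccessView* uav = nullptr;
    dev->CreateUnorderedAccessView(gpuBuf, &ud, &uav);

    // Bind and dispatch one group of 64 threads.
    ctx->CSSetShader(cs, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);
    ctx->Dispatch(1, 1, 1);

    // Explicit copy into a CPU-readable STAGING buffer; on real hardware
    // this runs on the GPU's dedicated copy engines.
    D3D11_BUFFER_DESC sd = bd;
    sd.Usage = D3D11_USAGE_STAGING;
    sd.BindFlags = 0;
    sd.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    ID3D11Buffer* staging = nullptr;
    dev->CreateBuffer(&sd, nullptr, &staging);
    ctx->CopyResource(staging, gpuBuf);

    D3D11_MAPPED_SUBRESOURCE m = {};
    ctx->Map(staging, 0, D3D11_MAP_READ, 0, &m); // blocks until the copy is done
    printf("buf[1] = %f\n", ((const float*)m.pData)[1]); // expect 2.0
    ctx->Unmap(staging, 0);
    return 0;
}
```

In a real pipeline the staging buffer would be allocated once and reused across readbacks; and because this is plain D3D11, a layer like DXVK can translate it to Vulkan, which is consistent with the Linux support mentioned above.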

tensil

Posts with mentions or reviews of tensil. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-06.
  • Tensil
    1 project | news.ycombinator.com | 22 Jun 2023
  • Introduction to FPGAs
    9 projects | news.ycombinator.com | 6 Feb 2023
  • ML projects for FPGA
    2 projects | /r/FPGA | 9 Nov 2022
    This is an example project on the higher side of complexity: https://github.com/tensil-ai/tensil.
  • Implementing Deep Convolution Neural Network on FPGA
    1 project | /r/FPGA | 30 Jun 2022
    You might be interested in checking out www.tensil.ai, an open-source ML accelerator for FPGAs. We don't officially support Stratix yet, but you should be able to adapt it quite easily. Reach out on our Discord if you want to talk about it!
  • What do you think of Chisel HDL? Is it worth learning over Verilog/SystemVerilog?
    4 projects | /r/FPGA | 29 Jun 2022
    www.tensil.ai
  • NN Inference on PYNQ-Z2
    1 project | /r/FPGA | 23 May 2022
    You should check out Tensil. That's what I had the most success with. You can just follow the tutorial for the PYNQ-Z1; the only difference is that you need to use the PYNQ-Z2 board files instead of the ones listed in the tutorial when creating your Vivado project. The developers are also very active and helpful on Discord and GitHub. You can find them at www.tensil.ai
  • Launch HN: Tensil (YC S19) – Open-Source ML Accelerators
    3 projects | news.ycombinator.com | 11 Mar 2022
    Hello HN! I'm Tom, co-founder at Tensil (https://www.tensil.ai/). We design free and open source machine learning accelerators that anyone can use.

    A machine learning inference accelerator is a specialized chip that can run the operations used in ML models very quickly and efficiently. It can be either an ASIC or an FPGA, with ASIC giving better performance but FPGA being more flexible.

    Custom accelerators offer dramatically better performance per watt than existing GPU and CPU options. Massive companies like Google and Facebook use them to make training and inference cheaper. However, everyone else has been left out: small and mid-sized companies, students and academics, hobbyists and tinkerers currently have no chance of getting ML hardware that perfectly suits their needs. We aim to change that, starting with ML inference on embedded and edge FPGA platforms. Our dream is that our accelerators help people make new applications possible that simply weren't feasible before.

    We believe that advances in AI go hand in hand with advances in computing hardware. As a couple of software and ML engineers hoping to live in a world alongside intelligent machines, we wanted to know why those hardware advances were taking so long! We taught ourselves digital design and gradually realized that the next generation of hardware will need to be finely customized to enable state of the art ML models at the edge, that is, running on your devices and not in the cloud. In the CPU world, the RISC-V RocketChip implementation has proven the value of customizable compute hardware. The problem was that no one was building that kind of capability for ML acceleration. We started Tensil to build customizable ML accelerators and see what kind of applications people can create with them.

    Tensil is a set of tools for running ML models on custom accelerator architectures. It includes an RTL generator, a model compiler, and a set of drivers. It enables you to create a custom accelerator, compile an ML model targeted at it, and then deploy and run that compiled model. To see how to do this and get it running on an FPGA platform, check out our tutorial at https://www.tensil.ai/docs/tutorials/resnet20-ultra96v2/.

    We developed an accelerator generator in Chisel and then wrote a parameterizable graph compiler in Scala. (Fun fact: unlike in software, formal verification is actually a totally viable way to test digital circuits and we have made great use of this technique.) The accelerator generator takes in the desired architecture parameters and produces an instance of the accelerator which can be synthesized using standard EDA tools. The compiler implements ML models using the accelerator’s instruction set and can target any possible instance of the accelerator.

    Currently, the accelerator architecture is based around a systolic array, similar to well-known ML ASICs. You can view the architecture spec in our documentation. The compiler performs a wide variety of tasks but is optimized for convolutional neural networks. There are also drivers for each supported platform, currently limited to FPGAs running bare-metal or with a host OS.

    When you tell the driver to run your ML model, it sets up the input data and then streams the compiled model into the accelerator. The accelerator independently accesses host memory during execution. When the accelerator is done, the driver is notified and looks for the output in the pre-assigned area of host memory.

    How are we different from other accelerator options? There are many ML ASICs out there but they are all locked into a single architecture, whereas we have customization at the core of our technology. This offers the potential for a better trade-off between performance/price/watts/accuracy. Compared with other FPGA options, Xilinx DPU is great but it’s closed source and can be difficult to work with if your model is in any way customized. By going open source, we aim to support the widest possible range of models. FINN is a very cool project but requires big changes to your model in order to work, and also typically requires large FPGAs which are unsuitable for edge deployments. We work out of the box with any model (no need to quantize), and on small edge FPGAs. For embedded systems, tflite/tfmicro are great for deploying very small ML models on extremely constrained edge devices, but they are limited in terms of the performance and accuracy that can be achieved. Our tools allow you to work with full size state of the art models at high accuracy and speed.

    Currently we're focused on the edge and embedded ML inference use case. If you…

  • Tensil - Open source machine learning inference accelerators on FPGA
    1 project | /r/realtech | 9 Mar 2022
    1 project | /r/technology | 9 Mar 2022
    1 project | /r/tech | 9 Mar 2022
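
The Launch HN post above says the accelerator architecture is built around a systolic array. Tensil's actual array is generated from parameterized Chisel; purely as a conceptual illustration of the technique (not Tensil's RTL), here is a small C++ simulation of an output-stationary systolic array computing a matrix product, with the skewed operand feeding and neighbour-to-neighbour forwarding that define the approach.

```cpp
#include <cstdio>
#include <vector>

int main() {
    // C = A (N x K) times B (K x N) on a simulated N x N output-stationary
    // systolic array. Each processing element (PE) keeps an accumulator,
    // consumes an A operand from its left neighbour and a B operand from
    // its top neighbour every cycle, and forwards both operands onward.
    const int N = 4, K = 4;
    float A[N][K], B[K][N];
    for (int i = 0; i < N; ++i)
        for (int k = 0; k < K; ++k) A[i][k] = float(i + 1);
    for (int k = 0; k < K; ++k)
        for (int j = 0; j < N; ++j) B[k][j] = float(j + 1);

    std::vector<std::vector<float>> aReg(N, std::vector<float>(N, 0.0f));
    std::vector<std::vector<float>> bReg(N, std::vector<float>(N, 0.0f));
    std::vector<std::vector<float>> acc (N, std::vector<float>(N, 0.0f));

    // Inputs enter skewed: row i of A is delayed by i cycles and column j
    // of B by j cycles, so matching k-indices meet inside PE(i, j).
    for (int t = 0; t < K + 2 * N; ++t) {
        auto aNext = aReg, bNext = bReg;
        for (int i = 0; i < N; ++i) {
            for (int j = 0; j < N; ++j) {
                float aIn = (j == 0)
                    ? ((t - i >= 0 && t - i < K) ? A[i][t - i] : 0.0f)
                    : aReg[i][j - 1];
                float bIn = (i == 0)
                    ? ((t - j >= 0 && t - j < K) ? B[t - j][j] : 0.0f)
                    : bReg[i - 1][j];
                acc[i][j] += aIn * bIn; // multiply-accumulate in place
                aNext[i][j] = aIn;      // forward right on the next cycle
                bNext[i][j] = bIn;      // forward down on the next cycle
            }
        }
        aReg.swap(aNext);
        bReg.swap(bNext);
    }

    // acc now equals the ordinary matrix product A * B.
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) printf("%6.1f ", acc[i][j]);
        printf("\n");
    }
    return 0;
}
```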

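The post also outlines the runtime handshake: the driver stages the input, streams the compiled model to the accelerator, the accelerator reads and writes host memory on its own, and the driver finally collects the output from a pre-assigned region. Below is a compile-only C++ sketch of that memory-mapped, poll-for-completion style of interface; every register name and layout here is invented for illustration and is not Tensil's actual driver API.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical memory-mapped register block; NOT Tensil's real interface.
struct AcceleratorRegs {
    volatile uint64_t programAddr; // host address of the compiled model
    volatile uint64_t programSize;
    volatile uint64_t inputAddr;   // pre-assigned input region
    volatile uint64_t outputAddr;  // pre-assigned output region
    volatile uint32_t start;       // write 1 to begin execution
    volatile uint32_t done;        // device raises this when finished
};

// Run one inference: stage the input, point the device at the program and
// I/O regions, kick it off, poll for completion, then collect the output.
std::vector<float> runModel(AcceleratorRegs* regs,
                            const std::vector<uint8_t>& program,
                            const std::vector<float>& input,
                            float* inputRegion, float* outputRegion,
                            size_t outputCount)
{
    std::memcpy(inputRegion, input.data(), input.size() * sizeof(float));

    regs->programAddr = reinterpret_cast<uint64_t>(program.data());
    regs->programSize = program.size();
    regs->inputAddr   = reinterpret_cast<uint64_t>(inputRegion);
    regs->outputAddr  = reinterpret_cast<uint64_t>(outputRegion);

    regs->done  = 0;
    regs->start = 1; // the accelerator now reads host memory independently

    while (!regs->done) { /* a real driver would block on an interrupt */ }

    return std::vector<float>(outputRegion, outputRegion + outputCount);
}
```
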
What are some alternatives?

When comparing Whisper and tensil you can also consider the following projects:

whisper.cpp - Port of OpenAI's Whisper model in C/C++

VexRiscv - An FPGA-friendly 32-bit RISC-V CPU implementation

whisper - Robust Speech Recognition via Large-Scale Weak Supervision

SpinalHDL - A Scala-based HDL

TransformerEngine - A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.

Rosebud - Framework for FPGA-accelerated Middlebox Development

just-an-email - App to share files & texts between your devices without installing anything

chisel-book - Digital Design with Chisel

ggml - Tensor library for machine learning

DFiant - A dataflow hardware description language

beaker - An experimental peer-to-peer Web browser

edalize - An abstraction library for interfacing EDA tools