Wiki_MiSTer vs Whisper

| | Wiki_MiSTer | Whisper |
|---|---|---|
| Mentions | 15 | 32 |
| Stars | 99 | 7,182 |
| Growth | - | - |
| Activity | 10.0 | 6.5 |
| Latest commit | over 1 year ago | 7 months ago |
| Language | C++ | - |
| License | - | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Wiki_MiSTer
-
OpenFPGA. The future of video game preservation
I think it's a bit rich to describe this as the 'future of video game preservation'.
The MiSTer project (https://github.com/MiSTer-devel/Wiki_MiSTer/wiki) more rightfully deserves that title. It's got a huge range of systems (consoles, arcade machines and microcomputers) and it's all GPL-licensed. The base board is a Terasic DE10-Nano, which is proprietary, but all other required hardware is open source.
The MiSTeX project (https://github.com/MiSTeX-devel) aims to make MiSTer portable across different FPGA platforms, so a DE10-Nano won't be mandatory, enabling a new ecosystem of open hardware and commercial for-profit solutions.
I take no issue with people wanting to make money in this space. I take great issue with trying to gatekeep system preservation behind a mostly closed system you stamp an 'open' moniker on.
-
New!
Go to the source (https://github.com/MiSTer-devel/Wiki_MiSTer/wiki), which should have all the information you need; see also https://misterfpga.org/ and https://discord.gg/4xKVg4XVYn
-
Minimig v1.97itx 6MB
Cute, but I don't see the point relative to MiSTer.
-
How Does an FPGA Work?
The MiSTer project[0] is a wonderful introduction to a practical use case for FPGAs. It uses Verilog to describe how the DE10-Nano chip should be set up to resemble various classic computers, arcade machines, and video game consoles. With a single device you can have an Apple II+, Super Street Fighter II Turbo, and a SNES. Currently it supports up to the PlayStation for console cores, which is probably the upper bound for the DE10-Nano.
The entire project feels perfectly in line with hacker mentality and is exciting to watch grow. There's nothing like playing Super Metroid with an original SNES controller on a CRT at the end of the day.
[0] https://github.com/MiSTer-devel/Wiki_MiSTer/wiki
- Straightforward question regarding direct video
-
An argument for a new standalone FPGA-based Amiga aimed at the retro community
I'd advise anyone interested to look at MiSTer instead. That is proper open hardware and has a very mature ecosystem of cores, including the Amiga.
-
I really like the idea of the Amiga 500 Mini. Is it frustrating to sideload programs on it? Do the companies frown on you putting ROMs on it? (I can't imagine Mortal Kombat is fretting over my Amiga ROM.)
Consider the MiSTer as an open-source hardware (FPGA) alternative (Minimig FPGA core).
-
Loading games from USB drive connected to my ASUS router with SMB enabled
Have you followed the steps here: https://github.com/MiSTer-devel/Wiki_MiSTer/wiki/Samba ? Funnily enough, Google pointed me here.
-
Advice on MiSTer arcade cabinet setup
Here: https://github.com/MiSTer-devel/Wiki_MiSTer/wiki and https://misterfpga.org/. I assume you already know about these links?
-
Introduction to FPGAs
You can find a lot of old computers and game consoles implemented in FPGA here:
- https://github.com/MiSTer-devel/Wiki_MiSTer/wiki/Cores
Whisper
-
Nvidia Speech and Translation AI Models Set Records for Speed and Accuracy
I've been using WhisperDesktop (https://github.com/Const-me/Whisper) with great success on a 3090 for fast and accurate transcription of often poor-quality, hours-long, multi-speaker Euro-English audio files. If there's an easy way to compare, I'm certainly going to give this a try.
-
AMD's CDNA 3 Compute Architecture
Why would you want OpenCL? Pretty sure D3D11 compute shaders are going to be adequate for a Torch backend, and they even work on Linux with Wine: https://github.com/Const-me/Whisper/issues/42 Native Vulkan compute shaders would be even better.
Why would you want unified address space? At least in my experience, it’s often too slow to be useful. DMA transfers (CopyResource in D3D11, copy command queue in D3D12, transfer queue in VK) are implemented by dedicated hardware inside GPUs, and are way more efficient.
-
Amazon Bedrock Is Now Generally Available
https://github.com/ggerganov/whisper.cpp
https://github.com/Const-me/Whisper
I had fun with both of these. They will both do real-time transcription, but you will have to download the model files first…
-
Why Nvidia Keeps Winning: The Rise of an AI Giant
Gamers don’t care about FP64 performance, and it seems Nvidia is using that for market segmentation. The FP64 performance for the RTX 4090 is 1.142 TFlops, for the RTX 3090 Ti 0.524 TFlops. AMD doesn’t do that: FP64 performance is consistently better there, and has been for quite a few years. For example, the figure for the 3090 Ti (a $2000 card from 2022) is similar to the Radeon RX Vega 56, a $400 card from 2017 which can do 0.518 TFlops.
And another thing: Nvidia forbids usage of GeForce cards in data centers, while AMD allows that. I don’t know how specifically they define a data center, whether it’s enforceable, or whether it’s been tested in courts of various jurisdictions. I just don’t want to find out the answers to these questions at the legal expense of my employer. I believe they would prefer not to cut corners like that.
I think Nvidia only beats AMD due to the ecosystem: for GPGPU that’s CUDA (and especially the included first-party libraries like BLAS, FFT, DNN and others), and also the support in popular libraries like TensorFlow. However, it’s not that hard to ignore the ecosystem and instead write some compute shaders in HLSL. Here’s a non-trivial open-source project unrelated to CAE where I managed to do just that with decent results: https://github.com/Const-me/Whisper That software even works on Linux, probably due to Valve’s work on DXVK 2.0 (a compatibility layer which implements D3D11 on top of Vulkan).
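The segmentation argument above can be made concrete with a little arithmetic on the quoted figures (all numbers are taken from the comment itself, not independently verified):

```python
# FP64 throughput figures quoted above, in TFLOPS.
fp64 = {
    "RTX 4090": 1.142,
    "RTX 3090 Ti": 0.524,
    "Radeon RX Vega 56": 0.518,
}

# A $2000 card from 2022 offers roughly the same FP64 rate
# as a $400 card from 2017.
ratio_3090ti_vs_vega56 = fp64["RTX 3090 Ti"] / fp64["Radeon RX Vega 56"]
print(f"3090 Ti vs Vega 56: {ratio_3090ti_vs_vega56:.2f}x")  # ~1.01x

# Generational FP64 uplift on the GeForce line.
uplift = fp64["RTX 4090"] / fp64["RTX 3090 Ti"]
print(f"4090 vs 3090 Ti: {uplift:.2f}x")  # ~2.18x
```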
-
Ask HN: What is your recommended speech to text/audio transcription tool?
Currently, I use a GUI for Whisper AI (https://github.com/Const-me/Whisper) to upload MP3s of interviews to get text transcripts. However, I'm hoping to find another tool that would recognize and split out the text per speaker.
Does such a thing exist?
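Whisper itself only transcribes; splitting the text per speaker needs a separate diarization step (tools such as pyannote-audio produce speaker turns), whose output is then merged with Whisper's timestamped segments. A minimal, self-contained sketch of that merge step, with hypothetical segment data standing in for real tool output (the dict shapes are assumptions, not any tool's exact schema):

```python
def assign_speakers(transcript, diarization):
    """Label each transcript segment with the speaker whose
    diarization turn overlaps it the most."""
    labeled = []
    for seg in transcript:
        best_speaker, best_overlap = "unknown", 0.0
        for turn in diarization:
            # Overlap of [seg.start, seg.end] with [turn.start, turn.end].
            overlap = min(seg["end"], turn["end"]) - max(seg["start"], turn["start"])
            if overlap > best_overlap:
                best_speaker, best_overlap = turn["speaker"], overlap
        labeled.append({**seg, "speaker": best_speaker})
    return labeled

# Hypothetical data: Whisper-style timestamped segments and speaker turns.
transcript = [
    {"start": 0.0, "end": 4.2, "text": "So, how did the launch go?"},
    {"start": 4.5, "end": 9.0, "text": "Better than expected, honestly."},
]
diarization = [
    {"start": 0.0, "end": 4.3, "speaker": "SPEAKER_00"},
    {"start": 4.3, "end": 9.5, "speaker": "SPEAKER_01"},
]

for seg in assign_speakers(transcript, diarization):
    print(f'{seg["speaker"]}: {seg["text"]}')
```

Attribution by maximum overlap is a common, simple heuristic; real pipelines also have to handle overlapping speech and turns that split a segment.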
- From audio to text, any advice?
-
Ask HN: Any recommendations for cheap, high-quality transcription software
I just used Whisper over the weekend to transcribe 5 hours of meeting, worked nicely and it can be run on a single GPU locally. https://github.com/ggerganov/whisper.cpp
There are a few wrappers available with GUI like https://github.com/Const-me/Whisper
- Voice recognition software for German
- Const-me/Whisper: High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
- I built a massive search engine to find video clips by spoken text
What are some alternatives?
icesugar-nano - iCESugar-nano FPGA board (base on iCE40LP1K)
whisper.cpp - Port of OpenAI's Whisper model in C/C++
edalize - An abstraction library for interfacing EDA tools
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
oss-cad-suite-build - Multi-platform nightly builds of open source digital design and verification tools
TransformerEngine - A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
tensil - Open source machine learning accelerators
just-an-email - App to share files & texts between your devices without installing anything
make_for_vivado - Experimentation with GNU Make for Xilinx Vivado compilation; dependencies can be complicated
ggml - Tensor library for machine learning
fpga-tamagotchi - Tamagotchi P1 for Analogue Pocket and MiSTer
beaker - An experimental peer-to-peer Web browser