| | STT | ydotool |
|---|---|---|
| Mentions | 11 | 63 |
| Stars | 2,144 | 1,279 |
| Growth | 1.9% | - |
| Activity | 0.6 | 5.3 |
| Latest commit | about 2 months ago | about 1 month ago |
| Language | C++ | C |
| License | Mozilla Public License 2.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
STT
-
Rest in Peas: The Unrecognized Death of Speech Recognition (2010)
What has happened since then? I know Common Voice has come and gone: https://en.wikipedia.org/wiki/Common_Voice https://github.com/coqui-ai/STT
And I've seen some neural approaches too
No idea where to look for comparisons though.
-
Numen - FOSS voice control for handsfree computing
I basically just used coqui stt https://github.com/coqui-ai/STT
-
Are there any OCR and Speech-to-Text services that are privacy friendly?
This speech-to-text works well: https://github.com/coqui-ai/STT. OpenAI's "whisper" is probably better but I haven't tried it: https://towardsdatascience.com/transcribe-audio-files-with-openais-whisper-e973ae348aa7
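Coqui STT keeps the DeepSpeech-style Python API: load a model, feed it audio, get text back. A minimal sketch, assuming the `stt` package and a downloaded model file (the model path and WAV filename below are placeholders). The model expects 16-bit, 16 kHz mono PCM, so the helper handles that preprocessing:

```python
import wave

import numpy as np


def load_audio(path):
    """Read a WAV file into the 16-bit, 16 kHz mono buffer Coqui STT expects."""
    with wave.open(path, "rb") as w:
        if w.getframerate() != 16000 or w.getnchannels() != 1 or w.getsampwidth() != 2:
            raise ValueError("expected 16-bit, 16 kHz mono WAV")
        return np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)


# Hypothetical usage (requires `pip install stt` and a model from Coqui's releases):
# from stt import Model
# model = Model("model.tflite")
# print(model.stt(load_audio("speech.wav")))
```

Audio at other sample rates has to be resampled first; the model was trained on 16 kHz input and transcription quality degrades badly otherwise.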
-
Introducing Whisper
I use two STT tools to live-transcribe audio that I listen to, so I can look back (in paragraph form) at things that I or the YouTube video has previously said: https://github.com/coqui-ai/STT https://github.com/ratwithacompiler/OBS-captions-plugin
-
You can now tether any prod Vector to Wire's Open Source Escape Pod • thedroidyouarelookingfor
I did have to install Coqui STT and go-asticoqui manually before I was able to run Chipper.
-
Currently working on a custom Virtual Assistant ('Randy') to help automate things in my shed (mainly CNC equipment) and also perform basic tasks. This morning I was able to get it to publish events on my Google Calendar.
What do you use as STT? I have heard good things about coqui (https://github.com/coqui-ai/STT) and will use it for my Assistant-build.
- Speech to Text Best Resource
-
I put together a tutorial and overview on how to use DeepSpeech to do Speech Recognition in Python
If anyone is looking for a maintained version of DeepSpeech, check out Coqui's repositories for STT and TTS. Coqui is led by the engineers who used to work on DeepSpeech at Mozilla.
-
CoquiTTS: 🐸💬 - Open Source Text-to-Speech framework.
Link: https://github.com/coqui-ai/STT
- Mozilla Common Voice Adds 16 New Languages and 4,600 New Hours of Speech
ydotool
- Show HN: Bonk, a command-line tool for X11 window management
-
Improving cursor rendering on Wayland
Wayland provides little by design, so this is quite typical. For example:
Screensharing is handled by PipeWire [0], changing keyboard layouts isn't defined [1] by Wayland, and the same goes for anything the Wayland devs think would 'corrupt' their protocol.
They leave most things to the compositor to implement, which leads to significant fragmentation as every compositor implements it differently.
Long gone are the days of xset and xdotool working across nearly every distro thanks to a common base; now the best you'll get is running a daemon as root to directly access `/dev/uinput` [2], or implementing each compositor's accessibility settings (if they have them) as a workaround.
[0] https://superuser.com/questions/1221333/screensharing-under-...
[1] https://unix.stackexchange.com/questions/292868/how-to-custo...
[2] https://github.com/ReimuNotMoe/ydotool
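The daemon-plus-client split described above looks roughly like this in practice (a sketch, assuming ydotool 1.x, where `ydotool key` takes `keycode:state` pairs; older releases used key names instead):

```shell
# The daemon needs permission to open /dev/uinput (root here; a udev rule works too).
sudo ydotoold &
sleep 1

# Clients then behave the same on X11 and Wayland:
ydotool type 'Hello world!'
ydotool key 29:1 46:1 46:0 29:0   # press/release Ctrl (29) and C (46)
ydotool mousemove -x -10 -y -10   # nudge the pointer left and up
```

The keycodes are the kernel's input event codes from `linux/input-event-codes.h`, not X keysyms, which is part of why this works without any display server at all.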
-
how hard is it to program pinch zoom for my touchpad in linux?
I personally use libinput-gestures to call commands from touchpad gestures. You can also combine it with ydotool to bind macros and such to your gestures, e.g. a four-finger swipe down closes the current window, a three-finger swipe left or right switches workspaces, etc.
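The bindings described above might look like this in `~/.config/libinput-gestures.conf` (a sketch; the gestures are just examples, and the keycodes assume ydotool 1.x's `keycode:state` syntax):

```
# Four-finger swipe down: Alt+F4 (56 = LeftAlt, 62 = F4) closes the window
gesture swipe down 4 ydotool key 56:1 62:1 62:0 56:0
# Three-finger swipes: Super+Left / Super+Right (125 = LeftMeta) switch workspace
gesture swipe left 3 ydotool key 125:1 105:1 105:0 125:0
gesture swipe right 3 ydotool key 125:1 106:1 106:0 125:0
```

The key combinations only do anything if your compositor actually binds them, so match them to your own window-manager shortcuts.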
-
ydotoold background process?
Have you tried using the systemd unit file supplied with ydotool? It's probably installed somewhere on your system. Otherwise you can get it from the ydotool repo and just change the install location of ydotoold.
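If your package didn't ship one, a minimal unit might look like this (a sketch; adjust `ExecStart` to wherever ydotoold actually lives, and note the daemon still needs permission to open /dev/uinput, so a user unit only works with a matching udev rule):

```ini
# Hypothetical ~/.config/systemd/user/ydotoold.service
[Unit]
Description=ydotool daemon

[Service]
ExecStart=/usr/local/bin/ydotoold
Restart=on-failure

[Install]
WantedBy=default.target
```

Then `systemctl --user daemon-reload` followed by `systemctl --user enable --now ydotoold.service` keeps the daemon running across logins.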
-
KDE-Connect keyboard input works on Wayland now!!
For simulated keyboard input there are tools such as dotool or ydotool, and KeePass extensions such as KPUInput, which work by giving the user access to /dev/uinput. That works, but it's a bit inelegant; I guess in the future a Wayland protocol for simulated keyboard input will emerge, like the one wlroots already has, along with one for virtual pointers.
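Granting that /dev/uinput access without running anything as root is usually a one-line udev rule plus group membership (a sketch; the group name `input` is an assumption, distros vary):

```
# Hypothetical /etc/udev/rules.d/80-uinput.rules
KERNEL=="uinput", GROUP="input", MODE="0660", OPTIONS+="static_node=uinput"
```

After adding yourself to the group and reloading the rules (`udevadm control --reload` and re-logging in), any tool in that group can open /dev/uinput directly.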
-
Out of curiosity, I tried to use Wayland earlier and compared to X11, everything seems to load faster which really surprised me. However, I've also noticed some things that confused me, that's why I'm posting this. To ask what I'm missing or what I did wrong. Thanks as always!
ydotool is the generic equivalent. It works on both X11 and Wayland environments.
-
Curious to know what are your general experiences on using keyboard and mouse input automations on Wayland...
Autokey does not work yet, but there are Hawck and Espanso that you could play around with. And there is ydotool if all you need is simulating basic input (as in `ydotool mousemove -x -10 -y -10`, `ydotool type 'Hello world!'` and so on).
-
Asahi Linux To Users: Please Stop Using X.Org
Does ydotool do what you need? I haven't even tried Wayland in years. I'm sure someday I'll find the need.
- Somehow AutoHotKey is kinda good now
- How to emulate mouse clicks with keyboard shortcuts
What are some alternatives?
DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
xdotool - fake keyboard/mouse input, window management, and more
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
wtype - xdotool type for wayland
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
AutoKey - AutoKey, a desktop automation utility for Linux and X11.
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
evsieve - A utility for mapping events from Linux event devices.
TTS - 🤖 💬 Deep learning for Text to Speech (discussion forum: https://discourse.mozilla.org/c/tts)
sway - i3-compatible Wayland compositor
OBS-captions-plugin - Closed Captioning OBS plugin using Google Speech Recognition
key-mapper - 🎮 An easy to use tool to change the mapping of your input device buttons. [Moved to: https://github.com/sezanzeb/input-remapper]