WebCodecsOpusRecorder
SSMLParser
| | WebCodecsOpusRecorder | SSMLParser |
|---|---|---|
| Mentions | 19 | 9 |
| Stars | 10 | 33 |
| Growth | - | - |
| Activity | 2.8 | 10.0 |
| Last commit | about 1 month ago | over 3 years ago |
| Language | JavaScript | JavaScript |
| License | Do What The F*ck You Want To Public License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
WebCodecsOpusRecorder
-
[AskJS] Do you think we need an Automatic Code Documentation Generator, especially after Github Co-pilot?
Take for example https://github.com/guest271314/WebCodecsOpusRecorder. There was no roadmap anywhere in the wild for how to write Opus encoded packets produced by WebCodecs AudioEncoder to a single file, including the capability to include media metadata such as artist, album, artwork in the file, for use with Media Session API - without a media container - and play back the file in the browser. So how would the documentation be automatically generated?
-
Sleekiest JavaScript Trick you know?
We can write a Uint32Array, JSON, and ArrayBuffers one adjacent to the other in a Blob. That means we can write our own algorithm for storing arbitrary data and reading the data back. E.g., we can write the length of JSON containing configuration metadata - for example, image artwork and the offsets of the ArrayBuffers - before the JSON, then after the JSON write a series of ArrayBuffers next to each other. To read the data back, we read the length of the JSON stored in the first 4 bytes as a Uint32Array, get the variable-length JSON following the Uint32Array, read the offsets in an array in the JSON configuration, then read each ArrayBuffer stored at those offsets in the file. This is similar to how the Native Messaging protocol works, extended with the capability to write arbitrary data to a file with the decoding instructions encoded within the file itself. So we can, for example, write Opus encoded audio from WebCodecs, image artwork, artist, title, and album data to a file, then read the file, display the images, artist, and album data written therein using Media Session API, and stream the audio using Media Source Extensions, or decode the audio from Opus compression to a WAV file in the browser. E.g., https://github.com/guest271314/WebCodecsOpusRecorder. Bonus: the resulting file size, excluding the images serialized in the file, audio for audio, is less than Opus encoded in a WebM file, the default container for MediaRecorder output on Chrome on Linux.
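The length-prefix layout described above can be sketched in a few lines. This is a minimal illustration with hypothetical helper names (`pack`/`unpack` are not the repository's actual API): a 4-byte Uint32 holds the JSON header's byte length, the JSON header records each chunk's offset and length, and the raw chunks follow the header back to back.

```javascript
// Serialize: [4-byte JSON length][JSON header][chunk0][chunk1]...
function pack(meta, chunks) {
  const header = { ...meta, offsets: [] };
  let total = 0;
  for (const chunk of chunks) {
    header.offsets.push([total, chunk.byteLength]); // [offset, length] per chunk
    total += chunk.byteLength;
  }
  const json = new TextEncoder().encode(JSON.stringify(header));
  const out = new Uint8Array(4 + json.byteLength + total);
  new DataView(out.buffer).setUint32(0, json.byteLength, true); // length prefix
  out.set(json, 4);
  let pos = 4 + json.byteLength;
  for (const chunk of chunks) {
    out.set(chunk, pos);
    pos += chunk.byteLength;
  }
  return out;
}

// Deserialize: read the Uint32 prefix, parse the JSON, slice each chunk back out.
function unpack(bytes) {
  const jsonLength = new DataView(bytes.buffer, bytes.byteOffset).getUint32(0, true);
  const header = JSON.parse(new TextDecoder().decode(bytes.subarray(4, 4 + jsonLength)));
  const base = 4 + jsonLength;
  const chunks = header.offsets.map(
    ([offset, length]) => bytes.subarray(base + offset, base + offset + length)
  );
  return { header, chunks };
}
```

In practice the same pieces could simply be passed to `new Blob([prefix, json, ...chunks])`; plain typed arrays are used here so the sketch stays synchronous.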
-
MP4 File and the Range Request Header
Not at all. Here I encode Opus audio output by WebCodecs AudioEncoder and write the encoded chunks to a single file, preceded by JSON configuration and indexes of the discrete encoded chunks, optionally including media metadata such as artist, album, and artwork. We can then fetch the first 4 bytes of the file to read the Uint32Array at the beginning and get the offset information, then make separate range requests for the given timeslice(s) of media and play back that media https://github.com/guest271314/WebCodecsOpusRecorder.
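That read path can be sketched as follows, assuming a server that honors `Range` requests (the function name is hypothetical, not the repository's API): one request fetches the 4-byte length prefix, a second fetches exactly the JSON header that follows it, and the audio payload is never downloaded.

```javascript
// Fetch only the metadata from a length-prefixed file via HTTP Range requests.
async function readHeader(url, fetchImpl = fetch) {
  // First 4 bytes: little-endian Uint32 giving the JSON header's byte length.
  const prefix = await fetchImpl(url, { headers: { Range: "bytes=0-3" } });
  const jsonLength = new DataView(await prefix.arrayBuffer()).getUint32(0, true);
  // Next jsonLength bytes: the JSON configuration itself.
  const body = await fetchImpl(url, {
    headers: { Range: `bytes=4-${3 + jsonLength}` },
  });
  return JSON.parse(await body.text());
}
```

With the header's offsets in hand, further `Range` requests can target individual encoded chunks for playback.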
-
JSON with multiline strings
As long as the encoder and decoder are on the same page, and you keep track of offsets, you can do whatever you want. Particularly using a Blob. Here https://github.com/guest271314/WebCodecsOpusRecorder/blob/main/WebCodecsOpusRecorder.js I write a Uint32Array, JSON, and ArrayBuffers containing WebCodecs Opus encoded audio, and optionally images and metadata for Media Session API to the same file, and play the file back in the browser, in pertinent part
-
Have some basic python, time to turn up the heat and learn web app development on JavaScript
Another fun project was encoding Opus packets output by WebCodecs AudioEncoder to a single file, and playing the file back in the browser https://github.com/guest271314/WebCodecsOpusRecorder. There was no road map to do that.
-
[AskJS] Why are TextEncoder and TextDecoder classes?
I never had an issue encoding and decoding Opus packets using the above approaches https://github.com/guest271314/WebCodecsOpusRecorder.
-
Yo - instead of making fun of people's ideas - HELP THEM OUT and give them feedback!
I carried on and developed a way to do just that, and save all packets to a single file and play back that file several ways. The resulting file winds up being more compact than Opus encoded in a WebM container. I then added a way to include images in the file to support Media Session metadata https://github.com/guest271314/WebCodecsOpusRecorder. Et al.
-
How do I append to an array inside a json file in node?
Recording raw Opus packets produced by WebCodecs AudioEncoder to a single file - without a media container such as Matroska, WebM, MP3, AAC, etc. - then playing back the file. You can test for yourself on Chrome or Chromium here https://guest271314.github.io/WebCodecsOpusRecorder/webcodecs-opus-recorder-mse-wav-player.html. Record your microphone or other device remapped as a microphone, save the file, then upload the file and play it back. I included the ability to also store an image in the file for Media Session metadata support, so we get to see the same or a similar image you see in the global media controls when playing, for example, a YouTube video.
-
At what point in your programming journey do you step back and learn Data Structures and Algorithms?
There was no roadmap for how to write Opus packets produced by Chrome's WebCodecs AudioEncoder to a single file - without writing the Opus packets to a media container such as Matroska or WebM. I just knew it could be done, and used my experience testing Native Messaging to apply the concept of preceding the data with a Uint32Array containing its length - in this case the length of the JSON that includes the offsets of each packet - then writing the algorithm to extract that data for playback https://github.com/guest271314/WebCodecsOpusRecorder.
-
Trying to record off a canvas, but bitrate is very low; high values are ignored.
This is how I write Opus packets to a file without a container and play them back using Media Source Extensions or as a WAV file https://github.com/guest271314/WebCodecsOpusRecorder
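For the WAV route mentioned in these comments, the last step after decoding Opus to PCM is prepending a standard 44-byte RIFF/WAVE header to the raw samples. A sketch of that header, assuming 16-bit PCM (this is the generic WAV format, not code from the repository):

```javascript
// Build a 44-byte header for a 16-bit PCM WAV file.
function wavHeader(sampleRate, numChannels, dataByteLength) {
  const buf = new ArrayBuffer(44);
  const dv = new DataView(buf);
  const writeStr = (off, s) =>
    [...s].forEach((c, i) => dv.setUint8(off + i, c.charCodeAt(0)));
  writeStr(0, "RIFF");
  dv.setUint32(4, 36 + dataByteLength, true);            // file size minus 8
  writeStr(8, "WAVE");
  writeStr(12, "fmt ");
  dv.setUint32(16, 16, true);                            // fmt chunk size
  dv.setUint16(20, 1, true);                             // audio format: PCM
  dv.setUint16(22, numChannels, true);
  dv.setUint32(24, sampleRate, true);
  dv.setUint32(28, sampleRate * numChannels * 2, true);  // byte rate (16-bit)
  dv.setUint16(32, numChannels * 2, true);               // block align
  dv.setUint16(34, 16, true);                            // bits per sample
  writeStr(36, "data");
  dv.setUint32(40, dataByteLength, true);                // PCM data length
  return new Uint8Array(buf);
}
```

Concatenating this header with the decoded samples (e.g. in a Blob) yields a file the browser's audio element can play directly.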
SSMLParser
-
IAMA senior javascript dev, ask me anything
I even wrote an SSML parser in JavaScript https://github.com/guest271314/SSMLParser to prove the lack of SSML support in Web Speech API is not a technical difficulty, but rather a failure in the specification and a lack of will to implement by browser vendors. How is a complete SSML document expected to be parsed when set once at the .text property of a SpeechSynthesisUtterance instance?
-
speechSynthesis.getVoices() is broken on Firefox on Linux
Navigate to https://guest271314.github.io/SSMLParser/, click "Start speech synthesis". What happens?
-
Web Speech API is (still) broken on Linux circa 2023
I implemented SSML parsing using JavaScript https://github.com/guest271314/SSMLParser just to demonstrate the requirement is possible.
-
Build a Text-to-Speech component in React
I can run this page https://guest271314.github.io/SSMLParser/ without an issue.
-
[AskJS] You have mastered writing JavaScript from scratch, why use TypeScript?
I implemented SSML parsing in JavaScript by hand for Web Speech API per the SSML specification https://github.com/guest271314/SSMLParser, where neither the Web Speech API nor Firefox, Chrome, or Chromium browsers have implemented SSML parsing (Google does implement SSML parsing as a service https://github.com/guest271314/GoogleNetworkSpeechSynthesis).
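As a toy illustration of the kind of hand-rolled parsing involved (this is not the SSMLParser repository's actual code, and real SSML parsing per the specification is considerably more involved), a small SSML string can be split into spoken text and `<break>` pauses:

```javascript
// Naive tokenizer: split on tags, keep plain text as speech chunks and
// <break time="..."/> elements as pause entries in milliseconds.
function parseSSML(ssml) {
  const chunks = [];
  const tokens = ssml.split(/(<[^>]+>)/).filter(Boolean);
  for (const tok of tokens) {
    if (tok.startsWith("<")) {
      const m = tok.match(/<break\s+time="(\d+)(ms|s)"/);
      if (m) {
        chunks.push({ pauseMs: m[2] === "s" ? Number(m[1]) * 1000 : Number(m[1]) });
      }
    } else if (tok.trim()) {
      chunks.push({ text: tok.trim() });
    }
  }
  return chunks;
}
```

Each `text` chunk could then be queued as its own SpeechSynthesisUtterance, with timeouts standing in for the `pauseMs` entries.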
-
W3C’s transfer from MIT to non-profit going poorly
I did using JavaScript https://github.com/guest271314/SSMLParser.
-
At what point in your programming journey do you step back and learn Data Structures and Algorithms?
For parsing SSML https://github.com/guest271314/SSMLParser there is the specification, which I implemented to demonstrate the requirement is possible; there was, and still is for that matter, simply a lack of will to implement in the browser. Google would rather try to get you to sign up for their cloud products.
-
'The best thing we can do today to JavaScript is to retire it,' says JSON creator Douglas Crockford • DEVCLASS
Thus, I wrote https://github.com/guest271314/SSMLParser and https://github.com/guest271314/native-messaging-espeak-ng.
What are some alternatives?
webm-writer-js - JavaScript-based WebM video encoder for Google Chrome
speechd - Common high-level interface to speech synthesis
worker-dom - The same DOM API and Frameworks you know, but in a Web Worker.
common-voice - Common Voice is part of Mozilla's initiative to help teach machines how real people speak.
AudioWorkletStream - fetch() => ReadableStream => AudioWorklet
speech-api - Web Speech API
text-encoding - Polyfill for the Encoding Living Standard's API
native-messaging-espeak-ng - Native Messaging => eSpeak NG => MediaStreamTrack
encoding - Encoding Standard
GoogleNetworkSpeechSynthesis - Google's Network Speech Synthesis: Bring your own Google API key and proxy
native-messaging-bash - Bash Native Messaging host.
NumPy - The fundamental package for scientific computing with Python.