Help getting spotify running on rpi4 4GB on PiOS
2 projects | reddit.com/r/raspberry_pi | 19 Jan 2022
The pi shelf is filling up. Got the pi zero running pi hole. The pi 3 is running docker with some apps. Pi 4 is my Plex and storage server with a 2 TB hdd. And the old pi b is going to be a Spotify device.
1 project | reddit.com/r/RASPBERRY_PI_PROJECTS | 11 Jan 2022
Is there any benefit to a Streaming component if I already have a good DAC and I’m comfortable using a computer as a streaming source?
1 project | reddit.com/r/audiophile | 15 Dec 2021
You're not going to beat the laptop on quality, but for the same quality you could beat it on size and power, if you wanted. A Raspberry Pi running https://github.com/dtcooper/raspotify will be equally controllable from your phones, and quite compact.
Spotify Connect on my old HiFi
1 project | reddit.com/r/spotify | 10 Dec 2021
I use a raspberry pi 4 with raspotify connected to a usb dac, it works super well! The config file lets you set the device name, type, and music bitrate.
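For anyone setting this up, here is a sketch of what that config might look like. The variable names below follow recent raspotify releases, which pass `LIBRESPOT_*` settings straight through to librespot; older releases used an `OPTIONS="..."` line instead, so treat the exact keys as an assumption and check the comments in the installed file:

```
# /etc/raspotify/conf  (illustrative values)
LIBRESPOT_NAME="Living Room HiFi"   # device name shown in Spotify Connect
LIBRESPOT_DEVICE_TYPE="speaker"     # icon shown in Spotify clients
LIBRESPOT_BITRATE="320"             # 96, 160, or 320 kbit/s
LIBRESPOT_DEVICE="hw:CARD=DAC"      # ALSA device of the USB DAC
```

After editing, apply the changes with `sudo systemctl restart raspotify`.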
Raspberry pi WiFi streaming to HiFi system
2 projects | reddit.com/r/audiophile | 2 Dec 2021
I don't have any control or library software built into my setup. I'm only running https://github.com/dtcooper/raspotify for Spotify Connect and https://github.com/mikebrady/shairport-sync for AirPlay streaming, so I have no need to interact directly with the Pi during normal usage. It's all controlled from my phone or PC natively.
Headless music player - using Spotify?
1 project | reddit.com/r/raspberry_pi | 15 Nov 2021
https://github.com/dtcooper/raspotify Use raspotify on the Pi directly; it turns it into a Spotify Connect target. I use that on my Pi, which runs a lot of other crap headless. For the amplifier I'd rather get a cheap car head unit with an Aux-in, as those are made to run on 12V directly and are, well, cheap.
Which Spotify library to use for headless Linux audio player box?
6 projects | reddit.com/r/spotifyapi | 27 Sep 2021
I found a way to use Chromecast with Spotify
4 projects | reddit.com/r/CalyxOS | 8 Sep 2021
Install Spotify on the Ubuntu machine, or use Spotify via the browser, or use something like spotifyd/raspotify.
My humble little setup.
2 projects | reddit.com/r/Ubiquiti | 3 Jul 2021
Easy as pie
1 project | reddit.com/r/sweden | 4 Jun 2021
Finding Your Home in Game Graphics Programming
11 projects | news.ycombinator.com | 31 Dec 2021
On Windows, the best way is often Direct2D https://docs.microsoft.com/en-us/windows/win32/direct2d/dire...
On Linux, you have to do that yourself. The best approach depends on requirements and target hardware.
The simplest case is when your lines are straight segments or polylines of them, you have a decent GPU, and you don’t have unusual requirements for line caps and joins. In that case, simply render a quad per segment, using 8x or 16x MSAA. Quality-wise, the results at these MSAA levels are surprisingly good. Performance-wise, modern PC-class GPUs (including thin laptops and their CPU-integrated graphics) are usually OK with that use case even at 16x MSAA.
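To make the quad-per-segment idea concrete, here is a minimal C++ sketch (the names and the triangle-strip layout are illustrative, not from any particular library) that expands one segment into the four vertices you would rasterize with MSAA enabled:

```cpp
#include <array>
#include <cmath>

struct Vec2 { float x, y; };

// Expand the segment [ a, b ] into a quad of the given stroke width;
// the 4 vertices are in triangle-strip order. Degenerate zero-length
// segments should be filtered out by the caller.
inline std::array<Vec2, 4> segmentQuad( Vec2 a, Vec2 b, float width )
{
    const float dx = b.x - a.x, dy = b.y - a.y;
    const float len = std::sqrt( dx * dx + dy * dy );
    // Unit normal of the segment, scaled by half the stroke width
    const float nx = -dy / len * width * 0.5f;
    const float ny = dx / len * width * 0.5f;
    return { Vec2{ a.x + nx, a.y + ny }, Vec2{ a.x - nx, a.y - ny },
             Vec2{ b.x + nx, b.y + ny }, Vec2{ b.x - nx, b.y - ny } };
}
```

Caps and joins are where the requirements bite: adjacent quads overlap near joints, which MSAA hides well for opaque strokes but not for translucent ones.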
If MSAA is too slow on your hardware but you still want good quality AA, it’s harder to achieve but still doable. Here’s relevant documentation from my graphics library for the Raspberry Pi 4: https://github.com/Const-me/Vrmac/blob/master/Vrmac/Draw/VAA...
Why Is C Faster Than Java (2009)
8 projects | news.ycombinator.com | 26 Dec 2021
> unless you're doing fun patterns like `where TComparer : IEqualityComparer, struct`
These fun patterns are precisely the generic type constraints I mentioned in my comment. I do use them when performance matters; here’s an open-source example: https://github.com/Const-me/Vrmac/blob/1.2/Vrmac/Draw/Main/I... That code is from a 2D vector graphics library; the uploadIndices() function may be called at 10 kHz or more. Displays often run at 60 Hz, and that function is called 1-2 times for every vector path being rendered.
> If you poke around at the internals of System.Linq you'll see there's a lot of checking to use specialized types depending on the collection in order to minimize costs.
Linq is awesome, but I’m pretty sure it was designed for usability first, performance second. I tend to avoid Linq (and dynamic memory allocations in general; delegates use the heap) on performance-critical paths. YMMV, but in most of the code I write, these performance-critical paths make up well under 50% of the code base.
> 'new' generic constraint is definitely not zero cost
If you mean the overhead of Activator.CreateInstance when generic code calls new() on the generic type, I’m not 100% certain, but I think it’s fixed now. According to https://source.dot.net/, that standard library method is marked with the [Intrinsic] attribute, so the runtime and JIT probably have optimizations for value types.
My Negative Views on Rust
2 projects | news.ycombinator.com | 23 Dec 2021
> a message queue type of service, where the desire was to minimize latency and to have consistent performance over prolonged use.
Your requirements are probably similar to this C# queues class: https://github.com/Const-me/Vrmac/blob/master/VrmacVideo/Aud... That library decodes and plays realtime video + audio; both low latency and consistent performance over prolonged use were rather important. BTW, that code runs on a Raspberry Pi 4, whose CPU performance is a fraction of what you’d expect from modern desktops or servers.
Image File Formats That Didn’t Make It
7 projects | news.ycombinator.com | 10 Nov 2021
Agreed about TGA; it is trivially easy to write. Here’s an example in C# for the grayscale version, a single page of code: https://github.com/Const-me/Vrmac/blob/1.2/VrmacInterop/Util...
BMP is more complicated, unfortunately. The header structure is more complex, and rows are required to be 4-byte aligned, so you might need to insert padding bytes between them.
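The stride rule is easy to get wrong, so here is the arithmetic as a small C++ sketch (the function name is illustrative, not from any particular codec):

```cpp
#include <cstdint>

// BMP rows must each start on a 4-byte boundary. Given the image width
// and bits per pixel, return the padded row size (stride) in bytes.
inline uint32_t bmpStride( uint32_t width, uint32_t bitsPerPixel )
{
    const uint32_t rowBytes = ( width * bitsPerPixel + 7 ) / 8;
    return ( rowBytes + 3 ) & ~3u;  // round up to a multiple of 4
}
```

For example, a 3-pixel-wide 24-bit image has 9 payload bytes per row but a 12-byte stride, so a writer must emit 3 padding bytes after every row.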
Microsoft's UWP = Unwanted Windows Platform?
2 projects | news.ycombinator.com | 27 Oct 2021
> The most depressing thing in managed languages is that nobody figured out how to use low-level APIs, or at least how to bring some abstractions for graphics (2D, 3D) that would fit 99% of the scenarios.
A novel technological approach to vector graphics (“Vector Graphics Complexes”)
1 project | news.ycombinator.com | 9 Sep 2021
> I think you can tessellation on the CPU in an adaptive fashion (e.g. based on curvature or similar) and update that on a per frame (add, remove tessellation points) basis rather than re-tesselating from scratch each frame.
The GPU’s hardware-implemented tessellation is (a) not compatible enough. It’s OK on Windows, because Microsoft requires GPU vendors to support it before they can declare Direct3D 11 support; on the rest of the platforms, support varies across GPU vendors. And (b) it doesn’t help much for 2D vector graphics. Hardware tessellation can be good for terrain, trees, or other triangle meshes in 3D space, but it doesn’t help much with the Bézier curves and elliptical arcs of 2D shapes, especially for stroked paths.
Counter-intuitively, stroked paths are harder to render than filled ones. The offset of a Bézier spline is not representable as another Bézier spline. Also, strokes have more input parameters: line caps, join types, dashes, miter limit, etc.
> The main limitation in JS is the lack of really great multithreading
Also the lack of SIMD. And the code in general is slow compared to C++, C#, and many other statically typed languages: it’s incredibly hard to generate fast code from very dynamic languages like JS or Python, where everything is a hash map.
> I do not think you need it here.
Here’s my code which offloads CPU-bound pieces of 2D rendering to other CPU cores: https://github.com/Const-me/Vrmac/tree/master/Vrmac/Draw/Tes... Multithreading helped a lot.
It's Time for Operating Systems to Rediscover Hardware – ATC/OSDI 2021 Keynote
1 project | news.ycombinator.com | 1 Sep 2021
This project https://github.com/Const-me/Vrmac implements a unified keyboard+mouse input API over 3 distinct lower-level APIs: Win32 messages, XCB packets, and Linux raw input. Here’s the raw input one: https://github.com/Const-me/Vrmac/tree/master/Vrmac/Input/Li... All 3 have unique quirks, yet I would not say they significantly affected the rest of the library.
I think that for USB in that OS, a C API with an intentionally very narrow scope would be OK for the job.
The main reason real-life USB code is so complex is that its scope is very wide: USB 2 and 3, mass storage, two-way audio, cameras and GPUs, hubs and composite devices, numerous wireless protocols on top, OTG, power delivery, power-saving features, and more.
AVX512/VBMI2: A Programmer’s Perspective
5 projects | news.ycombinator.com | 15 Aug 2021
> have you needed any non-vertical ops that are not in that list?
Yes indeed. I rarely use SIMD for vertical-only ops; for such use cases GPUs are very often better than CPUs.
I already gave an example in my previous comment. It’s possible to emulate with the stuff you have, but _mm256_blend_pd is a very fast instruction, a single cycle of latency. Highway’s emulation is going to be way more expensive. You’re probably compiling your UpperHalf() into _mm256_extractf128_pd and Combine() into _mm256_insertf128_pd; that’s 2 instructions and (on Skylake) 6 cycles of latency instead of 1.
6 cycles instead of 1 is a large overhead in that context, because that particular small matrix multiplication is called rather often. I only optimize code when the profiler tells me to; for the majority of CPU-bound code in that project, Eigen’s implementation is actually good enough.
I’ve searched the source code of that project (CAM/CAE software). Here’s the list of the shuffle intrinsics I use, some of them a lot: _mm256_blend_pd, _mm_blend_ps, _mm_blend_epi32, _mm256_permute2f128_pd, _mm256_permute_ps, _mm256_permute4x64_pd, _mm256_permutevar8x32_ps, _mm256_permutevar8x32_epi32, _mm_permute_ps, _mm_permute_pd, _mm_insert_ps, _mm_movehdup_ps, _mm_moveldup_ps, _mm_loaddup_pd, _mm_extract_ps, _mm_dp_ps, _mm_extract_epi32, _mm_extract_epi64, _mm_shuffle_epi32.
A similar list for this project https://github.com/Const-me/Vrmac (a GPU-centric library for 3D and 2D graphics, not using any AVX): _mm_shuffle_epi8, _mm_alignr_epi8, _mm_shuffle_epi32, _mm_shuffle_ps, _mm_addsub_ps (that one is vertical but still missing from highway), _mm_insert_epi32, _mm_insert_ps, _mm_extract_ps, _mm_extract_epi16, _mm_movehdup_ps, _mm_dp_ps. BTW the project is portable between AMD64 and ARMv7, I have tons of #ifdef there to support NEON which differs substantially, there’s stuff like vrev64q_f32 and vextq_f32, 64-bit SIMD vectors, and quite a few other instructions missing on AMD64.
Even if you expose all the missing horizontal stuff in Highway, it won’t be much better than intrinsics. Such code isn’t going to use AVX-512 when available; it’s only going to inflate the software’s complexity for no good reason, by adding an unneeded layer of abstraction between the application’s code and the actual hardware.
So you want to write a GUI framework
13 projects | news.ycombinator.com | 11 Aug 2021
BGFX is a general-purpose 3D graphics engine, not a GUI nor vector graphics framework.
Nanovg is an awesome vector graphics library, but it has limitations: (1) no ClearType, which I fixed in my fork: https://github.com/Const-me/nanovg (2) the only way to get AA is hardware MSAA, and unfortunately many popular platforms like the Raspberry Pi don’t have good enough hardware to do it fast. Nanogui is built on top of Nanovg and shares the same limitations.
I agree with the OP that Cairo and Skia are the only viable ones for Linux.
It’s sad, because Windows has had Direct2D for over a decade now (introduced in the Vista/Windows 7 era), and unlike in 2006, Linux in 2021 actually has all the lower-level pieces to implement a comparable equivalent. Here’s a proof of concept: https://github.com/Const-me/Vrmac#vector-graphics-engine
> it is genuinely hard to write an abstraction that provides adequate control of advanced GPU features (such as the compute capabilities) across subtly different low-level APIs.
That’s a solved problem in C++, see this library: http://diligentgraphics.com/diligent-engine/
> The rasterization techniques used in 3D are poorly suited to 2D tasks like clipping to vector paths or antialiasing
That’s subjective; I think these techniques are an awesome fit for 2D. See this library: https://github.com/Const-me/Vrmac#2d-graphics BTW, I have recently documented my antialiasing algorithm: https://github.com/Const-me/Vrmac/blob/master/Vrmac/Draw/VAA...
> these traditional techniques can start to perform very badly in 2D once there are lots of blend groups or clip regions involved, since each needs its own temporary buffer and draw call.
One doesn’t necessarily need temporary buffers or draw calls for that; it can also be done in the shaders, merging the work into larger draw calls.
> What this comes down to is instructing the operating system to embed a video or 3D view in some region of our window, and this means interacting with the compositor.
That indeed works in practice, but I don’t believe the approach is good. Modern GUI frameworks use 3D GPUs exclusively; they don’t need much effort to integrate 3D-rendered content.
As for video, one only needs a platform API to decode and deliver frames in GPU textures. Microsoft has such an API in the OS: https://docs.microsoft.com/en-us/windows/win32/api/mfmediaen... Once upon a time I wanted to do the same on embedded Linux; it wasn’t easy, but still doable on top of V4L2 kernel calls: https://github.com/Const-me/Vrmac/tree/master/VrmacVideo
What are some alternatives?
librespot - Open Source Spotify client library
spotifyd - A spotify daemon
cspot - A Spotify Connect player targeting, but not limited to, embedded devices (ESP32).
spotify-connect - Reverse Engineering of Spotify Connect
RPiPlay - An open-source AirPlay mirroring server for the Raspberry Pi. Supports iOS 9 and up.
Mopidy MusicBox - Web Client for Mopidy Music Server and the Pi MusicBox
Snapcast - Synchronous multiroom audio player
kwin-lowlatency - X11 full-screen unredirection and lots'a settings for KWin
neutralinojs - Portable and lightweight cross-platform desktop application development framework
AirConnect - Use AirPlay to stream to UPnP/Sonos & Chromecast devices
mkchromecast - Cast macOS and Linux Audio/Video to your Google Cast and Sonos Devices
spotipy - A lightweight Python library for the Spotify Web API