I'd be curious to hear others chime in, but I feel like the situation is very similar to this article talking about games. Do you want to get pixels to the screen/file? Shaders and materials (authoring or implementing)? How commercial renderers are organized? My job is mostly using commercial tools, but a lot of us have made toy renderers, read books, and taken classes to reimplement the fundamentals.
It's been a while, but a few common and, imho, approachable sources are:
https://github.com/ssloy/tinyrenderer/wiki - a rasterizer that starts with drawing a line
https://raytracing.github.io/ - a basic raytracer that incrementally adds features
https://www.pbrt.org/ - I've heard good things from people who have gone through the whole book. I haven't taken the dive, but thumbed through it and jumped around.
I wouldn't dismiss realtime stuff, either. Often the concepts are similar but the feedback loop is much faster. I liked the UE4 shader docs discussing pbrt and the simplifications they chose when implementing it. There are a bunch of resources out there; I don't think any single source is comprehensive. I'd say start with something simple and find resources on the specific things you want to know more about.
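The tinyrenderer series above starts exactly where the comment says: drawing a line. As a taste of that first step, here's a minimal sketch of Bresenham's line algorithm in Python (my own illustration, not code from the wiki):

```python
def bresenham_line(x0, y0, x1, y1):
    """Return the integer pixel coordinates of a line from (x0, y0) to (x1, y1)."""
    points = []
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # combined error term for both axes
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:  # step in x
            err += dy
            x0 += sx
        if e2 <= dx:  # step in y
            err += dx
            y0 += sy
    return points
```

It's integer-only and branch-light, which is why it shows up as the first exercise in most rasterizer tutorials.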
Check out my free, open-source book "A bitmapper's companion"; it's a reference book about 2D graphics algorithms with code examples in Rust. https://github.com/epilys/bitmappers-companion
On Windows, the best way is often Direct2D https://docs.microsoft.com/en-us/windows/win32/direct2d/dire...
On Linux, you have to do that yourself. The best approach depends on requirements and target hardware.
The simplest case is when your lines are straight segments or polylines of them, you have a decent GPU, and you don't have weird requirements about line caps and joins. In that case, simply render a quad per segment, using 8x or 16x MSAA. Quality-wise, the results at these MSAA levels are surprisingly good. Performance-wise, modern PC-class GPUs (including thin laptops and the CPU-integrated graphics in them) are usually OK with that use case even at 16x MSAA.
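The vertex math behind "a quad per segment" is just offsetting both endpoints along the segment's unit perpendicular by half the line width. A small sketch of that expansion (my own illustration of the geometry; the comment is about doing this on the GPU, but the math is the same):

```python
import math

def segment_to_quad(p0, p1, width):
    """Expand a 2D line segment into the four corners of a quad of the given width.

    Corners are returned in order (p0 - n, p0 + n, p1 + n, p1 - n), where n is
    the unit perpendicular to the segment scaled by half the width.
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    if length == 0:
        raise ValueError("degenerate segment")
    # unit perpendicular to the segment direction
    nx, ny = -dy / length, dx / length
    hw = width / 2.0
    return [
        (p0[0] - nx * hw, p0[1] - ny * hw),
        (p0[0] + nx * hw, p0[1] + ny * hw),
        (p1[0] + nx * hw, p1[1] + ny * hw),
        (p1[0] - nx * hw, p1[1] - ny * hw),
    ]
```

Submit the four corners as two triangles and let MSAA handle the edges; joins between consecutive segments are where the "weird requirements" mentioned above start to bite.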
If MSAA is too slow on your hardware but you still want good-quality AA, it's harder to achieve but still doable. Here's relevant documentation from my graphics library for Raspberry Pi4: https://github.com/Const-me/Vrmac/blob/master/Vrmac/Draw/VAA...
If you want to dip your toes into gamedev and don't want to use Unity, consider Raylib, or the C# Raylib wrapper I wrote.
Unity render pipeline source is available here: https://github.com/Unity-Technologies/Graphics
All the C# code running in the editor and runtimes is here:
https://github.com/Unity-Technologies/UnityCsReference
The code that interfaces directly with the platform API at the C++ level is restricted (you can get access but it's not really a viable option for a beginning graphics programmer :)). For many platforms the APIs themselves are proprietary and therefore that code cannot be shared easily.
There's pretty good tooling support around graphics debugging.
That sounds like a fun challenge. If you're constraining yourself to use as few libraries as possible, I'd go with OBJ [1] for the 3D mesh and PPM [2] for writing images. It's easy to implement a bare-bones reader/writer, and some OSes (like macOS) can show them in the file browser. Raytracing in One Weekend goes over PPM. There are a bunch of header-only libraries that handle different file formats, like stb_image [3]. I usually end up using those when I start dealing with textures or UI elements. I don't use Windows, so I haven't used their APIs for projects like this. I'd usually go for imgui or SDL (like you mentioned). tinyraycaster, a sibling project of tinyrenderer, touches on those [4]. I liked LazyFoo's SDL tutorial [5]. Good luck!
[1] https://en.wikipedia.org/wiki/Wavefront_.obj_file
[2] https://en.wikipedia.org/wiki/Netpbm#PPM_example
[3] https://github.com/nothings/stb
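PPM's appeal is that the whole format fits in a sentence: a header with magic number, dimensions, and max channel value, then the pixel data. A minimal plain-text (P3) writer in Python, as a sketch of how little is needed (names and structure are my own):

```python
def write_ppm(path, pixels):
    """Write a 2D list of (r, g, b) tuples (values 0-255) as a plain-text P3 PPM file."""
    height = len(pixels)
    width = len(pixels[0])
    with open(path, "w") as f:
        # header: magic number, dimensions, max channel value
        f.write(f"P3\n{width} {height}\n255\n")
        for row in pixels:
            f.write(" ".join(f"{r} {g} {b}" for r, g, b in row) + "\n")
```

That's the entire image-output dependency for a toy raytracer; the binary P6 variant is nearly as simple and much smaller on disk.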