| | ps_mem | mold |
|---|---|---|
| Mentions | 6 | 179 |
| Stars | 1,507 | 13,302 |
| Growth | - | - |
| Activity | 0.0 | 9.7 |
| Latest Commit | over 1 year ago | 8 days ago |
| Language | Python | C++ |
| License | GNU Lesser General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ps_mem
-
Why is it using that much RAM? Is that a trojan? Is that a feature of the linux-tkg kernel? (Nothing else is running in the background.)
I use a script I call memtop10.sh that uses a combination of ps and ps_mem.py, which you can find here: https://github.com/pixelb/ps_mem/blob/master/ps_mem.py
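The memtop10.sh script itself isn't shown in the comment; as a hypothetical sketch of the same idea, the following sums the Pss field of /proc/&lt;pid&gt;/smaps_rollup (the same proportional-set-size metric ps_mem reports) and prints the biggest consumers. Assumes Linux 4.14+; reading processes owned by other users requires root.

```python
import os

def pss_kib(pid):
    """Return the proportional set size of a process in KiB, or None
    if the process is gone or unreadable."""
    try:
        with open(f"/proc/{pid}/smaps_rollup") as f:
            for line in f:
                if line.startswith("Pss:"):
                    return int(line.split()[1])  # value is in kB
    except OSError:
        return None

def top_memory(n=10):
    """List (pss_kib, pid, command) for the n largest readable processes."""
    rows = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        pss = pss_kib(pid)
        if pss is None:
            continue
        try:
            with open(f"/proc/{pid}/comm") as f:
                comm = f.read().strip()
        except OSError:
            comm = "?"
        rows.append((pss, int(pid), comm))
    return sorted(rows, reverse=True)[:n]

if __name__ == "__main__":
    for pss, pid, comm in top_memory():
        print(f"{pss / 1024:8.1f} MiB  {pid:>7}  {comm}")
```

Unlike plain RSS, PSS divides each shared page among the processes mapping it, so the per-process figures sum to something close to actual memory use.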
-
PSA: the way the “free” command calculates unused memory changed significantly between Bullseye and Bookworm
Do you mean something like this: https://github.com/pixelb/ps_mem
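The change that PSA presumably refers to is newer procps reporting "available" memory from the kernel's reclaim-aware MemAvailable estimate rather than the old MemFree + Buffers + Cached approximation. A minimal sketch comparing the two on Linux:

```python
def meminfo():
    """Parse /proc/meminfo into a dict of kB values."""
    out = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            out[key] = int(rest.split()[0])  # values are in kB
    return out

m = meminfo()
# Old-style estimate: free pages plus buffers and page cache.
legacy_estimate = m["MemFree"] + m["Buffers"] + m["Cached"]
# Kernel's own estimate (Linux 3.14+): accounts for how much of the
# cache is actually reclaimable without swapping.
kernel_estimate = m["MemAvailable"]
print(f"legacy: {legacy_estimate} kB, MemAvailable: {kernel_estimate} kB")
```

The two figures can differ substantially, since not all page cache is cheaply reclaimable, which is why scripts that parse `free` output can break across the transition.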
-
Some of my computer's RAM "disappears" over time. Where does it go?
ps_mem https://github.com/pixelb/ps_mem
-
Tauri 1.0 – Electron Alternative Powered by Rust
Just as a reference, the application I'm building features a lot of things inside the final binary, which might affect RAM usage, so this is not a "hello-world" example but a real application, with a SPA built into the binary and loaded into RAM, together with an HTTP API and more (fuller list here: https://news.ycombinator.com/item?id=31765186).
With that said, `ps_mem` (https://github.com/pixelb/ps_mem) reports that the memory usage is 58.7 MiB after starting the Tauri application. If I run just the HTTP API, memory usage ends up being 19.4 MiB. So I guess in that sense, the overhead of Tauri is about 39.3 MiB.
-
Memory Available Unaccounted For
I'm not sure if it's still working, since I've not used it for a while, but you can get accurate reports with ps_mem.
- Measuring memory usage: virtual versus real memory
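On the virtual-versus-real distinction from that last link: on Linux the two figures are VmSize (total address space mapped) and VmRSS (pages actually resident), both readable from /proc/&lt;pid&gt;/status. A minimal sketch:

```python
def vm_stats(pid="self"):
    """Return (VmSize, VmRSS) in kB from /proc/<pid>/status."""
    stats = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":", 1)
                stats[key] = int(value.split()[0])  # values are in kB
    return stats["VmSize"], stats["VmRSS"]

virtual_kb, resident_kb = vm_stats()
# Virtual size counts every mapping, including untouched mmap regions
# and unshared copy-on-write pages, so it is usually much larger than
# the resident set.
print(f"virtual {virtual_kb} kB vs resident {resident_kb} kB")
```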
mold
-
I reduced (incremental) Rust compile times by up to 40%
I think this is unlikely to gain traction. I say that not to discourage you, just to explain.
- The community has an instinctive distrust of closed source or a compiler from an untrusted source. If you’re familiar with the Trusting Trust attack you’ll understand why.
- Dev tools in every language ecosystem are almost always free, unless they involve some kind of hosting. People aren't used to opening their wallets. Look at the experience of the guy who built the mold linker (https://github.com/rui314/mold). It's far superior to the state of the art, improves incremental compiles a lot, and is widely applicable across ecosystems (C, C++, Rust), CPU architectures, and operating systems. You don't even have to modify your compiler, just point to his linker. He's even giving it away for free for personal use. But still, almost no one uses it. The inertia of the established options is really high.
- It’s not complex enough. Think about the complexity involved in the cranelift backend. No one can seriously recreate the efforts of bjorn3. If we could have, we would have. But the idea here can be recreated, especially by the experts who already built incremental compilation into rustc.
- But if your solution is truly complex, like the parallel frontend, the burden of maintaining a fork would be too high. You’d have to spend all your time rebasing.
Again I’m not trying to discourage you, just stating the difficulties of making a business in the dev tools space. You would be better off contributing this excellent work to the community and trying a different tack.
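As a concrete illustration of "just point to his linker": a recent GCC (12.1+) or Clang accepts `-fuse-ld=mold` directly, and for Rust projects mold's README suggests a `.cargo/config.toml` entry along these lines (the target triple shown is an example for x86_64 Linux — adjust for your platform):

```toml
# .cargo/config.toml — route linking through mold via clang
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```

Alternatively, `mold -run make` wraps an existing build and intercepts the linker invocations without touching any build files.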
-
Mold Course
I initially thought this would be about the mold linker (https://github.com/rui314/mold)
-
Monetizing Developer Tools
I assume this submission is trying to highlight this specific message (2023-01-24): https://github.com/rui314/mold/issues/190#issuecomment-14028...
Fyi... the author wrote a more expansive blog post about selling dev tools a few months later (2023-06-06) and there was a related HN thread about it: https://news.ycombinator.com/item?id=36225016
-
mold 2.1.0 - rui314/mold
Support for Loongson's LoongArch CPUs has been added. (03b1a1c)
-
Mold 2.0.0
I'm amazed at how quickly the author responds to requests: https://github.com/rui314/mold/issues/1057
From the report to the fix in less than two days.
I'm not sure how competitive it will be with lld, especially if we consider ThinLTO (which takes multiple minutes on a 64-core machine) - that can make the advantages of mold insignificant.
- Mold 2.0 released - MIT license
-
Linking many files significantly increases build time. Is there an editor that allows you to write a single file but present the file to the screen as multiple 'virtual' files for better organization?
What other solutions have you tried for the problem of slow linking? You haven't even said which linker and what flags you're using. I haven't actually tried it, but the author of lld has an even faster linker called mold: https://github.com/rui314/mold
- Design and Implementation of the Mold Linker
-
Apple's new library format combines the best of dynamic and static
> Mold did it first, though: https://github.com/rui314/mold
Before LLD?
What are some alternatives?
TempOSD - On-screen display for CPU and GPU temperatures, and RAM and swap usage statistics.
zld - A faster version of Apple's linker
volatility - An advanced memory forensics framework
wasmtime - A fast and secure runtime for WebAssembly
awesome-tauri - 🚀 Awesome Tauri Apps, Plugins and Resources
osxcross - Mac OS X cross toolchain for Linux, FreeBSD, OpenBSD and Android (Termux)
psutil - Cross-platform lib for process and system monitoring in Python
zig - General-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.
tauri - Build smaller, faster, and more secure desktop applications with a web frontend.
chibicc - A small C compiler
sccache - A ccache-like compiler wrapper that avoids recompilation when possible. It can cache to local storage or to remote storage backends, including various cloud storage options.
gccrs - GCC Front-End for Rust