| | llvm-project | gcc |
|---|---|---|
| Mentions | 356 | 85 |
| Stars | 26,431 | 8,875 |
| Stars growth | 3.3% | 1.6% |
| Activity | 10.0 | 10.0 |
| Latest commit | 3 days ago | 3 days ago |
| Language | LLVM | C++ |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llvm-project
-
Compilers Are (Too) Smart
The background here is that "ctpop < 2" or "ctpop == 1" (depending on zero behavior) is LLVM's canonical representation for a "power of two" check. It is used on the premise that the backend will expand it back into a cheap bitwise check and not use an actual ctpop operation. However, due to complex interactions in the backend, this does not actually happen in this case (https://github.com/llvm/llvm-project/issues/94829).
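A minimal sketch of the two forms being discussed (the function names are illustrative; the builtin is a real GCC/Clang intrinsic that maps to LLVM's `ctpop`):

```cpp
#include <cstdint>

// What LLVM's canonical "is power of two" form computes: popcount(x) == 1.
bool is_pow2_via_ctpop(std::uint64_t x) {
    return __builtin_popcountll(x) == 1;
}

// The cheap bitwise expansion the backend is expected to produce:
// x & (x - 1) clears the lowest set bit, so the result is zero
// iff x had exactly one bit set; the x != 0 test rules out zero.
bool is_pow2_via_bittrick(std::uint64_t x) {
    return x != 0 && (x & (x - 1)) == 0;
}
```

The linked issue is about the backend failing to rewrite the first form into the second, leaving an actual population-count in the generated code.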
-
What errors are lurking in LLVM code?
The checked project version is LLVM 18.1.0.
-
Qualcomm's Oryon LLVM Patches
I think they should probably set LoopMicroOpBufferSize to a non-zero value even if it's not microarchitecturally accurate. This value is used in LLVM to control whether partial and runtime loop unrolling are enabled (in fact, only for that). Although some targets override this default behaviour, AArch64 only overrides it to enable partial and runtime unrolling for in-order models. I've left a review comment https://github.com/llvm/llvm-project/pull/91022/files#r16026... and, as I note there, the setting seems to have become very divorced from microarchitectural reality if you look at how and why different scheduling models set it in-tree (e.g. all the Neoverse cores set it to 16, with a comment that they just copied it from the A57).
-
Yes, Ruby is fast, but…
In conclusion, none of the proposed changes to the Ruby version of the code makes a dent in the gap with the Crystal version. This is not entirely Crystal's own doing: it uses the LLVM backend, which generates highly optimized binaries.
-
Qt and C++ Trivial Relocation (Part 1)
As far as I know, libstdc++'s representation has two advantages:
First, it simplifies the implementation of `s.data()`, because you hold a pointer that invariably points to the first character of the data. The pointer-less version needs to do a branch there. Compare libstdc++ [1] to libc++ [2].
[1]: https://github.com/gcc-mirror/gcc/blob/065dddc/libstdc++-v3/...
[2]: https://github.com/llvm/llvm-project/blob/1a96179/libcxx/inc...
Basically libstdc++ is paying an extra 8 bytes of storage, and losing trivial relocatability, in exchange for one fewer branch every time you access the string's characters. I imagine that the performance impact of that extra branch is tiny, and massively confounded in practice by unrelated factors that are clearly on libc++'s side (e.g. libc++'s SSO buffer is 7 bytes bigger, despite libc++'s string object itself being smaller). But it's there.
The second advantage is that libstdc++ already did it that way, and to change it would be an ABI break; so now they're stuck with it. I mean, obviously that's not an "advantage" in the intuitive sense; but it's functionally equivalent to an advantage, in that it's a very strong technical answer to the question "Why doesn't libstdc++ just switch to doing it libc++'s way?"
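The trade-off above can be sketched with two toy layouts (field names and buffer sizes are illustrative, not the real library members):

```cpp
#include <cstddef>
#include <cstring>

// libstdc++-style: the stored pointer always points at the first
// character, whether the data is on the heap or in the SSO buffer.
struct GccLikeString {
    char*       ptr;                 // self-referential for short strings
    std::size_t size;
    union {
        std::size_t capacity;        // heap case
        char        sso[16];         // short case: ptr points at this buffer
    };
    const char* data() const { return ptr; }   // no branch
};

// libc++-style: no stored self-pointer; data() branches on a flag.
struct ClangLikeString {
    bool        is_long;
    std::size_t size;
    union {
        char  sso[23];               // short case
        char* heap;                  // long case
    };
    const char* data() const { return is_long ? heap : sso; }  // one branch
};
```

Because a short `GccLikeString` contains a pointer into itself, memcpy-relocating it leaves `ptr` dangling into the old object; that is the lost trivial relocatability (plus the extra pointer's storage) traded for the branch-free `data()`.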
-
Playing with DragonRuby Game Toolkit (DRGTK)
This Ruby implementation is based on mruby and LLVM; it's commercial software, but cheap.
- Add support for Qualcomm Oryon processor
-
Ask HN: Which books/resources to understand modern Assembler?
"Computer Architecture: A Quantitative Approach", and/or books on more specific design types (MIPS, ARM, etc.), can be found under the Morgan Kaufmann Series in Computer Architecture and Design.
"Getting Started with LLVM Core Libraries: Get to Grips with LLVM Essentials and Use the Core Libraries to Build Advanced Tools"
"The Architecture of Open Source Applications (Volume 1) : LLVM" https://aosabook.org/en/v1/llvm.html
"Tourist Guide to LLVM source code" : https://blog.regehr.org/archives/1453
llvm home page : https://llvm.org/
llvm tutorial : https://llvm.org/docs/tutorial/
llvm reference : https://llvm.org/docs/LangRef.html
learn by examples : C source code to 'llvm' bitcode : https://stackoverflow.com/questions/9148890/how-to-make-clan...
-
Flang-new: How to force arrays to be allocated on the heap?
See
https://github.com/llvm/llvm-project/issues/88344
https://fortran-lang.discourse.group/t/flang-new-how-to-forc...
- The LLVM Compiler Infrastructure
gcc
-
gcc VS lambda-mountain - a user suggested alternative
2 projects | 10 Jun 2024
-
Project Stage 1: Preparation(part-2)
GCC GitHub mirror, GCC Documentation, GCC Internals Manual, Installing GCC
-
GCC 14.1 Release
Upd: searching in the github mirror by the commit hash from the issue, found that https://github.com/gcc-mirror/gcc/commit/1e3312a25a7b34d6e3f... is in fact in the 'releases/gcc-14.1.0' tag.
Even weirder that this one got swept under the changelog rug, it's a pretty major issue.
-
C++ Safety, in Context
> It's true, this was a CVE in Rust and not a CVE in C++, but only because C++ doesn't regard the issue as a problem at all. The problem definitely exists in C++, but it's not acknowledged as a problem, let alone fixed.
Can you find a link that substantiates your claim? You're throwing out some heavy accusations here that don't seem to match reality at all.
Case in point, this was fixed in both major C++ libraries:
https://github.com/gcc-mirror/gcc/commit/ebf6175464768983a2d...
https://github.com/llvm/llvm-project/commit/4f67a909902d8ab9...
So what C++ community refused to regard this as an issue and refused to fix it? Where is your supporting evidence for your claims?
- std::clamp generates less efficient assembly than std::min(max, std::max(min, v))
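The two formulations from that title, side by side (for `lo <= hi` they return the same value, but compilers may lower them to different assembly):

```cpp
#include <algorithm>

// Clamp v into [lo, hi] using std::clamp (C++17).
double clamp_std(double v, double lo, double hi) {
    return std::clamp(v, lo, hi);
}

// The hand-rolled min/max composition from the title.
double clamp_minmax(double v, double lo, double hi) {
    return std::min(hi, std::max(lo, v));
}
```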
-
Converting the Kernel to C++
Somewhat related: In 2020 gcc bumped the requirement for bootstrapping to be a C++11 compiler [0]. Would have been fun to see the kernel finally adopt C++14 as the author suggested.
I don't think that Linus will allow this, since he just commented that he will allow Rust in drivers and major subsystems [1].
I do find it pretty funny that even Linus isn't writing any Rust code, but is reading it.
I would have hoped to see more answers here, or something from actual kernel developers.
0: https://github.com/gcc-mirror/gcc/commit/5329b59a2e13dabbe20...
-
Understanding Objective-C by transpiling it to C++
> They’re saying that a lot of the restrictions makes things much harder than other languages. Hence the general problem rust has where a lot of trivial tasks in other languages are extremely challenging.
Like what? So far the discussion has revolved around rewriting a linked list, which people generally shouldn't ever need to do because it's included in the standard lib for most languages. And it's a decidedly nontrivial task to do as well as the standard lib when you don't sacrifice runtime overhead to be able to handwave object lifecycle management.
- C++: https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-...
- Rust: https://doc.rust-lang.org/beta/src/alloc/collections/linked_...
> No need to get defensive, no one is arguing that rust doesn’t do a lot of things well.
That's literally what bsaul is arguing in another comment. :)
> You’re talking up getting a safe implementation in C, but what matters is “can I get the same level of safety with less complexity in any language”, and the answer is yes: Java and c# implementations of a thread safe linked list are trivial.
Less perceived complexity. In Java and C# you're delegating the responsibility of lifecycle management to garbage collectors. For small- to medium-scale web apps, the added complexity will be under the hood and you won't have to worry about it. For extreme use cases, the behavior and overhead of the garbage collector does become relevant.
If you factor in the code for the garbage collector that Java and C# depend on, the code complexity will tilt dramatically in favor of C++ or Rust.
However, it's going to be non-idiomatic to rewrite a garbage collector in Java or C# like it is to rewrite a linked list in Rust. If we consider the languages as they're actually used, rather than an academic scenario which mostly crops up when people expect the language to behave like C or Java, the comparison is a lot more favorable than you're framing it as.
> If I wanted I could do it in c++ though the complexity would be more than c# and Java it would be easier than rust.
You can certainly write a thread-safe linked list in C++, but then the enforcement of any assumptions you made about using it will be a manual burden on the user. This isn't just a design problem you can solve with more code - C++ is incapable of expressing the same restrictions as Rust, because doing so would break compatibility with C++ code and the language constructs needed to do so don't exist.
So it's somewhat apples and oranges here. Yes, you may have provided your team with a linked list, but it will either
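The "manual burden on the user" point can be sketched like this (the class is hypothetical, just illustrating the escape hatch C++ cannot close):

```cpp
#include <list>
#include <mutex>

// A mutex-guarded list: the guarded operations are fine, but the type
// system cannot stop an API from leaking unsynchronized access.
class LockedList {
    std::mutex m_;
    std::list<int> data_;
public:
    void push(int v) {
        std::lock_guard<std::mutex> g(m_);
        data_.push_back(v);
    }
    bool pop(int& out) {
        std::lock_guard<std::mutex> g(m_);
        if (data_.empty()) return false;
        out = data_.front();
        data_.pop_front();
        return true;
    }
    // Compiles fine, yet hands out a reference usable after the lock is
    // released; keeping callers away from this is documentation, not
    // enforcement. Rust's borrow checker rejects the equivalent escape.
    int& front_unsynchronized() { return data_.front(); }
};
```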
-
Committing to Rust for Kernel Code
GCC is also written in C++, and has had C++ deps since 2013:
https://github.com/gcc-mirror/gcc/blob/master/gcc/c/c-parser...
- Spitbol 360: an implementation of SNOBOL4 for IBM 360 compatible computers
What are some alternatives?
zig - General-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.
CMake - Mirror of CMake upstream repository
Lark - Lark is a parsing toolkit for Python, built with a focus on ergonomics, performance and modularity.
rtl8192eu-linux-driver - Drivers for the rtl8192eu chipset for wireless adapters (D-Link DWA-131 rev E1 included!)
SDL - Simple DirectMedia Layer
STL - MSVC's implementation of the C++ Standard Library.
cosmopolitan - build-once run-anywhere c library
cobol-on-wheelchair - Micro web-framework for COBOL
windmill - Open-source developer platform to turn scripts into workflows and UIs. Fastest workflow engine (5x vs Airflow). Open-source alternative to Airplane and Retool.
busybox - The Swiss Army Knife of Embedded Linux - private tree
qemu