42nd-at-threadmill vs dashmap
| | 42nd-at-threadmill | dashmap |
|---|---|---|
| Mentions | 4 | 12 |
| Stars | 56 | 2,717 |
| Growth | - | - |
| Activity | 0.0 | 5.5 |
| Last commit | over 1 year ago | 27 days ago |
| Language | Common Lisp | Rust |
| License | BSD 2-clause "Simplified" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
42nd-at-threadmill
- Share a hash table with SBCL and Allegro Serve
Revisiting Prechelt’s paper comparing Java, Lisp, C/C++ and scripting languages
I agree that "C/C++" isn't a good sign, though it was more forgivable when C++ just meant C++98... For speed, nowadays, you use X. That is, many languages can be fast, especially if they have access to intrinsics; the real question is how much special knowledge and herculean effort you have to spend. It's still true that for many problems idiomatic modern C++ will give a very nice result that may be hard to beat (though be careful: if you forget cin.sync_with_stdio(false) you may be slower than Python!). But it's also true that C, Rust, Java, Common Lisp, and even JS all do very well out of the box these days, while languages like Python and Ruby have lagged. For this problem, the author had to spend quite a bit of effort in a follow-up post to get Rust to match some 20-year-old unoptimized CL.
If one wanted to optimize Lisp, a simple place to start is with some type declarations (at least with SBCL). It can even be kind of fun to have the compiler yell at you because e.g. it couldn't infer a type somewhere and was forced to do a generic add, so you tell it some things are fixnums, see the message go away, and verify if you want with DISASSEMBLE that it's now using LEA instead of a CALL. For an example of going to quite a bit of trouble (relatively) with adding inline hints, removing unneeded array bounds checks, and SIMD instructions, see https://github.com/telekons/42nd-at-threadmill which I believe is quite a bit faster than a similar "blazing fast" Rust project featured on HN not long ago. But my point again isn't so much that CL is the fastest or has some fast projects, just that it can be made as fast as you're probably going to want. This applies to a lot of other languages too, though CL still feels pretty unique in supporting both high level and low level constructs.
Your GC comment is weird, since any publications looking at total performance invariably include the GC overhead; nothing is hidden. The TIME macro for micro-benchmarking even typically reports allocated memory, which can be a proxy for estimating GC pressure, and both SBCL and CCL report the actual time (if any) spent in GC. Why not complain that C++ benchmarks hide the indirect costs of memory fragmentation, which is a real bane for long-running C++ programs like application servers? But I'll admit that the GC can be a big weakness, and it's no fun to be forced to code around its limitations; historically some GCs used by some Lisps were really bad (that is, huge pause times). I've been looking at the herculean GC work being done in the Java ecosystem for years with a jealous eye, and even at newer Nim with its swappable GCs, for when you want to make certain tradeoffs without having to code them yourself.
Software drag racing
Having beaten performance out of SBCL before, it seems...unlikely that there would be a benefit to fixnums when everything is inline and unboxed, except for isqrt, but AIUI we only compute it once per sieve so I'm not terribly bothered by that.
- A SIMD-accelerated lock-free concurrent hash table.
dashmap
- StupidAlloc: what if memory allocation was bad actually
dashmap VS scalable-concurrent-containers - a user suggested alternative
2 projects | 13 Apr 2023
Samsara, a safe Rust concurrent cycle collector
The problem is, every single one of these half-dozen crates has at least one known major issue (including use-after-free), exactly like C++ implementations. That isn't surprising, since these are exactly the kinds of problems where ownership isn't clear, so the borrow checker can't help us.
Rust vs Go
Deadlocks and leaks are as easy as in other languages.
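The leak half of that claim is easy to demonstrate in safe Rust: a reference cycle built from `Rc` and `RefCell` is never freed, with no `unsafe` anywhere. A minimal sketch (the `Node` type and `make_cycle` helper are illustrative, not from any crate discussed here):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node whose `next` pointer can be rewired after creation.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

// Build a two-node reference cycle and return one handle into it.
fn make_cycle() -> Rc<Node> {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(b); // a -> b -> a
    a
}

fn main() {
    let a = make_cycle();
    // Two owners: our handle plus b's back-pointer. When `a` goes out of
    // scope the count only drops to 1, so the cycle is never freed: a
    // leak the borrow checker is perfectly happy with.
    assert_eq!(Rc::strong_count(&a), 2);
}
```

Breaking such cycles takes `Weak` references or manual teardown, which is exactly the kind of discipline other GC-less languages require too.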
Shared mutable state is bad... so how do I create a global cache in a multi-threaded app?
Have you considered https://github.com/xacrimon/dashmap ?
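For context, a minimal stdlib sketch of the coarse-grained global cache such threads usually start from: a `Mutex`-guarded `HashMap` behind a `OnceLock`. dashmap's sharded map is aimed at exactly the contention this single `Mutex` creates; the key type and the `get_or_compute` helper here are illustrative, not dashmap's API:

```rust
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// Global cache: a Mutex-guarded HashMap, initialized on first use.
// Every reader and writer contends on the one lock; a sharded map
// like DashMap splits this into many independently locked pieces.
fn cache() -> &'static Mutex<HashMap<String, u64>> {
    static CACHE: OnceLock<Mutex<HashMap<String, u64>>> = OnceLock::new();
    CACHE.get_or_init(|| Mutex::new(HashMap::new()))
}

fn get_or_compute(key: &str) -> u64 {
    let mut map = cache().lock().unwrap();
    if let Some(&v) = map.get(key) {
        return v; // cache hit
    }
    let v = key.len() as u64; // stand-in for an expensive computation
    map.insert(key.to_string(), v);
    v
}

fn main() {
    assert_eq!(get_or_compute("hello"), 5);
    assert_eq!(get_or_compute("hello"), 5); // second call hits the cache
}
```

Note the lock is held across the whole compute-and-insert, which keeps the cache coherent but serializes misses; that tradeoff is part of why concurrent maps get their own crates.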
Announcing Leapfrog, a faster concurrent HashMap
Dashmap made some API changes compared to the stdlib's HashMap, which leads to some oddities, as highlighted here: https://github.com/xacrimon/dashmap/issues/175
Writing a concurrent LRU cache
Some additional notes are in this slide deck and the implementation javadoc. You'd probably want to use something like DashMap for the hash table.
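To make the discussion concrete, a minimal single-threaded LRU sketch using only the Rust stdlib; this is not the linked implementation, and it is deliberately naive (the recency scan is O(n), where a real cache would use an intrusive list and sharded locks):

```rust
use std::collections::{HashMap, VecDeque};

// Minimal LRU: a HashMap for lookup plus a VecDeque recording recency
// (front = least recently used, back = most recently used).
struct LruCache {
    capacity: usize,
    map: HashMap<String, u64>,
    order: VecDeque<String>,
}

impl LruCache {
    fn new(capacity: usize) -> Self {
        LruCache { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    // Move `key` to the most-recently-used end of the recency list.
    fn touch(&mut self, key: &str) {
        if let Some(pos) = self.order.iter().position(|k| k == key) {
            let k = self.order.remove(pos).unwrap();
            self.order.push_back(k);
        }
    }

    fn get(&mut self, key: &str) -> Option<u64> {
        let v = self.map.get(key).copied();
        if v.is_some() {
            self.touch(key);
        }
        v
    }

    fn put(&mut self, key: &str, value: u64) {
        if self.map.insert(key.to_string(), value).is_some() {
            self.touch(key); // existing key: just refresh recency
        } else {
            if self.map.len() > self.capacity {
                // Over capacity: evict the least recently used entry.
                if let Some(evicted) = self.order.pop_front() {
                    self.map.remove(&evicted);
                }
            }
            self.order.push_back(key.to_string());
        }
    }
}

fn main() {
    let mut cache = LruCache::new(2);
    cache.put("a", 1);
    cache.put("b", 2);
    let _ = cache.get("a"); // "a" is now most recently used
    cache.put("c", 3);      // evicts "b", the least recently used
    assert_eq!(cache.get("b"), None);
    assert_eq!(cache.get("a"), Some(1));
    assert_eq!(cache.get("c"), Some(3));
}
```

The hard part a concurrent LRU adds on top of this is updating the shared recency order without turning every read into a write under one lock, which is what the slide deck's buffering tricks are about.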
HashMap-based cache for async programs
You can look at existing concurrent maps like DashMap https://github.com/xacrimon/dashmap or CHashMap https://gitlab.redox-os.org/redox-os/chashmap
How does one avoid lock of locks? or use the technique of latch crabbing of databases
Also dashmap
Noteworthy concurrent data structures?
The only one I've used is Dashmap: a concurrent, interior-mutability hashmap. A very convenient crate when you need that.
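Interior mutability is the interesting part: insertion goes through `&self`, not `&mut self`, so a shared reference is enough to mutate the map. A toy sharded map in plain stdlib Rust illustrates the general design such crates build on (the `ShardedMap` type and shard count are illustrative, not DashMap's actual internals):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::RwLock;

// Sharding: split the table into N independently locked sub-maps,
// chosen by key hash, so threads touching different shards never
// block each other.
struct ShardedMap {
    shards: Vec<RwLock<HashMap<String, u64>>>,
}

impl ShardedMap {
    fn new(n: usize) -> Self {
        ShardedMap {
            shards: (0..n).map(|_| RwLock::new(HashMap::new())).collect(),
        }
    }

    // Pick the shard responsible for `key` from its hash.
    fn shard_for(&self, key: &str) -> usize {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        (h.finish() as usize) % self.shards.len()
    }

    // Note: `&self`, not `&mut self` -- interior mutability via the lock.
    fn insert(&self, key: String, value: u64) {
        let idx = self.shard_for(&key);
        self.shards[idx].write().unwrap().insert(key, value);
    }

    fn get(&self, key: &str) -> Option<u64> {
        let idx = self.shard_for(key);
        self.shards[idx].read().unwrap().get(key).copied()
    }
}

fn main() {
    let map = ShardedMap::new(16);
    map.insert("answer".to_string(), 42);
    assert_eq!(map.get("answer"), Some(42));
    assert_eq!(map.get("missing"), None);
}
```

Because `insert` takes `&self`, the map can be shared across threads behind a plain `Arc` without a wrapping `Mutex`, which is exactly the ergonomic win the comment above is describing.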
What are some alternatives?
cl-trello-clone - A Trello clone demo app in Common Lisp
hashbrown - Rust port of Google's SwissTable hash map
luckless - Lockless data structures for Common Lisp
moka - A high performance concurrent caching library for Rust
concurrent-hash-tables - A "portability" library for concurrent hash tables in Common Lisp
HashMap - An open addressing linear probing hash table, tuned for delete heavy workloads
numericals - CFFI enabled SIMD powered simple-math numerical operations on arrays for Common Lisp [still experimental]
crossbeam - Tools for concurrent programming in Rust
leapfrog - Lock-free concurrent and single-threaded hash map implementations using Leapfrog probing. Currently the highest performance concurrent HashMap in Rust for certain use cases.
megahash - A super-fast C++ hash table with Node.js wrapper, tested up to 1 billion keys.
stretto - Stretto is a Rust implementation for Dgraph's ristretto (https://github.com/dgraph-io/ristretto). A high performance memory-bound Rust cache.
sharded - Safe, fast, and obvious concurrent collections in Rust.