gleam
are-we-fast-yet
|  | gleam | are-we-fast-yet |
|---|---|---|
| Mentions | 95 | 18 |
| Stars | 14,761 | 314 |
| Growth | 60.0% | - |
| Activity | 9.9 | 8.8 |
| Latest commit | 6 days ago | 2 months ago |
| Language | Rust | Java |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gleam
-
Release Radar • March 2024 Edition
Want a friendly language for building safe systems at scale? Gleam is here for you. It features modern, familiar syntax, and it's reliable and scalable. Gleam runs on the Erlang virtual machine and can run plenty of concurrent tasks. It comes with a compiler, build tool, formatter, editor integrations, and package manager all built in, so you can get started right away. Congrats to the team on shipping your first major version 🙌.
-
The Current State of Clojure's Machine Learning Ecosystem
While I love Clojure, I have to agree about tooling. I recently started using Gleam and was impressed at how easy it was to get up and running with the CLI tool. I think this is an important part of getting people to adopt a language.
-
Show HN: I open-sourced the in-memory PostgreSQL I built at work for E2E tests
If you use languages that compile to WASM (such as Gleam https://gleam.run), and can also run Postgres via WASM, then it opens very interesting offline scenarios with codebases which are similar on both the client and the server, for instance.
-
Why is the number of Gleam programmers growing so fast?
Recently, Gleam has gained more popularity, and a lot of developers (including me) are learning it. At the time of this writing, it has exceeded 14k stars on GitHub and has grown really fast over the last month.
- Cranelift code generation comes to Rust
- Gleam v1.0.0
- Gleam has a 1.0 release candidate
-
Welcome to the Gleam Language Tour
Oh, strange that GitHub had a date of 2016 on this one: https://github.com/gleam-lang/gleam/issues/2
I was just going by that, though I do remember checking out Gleam 5 years or so ago.
Re: macros, I really do think they're a big deal, and all the other newer languages I've used, such as Rust, have some kind of macros or powerful metaprogramming features.
For older languages, a few, like Ruby, have enough metaprogrammability to make nice DSLs, but many others don't. Given the choice, I'd much rather have Elixir/Clojure-style macros than other metaprogramming facilities I've seen so far.
-
Inko Programming Language
I had only been following this language with some interest. I guess this was born at GitLab; not sure if the creator(s) still work there. This is what I'd have wanted golang to be (albeit with GC when you do not have clear lifetimes).
But how would you differentiate yourself from https://gleam.run, which can leverage OTP? I'd be more interested if we could adapt Gleam to GraalVM isolates so we can leverage the JVM ecosystem.
-
Switching to Elixir
I don't think the implementation itself is at fault, but yes, I do think that the design of Dialyzer makes it an (at times) faulty type checker. The unfortunate reality of a type checker that sometimes fails is that it's mostly useless, because you can never trust that it'll do the job.
To be clear, I've had it fail in a function where I've literally specced that very function to return a `binary` but I'm returning an `integer` in one of the cases. This is a very shallow context, but it can still fail. Now add more functions, maybe one more `case`.
I think an entire rethink of type checking on the BEAM had to be done, and that's why eqWalizer[0] was created and why Elixir is looking to add an actual sound, well-developed type checker. Gleam[1], I would assume, is just a Hindley-Milner system, so that's completely solid. `purerl`[2] is just PureScript for the BEAM, so it's also Hindley-Milner, meaning it's solid. `purerl` has some performance issues caused by it compiling down to closures everywhere, but if you can pay that cost, it's actually pretty fantastic. With that said, my bet for the best statically typed experience on the BEAM right now would be `gleam`.
are-we-fast-yet
-
Boehm Garbage Collector
> Sure there's a small overhead to smart pointers
Not so small, and it has the potential to significantly slow down an application when not used wisely. Here are, e.g., some measurements where the programmer used C++11 and did everything with smart pointers: https://github.com/smarr/are-we-fast-yet/issues/80#issuecomm.... There was a slowdown between a factor of 2 and 10 compared with the C++98 implementation. Also remember that smart pointers create memory leaks when used with circular references, and there is an additional memory allocation involved with each smart pointer.
> Garbage collection has an overhead too of course
The Boehm GC is surprisingly efficient. See, e.g., these measurements: https://github.com/rochus-keller/Oberon/blob/master/testcase.... The same benchmark suite as above is compared across different versions of Mono (using the generational GC) and the C code (using the Boehm GC) generated by my Oberon compiler. The latter is only 20% slower than the native C++98 version, and still twice as fast as Mono 5.
-
A C++ version of the Are-we-fast-yet benchmark suite
See https://github.com/smarr/are-we-fast-yet/blob/master/docs/guidelines.md.
-
The Bitter Truth: Python 3.11 vs. Cython vs. C++ Performance for Simulations
That's a very interesting article, thanks. Interesting to note that Cython is only about twice as fast as Python 3.10 and only about 40% faster than Python 3.11.
The official Python site advertises a speedup of 25% from 3.10 to 3.11; in the article a speedup of 60% was measured. It therefore usually makes sense to measure different algorithms. Unfortunately there is no Python or C++ implementation yet for https://github.com/smarr/are-we-fast-yet.
- Comparing Language Implementations with Objects, Closures, and Arrays
- Are We Fast Yet? Comparing Language Implementations with Objects, Closures, and Arrays
-
.NET 6 vs. .NET 5: up to 40% speedup
> Software benchmarks are super subjective.
No, they are not; but they are just a measurement tool, not a source of absolute truth. When I studied engineering at ETH, we learned "Who measures, measures rubbish!" ("Wer misst, misst Mist!" in German). Every measurement has errors, and being aware of these errors and coping with them is part of the engineering profession. The problem with programming language benchmarks is often that the goal is to win by all means; to compare as fairly and objectively as possible instead, there must be a set of suitable rules adhered to by all benchmark implementations. Such a set of rules is given, e.g., for the Are-we-fast-yet suite (https://github.com/smarr/are-we-fast-yet).
-
Is CoreCLR that much faster than Mono?
I am aware of the various published test results where CoreCLR shows fantastic speed-ups compared to Mono, e.g. when calculating MD5 or SHA hash sums.
But my measurements based on the Are-we-fast-yet benchmark suite (see https://github.com/smarr/are-we-fast-yet and https://github.com/rochus-keller/Oberon/tree/master/testcases/Are-we-fast-yet) show a completely different picture. Here the difference between Mono and CoreCLR (both versions 3 and 5) is within +/- 10%, so nothing earth-shattering.
Here are my measurement results:
https://github.com/rochus-keller/Oberon/blob/master/testcases/Are-we-fast-yet/Are-we-fast-yet_results_linux.pdf comparing the same benchmark on the same machine run under LuaJIT, Mono, Node.js and Crystal.
https://github.com/rochus-keller/Oberon/blob/master/testcases/Are-we-fast-yet/Are-we-fast-yet_results_windows.pdf comparing Mono, .Net 4 and CoreCLR 3 and 5 on the same machine.
Here are the assemblies of the Are-we-fast-yet benchmark suite used for the measurements, in case you want to reproduce my results: http://software.rochus-keller.ch/Are-we-fast-yet_CLI_2021-08-28.zip.
I was very surprised by the results. Perhaps it has to do with the fact that I measured on x86, or that the benchmark suite used includes somewhat larger (i.e., more representative) applications than just microbenchmarks.
What are your opinions? Do others have similar results?
-
Is CoreCLR really that much faster than Mono?
There is a good reason for this; have a look at e.g. https://github.com/smarr/are-we-fast-yet/blob/master/docs/guidelines.md.
-
Why most programming language performance comparisons are most likely wrong
Then apparently the SOM nbody program is taken as the basis of a new Java nbody program.
What are some alternatives?
web3.js - Collection of comprehensive TypeScript libraries for Interaction with the Ethereum JSON RPC API and utility functions.
crystal - The Crystal Programming Language
Rustler - Safe Rust bridge for creating Erlang NIF functions
fast-ruby - Writing Fast Ruby -- Collect Common Ruby idioms.
ponyc - Pony is an open-source, actor-model, capabilities-secure, high performance programming language
PyCall.jl - Package to call Python functions from the Julia language
nx - Multi-dimensional arrays (tensors) and numerical definitions for Elixir
Oberon - Oberon parser, code model & browser, compiler and IDE with debugger
hamler - Haskell-style functional programming language running on Erlang VM.
Smalltalk - Parser, code model, interpreter and navigable browser for the original Xerox Smalltalk-80 v2 sources and virtual image file
otp - 📫 Fault tolerant multicore programs with actors
.NET Runtime - .NET is a cross-platform runtime for cloud, mobile, desktop, and IoT apps.