Crystal Lang 1.0 Release

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • crystal

    The Crystal Programming Language

  • https://github.com/crystal-lang/crystal/issues/5430

This issue tracks Windows progress, but it hasn't really been updated in a while. I've been following the Crystal Windows port for over a year, and nothing has changed yet.

  • lucky

    A full-featured Crystal web framework that catches bugs for you, runs incredibly fast, and helps you write code that lasts.

  • I'm new to Crystal - have been using it the past three months on a new web API. (I'm using the Lucky framework - https://luckyframework.org/)

    It's fantastic. I can think in it. Thank you to all the devs behind this wonderful language!

  • amber

A Crystal web framework that makes building applications fast, simple, and enjoyable. Get started with quick prototyping, fewer bugs, and blazing fast performance.

  • https://amberframework.org/

There are two pretty big web frameworks in the Crystal world right now: one is Kemal, which strives to be the Sinatra or Flask of the Crystal world (lightweight, supporting plugins), and the other is Amber, which is trying to be the Ruby on Rails or Django of the Crystal world (full-featured, opinionated).

  • awesome-crystal

    :gem: A collection of awesome Crystal libraries, tools, frameworks and software

  • There are also other options: https://github.com/veelenga/awesome-crystal#web-frameworks.

https://athenaframework.org is pretty unique. It's one of the more flexible frameworks IMO. It draws on the Symfony/Spring style of framework.

  • benchmarks

    Some benchmarks of different languages

  • You can find some benchmarks here: https://github.com/kostya/benchmarks

    With the caveat of course being that benchmarks don't always reflect real world performance.

  • FrameworkBenchmarks

    Source for the TechEmpower Framework Benchmarks project

Crystal-based web frameworks are also tested here (key: Cry), with the same caveat as above. https://www.techempower.com/benchmarks/

  • athena

    An ecosystem of reusable, independent components

There's also https://athenaframework.org, which takes its inspiration from a Spring/Symfony-style approach. It brings some new ideas and unique features into a space mostly dominated by Rubyesque frameworks.

  • caramel

    :candy: a functional language for building type-safe, scalable, and maintainable applications

  • Then Caramel just might end your search!

    https://caramel.run/

  • crlocator

Crystal has been my language for my passion project (https://gitlab.com/maxpert/crlocator). I can tell you the speed and the magical Ruby-like syntax are unbelievably good (not to everyone's taste). I just wish for better IDE support now; since using Kotlin I've been spoiled by the IDE. But I assume it should be relatively straightforward, because it's all statically typed.

  • Vrmac

    Vrmac Graphics, a cross-platform graphics library for .NET. Supports 3D, 2D, and accelerated video playback. Works on Windows 10 and Raspberry Pi4.

  • In my case the ARMs were Raspberry Pi3, Pi4, and RK3288. Linux was Debian in all cases, and .NET was 2.1 and 2.2. Worked great for my use cases, only crashed when I screwed up with unsafe or unmanaged code.

If your environment is anywhere similar, and you're OK with .NET 2.1.18, you can try my package: https://github.com/Const-me/Vrmac/releases/tag/1.0 Sources: https://github.com/Const-me/Vrmac/tree/master/net-core

  • babashka

    Native, fast starting Clojure interpreter for scripting

This information was true a few years ago. Now, with the modular JVM, you no longer ship a full VM with the program. The JVM by itself is lean and starts up pretty fast. You can also compile to native with GraalVM; this is a viable option if you want to write lots of tiny command-line tools.

    See Babashka[0] for an example scripting toolkit written in Clojure.

    [0]: https://github.com/babashka/babashka

  • php-spx

    A simple & straight-to-the-point PHP profiling extension with its built-in web UI

  • (See also my other comment, which makes a totally different point that I decided to note separately because this got big and would have buried it)

    Well, I have ADHD. I've found the most effective approach (on top of treatment) that helps me retain focus is reexec-on-save, a la `while :; do tput clear; $thing; inotifywait -q -e moved_to .; done`. I usually have a dozen of those in old shell histories (^R FTW). (Ha, my laptop actually has exactly 12, and my other machine has 23 - although ignoredups is off...)

    $thing might be `bash ./script.sh` (because my text editor's atomic rename doesn't understand execute bits >.>), `php script.php` or `gcc -O0 script.c && ./script`. (Also, as an aside I used to use `-e close_write $file` until I realized watching even giant directories is equivalently efficient to watching a file.)

    Shell scripts (the small kind that run few subprocesses) are typically fast. Likewise, small C programs of <1000-2000 lines compile just about instantly on modern hardware; and where modern hardware isn't available and what I'm trying to do doesn't leverage too many libraries or whatnot, tcc has been able to swing the balance firmly in my favor in the past, which has been great.

    But for better or worse, PHP is currently the language I use the most. Because it's faster than Python and Ruby.

    A while back I wanted to do a bit of analysis on a dataset of information that was only published as a set of PDF documents... yayyy. But after timidly gunzipping the stream blocks and googling random bits of PDF's command language ("wat even is this"), I discovered to my complete surprise that it was trivial to interpret the text coordinate system and my first "haha let's see how bad this is" actually produced readable text on pretty much the first go. (To be pedantic, step #-1 was "draw little boxes", step #0 was "how to x,y correctly" and step #1 was "replace boxes with texWHAT it worked?!")

    With rendering basically... viable (in IIRC 300-500 LOC O.o), the next step was the boring stir-the-soup-for-8-hours bespoke state machine that cross-correlated text coordinates with field meanings ("okay, that's a heading, and the next text instruction draws the field value underneath. OK, assert that the heading is bold, the value is not, and they're both exactly the same (floating-point) Y position.")

    While that part took a while, it was mostly extremely easy, because I was pretty much linearly writing the script "from start to finish", ie just chipping away at the rock face of the task at hand until I processed an entire document, then the next document ("oh no"), then the next one ("ugh") and so forth ("wait, the edge cases are... decreasing? :D"). My workflow was pretty much founded entirely on the above-noted method.

    Loading/gunzipping a given PDF and getting to the point where the little pipeline would crash would typically complete in the span of time it would take me to release the CTRL key after hitting CTRL+S. So while the process was objectively quite like stirring soup, it did not feel like that at all and I was able to kind of float a bit as my brain cohesively absorbed the mental model of the architecture I was building without any distractions, pauses or forced context switches getting jammed in the mental encoding process like so many wrenches.

    Soon 15 documents were handled correctly, then 20, then 30, then 100 ("oooh, if all the items on the page add up exactly right it pushes line 2 of the summary heading down to the second page! Hmmm... how on earth to special-case that without refactoring to look at more than 1 page at a time..."), and then I hit some sort of threshold and it suddenly just started ticking through PDFs like crazy without asserting. Which was both awesome and a Problem™: the thing ran at something like ~60 PDFs/sec, and while jumping to just after the last successfully-processed PDF on restart worked great when the code crashed constantly, now I was sitting spinning for tens of seconds, getting distracted as I anticipated the next crash. ADHD(R)(TM).

I wasn't surprised to learn from htop that the script was disk-bound; for some reason my ZFS mirror setup will happily read sequentially at 200MB/s, but thousands-of-tiny-files situations are... suffice to say apt unconditionally takes 60 seconds to install the smallest thing, unless the entire package db is in the FS cache. I'm not sure why. The PDFs were sharded sanely, but they were still in separate files. So I decided to pack them all into a giant blob, and since there weren't too many PDFs and they were numbered sequentially I used a simple offset-based index at the front of the blob, where `fseek($f, $data_start + ($i * 4)); $o = fread($f, 4); fseek($f, $o);` would give me random seeking.
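The offset-table layout described above can be sketched as follows. This is a hypothetical Python stand-in for the quoted PHP, and the trailing end-offset (so record lengths can be derived without a separate size table) is an assumption the comment doesn't spell out:

```python
import struct

def pack_blobs(blobs):
    """Build one file image: a table of little-endian uint32 offsets
    (one per blob, plus a final end-offset), then the concatenated payloads."""
    header = 4 * (len(blobs) + 1)          # room for n offsets + end marker
    offsets, payload = [], bytearray()
    for b in blobs:
        offsets.append(header + len(payload))
        payload += b
    offsets.append(header + len(payload))  # end offset makes lengths derivable
    return b"".join(struct.pack("<I", o) for o in offsets) + bytes(payload)

def read_blob(buf, i):
    """Random access: read the i-th and (i+1)-th offsets, slice between them."""
    start, end = struct.unpack_from("<II", buf, 4 * i)
    return buf[start:end]
```

Reading record i is then one small index read plus one seek, no matter how many records precede it, which is what turns the thousands-of-tiny-files problem into a sequential read.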

    Reading the blob instead promptly pegged a single CPU core (yay!), and gave me IIRC ~150+ PDFs/sec. This was awesome. But I was still just a tiny bit curious, so after googling around for a profiler and having a small jawdrop moment about SPX (https://github.com/NoiseByNorthwest/php-spx), I had a tentative look at what was actually using the most CPU (via `SPX_ENABLED=1 php ./script.php`, which will automatically print a one-page profile trace to stdout at graceful exit or ^C).

Oh. The PDF stack-machine interpreter is what was taking all the CPU time. That tiny 100-line function was the smallest in the whole script. lol

    So, I moved that function to the preprocessor/packer, then (after some headscratching) serialized the array of tokenized commands/strings into the blob by prefixing commands with \xFF and elements with \xFF\xFE\xFF so I could explode() on \xFF and tell commands from strings by checking if the previous entry was \xFE (and just skip entries of '\xFE' when I found them) :D. Then I reran the preprocessor to regenerate the pack file.

      $ php convert_dlcache.php
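The 0xFF/0xFE token framing described above is easy to get wrong, so here is a minimal round-trip sketch in Python (standing in for the PHP described), assuming, as the comment implies, that command and string payloads never contain the 0xFF delimiter byte:

```python
CMD_MARK = b"\xff"          # commands are prefixed with \xFF
STR_MARK = b"\xff\xfe\xff"  # string elements are prefixed with \xFF\xFE\xFF

def encode(tokens):
    """tokens: list of ('cmd', bytes) or ('str', bytes) pairs.
    Payloads must not contain 0xFF, or the framing breaks."""
    out = bytearray()
    for kind, payload in tokens:
        out += STR_MARK if kind == "str" else CMD_MARK
        out += payload
    return bytes(out)

def decode(blob):
    """Split on \xFF; a piece equal to \xFE is a marker meaning the
    next piece is a string, and is itself skipped."""
    tokens, prev_was_fe = [], False
    for piece in blob.split(b"\xff")[1:]:  # [0] is the empty lead-in piece
        if piece == b"\xfe":
            prev_was_fe = True
            continue
        tokens.append(("str" if prev_was_fe else "cmd", piece))
        prev_was_fe = False
    return tokens
```

The design choice here is the same as in the comment: one `split()`/`explode()` does all the lexing, and the \xFE sentinel pieces carry the command-vs-string distinction.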

  • entendrepreneur

    program for generating funny portmanteaus and rhymes

  • These are the python scripts: https://github.com/jonadsimon/entendrepreneur/tree/master/pr...

I don’t do enough Python to judge them. But then again, I had even less experience in Crystal back when I worked on this.

  • Arrow Meta

    Functional companion to Kotlin's Compiler

  • Thanks for the example, that's interesting.

    Note that Kotlin has union types through this compiler plugin: https://github.com/arrow-kt/arrow-meta/issues/570

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.
