rslint VS diagnostics

Compare rslint vs diagnostics and see what are their differences.

rslint

A (WIP) extremely fast JavaScript and TypeScript linter and Rust crate (by rslint)

diagnostics

Diagnostic tools for timely dataflow computations (by TimelyDataflow)
                 rslint              diagnostics
Mentions         3                   1
Stars            2,661               41
Growth           0.1%                -
Activity         0.0                 0.0
Latest commit    about 1 year ago    almost 2 years ago
Language         Rust                Rust
License          MIT License         MIT License
  • Mentions - the total number of mentions we have tracked plus the number of user-suggested alternatives.
  • Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
  • Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.

rslint

Posts with mentions or reviews of rslint. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-24.

diagnostics

Posts with mentions or reviews of diagnostics. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-01-22.
  • Why isn't differential dataflow more popular?
    13 projects | news.ycombinator.com | 22 Jan 2021
    I've been using DD in production for just over a year now for low-latency processing (sub-second from real-world event to pipeline CDC output) in a geo-distributed environment (hundreds of locations coordinating globally), some days at the terabyte-per-day level of event ingest.

    DD was, for me, one of the final attempts to find something, anything, that could handle the requirements I was working with, because Spark, Flink, and the others just couldn't reasonably get close to what I was looking for. Apache Flink was the closest second place.

    Over the last year I've read through the DD and TD codebases fully about 5-7 times. Even with that, I'm often in a position where I go back to my own applications to see how I had already solved a type of problem. I liken the project to taking someone used to NASCAR and dropping them into a Formula One car: you've seen it work so much faster, and the tech and capabilities are clearly designed for so much more than you can make it do right now.

    A few learning examples that I consider funny:

    1. I had a graph on the order of about 1.2 trillion edges with about 90 million nodes. I was using serde-derived structs for the edge and node types (not simplified numerical types), which means I have to implement (or derive) a bunch of traits myself. I spent way more time than I'd like to admit trying to get .reduce() to remove 'surplus' edges that had already been processed, to shrink the working dataset. Finally, in frustration and while reading through the DD codebase again, I 'rediscovered' .consolidate(), which 'just worked' and took the 1.2 trillion edges down to 300 million. The traits remain a hurdle: some of the edge values I work with carry histograms for the distributions, and some of the scoring of those histograms is custom, so figuring out how to implement the required traits has taken significant effort.
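    Conceptually, differential-dataflow's consolidate() collapses records with identical data by summing their signed multiplicities and dropping anything that nets to zero. Since the crate itself can't run here, this is a stdlib-only sketch of that idea; the Edge alias and the helper name are illustrative, not the crate's API:

    ```rust
    use std::collections::HashMap;

    // Illustrative edge key; in differential-dataflow this would be the
    // collection's data type, with multiplicities managed by the runtime.
    type Edge = (u64, u64);

    // Sketch of what consolidate() does conceptually: merge duplicate
    // records by summing their signed multiplicities and drop zeros.
    fn consolidate(records: Vec<(Edge, i64)>) -> Vec<(Edge, i64)> {
        let mut totals: HashMap<Edge, i64> = HashMap::new();
        for (edge, diff) in records {
            *totals.entry(edge).or_insert(0) += diff;
        }
        let mut out: Vec<(Edge, i64)> = totals
            .into_iter()
            .filter(|&(_, diff)| diff != 0)
            .collect();
        out.sort();
        out
    }

    fn main() {
        // Four physical records: (1, 2) appears twice, and (3, 4) is
        // cancelled by its own retraction, so one logical edge survives.
        let records = vec![((1, 2), 1), ((1, 2), 1), ((3, 4), 1), ((3, 4), -1)];
        assert_eq!(consolidate(records), vec![((1, 2), 2)]);
        println!("consolidated down to a single logical edge");
    }
    ```

    This is why it shrinks a working dataset so dramatically: surplus and retracted edges vanish instead of being carried along as separate records.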

    2. I constantly dance between DD's runtime and trying to ergonomically connect the application to the tonic gRPC and tokio interfaces. Luckily I've found a nice pattern: I create my inter-thread communication constructs, then start up two Rust threads, with the tokio-based interfaces in one and the DD runtime and workers in the other. On bigger servers (packet.net has some great gen3 instances) I usually pin tokio to 2-8 cores and leave the rest of the cores to DD.
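    The two-thread split above can be sketched with the standard library alone: std::thread and an mpsc channel stand in for the tokio/tonic side and the DD runtime (both external crates), and the channel plays the role of the inter-thread construct created before either side starts. Function and event names are illustrative:

    ```rust
    use std::sync::mpsc;
    use std::thread;

    // Sketch of the pattern: build the communication construct first,
    // then spawn one thread per runtime and bridge them with it.
    fn run_split_runtime(events: Vec<&'static str>) -> usize {
        let (tx, rx) = mpsc::channel::<String>();
        let expected = events.len();

        // Thread 1: would host the tokio runtime and gRPC interfaces;
        // here it just forwards incoming "requests" into the pipeline.
        let ingest = thread::spawn(move || {
            for event in events {
                tx.send(event.to_string()).unwrap();
            }
            // Dropping `tx` closes the channel, ending the other loop.
        });

        // Thread 2: would host the DD runtime and workers; here it
        // drains the channel the way a dataflow input session would.
        let dataflow = thread::spawn(move || {
            let mut processed = 0;
            while let Ok(_event) = rx.recv() {
                processed += 1;
            }
            processed
        });

        ingest.join().unwrap();
        let processed = dataflow.join().unwrap();
        assert_eq!(processed, expected);
        processed
    }

    fn main() {
        let n = run_split_runtime(vec!["event-1", "event-2", "event-3"]);
        println!("dataflow thread processed {n} events");
    }
    ```

    In the real pattern each thread would own its runtime outright (tokio's multi-threaded executor on one side, timely's worker pool on the other), with core pinning applied per thread.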

    3. In almost every new app I start, I run into the gotcha where I want a worker that runs only once 'globally', and it's usually the thread I'd want to use to coordinate data ingestion. It's super simple to just guard with if worker.index() == 0, but when I'm deep in thought about an upcoming pipeline, it's often forgotten.
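    The guard itself is tiny, which is exactly why it's easy to forget. In timely, every worker runs the same closure and checks its own worker.index(); since timely is an external crate, plain indices stand in for worker handles in this sketch, and the helper name is made up:

    ```rust
    // Sketch of the "run once globally" guard from point 3: every worker
    // executes the same dataflow closure, so side effects that must
    // happen exactly once get gated on the worker's index.
    fn should_ingest(worker_index: usize) -> bool {
        // Only worker 0 drives data ingestion; the others skip this step
        // but still participate in the rest of the dataflow.
        worker_index == 0
    }

    fn main() {
        let peers = 4;
        let ingesters: Vec<usize> = (0..peers).filter(|&i| should_ingest(i)).collect();
        assert_eq!(ingesters, vec![0]); // exactly one global ingestion worker
        println!("worker 0 ingests; the other {} workers only compute", peers - 1);
    }
    ```

    Without the guard, every worker would ingest the same input, silently multiplying the data by the number of peers.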

    4. For diagnostics, there is https://github.com/TimelyDataflow/diagnostics, which has provided much-needed insight when things have gotten complex. Usually its output has been 'just enough' to point in the right direction, though only once was it able to point exactly at the issue I was running into.

    5. I have really high hopes for materialize.io. That's the type of system I'd want to use in 80% of the cases where I'm using DD right now. I've been following them for about a year, and the progress is incredible, but my use cases seem more likely to be supported in the 0.8->1.3 roadmap range.

    6. I've wanted a way to express 'use no more than 250GB of RAM' and get compile-time feedback that a fixed dataset can't be processed through the pipeline within that budget. Even better would be a system that adjusts its internal runtime approach to stay within the limits.

What are some alternatives?

When comparing rslint and diagnostics you can also consider the following projects:

ESLint - Find and fix problems in your JavaScript code.

differential-datalog - DDlog is a programming language for incremental computation. It is well suited for writing programs that continuously update their output in response to input changes. A DDlog programmer does not write incremental algorithms; instead they specify the desired input-output mapping in a declarative manner.

deno_lint - Blazing fast linter for JavaScript and TypeScript written in Rust

timely-dataflow - A modular implementation of timely dataflow in Rust

napi-rs - A framework for building compiled Node.js add-ons in Rust via Node-API

sliding-window-aggregators - Reference implementations of sliding window aggregation algorithms

quick-lint-js - quick-lint-js finds bugs in JavaScript programs

blog - Some notes on things I find interesting and important.

ballista - Distributed compute platform implemented in Rust, and powered by Apache Arrow.

lambdo - Feature engineering and machine learning: together at last!

differential-dataflow - An implementation of differential dataflow using timely dataflow on Rust.