| | clusterfuzz | mutant |
|---|---|---|
| Mentions | 3 | 5 |
| Stars | 5,203 | 1,925 |
| Growth | 0.5% | - |
| Activity | 9.8 | 8.2 |
| Latest commit | 1 day ago | 10 days ago |
| Language | Python | Ruby |
| License | Apache License 2.0 | Nonstandard |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
clusterfuzz
- Fuzzing Ladybird with tools from Google Project Zero
https://github.com/google/clusterfuzz
Chromium, at least, has integrated multiple different fuzzers into its regular development workflow and has found plenty of bugs even before releases go public.
- An ex-Googler's guide to dev tools
Then it is clear that the behavior of this for loop is either not important or not being tested. This could mean that the tests that you do have are not useful and can be deleted.
> For most non-trivial software the possible state-space is enormous and we generally don't/can't test all of it. So "not testing the (full) behaviour of your application is the default for any test strategy", if we could we wouldn't have bugs... Last I checked most software (including Google's) has plenty of bugs.
I have also set up https://google.github.io/clusterfuzz/ and fixed its findings; it uses coverage plus properties to find bugs in the way C++ code handles pointers, among other things.
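The coverage-guided loop such fuzzers build on can be sketched in a few lines of Python (a toy illustration with a planted bug and a fake coverage signal, not ClusterFuzz's actual code):

```python
import random

def parse(data: bytes) -> int:
    """Toy parser with a planted bug: raises on inputs starting with b'FUZ'."""
    if data[:3] == b"FUZ":
        raise RuntimeError("boom")
    return len(data)

def coverage(data: bytes) -> frozenset:
    """Stand-in for real edge coverage: which prefix comparisons matched."""
    return frozenset(i for i in range(1, 4) if data[:i] == b"FUZ"[:i])

def fuzz(max_iters=200_000, seed=0):
    rng = random.Random(seed)
    corpus = [b"AAA"]                  # seed corpus
    seen = {coverage(corpus[0])}       # coverage signatures observed so far
    for _ in range(max_iters):
        # Pick a corpus entry and mutate one random byte.
        data = bytearray(rng.choice(corpus))
        data[rng.randrange(len(data))] = rng.randrange(256)
        data = bytes(data)
        try:
            parse(data)
        except RuntimeError:
            return data                # crash found: report the reproducer
        cov = coverage(data)
        if cov not in seen:            # new coverage: keep the input
            seen.add(cov)
            corpus.append(data)
    return None

print(fuzz())  # finds the crashing input with overwhelming probability
```

Real fuzzers get the coverage signal from compiler instrumentation rather than a hand-written function, but the feedback loop is the same: inputs that reach new code are kept and mutated further, which is what lets the search walk past each prefix check.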
> The next question would be let's say I spend my time writing the tests to resolve this (could be a lot of work) is that time better spent vs. other things I could be doing? (i.e. what's the ROI)
That depends largely on the team and the code you work on. If it is experimental code that isn't in production, is there value in this? Likely not. If it is code where failing to parse some data correctly would leave you with a huge headache? Likely yes.
The SRE workbook goes over making these calculations.
> Even ignoring that is there data to support that the quality of software where mutation testing was added improved measurably (e.g. less bugs files against the deployed product, better uptime, etc?)
I know that there are studies that show that tests reduce bugs but I do not know of studies that say that higher test coverage reduces bugs.
The goal of mutation testing isn't to drive up coverage, though. It is to find out which cases are not being exercised and to evaluate whether they will cause a problem. For example, mutation testing tools have picked up cases like this:
if (debug) print("Got here!");
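A minimal Python sketch of what such a tool does (a hypothetical example, not output from any real tool): it generates a mutant of the code, reruns the tests, and if the suite still passes, the mutant "survives", pointing at behavior the tests never exercise.

```python
def bucket(n: int) -> str:
    """Original code under test."""
    return "big" if n > 100 else "small"

def bucket_mutant(n: int) -> str:
    """Machine-generated mutant: '>' flipped to '>=' (a classic mutation)."""
    return "big" if n >= 100 else "small"

def suite_passes(fn) -> bool:
    """A weak test suite that never probes the n == 100 boundary."""
    return fn(5) == "small" and fn(200) == "big"

print(suite_passes(bucket))         # True: the original passes
print(suite_passes(bucket_mutant))  # True: the mutant survives, so the
                                    # boundary at n == 100 is untested
```

A surviving mutant doesn't automatically mean a bug; it means the suite cannot tell two different programs apart, and a human has to decide whether that difference matters.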
- ClusterFuzz is a scalable fuzzing infrastructure
mutant
- An ex-Googler's guide to dev tools
There's a pretty good Ruby gem I've used for this before:
https://github.com/mbj/mutant
- Code coverage vs mutation testing.
You should only really care about mutation testing if your code coverage is already relatively high; if coverage is at 20%, mutation testing should not be your priority. We use mutation testing (mutant for Ruby, pitest for Java). mutant is pretty hassle-free, but it only runs under MRI, so if you use JRuby you are out of luck. pitest was far less easy to integrate.
- Mutant – Automated code reviews via mutation testing – semantic code coverage
- Semantic blind spot in Ruby case statement
mutant surfaces redundant semantics; why we'd want to reduce them is perhaps better explained at https://github.com/mbj/mutant#what-is-mutant
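The same idea can be shown in Python (a contrived sketch, not taken from mutant's docs): a guard that is always true where it sits is "redundant semantics", and a mutation tool flags it because deleting it yields an equivalent program that no test can ever kill.

```python
def classify(line: str) -> str:
    """Original code with a redundant guard."""
    if not line:
        return "empty"
    # 'line and' is redundant: line is known truthy on this branch.
    if line and line[0] == "#":
        return "comment"
    return "text"

def classify_mutant(line: str) -> str:
    """Mutant with the redundant guard deleted: provably equivalent."""
    if not line:
        return "empty"
    if line[0] == "#":
        return "comment"
    return "text"

# No input can distinguish the two, so this mutant always survives;
# that survival is the tool's hint that the guard is dead weight.
for s in ["", "# heading", "plain text"]:
    assert classify(s) == classify_mutant(s)
print("equivalent")  # prints "equivalent"
```

Pruning branches like this shrinks the program's semantic surface, which is exactly the reduction the mutant README argues for.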
What are some alternatives?
rules_js - High-performance Bazel rules for running Node.js tools and building JavaScript projects
Ruby-JMeter - A Ruby based DSL for building JMeter test plans
rules_pycross - Bazel + Python rules for cross-platform external dependencies
Spring - Rails application preloader
anchore-engine - A service that analyzes docker images and scans for vulnerabilities
Parallel Tests - Ruby: 2 CPUs = 2x Testing Speed for RSpec, Test::Unit and Cucumber
oss-fuzz - OSS-Fuzz - continuous fuzzing for open source software.
vcr - Record your test suite's HTTP interactions and replay them during future test runs for fast, deterministic, accurate tests.
peafl64 - Static Binary Instrumentation tool for Windows x64 executables
rspec-side_effects - RSpec extension for checking the side effects of your specifications.
pyfuzzer - Fuzz test Python modules with libFuzzer
timecop - A gem providing "time travel", "time freezing", and "time acceleration" capabilities, making it simple to test time-dependent code. It provides a unified method to mock Time.now, Date.today, and DateTime.now in a single call.