Fuzzie vs clusterfuzz

| | Fuzzie | clusterfuzz |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 5 | 5,213 |
| Growth | - | 0.7% |
| Activity | 6.0 | 9.8 |
| Latest commit | 2 months ago | 1 day ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
- Fuzzing Ladybird with tools from Google Project Zero
https://github.com/google/clusterfuzz
Chromium, at least, has integrated multiple different fuzzers into its regular development workflow and has found lots of bugs before they ever became public.
- An ex-Googler's guide to dev tools
Then it is clear that the behavior of this for loop is either not important or not being tested. This could mean that the tests that you do have are not useful and can be deleted.
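To make that concrete, here is a hypothetical illustration (not from the thread): a mutant of a for loop's body survives because no test ever observes the loop's behavior.

```python
def total(xs):
    """Sum a list of numbers."""
    s = 0
    for x in xs:
        s += x  # a mutation tool might change this to "s -= x" or delete it
    return s

# If the only test in the suite is:
def test_total_empty():
    assert total([]) == 0

# ...then the mutated version passes every test too (the mutant "survives").
# That is exactly the signal that the loop's behavior is either unimportant
# or untested, and that this test alone adds little value.
```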
> For most non-trivial software the possible state-space is enormous and we generally don't/can't test all of it. So "not testing the (full) behaviour of your application is the default for any test strategy", if we could we wouldn't have bugs... Last I checked most software (including Google's) has plenty of bugs.
I have also used https://google.github.io/clusterfuzz/ (set it up and fixed its findings), which uses coverage + properties to find bugs in the way C++ code handles pointers and other things.
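For context, this is roughly what a coverage-guided fuzz target looks like. The sketch below uses Google's Atheris library for Python; ClusterFuzz typically runs libFuzzer targets in C++ for the pointer bugs mentioned here, so treat this as an illustration of the idea rather than that exact setup.

```python
import sys
import atheris

# Instrument imports so Atheris can collect coverage from the code under test.
with atheris.instrument_imports():
    import json

def TestOneInput(data: bytes):
    # The "property" here is simply "never crash, only raise the documented
    # exception"; coverage feedback steers the fuzzer toward new code paths.
    try:
        json.loads(data)
    except ValueError:
        pass  # expected for malformed input

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```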
> The next question would be let's say I spend my time writing the tests to resolve this (could be a lot of work) is that time better spent vs. other things I could be doing? (i.e. what's the ROI)
That will depend largely on the team and the code you are working on. If it is experimental code that isn't in production, is there value in this? Likely not. If it is code where failing to parse some data correctly means a huge headache to fix? Likely yes.
The SRE workbook goes over making these calculations.
> Even ignoring that, is there data to support that the quality of software where mutation testing was added improved measurably (e.g. fewer bugs filed against the deployed product, better uptime, etc.)?
I know that there are studies that show that tests reduce bugs but I do not know of studies that say that higher test coverage reduces bugs.
The goal of mutation testing isn't to drive up coverage, though. It is to find out which cases are not being exercised and to evaluate whether they will cause a problem. For example, mutation testing tools have picked up cases like this:
if (debug) print("Got here!");
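Here is how a mutation tool flags a line like that, sketched in Python (hypothetical code, for illustration): it deletes the statement, reruns the suite, and reports a survived mutant if nothing fails.

```python
def process(items, debug=False):
    out = []
    for item in items:
        if debug:
            print("Got here!")  # mutant: delete this statement
        out.append(item * 2)
    return out

# The mutation tool removes the print, reruns the full test suite, and sees
# every test still pass -- a "survived mutant". That is hard evidence that no
# test ever calls process(..., debug=True), so as far as the suite is
# concerned that branch's behavior is unobserved.
```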
- ClusterFuzz is a scalable fuzzing infrastructure
What are some alternatives?
boofuzz - A fork and successor of the Sulley Fuzzing Framework
rules_js - High-performance Bazel rules for running Node.js tools and building JavaScript projects
FDsploit - File Inclusion & Directory Traversal fuzzing, enumeration & exploitation tool.
rules_pycross - Bazel + Python rules for cross-platform external dependencies
gateCracker
anchore-engine - A service that analyzes docker images and scans for vulnerabilities
frelatage - Coverage-based fuzzer for python applications
oss-fuzz - Continuous fuzzing for open source software.
dirsearch - Web path scanner
peafl64 - Static Binary Instrumentation tool for Windows x64 executables
pyfuzzer - Fuzz test Python modules with libFuzzer
mutant - Automated code reviews via mutation testing - semantic code coverage.