| | pygrype | clusterfuzz |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 3 | 5,213 |
| Growth | - | 0.7% |
| Activity | 6.6 | 9.8 |
| Last Commit | about 1 month ago | 4 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pygrype

-

clusterfuzz
Fuzzing Ladybird with tools from Google Project Zero
https://github.com/google/clusterfuzz
Chromium, at least, has integrated multiple different fuzzers into its regular development workflow and has found lots of bugs even before release.
-
An ex-Googler's guide to dev tools
Then it is clear that the behavior of this for loop is either not important or not being tested. This could mean that the tests that you do have are not useful and can be deleted.
> For most non-trivial software the possible state-space is enormous and we generally don't/can't test all of it. So "not testing the (full) behaviour of your application is the default for any test strategy", if we could we wouldn't have bugs... Last I checked most software (including Google's) has plenty of bugs.
I have also used ClusterFuzz (https://google.github.io/clusterfuzz/), setting it up and fixing its findings; it uses coverage plus property checks to find bugs in the way C++ code handles pointers and other things.
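To make the "coverage plus properties" idea concrete, here is a minimal, illustrative sketch of coverage-guided fuzzing in pure Python. It is not ClusterFuzz's implementation; the target function, the hard-coded "edge coverage" markers, and the byte-flip mutator are all toy stand-ins for the instrumentation and mutation engines a real fuzzer provides.

```python
import random

COVERAGE = set()  # edges hit during the current run (stand-in for real instrumentation)

def parse_header(data: bytes):
    """Toy target with a hidden bug on the 3-byte prefix 0x7F 'E' 'L'."""
    if data[:1] == b"\x7f":
        COVERAGE.add(1)
        if data[1:2] == b"E":
            COVERAGE.add(2)
            if data[2:3] == b"L":
                COVERAGE.add(3)
                raise RuntimeError("boom")  # the bug the fuzzer should find

def fuzz(max_iters=200_000, seed=0):
    rng = random.Random(seed)
    corpus = [b"\x00"]            # seed corpus
    seen = set()                  # coverage observed so far
    for _ in range(max_iters):
        # Mutate a corpus entry: pad to 3 bytes and flip one random byte.
        parent = rng.choice(corpus)
        buf = bytearray(parent) + b"\x00" * (3 - len(parent))
        buf[rng.randrange(len(buf))] = rng.randrange(256)
        data = bytes(buf)
        COVERAGE.clear()
        try:
            parse_header(data)
        except RuntimeError:
            return data           # crashing input found
        if not COVERAGE <= seen:  # new coverage: keep this input in the corpus
            seen |= COVERAGE
            corpus.append(data)
    return None

crash = fuzz()
print(crash)
```

The key mechanic is the coverage feedback loop: inputs that reach new edges are kept and mutated further, so the fuzzer walks through the nested checks one branch at a time instead of guessing the full magic prefix blindly.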
> The next question would be let's say I spend my time writing the tests to resolve this (could be a lot of work) is that time better spent vs. other things I could be doing? (i.e. what's the ROI)
That will depend largely on the team you are on and the code you work with. If it is experimental code that isn't in production, is there value to this? Likely not. If you are writing code where failing to parse some data correctly means a huge headache to fix, then likely yes.
The SRE workbook goes over making these calculations.
> Even ignoring that is there data to support that the quality of software where mutation testing was added improved measurably (e.g. less bugs files against the deployed product, better uptime, etc?)
I know that there are studies that show that tests reduce bugs but I do not know of studies that say that higher test coverage reduces bugs.
The goal of mutation testing isn't to drive up coverage, though. It is to find out which cases are not being exercised and to evaluate whether they will cause a problem. For example, mutation testing tools have picked up cases like this:
if (debug) print("Got here!");
- ClusterFuzz is a scalable fuzzing infrastructure
What are some alternatives?
anchore-engine - A service that analyzes docker images and scans for vulnerabilities
rules_js - High-performance Bazel rules for running Node.js tools and building JavaScript projects
opencve - CVE Alerting Platform
rules_pycross - Bazel + Python rules for cross-platform external dependencies
ochrona-cli - A command line tool for detecting vulnerabilities in Python dependencies and doing safe package installs
cve-bin-tool - The CVE Binary Tool helps you determine if your system includes known vulnerabilities. You can scan binaries for over 200 common, vulnerable components (openssl, libpng, libxml2, expat and others), or if you know the components used, you can get a list of known vulnerabilities associated with an SBOM or a list of components and versions.
oss-fuzz - OSS-Fuzz - continuous fuzzing for open source software.
pip-rating - Check the health of your project's requirements and get a score for each dependency.
peafl64 - Static Binary Instrumentation tool for Windows x64 executables
pyfuzzer - Fuzz test Python modules with libFuzzer
mutant - Automated code reviews via mutation testing - semantic code coverage.