| | anchore-engine | clusterfuzz |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 1,529 | 5,203 |
| Growth | - | 0.5% |
| Activity | 4.0 | 9.8 |
| Latest commit | over 1 year ago | 4 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
anchore-engine
-
A Tool To Advise What Apps Are Out Of Date Per Cluster?
There's also Anchore. Another thread with resources: https://www.reddit.com/r/kubernetes/comments/bx4w2h/track_outdated_images/.
-
How to Secure Your Kubernetes Clusters With Best Practices
Enable container image scanning in your CI/CD pipeline to catch known vulnerabilities using tools like Clair or Anchore.
- What Vulnerability Scanning Services do you use?
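At its core, an image scanner like Anchore inventories the packages inside an image and matches them against a vulnerability feed. The sketch below illustrates that idea only; the database entries, version logic, and function names are hypothetical and are not Anchore's actual implementation.

```python
# Hypothetical vulnerability feed (illustrative only, not real Anchore data):
# package name -> list of (fixed-in version, CVE id)
VULN_DB = {
    "openssl": [("1.1.1k", "CVE-2021-3450")],
    "log4j-core": [("2.17.0", "CVE-2021-44228")],
}

def version_tuple(v: str) -> tuple:
    # Naive version comparison; real scanners use ecosystem-aware
    # version parsers (dpkg, rpm, semver, ...).
    return tuple(int(p) if p.isdigit() else p for p in v.replace("-", ".").split("."))

def scan(packages: dict) -> list:
    """packages: name -> installed version; returns (pkg, version, cve, fixed_in)."""
    findings = []
    for name, installed in packages.items():
        for fixed_in, cve in VULN_DB.get(name, []):
            if version_tuple(installed) < version_tuple(fixed_in):
                findings.append((name, installed, cve, fixed_in))
    return findings
```

A real scanner additionally has to unpack image layers, detect multiple package ecosystems, and keep the feed continuously updated, which is exactly the plumbing these tools provide.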
clusterfuzz
-
Fuzzing Ladybird with tools from Google Project Zero
https://github.com/google/clusterfuzz
At least Chromium has integrated multiple different fuzzers into their regular development workflow and found lots of bugs even before going public.
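The fuzzers ClusterFuzz orchestrates (libFuzzer, AFL, and others) are coverage-guided: they mutate inputs and keep any mutant that reaches a new branch, which lets them work their way into deeply nested conditions. The toy loop below sketches that feedback cycle in plain Python; the parser, branch names, and crash condition are invented for illustration and bear no relation to ClusterFuzz's actual engine.

```python
import random

def parse_header(data: bytes, hits: set) -> None:
    # Hypothetical buggy parser used as the fuzz target; `hits` records
    # which branches executed. Real fuzzers get this signal from compiler
    # instrumentation (e.g. SanitizerCoverage), not a hand-threaded set.
    if len(data) >= 4 and data[:2] == b"FZ":
        hits.add("magic")
        if data[2] == 0x01:
            hits.add("v1")
            if data[3] == 0xFF:
                hits.add("crash-path")
                raise ValueError("malformed version field")

def fuzz(iterations: int = 200_000, seed: int = 1) -> list:
    rng = random.Random(seed)
    corpus = [b"FZ\x00\x00"]        # seed input
    seen_coverage = set()
    crashes = []
    for _ in range(iterations):
        parent = rng.choice(corpus)
        mutated = bytearray(parent)
        mutated[rng.randrange(len(mutated))] = rng.randrange(256)  # 1-byte flip
        data = bytes(mutated)
        hits = set()
        try:
            parse_header(data, hits)
        except ValueError:
            crashes.append(data)
            break                   # stop at the first crash
        if not hits <= seen_coverage:
            seen_coverage |= hits   # new branch reached: keep this input
            corpus.append(data)
    return crashes
```

Random inputs alone almost never satisfy `data[2] == 0x01 and data[3] == 0xFF`, but keeping each input that unlocks a new branch lets the loop stack one lucky mutation on top of another, the same principle that lets production fuzzers find bugs behind magic numbers and checksums.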
-
An ex-Googler's guide to dev tools
Then it is clear that the behavior of this for loop is either not important or not being tested. This could mean that the tests that you do have are not useful and can be deleted.
> For most non-trivial software the possible state-space is enormous and we generally don't/can't test all of it. So "not testing the (full) behaviour of your application is the default for any test strategy", if we could we wouldn't have bugs... Last I checked most software (including Google's) has plenty of bugs.
I have also used (set up, and fixed findings from) https://google.github.io/clusterfuzz/, which uses coverage + properties to find bugs in the way C++ code handles pointers and other things.
> The next question would be let's say I spend my time writing the tests to resolve this (could be a lot of work) is that time better spent vs. other things I could be doing? (i.e. what's the ROI)
That is something that will depend largely on the team and the code you are on. If you are in experimental code that isn't in production, is there value to this? Likely not. If you are writing code that if it fails to parse some data correctly you'll have a huge headache trying to fix it? Likely yes.
The SRE workbook goes over making these calculations.
> Even ignoring that is there data to support that the quality of software where mutation testing was added improved measurably (e.g. less bugs files against the deployed product, better uptime, etc?)
I know that there are studies that show that tests reduce bugs but I do not know of studies that say that higher test coverage reduces bugs.
The goal of mutation testing isn't to drive up coverage, though. It is to find out which cases are not being exercised and to evaluate whether they will cause a problem. For example, mutation testing tools have picked up cases like this:
if (debug) print("Got here!");
- ClusterFuzz is a scalable fuzzing infrastructure
What are some alternatives?
grype - A vulnerability scanner for container images and filesystems
rules_js - High-performance Bazel rules for running Node.js tools and building JavaScript projects
dagda - a tool to perform static analysis of known vulnerabilities, trojans, viruses, malware & other malicious threats in docker images/containers and to monitor the docker daemon and running docker containers for detecting anomalous activities
rules_pycross - Bazel + Python rules for cross-platform external dependencies
quay - Build, Store, and Distribute your Applications and Containers
oss-fuzz - Continuous fuzzing for open source software.
aura - Python source code auditing and static analysis on a large scale
peafl64 - Static Binary Instrumentation tool for Windows x64 executables
jellyfin-session-kicker - Session kicker after X amount of watch time for Jellyfin
pyfuzzer - Fuzz test Python modules with libFuzzer
docker-bench-security - The Docker Bench for Security is a script that checks for dozens of common best-practices around deploying Docker containers in production.
mutant - Automated code reviews via mutation testing - semantic code coverage.