Stryker.NET vs Verify
| | Stryker.NET | Verify |
|---|---|---|
| Mentions | 14 | 5 |
| Stars | 1,711 | 2,327 |
| Stars growth (month over month) | 1.8% | 2.4% |
| Activity | 9.3 | 9.8 |
| Last commit | 2 days ago | 2 days ago |
| Language | C# | C# |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Stryker.NET
- Stryker.NET alternatives - Testura.Mutation, visualmutator, fettle, and Faultify
5 projects | 9 Jun 2023
- Do you guys mock everything in your Unit Tests?
  - Bogus - For creating fake data
  - Verify - Snapshot testing for .NET
  - MELT - For testing ILogger usage
  - Stryker - Mutation Testing for .NET
  - TestContainers - run Docker programmatically in integration tests
- Scope of unit testing (karma/Jas) Boss wants unreasonable testing?
This is called mutation testing btw.
- Don't target 100% coverage
Let's try it on our small example using Stryker.
- PhD'ers, what are you working on? What CS topics excite you?
- Killing mutants to improve your tests
There are tools that do this automatically; Stryker[2] is one of them. When you run Stryker, it creates many mutant versions of your production code ("mutants" is what these altered versions are called in Stryker's documentation) and runs your tests against each one. If your tests fail, the mutant is killed; if your tests pass, the mutant survived. Let's have a look at the result of running Stryker against reffects-store's code:
- Not sure if popular opinion: Greenfield projects should have 100% test coverage.
Mutation testing is pretty solid. Better than code coverage for sure. Using Stryker personally.
- Seriously what are they and why does everyone hate them?
A mutation testing tool (like Stryker) runs your unit tests to verify they all pass, then makes a small change (a mutation) to your code and reruns the tests. At least one test should fail, because the modified code should behave differently.
- Released v1.0.0 of my pet JavaScript project yesterday after hitting 100% coverage - a gesture detection library
I haven't tried it yet, but last time I researched it, this is the library that looked most promising: https://stryker-mutator.io/
- Mutation Testing in NodeJS
Website: https://stryker-mutator.io/
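For the NodeJS flavour, a minimal configuration looks roughly like the following (field names are from the StrykerJS documentation as I recall it — verify against the site above before relying on them):

```json
{
  "testRunner": "jest",
  "coverageAnalysis": "perTest",
  "mutate": ["src/**/*.js"],
  "reporters": ["clear-text", "html", "progress"]
}
```

Saved as `stryker.conf.json`, this is picked up by `npx stryker run`, which mutates the files matched by `mutate` and reports which mutants your Jest suite kills.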
Verify
- Do you guys mock everything in your Unit Tests?
  - Bogus - For creating fake data
  - Verify - Snapshot testing for .NET
  - MELT - For testing ILogger usage
  - Stryker - Mutation Testing for .NET
  - TestContainers - run Docker programmatically in integration tests
- organizing testing projects
Are you familiar with "snapshot testing" tools such as Verify that store expected output in files? It's still unit testing.
- Add persisted parameters to CLI applications in .NET
We can use Verify to perform snapshot testing and check that the program produces the correct output. To make things easier and to simplify process invocation and output capturing, I used CliWrap.
- In EF Core every foreach is a potential runtime error that can't be properly fixed
You will have to write extra code to get a code base started (there's always a large initial cost in getting things set up, and you'll be writing code that helps set up your application's state), but I can assure you that our team paid the initial tax, and the only reason our tests change now is requirements changes (and occasionally because the testing tools we use, like Verify, have breaking changes in behavior when we upgrade). Otherwise, it helps us identify issues in our code, particularly when we do library upgrades or switch to a different library. Again, our tests do not change when we completely reimplement anything, only when the external contract changes. We get to refactor or reimplement with confidence that the old behavior stays the same. And then you can hook a benchmark up to your tests and, if your reason for refactoring was performance, show that the refactor was effective.
- Perfect Replayability
I assume this means you can take something like this, combine it with Snapshot/Approval testing (link to a library I have used), and then you have some quick-to-generate tests that help guard against regressions (even visual ones) by say:
What are some alternatives?
xUnit - xUnit.net is a free, open source, community-focused unit testing tool for .NET.
snapshooter - Snapshooter is a snapshot testing tool for .NET Core and .NET Framework
sharpfuzz - AFL-based fuzz testing for .NET
Shouldly - Should testing for .NET—the way assertions should be!
Moq - Repo for managing Moq 4.x [Moved to: https://github.com/moq/moq]
Fluent Assertions - A very extensive set of extension methods that allow you to more naturally specify the expected outcome of TDD or BDD-style unit tests. Targets .NET Framework 4.7, as well as .NET Core 2.1, .NET Core 3.0, .NET 6, .NET Standard 2.0 and 2.1. Supports the unit test frameworks MSTest2, NUnit3, XUnit2, MSpec, and NSpec3.
MSTest - MSTest framework and adapter
Bogus - :card_index: A simple fake data generator for C#, F#, and VB.NET. Based on and ported from the famed faker.js.
should - Should Assertion Library