| | Compare-Net-Objects | Verify |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 1,027 | 2,340 |
| Growth | - | 1.0% |
| Activity | 7.0 | 9.8 |
| Latest commit | 2 months ago | 3 days ago |
| Language | C# | C# |
| License | Microsoft Public License | MIT License |
- **Stars** - the number of stars that a project has on GitHub.
- **Growth** - month-over-month growth in stars.
- **Activity** - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
Compare-Net-Objects
-
**How do I capture multiple asserts?**
Instead of doing multiple assertions, why not check for structural (or full) equality over the entire DTO? Compare-Net-Objects can help you here.
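To illustrate the idea, here is a minimal sketch of a single structural comparison with Compare-Net-Objects (the `CompareNETObjects` NuGet package), assuming an xUnit test project; `CustomerDto` and `MapCustomer` are hypothetical placeholders for your own type and code under test:

```csharp
using KellermanSoftware.CompareNetObjects;
using Xunit;

public record CustomerDto(string Name, string Email, int Age);

public class CustomerMappingTests
{
    [Fact]
    public void MappedDto_MatchesExpectedShape()
    {
        var expected = new CustomerDto("Ada", "ada@example.com", 36);
        var actual = MapCustomer(); // hypothetical code under test

        // One structural comparison replaces a pile of per-property asserts.
        var compareLogic = new CompareLogic();
        ComparisonResult result = compareLogic.Compare(expected, actual);

        // DifferencesString lists every mismatched member, not just the first.
        Assert.True(result.AreEqual, result.DifferencesString);
    }

    private static CustomerDto MapCustomer() =>
        new("Ada", "ada@example.com", 36);
}
```

Because the whole object graph is compared at once, the failure message reports all differing members in a single run instead of stopping at the first failed assertion.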
Verify
-
**Do you guys mock everything in your Unit Tests?**
- Bogus - for creating fake data
- Verify - snapshot testing for .NET
- MELT - for testing ILogger usage
- Stryker - mutation testing for .NET
- TestContainers - run Docker containers programmatically in integration tests
**Organizing testing projects**
Are you familiar with "snapshot testing" tools such as Verify, which store expected output in files? It's still unit testing.
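A minimal sketch of what such a snapshot test looks like with Verify and xUnit (the `Verify.Xunit` package); the `report` object stands in for whatever output you want to pin down, and depending on the Verify version you may also need a `[UsesVerify]` attribute on the class:

```csharp
using System.Threading.Tasks;
using VerifyXunit;
using Xunit;

public class ReportTests
{
    [Fact]
    public Task GeneratedReport_MatchesSnapshot()
    {
        var report = new { Title = "Q3 Summary", Rows = 42 };

        // On the first run Verify writes a *.received.* file; once you accept
        // it as *.verified.*, later runs diff against that stored snapshot.
        return Verifier.Verify(report);
    }
}
```

The accepted `*.verified.*` file is committed alongside the test, so a failing diff shows exactly how the output drifted.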
**Add persisted parameters to CLI applications in .NET**
We can use Verify to perform snapshot testing and check that the program produces the correct output. To simplify process invocation and output capture, I used CliWrap.
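The combination can be sketched roughly like this, assuming the `CliWrap` and `Verify.Xunit` packages; the `dotnet --help` invocation is just a stand-in for whatever CLI application is under test:

```csharp
using System.Threading.Tasks;
using CliWrap;
using CliWrap.Buffered;
using VerifyXunit;
using Xunit;

public class CliSnapshotTests
{
    [Fact]
    public async Task HelpOutput_MatchesSnapshot()
    {
        // CliWrap handles process start-up and output capture.
        var result = await Cli.Wrap("dotnet")
            .WithArguments("--help")
            .ExecuteBufferedAsync();

        // Snapshot the captured stdout; Verify flags any future drift.
        await Verifier.Verify(result.StandardOutput);
    }
}
```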
**In EF Core every foreach is a potential runtime error that can't be properly fixed**
You will have to write extra code up front to get a code base like this started (there is always a large initial cost in setting things up, and you will be writing code that puts your application into the right state). But I can assure you that our team paid that initial tax, and the only reason our tests change now is requirements changes (and occasionally because testing tools we use, like Verify, introduce breaking behavior changes when we upgrade).

Otherwise, it helps us identify issues in our code, particularly when we do library upgrades or switch to a different library. Again, our tests do not change when we completely reimplement something, only when the external contract changes. We get to refactor or reimplement with confidence that the old behavior stays the same. And then you can hook a benchmark up to your tests: if your reason for refactoring was performance, you can show that it was effective.
**Perfect Replayability**
I assume this means you can take something like this, combine it with Snapshot/Approval testing (link to a library I have used), and then you have some quick-to-generate tests that help guard against regressions (even visual ones) by say:
What are some alternatives?
Fluent Assertions - A very extensive set of extension methods that allow you to more naturally specify the expected outcome of TDD or BDD-style unit tests. Targets .NET Framework 4.7, as well as .NET Core 2.1 and 3.0, .NET 6, and .NET Standard 2.0 and 2.1. Supports the unit test frameworks MSTest2, NUnit3, XUnit2, MSpec, and NSpec3.
snapshooter - Snapshooter is a snapshot testing tool for .NET Core and .NET Framework
Shouldly - Should testing for .NET—the way assertions should be!
xUnit - xUnit.net is a free, open source, community-focused unit testing tool for .NET.
AutoFixture - AutoFixture is an open source library for .NET designed to minimize the 'Arrange' phase of your unit tests in order to maximize maintainability. Its primary goal is to allow developers to focus on what is being tested rather than how to setup the test scenario, by making it easier to create object graphs containing test data.
Moq - Repo for managing Moq 4.x [Moved to: https://github.com/moq/moq]
Bogus - A simple fake data generator for C#, F#, and VB.NET. Based on and ported from the famed faker.js.
NUnit - NUnit Framework
MSTest - MSTest framework and adapter