CrossHair
pynguin
| | CrossHair | pynguin |
|---|---|---|
| Mentions | 8 | 11 |
| Stars | 948 | 1,197 |
| Growth (stars, month over month) | - | 1.3% |
| Activity | 9.2 | 8.2 |
| Last commit | 6 days ago | 1 day ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
CrossHair
-
Try CrossHair while working on other Python projects
Writing some Python for Hacktoberfest? Try out CrossHair while you do that and get credit for a blog post too! https://github.com/pschanely/CrossHair/issues/173
-
What are some amazing, great python external modules, libraries to explore?
CrossHair, Hypothesis, and Mutmut for advanced testing.
-
Formal Verification Methods in industry
When you say "formal verification methods", what kind of techniques are you interested in? While interactive theorem provers will most likely not become very widespread, there are plenty of tools that use formal techniques to give stronger correctness guarantees. These tools give some guarantees, but do not guarantee complete functional correctness. WireGuard (a VPN tunnel) is, I think, a very interesting application: its protocol has been formally verified. There are also some tools in use, e.g. Mythril and CrossHair, that focus on detecting bugs using symbolic execution. There's also Infer from Facebook/Meta, which tries to verify memory safety automatically. The following GitHub repo might also interest you; it lists some companies that use formal methods: practical-fm
-
Klara: Python automatic test generations and static analysis library
The main difference that Klara brings to the table, compared to similar tools like pynguin and CrossHair, is that the analysis is entirely static, meaning that no user code will be executed, and you can easily extend the test generation strategy via plugin loading (e.g. the options arg to the Component object returned from the function above is not needed for test coverage).
-
Pynguin – Allow developers to generate Python unit tests automatically
Just in case you are looking for an alternative approach: if you write contracts in your code, you might also consider CrossHair [1] or icontract-hypothesis [2]. If your function/method does not need any pre-conditions, then the type annotations can be used directly.
(I'm one of the authors of icontract-hypothesis.)
[1] https://github.com/pschanely/CrossHair
[2] https://github.com/mristin/icontract-hypothesis
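For a sense of what contract-driven checking looks like, here is a minimal sketch using the PEP-316-style docstring contracts that CrossHair understands (the function and its contract are illustrative examples, not taken from the thread above):

```python
def average(numbers: list) -> float:
    """
    A contract in PEP-316 docstring style: `pre:` states what callers
    must guarantee, `post:` states what the function promises in return.

    pre: len(numbers) > 0
    post: min(numbers) <= __return__ <= max(numbers)
    """
    return sum(numbers) / len(numbers)


print(average([1, 2, 3]))  # prints 2.0
```

Running `crosshair check` on a module containing such a function asks an SMT solver to search for inputs that satisfy the precondition yet violate the postcondition, rather than sampling random inputs the way property-based testing does.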
-
Programming in Z3 by learning to think like a compiler
There's a tool for verification of Python programs based on contracts which uses Z3: https://github.com/pschanely/CrossHair
You can use it as part of your CI or during the development (there's even a neat "watch" mode, akin to auto-correct).
- Diff the behavior of two Python functions
-
Finding Software Bugs Using Symbolic Execution
Looking at some of your SMT-based projects, I'd love to compare your SMT solver notes with mine from working on https://github.com/pschanely/CrossHair
Sadly, there aren't a lot of resources on how to use SMT solvers well.
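As a toy illustration of the idea (all names here are hypothetical), symbolic execution amounts to asking a solver whether any input can reach a bad program state. The brute-force search below mimics that query concretely; a real symbolic executor would hand the path condition `x - 7 == 0` to an SMT solver and get the answer back directly:

```python
def buggy(x: int) -> int:
    # Crashes when x == 7: the denominator becomes zero.
    return 100 // (x - 7)


def find_crash(f, domain):
    """Search `domain` for an input that makes `f` raise ZeroDivisionError.

    This enumerates inputs one by one; symbolic execution replaces the
    loop with a single constraint query over all inputs at once.
    """
    for x in domain:
        try:
            f(x)
        except ZeroDivisionError:
            return x
    return None


print(find_crash(buggy, range(-100, 100)))  # prints 7
```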
pynguin
-
There is framework for everything.
https://swagger.io/specification/ https://github.com/se2p/pynguin
-
Supposed to create tests for a massive project, how should I go about it?
I would use black to reformat this; then, if you can't refactor/rewrite (which is a lot of work!), I would try automated test generation via something like pynguin, or fuzzing. I mean … this is not going to be a reliable solution anyway if the codebase is like that. So I would go in a direction that I find interesting to learn about and that could be helpful for the project: generating tests and doing fuzzing. In the end you should run some linters anyway, so that you can justify your results and show that the task is not in the scope of an internship and needs extensive refactoring.
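A minimal sketch of the fuzzing approach suggested above (the function under test and the harness are hypothetical; real fuzzing would use a tool like pynguin or a coverage-guided fuzzer rather than this naive loop):

```python
import random


def slugify(text: str) -> str:
    # Hypothetical function under test: lowercase and join words with dashes.
    return "-".join(text.lower().split())


def fuzz(func, trials: int = 1000, seed: int = 0):
    """Feed random short strings to `func`; collect inputs that break it.

    The seed makes the run reproducible, which matters when you want to
    turn a crashing input into a regression test afterwards.
    """
    rng = random.Random(seed)
    alphabet = " abcXYZ-_!0123456789"
    failures = []
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randrange(0, 20)))
        try:
            result = func(s)
            assert isinstance(result, str)  # basic sanity property
        except Exception as exc:
            failures.append((s, exc))
    return failures


print(len(fuzz(slugify)))  # prints 0: no crashing inputs found
```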
-
Klara: Python automatic test generations and static analysis library
The main difference that Klara brings to the table, compared to similar tools like pynguin and CrossHair, is that the analysis is entirely static, meaning that no user code will be executed, and you can easily extend the test generation strategy via plugin loading (e.g. the options arg to the Component object returned from the function above is not needed for test coverage).
-
Does anybody know a simple algorithm for generating unit tests given a function's code?
Automated White-box test generation software: * https://github.com/EMResearch/EvoMaster -- for integration tests. * https://github.com/se2p/pynguin, https://pynguin.readthedocs.io/en/latest/user/quickstart.html -- unit test generation for python
- se2p/pynguin Pynguin, the PYthoN General UnIt test geNerator, is a tool that allows developers to generate unit tests automatically.
-
Hacker News top posts: Jun 1, 2021
Pynguin – Generate Python unit tests automatically (60 comments)
- Pynguin – Generate Python unit tests automatically
- Pynguin – Allow developers to generate Python unit tests automatically
What are some alternatives?
icontract-hypothesis - Combine contracts and automatic testing.
EvoMaster - The first open-source AI-driven tool for automatically generating system-level test cases (also known as fuzzing) for web/enterprise applications. Currently targeting whitebox and blackbox testing of Web APIs, like REST, GraphQL and RPC (e.g., gRPC and Thrift).
angr - A powerful and user-friendly binary analysis platform!
klara - Automatic test case generation for python and static analysis library
alive2 - Automatic verification of LLVM optimizations
klee - KLEE Symbolic Execution Engine
methods2test - methods2test is a supervised dataset consisting of Test Cases and their corresponding Focal Methods from a set of Java software repositories
miasm - Reverse engineering framework in Python
code - Example application code for the python architecture book
boofuzz - A fork and successor of the Sulley Fuzzing Framework
wily - A Python application for tracking, reporting on timing and complexity in Python code