picka vs Schemathesis

| | picka | Schemathesis |
|---|---|---|
| Mentions | - | 23 |
| Stars | 111 | 2,250 |
| Growth | - | 1.3% |
| Activity | 0.0 | 9.8 |
| Last Commit | about 5 years ago | 5 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
picka
We haven't tracked posts mentioning picka yet.
Tracking mentions began in Dec 2020.
Schemathesis
- Ask HN: Any Good Fuzzer for gRPC?
I am not aware of any tools like that, but eventually, I plan to add support for gRPC fuzzing to Schemathesis. There were already some discussions and it is more or less clear how to move forward. See https://github.com/schemathesis/schemathesis/discussions/190...
- Show HN: Auto-generate load tests/synthetic test data from OpenAPI spec/HAR file
Why is AI needed for this at all? Have you heard about Schemathesis (https://github.com/schemathesis/schemathesis)?
- A Tale of Two Kitchens - Hypermodernizing Your Python Code Base
Schemathesis is a powerful tool, especially when working with web APIs, and here's how it can enhance your testing capabilities:
- Hurl 4.0.0
- OpenAPI v4 Proposal
I'm sorry, but you have completely misunderstood the purpose of OpenAPI.
It is not a specification to define your business logic classes and objects -- either client or server side. Its goal is to define the interface of an API, and to provide a single source of truth that requests and responses can be validated against. It contains everything you need to know to make requests to an API; code generation is nice to have (and I use it myself, but mainly on the server side, for routing and validation), but not something required or expected from OpenAPI.
For what it's worth, my personal preferred workflow to build an API is as follows:
1. Build the OpenAPI spec first. A smaller spec could easily be done by hand, but I prefer using a design tool like Stoplight [0]; it has the best Web-based OpenAPI (and JSON Schema) editor I have encountered, and integrates with git nearly flawlessly.
2. Use an automated tool to generate the API code implementation. Again, a static generation tool such as datamodel-code-generator [1] (which generates Pydantic models) would suffice, but for Python I prefer the dynamic request routing and validation provided by pyapi-server [2].
3. Finally, I use automated testing tools such as schemathesis [3] to test the implementation against the specification (a minimal sketch follows the links below).
[0] https://stoplight.io/
[1] https://koxudaxi.github.io/datamodel-code-generator/
[2] https://pyapi-server.readthedocs.io
[3] https://schemathesis.readthedocs.io
- Faster time-to-market with API-first
Consolidating the API specification with OpenAPI was a turning point for the project. From that moment we were able to run mock servers to build and test the UI before integrating with the backend, and we were able to validate the backend implementation against the specification. We used prism to run mock servers, and Dredd to validate the server implementation (these days I’d rather use schemathesis).
- Show HN: Step CI – API Testing and Monitoring Made Simple
- API-first development maturity framework
In this approach, you produce an API specification first, then you build the API against the specification, and then you validate your implementation against the specification using automated API testing tools. This is the most reliable approach for building API servers, since it's the only one that holds the server accountable and validates the implementation against the source of truth.

Unfortunately, this approach isn't as common as it should be. One of the reasons is that it requires you to produce the API specification first, which, as we saw earlier, puts off many developers who don't know how to work with OpenAPI. However, as I said before, generating OpenAPI specifications doesn't need to be painful, since you can use tools for that.

In this approach, you use automated API testing tools, such as Dredd and schemathesis, to validate your implementation. These tools work by parsing your API specification and automatically generating tests that ensure your implementation complies with the specification. They look at every aspect of your API implementation, including use of headers, status codes, compliance with schemas, and so on. The most advanced of these tools at the moment is schemathesis, which I highly encourage you to check out.
- How do you manage microservices API versions and branching strategies?
Keep all API versions in the code

Another strategy is to have all the different API versions in the same code. So you may have a folder structure that looks like this:

    api
    ├── v1
    └── v2

Within the API folder, you have one folder for v1 and another one for v2. Each folder has its own schemas and routes as required by the API version it implements. If you use URL-based versioning, v1 is accessible through the example.com/v1 endpoint or the v1.example.com subdomain (whichever strategy you use), and the same goes for v2. Deprecating a version is as simple as deleting its corresponding folder.

In any case, I'd recommend you also validate your API implementations in CI using something like schemathesis. Schemathesis looks at the API documentation and automatically generates hundreds of tests to make sure you're using the right schemas, status codes, and so on. It works best if you design and document the API before implementing, which allows you to include OpenAPI links and other features.
- This Week in Python
schemathesis – Run generated test scenarios based on your OpenAPI specification
What are some alternatives?
faker - Faker is a Python package that generates fake data for you.
dredd - Language-agnostic HTTP API Testing Tool
fake2db - create custom test databases that are populated with fake data
Robot Framework - Generic automation framework for acceptance testing and RPA
radar
pytest - The pytest framework makes it easy to write small tests, yet scales to support complex functional testing
FauxFactory - Generates random data for your tests.
coverage
Mimesis - Mimesis is a robust data generator for Python that can produce a wide range of fake data in multiple languages.
drf-openapi-tester - Test utility for validating OpenAPI documentation
PyRestTest - Python Rest Testing
tox - Command line driven CI frontend and development task automation tool.