Top 17 Python Code Quality Projects
- flake8-bugbear: A Flake8 plugin that finds likely bugs and design problems in your program. It contains warnings that don't belong in pyflakes or pycodestyle.
- betterscan-ce: Code scanning/SAST/static analysis/linting using many tools and scanners plus OpenAI GPT, with one report (code, IaC). Betterscan Community Edition (CE).
- treeage: Exposes aging code by listing the contents of a repository in a tree-like format with an eye-catching age metric.
- flake8-warnings: A Python linter (flake8, pylint, CLI) that warns you about using deprecated modules, classes, and functions.
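The deprecations such a linter detects statically are usually signalled at runtime through Python's `DeprecationWarning` mechanism. A minimal sketch of that mechanism, assuming hypothetical `old_api`/`new_api` helpers:

```python
import warnings

def new_api():
    """Replacement helper (hypothetical)."""
    return 42

def old_api():
    """Deprecated helper; call sites like this are what a deprecation linter flags."""
    warnings.warn("old_api() is deprecated; use new_api() instead",
                  DeprecationWarning, stacklevel=2)
    return new_api()

# Capture the warning instead of printing it to stderr.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api()

print(result)                       # 42
print(caught[0].category.__name__)  # DeprecationWarning
```

A static linter spares you from relying on these runtime warnings, which are often silenced by default filter settings.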
- Pixeebot: Finds security and code quality issues in your code and inbound pull requests, and creates merge-ready pull requests with recommended fixes. Pixeebot integrates with third-party security tools such as Sonar, Semgrep, and CodeQL to automatically fix findings from each tool's scans.
A brief introduction to Pylint: Pylint is a static code analyzer, meaning it analyses your code without actually running it. Pylint looks for potential errors, suggests coding standards your code is not adhering to, points out places where refactoring might help, and warns about smelly code.
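One example of the kind of subtle error Pylint catches is its singleton-comparison check, which flags `== None` in favour of `is None`. The reason is that equality can be overridden while identity cannot; the `AlwaysEqual` class below is a made-up illustration:

```python
# Hypothetical class whose instances claim equality with everything,
# showing why comparing to None with `==` is unreliable.

class AlwaysEqual:
    def __eq__(self, other):
        return True

obj = AlwaysEqual()

print(obj == None)  # True  (Pylint would flag this comparison)
print(obj is None)  # False (the identity check cannot be fooled)
```

Real-world versions of this bug appear with ORMs and numeric libraries that override `__eq__` for their own purposes.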
Project mention: A Tale of Two Kitchens - Hypermodernizing Your Python Code Base | dev.to | 2023-11-12
Bugbear is not specifically a security tool, but it serves as an effective guard against common coding errors and pitfalls. It pinpoints frequent mistakes, such as setting a list as the default value for a parameter, and cautions against such practices, enhancing code robustness.
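The mutable-default pitfall mentioned above (flake8-bugbear's B006 check) can be sketched as follows. The default list is created once, when the function is defined, so every call that falls back on it shares the same object:

```python
def buggy_append(item, bucket=[]):   # flagged by bugbear: mutable default
    bucket.append(item)
    return bucket

def safe_append(item, bucket=None):  # idiomatic fix: default to None
    if bucket is None:
        bucket = []                  # fresh list on every call
    bucket.append(item)
    return bucket

print(buggy_append(1))  # [1]
print(buggy_append(2))  # [1, 2] -- state leaked between calls
print(safe_append(1))   # [1]
print(safe_append(2))   # [2]    -- no leakage
```

The `None` sentinel pattern in `safe_append` is the standard remediation the linter nudges you toward.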
Project mention: Translating Python Docstrings with Gpt4docstrings | news.ycombinator.com | 2023-11-27
Project mention: Large Language Models Are State-of-the-Art Evaluators of Code Generation | /r/BotNews | 2023-04-28
Recent advancements in the field of natural language generation have facilitated the use of large language models to assess the quality of generated text. Although these models have shown promising results in tasks such as machine translation and summarization, their applicability in code generation tasks remains limited without human involvement. The complexity of programming concepts required for such tasks makes it difficult to develop evaluation metrics that align with human judgment. Token-matching-based metrics, such as BLEU, have demonstrated weak correlations with human practitioners in code generation tasks. Moreover, the utilization of human-written test suites to evaluate functional correctness can be challenging in domains with low resources. To overcome these obstacles, we propose a new evaluation framework based on GPT-3.5 (`GPT-3.5-turbo`) for code generation assessments. Our framework addresses the limitations of existing approaches by achieving superior correlations with functional correctness and human preferences, without the need for test oracles or references. We evaluate the efficacy of our framework on two different tasks and four programming languages, comparing its performance with the state-of-the-art CodeBERTScore metric, which relies on a pre-trained model. Our results demonstrate that our framework surpasses CodeBERTScore, delivering high levels of accuracy and consistency across various programming languages and tasks. We also make our evaluation framework and datasets available to the public at https://github.com/terryyz/llm-code-eval, encouraging further research in the evaluation of code generation.
Project mention: Check out pynalyzer - easy to use meta static code analysis bundle | /r/learnpython | 2023-07-06
Here are the links:
- PyPI: https://pypi.org/project/pynalyzer/
- GitHub: https://github.com/Devourian/pynalyzer
Feel free to ask anything about it here and/or report an issue on GitHub if something doesn't seem to work :)
Project mention: Show HN: Pixeebot – a GitHub App that fixes your Sonar findings (Java/Python) | news.ycombinator.com | 2024-03-25https://github.com/pixee/pygoat/pull/2/files
The changes aren't all super fancy, but we're orienting towards solving real problems and remediating issues -- grunt work you don't want to have to do, but compliance says you should (and you probably should)!
Right now, we fix around 25 of the things that Sonar commonly finds (and a lot more that it doesn't find!). You can see the complete list of things we fix here:
https://docs.pixee.ai/codemods/overview/
I'll tell you, it's so much nicer to receive PRs than tool warnings.
To try it out:
1. Install the Pixeebot GitHub App on a Sonar-monitored GitHub repository
- https://github.com/apps/pixeebot
Python Code Quality related posts
- Xcode debugging cheatsheet
- W1203: logging-fstring-interpolation (Solved)
- Translating Python Docstrings with Gpt4docstrings
- gpt4docstrings: Automatically generate docstrings for entire projects using ChatGPT
- gpt4docstrings to write docstrings for your Python code using GPT-3.5
- Gpt4docstrings: A GPT-based Python library to generate multi-style docstrings
- A Python package to automatically generate docstrings using GPT
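One of the posts above covers Pylint's W1203 (logging-fstring-interpolation), which flags f-strings inside logging calls: the f-string is rendered eagerly even when the record's level means it will be discarded, whereas lazy `%`-style arguments are only formatted if a handler actually emits the record. A small sketch, using a hypothetical `CountingStr` helper to count renderings:

```python
import logging

class CountingStr:
    """Counts how many times it is rendered to a string."""
    def __init__(self):
        self.calls = 0
    def __str__(self):
        self.calls += 1
        return "value"

logger = logging.getLogger("w1203-demo")
logger.setLevel(logging.WARNING)   # DEBUG records will be discarded

eager, lazy = CountingStr(), CountingStr()

# W1203: the f-string is rendered immediately, even though the
# DEBUG record is thrown away by the level check.
logger.debug(f"state={eager}")

# Lazy %-style arguments are never formatted here, because the
# level check short-circuits before formatting.
logger.debug("state=%s", lazy)

print(eager.calls, lazy.calls)   # 1 0
```

For hot code paths or expensive `__str__` implementations, that deferred formatting is the whole point of the warning.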
A note from our sponsor - InfluxDB
www.influxdata.com | 26 Apr 2024
Index
What are some of the best open-source Code Quality projects in Python? This list will help you:
| # | Project | Stars |
|---|---------|-------|
| 1 | chisel | 9,088 |
| 2 | Pylint | 5,110 |
| 3 | wemake-python-styleguide | 2,426 |
| 4 | flake8-bugbear | 1,037 |
| 5 | betterscan-ce | 683 |
| 6 | PEP 8 Speaks | 603 |
| 7 | ocstyle | 255 |
| 8 | gpt4docstrings | 101 |
| 9 | ice-score | 60 |
| 10 | treeage | 38 |
| 11 | flake8-todos | 26 |
| 12 | ContrXT | 23 |
| 13 | flake8-length | 22 |
| 14 | typeforce | 20 |
| 15 | flake8-warnings | 11 |
| 16 | pynalyzer | 2 |
| 17 | Pixeebot | - |