dotfiles VS CPython

Compare dotfiles vs CPython and see what their differences are.


💻 macOS / Ubuntu dotfiles (by alrra)


The Python programming language (by python)
              dotfiles       CPython
Mentions      2              1,288
Stars         1,423          58,451
Growth        -              1.7%
Activity      7.3            9.9
Last commit   4 days ago     3 days ago
Language      Shell          Python
License       MIT License    GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.


Posts with mentions or reviews of dotfiles. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-14.


Posts with mentions or reviews of CPython. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-14.
  • PySimpleGUI 4 will be sunsetted in Q2 2024
    7 projects | 14 Feb 2024
    You missed that they gave an example that does work—Java Swing is bundled with the JVM, making it more or less part of the standard library. Python itself also has Tkinter, which exists inside the cpython repo and is installed with Python [0].

    C++ may not work, but most other languages (especially VM-based) can and many do.


  • Memray – A Memory Profiler for Python
    10 projects | 10 Feb 2024
    I collected a list of profilers (also memory profilers, also specifically for Python) here:

    Currently I actually need a Python memory profiler, because I want to figure out whether there is some memory leak in my application (PyTorch based training script), and where exactly (in this case, it's not a problem of GPU memory, but CPU memory).
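As a stdlib-only baseline for this kind of hunt (not one of the profilers discussed), tracemalloc can already produce the "top lines by allocation" view; a minimal sketch, with a stand-in workload:

```python
import tracemalloc

tracemalloc.start()

# Stand-in for the suspect workload (e.g. a training loop)
leaky = [bytes(1000) for _ in range(1000)]

snapshot = tracemalloc.take_snapshot()
# Group allocations by source line and show the heaviest ones
stats = snapshot.statistics("lineno")
for stat in stats[:5]:
    print(stat)
```

Taking two snapshots and calling `snapshot2.compare_to(snapshot1, "lineno")` is often more useful for leaks, since it shows only the growth between the two points.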

    I tried Scalene, which seems to be powerful, but somehow the output it gives me is not useful at all? It doesn't really give me a flamegraph, or a list of the top lines with memory allocations; instead it gives me a listing of all source code lines and prints some (very sparse) information on each line. So I need to search through that listing by hand to find the spots? Maybe I just don't know how to use it properly.

    I tried Memray, but first ran into an issue; after using some workaround, it worked. I get a flamegraph out, but it doesn't really seem accurate? After a while, there don't seem to be any new memory allocations at all anymore, and I don't quite trust that this is correct.

    There is also Austin, which I also wanted to try (I have not yet).

    Somehow this experience so far was very disappointing.

    (Side note: I debugged some very strange memory allocation behavior of Python before, where all local variables were kept around after an exception, even though I made sure there was no reference anymore to the exception object, to the traceback, etc., and I even called frame.clear() for all frames to really clear them. It turns out that frame.f_locals creates another copy of all the local variables, and the exception object and all the locals in the other frame stay alive until you access frame.f_locals again. At that point, it syncs f_locals with the real (fast) locals again, and then it can finally free everything. It was quite annoying to find the source of this problem and to find workarounds for it.)
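    The stale-snapshot behavior described here can be reproduced directly; a minimal sketch (the snapshot semantics hold on CPython before 3.13, where PEP 667 replaced the copied dict with a write-through proxy; `leak_demo` is an illustrative name):

```python
import sys
import weakref

class Obj:
    pass

def leak_demo():
    obj = Obj()
    ref = weakref.ref(obj)
    frame = sys._getframe()
    snapshot = frame.f_locals   # pre-3.13: copies the fast locals into a dict
    del obj                     # drop the fast local itself
    alive_after_del = ref() is not None
    snapshot = frame.f_locals   # re-sync: stale entries are pruned from the dict
    alive_after_resync = ref() is not None
    return alive_after_del, alive_after_resync

res = leak_demo()
print(res)   # (True, False) on CPython < 3.13: the object survives until the re-sync
```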

  • Setting Up the Environment
    2 projects | 15 Jan 2024
    Python, a versatile and powerful programming language, can be easily installed from its official website. This section will guide you through the installation process on various operating systems, including Windows, macOS, and Linux.
  • Python 3.13 Gets a JIT
    11 projects | 9 Jan 2024
    The PR message with a riff off the Night Before Christmas is gold.

  • The browsers biggest TLS mistake
    2 projects | 8 Jan 2024
    Related: there is a 10+ year-old Python issue about implementing "AIA chasing" to handle server misconfigurations as described in this article. The article mentions this approach in the last paragraph.

    There is at least one 3rd party Python lib that does that, if you are interested in details on how this works:

  • Server side(Backend) programming languages
    4 projects | 5 Jan 2024
  • 50 Algorithms Every Programmer Should Know (Second Edition)
    3 projects | 3 Jan 2024
    Python has its own set type not based on hashmap/dict, and that's been the case for years.

    The set implementation points out some of the differences:

       Unlike the dictionary implementation, the lookkey function can return
  • You can't do that because I hate you
    9 projects | 28 Dec 2023
    Except they didn't -- the result of a statement is turned into a string, and the string is printed. There are standard ways of turning objects into strings, and the `__repr__` method on the `exit` object returns that string. If you call that object, it raises an exception that causes the REPL to cleanly quit.

    The code is here:
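    A minimal sketch of that mechanism, as a simplified stand-in for CPython's site.Quitter (the real one also closes sys.stdin before raising):

```python
class Quitter:
    """Simplified stand-in for the `exit`/`quit` objects the REPL installs."""
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        # The REPL prints this string when you type `exit` without calling it
        return f'Use {self.name}() or Ctrl-D (i.e. EOF) to exit'

    def __call__(self, code=None):
        # Calling it raises SystemExit, which the REPL handles by quitting cleanly
        raise SystemExit(code)

exit_obj = Quitter('exit')
print(repr(exit_obj))
```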

  • A copy-and-patch JIT compiler for CPython
    4 projects | 26 Dec 2023
    It's all explained, including a 50 minutes talk, in the linked issue:
    4 projects | 26 Dec 2023
    > What I wonder is if the current approach, stated as "copy-and-patch auto-generated code for each opcode", can ever reach that point without being replaced by a completely different design along the way.

    Of course this approach produces worse code than a full compiler by definition: stencils are too rigid to be optimized further. A stencil conceptually maps to a single opcode, so the only way to break out of this restriction is to add more opcodes, and there are only so many opcodes and stencils you can prepare. But I think you are putting too much weight on the possibility of making Python as fast as, say, C for at least some cases. I believe that won't happen at all, and the current approach clearly shows why.

    Let's consider a simple CPython opcode named `BINARY_ADD`, which has a stack effect of `(a b -- sum)`. Ideally it would eventually be compiled down to fully specialized machine code, something like `add rax, r12`, plus some guards. But the actual implementation (`PyNumber_Add` [1]) is far more complex: it may make up to three "slot" calls that add or concatenate the arguments, some of which may call back into Python code.

    So let's assume we have done type specialization and the arguments are known to be integers. That results in a single slot call to `PyLong_Add` [2], which is still complex because CPython has two integer representations. Even when both are "compact", i.e. at most 31/63 bits long, it may still have to switch to the other representation when the resulting sum is no longer compact. So fully specialized machine code would only be possible when both arguments are known to be integers, compact, and have one spare bit to prevent an overflow. That sounds far more restrictive.
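    The guard conditions in that last sentence can be sketched in Python (the name and the exact bit budget are illustrative, not CPython's actual code):

```python
def can_fast_add(a, b, bits=63):
    """Guards a fully specialized integer-add stencil would need before
    a single machine `add` is safe (illustrative sketch)."""
    # Both operands must be exact ints (no subclasses, no floats)
    if type(a) is not int or type(b) is not int:
        return False
    # "Compact" with one spare bit: the sum of two values in [-2**61, 2**61)
    # cannot overflow a 63-bit signed representation
    half = 1 << (bits - 2)
    return -half <= a < half and -half <= b < half

print(can_fast_add(1, 2), can_fast_add(2**62, 1), can_fast_add(1.0, 2))
# → True False False
```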



    An uncomfortable truth is that all these observations apply almost perfectly to JavaScript as well: the slot resolution would be the `[[ToNumber]]` internal function, and the multiple representations would be something like V8's Smi. Modern JS engines do exploit most of these opportunities, but at the expense of an extremely large codebase with tons of potential attack surface. It is really expensive to maintain, and people don't realize that no performant JS engine is developed by a small group of developers. You have to cut some corners.

    In comparison, CPython's approach is essentially inside out. Any JIT implementation will require you to split all those subtasks into small bits that can be either optimized out or baked into a generated machine code. So what if we start with subtasks without thinking about JIT in the first place? This is what a specializing adaptive interpreter [3] did. The current CPython already has two tiers of interpreters, and micro-opcodes can only appear in the second tier. With them we can split larger opcodes into smaller ones, possibly with optimizations, but its performance is limited by the dispatch logic. The copy-and-patch JIT is not as powerful, but it does eliminate the dispatch logic without large design changes and it's a good choice for this purpose.
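    On CPython 3.11+ you can watch the specializing adaptive interpreter do this: dis can show the quickened bytecode after a function has been warmed up (a sketch; the exact specialized opcode names vary by version):

```python
import dis
import sys

def add(a, b):
    return a + b

# Warm the function up so the adaptive interpreter can specialize it
for _ in range(10_000):
    add(1, 2)

if sys.version_info >= (3, 11):
    # adaptive=True shows the quickened bytecode; after warm-up the generic
    # BINARY_OP may have been replaced by e.g. BINARY_OP_ADD_INT
    dis.dis(add, adaptive=True)
else:
    dis.dis(add)
```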


What are some alternatives?

When comparing dotfiles and CPython you can also consider the following projects:

RustPython - A Python Interpreter written in Rust

ipython - Official repository for IPython itself. Other repos in the IPython organization contain things like the website, documentation builds, etc.

Vulpix - Fast, unopinionated, minimalist web framework for .NET core inspired by express.js

Visual Studio Code - Visual Studio Code

Automatic-Udemy-Course-Enroller-GET-PAID-UDEMY-COURSES-for-FREE - Do you want to LEARN NEW STUFF for FREE? Don't worry, with the power of web-scraping and automation, this script will find the necessary Udemy coupons & enroll you for PAID UDEMY COURSES, ABSOLUTELY FREE!

Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more

Camunda BPM - Flexible framework for workflow and decision automation with BPMN and DMN. Integration with Quarkus, Spring, Spring Boot, CDI.

Django - The Web framework for perfectionists with deadlines.

go - The Go programming language

Plex-Meta-Manager - Python script to update metadata information for items in plex as well as automatically build collections and playlists. The Wiki Documentation is linked below.

git - A fork of Git containing Windows-specific patches.

node - Node.js JavaScript runtime ✨🐢🚀✨