Python’s “Type Hints” are a bit of a disappointment to me
15 projects | news.ycombinator.com | 21 Apr 2022
(I've chopped it down a bit to emphasise the important bits)
But then I looked at upstream and noticed they've added type annotations, which do greatly improve things:
def _new_conn(self) -> socket.socket:
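A minimal sketch (not urllib3's actual code) of why a return annotation like that helps: a type checker such as mypy then knows the concrete type of the returned value and can flag misuse before runtime. `new_conn` here is a hypothetical stand-in:

```python
import socket

def new_conn(host: str, port: int, timeout: float = 5.0) -> socket.socket:
    """Create (but don't connect) a TCP socket; a hypothetical
    stand-in for an annotated _new_conn."""
    conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    conn.settimeout(timeout)
    return conn

conn = new_conn("example.com", 443)
# mypy would reject e.g. `conn.upper()` here, because the annotation
# tells it `conn` is a socket.socket, not a str.
conn.close()
```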
I updated all my Python modules using pip and now I keep getting an ImportError from the 'collections' module. Any fixes to this issue?
2 projects | reddit.com/r/learnpython | 14 Apr 2022
You have somehow installed an extremely old version of urllib3. It looks like the selectors module was deleted in 2018.
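A first diagnostic step for this kind of problem is checking which version of the package is actually installed. A stdlib-only sketch using `importlib.metadata` (Python 3.8+):

```python
from importlib import metadata
from typing import Optional

def installed_version(dist: str) -> Optional[str]:
    """Return the installed version string of a distribution, or None."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return None

print(installed_version("urllib3"))  # e.g. "1.26.9", or None if not installed
```

If the reported version is ancient, `pip install --upgrade urllib3` (or `--force-reinstall` when a broken copy lingers) should replace it.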
httpx worked fine for me... any reason to consider urllib3?
2 projects | reddit.com/r/Python | 24 Mar 2022
I've found httpx to be very approachable for my API consumption tasks and less wordy than urllib3.
Open source package urllib3 raised $15,000 in 2021
2 projects | reddit.com/r/programming | 30 Dec 2021
Here are the docs if you're interested: https://urllib3.readthedocs.io
Some context on David Lord's entry: he was the first community member we tried paying for a contribution. Here's the PR we merged and paid him for: https://github.com/urllib3/urllib3/pull/2257
HTTP Calls in Python Without requests or Other External Dependencies
6 projects | dev.to | 7 Mar 2021
urllib3 is a dependency of many other tools, including requests. By itself, urllib3 is quite usable; it may be all you need.
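In the spirit of the article's title, a plain HTTP GET needs nothing outside the standard library. A self-contained sketch that spins up a throwaway local server so it runs offline (the handler and URL are illustrative):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
body = resp.read()
print(resp.status, body)  # 200 b'hello'
server.shutdown()
```

What urllib3 layers on top of this is connection pooling, retries, redirects, and TLS verification, which is why so many tools build on it rather than on `http.client` directly.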
Weird architectures weren't supported to begin with
4 projects | news.ycombinator.com | 28 Feb 2021
Alright, let's do some digging...
On 2013-03-21, urllib3 added an optional dependency to pyopenssl for SNI support on python2 - https://github.com/urllib3/urllib3/pull/156
On 2013-12-29, pyopenssl switched from opentls to cryptography - https://github.com/pyca/pyopenssl/commit/6037d073
On 2016-07-19, urllib3 started to depend on a new pyopenssl version that requires cryptography - https://github.com/urllib3/urllib3/commit/c5f393ae3
On 2016-11-15, requests started to depend on a new urllib3 version that now indirectly requires cryptography - https://github.com/psf/requests/commit/99fa7bec
On 2018-01-30, portage started to enable the +rsync-verify USE flag by default, which relies on the gemato python library maintained by mgorny himself, and gemato depended on requests. So 5-6 levels of indirection at this point? I lost count.
On 2020-01-01, python2 was sunset. A painful year to remember, and a painful migration to forget. And just when the year was about to end...
On 2020-12-22, cryptography started to integrate rust in the build process, and all hell broke loose - https://github.com/pyca/cryptography/commit/c84d6ee0
Ultimately, I think mgorny only has himself to blame here, for injecting his own library into the critical path of Gentoo without carefully managing its direct and indirect dependencies. (But of course it is also fair game to blame it on the 2to3 migration)
In comparison, a few months before this, the librsvg package went through a similar change where it started to depend on Rust, and it was swift and painless without much drama - https://bugs.gentoo.org/739820 and https://wiki.gentoo.org/wiki/Project:GNOME/3.36-notes
Best way to run parallel async http requests
3 projects | reddit.com/r/learnpython | 23 Aug 2021
I found examples of running parallel async HTTP requests using grequests, but its GitHub page recommends using requests-threads or requests-futures instead. Which of them would be the most straightforward tool for optimizing a sequence of GET requests against an API? The scenario: the API endpoint provides paginated responses. From the first response I get the total number of items, which lets me prepare all the remaining URLs. The API allows 25 simultaneous requests per user session (JWT token).
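Whichever library is chosen, the shape of the solution is the same: fire off all page requests concurrently, but cap in-flight requests at the API's limit of 25. A stdlib-only asyncio sketch, where `fetch_page` is a stand-in for the real HTTP call:

```python
import asyncio

MAX_IN_FLIGHT = 25  # the API's per-session limit

async def fetch_page(page: int) -> dict:
    """Stand-in for a real async HTTP GET of one paginated URL."""
    await asyncio.sleep(0)  # pretend network I/O
    return {"page": page, "items": []}

async def fetch_all(total_pages: int) -> list:
    sem = asyncio.Semaphore(MAX_IN_FLIGHT)

    async def bounded(page: int) -> dict:
        async with sem:  # never more than 25 requests in flight
            return await fetch_page(page)

    # gather preserves input order, so results line up with page numbers
    return await asyncio.gather(*(bounded(p) for p in range(total_pages)))

results = asyncio.run(fetch_all(100))
print(len(results))  # 100
```

With httpx, `fetch_page` would await an `httpx.AsyncClient` request; with requests-futures, the rough equivalent of the semaphore is the executor's `max_workers` setting.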
Scraping info from multiple links (from inside each link) that are listed on a webpage?
1 project | reddit.com/r/webscraping | 20 Jun 2021
FastAPI on a VPS - what will be the bottleneck
1 project | reddit.com/r/FastAPI | 13 Jan 2021
This video demonstrates hosting FastAPI on an Azure VM with Ubuntu. There are many factors to take into consideration around RAM and bandwidth, but to help you analyze the performance of your deployment, you can use grequests.
Celery vs Threads
1 project | reddit.com/r/django | 12 Jan 2021
If you're doing something in a view that is I/O bound, doesn't take very long, and needs to happen many times in parallel (e.g. hitting 10 different API endpoints), then you might use threads, or something like this, to speed up your view. You can also use an offline task framework like Celery in these cases.
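A hedged sketch of the threads option using only the stdlib; `fetch` stands in for a real I/O-bound call (e.g. `requests.get(url).text`), and the endpoint URLs are made up:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical endpoints; in a Django view these would be the 10 APIs to hit.
ENDPOINTS = [f"https://api.example.com/v1/resource/{i}" for i in range(10)]

def fetch(url: str) -> str:
    """Stand-in for an I/O-bound network call."""
    return f"response from {url}"

# Hit all endpoints in parallel; executor.map preserves input order.
with ThreadPoolExecutor(max_workers=10) as pool:
    responses = list(pool.map(fetch, ENDPOINTS))

print(len(responses))  # 10
```

The trade-off: threads win when the work is short-lived and the result is needed before the response is rendered; Celery wins when the work can finish after the response has been sent.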
What are some alternatives?
requests - A simple, yet elegant, HTTP library.
requests-futures - Asynchronous Python HTTP Requests for Humans using Futures
httplib2 - Small, fast HTTP client library for Python. Features persistent connections, cache, and Google App Engine support. Originally written by Joe Gregorio, now supported by community.
pycurl - PycURL - Python interface to libcurl
FGrequests - Fastest python library for making asynchronous group requests.
Uplink - A Declarative HTTP Client for Python
treq - Python requests like API built on top of Twisted's HTTP client.
Doublify API Toolkit