baseplate.py vs warehouse

| | baseplate.py | warehouse |
|---|---|---|
| Mentions | 11 | 277 |
| Stars | 530 | 3,480 |
| Growth | 0.2% | 0.8% |
| Activity | 8.5 | 9.7 |
| Latest commit | 2 days ago | 7 days ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
baseplate.py
-
The .zip TLD sucks and it needs to be immediately revoked.
Almost any download link on the internet (for example, an attachment to any WordPress blog post) could serve as an example. Since we are on a programming subreddit, let's use GitHub as an example. When you open a repository and click the "Download ZIP" button, it's a link straight to a URL like this one: https://github.com/reddit/baseplate.py/archive/refs/heads/develop.zip (this particular one is for Reddit's Python library).
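To make the ambiguity concrete, here is a quick check (the domain name is hypothetical) showing that a string which reads like a ZIP file name now parses as a live hostname under the .zip TLD:

```python
from urllib.parse import urlparse

# With .zip registered as a real TLD, a filename-looking string such as
# "backup.zip" in a link can be a resolvable domain, not a downloaded file.
u = urlparse("https://backup.zip/payload.exe")
print(u.hostname)  # -> backup.zip
```

This is exactly why readers can no longer tell at a glance whether "something.zip" in a message is an attachment or a link to an attacker-controlled site.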
-
Python use by SWEs
Even Reddit has Python backends (https://github.com/reddit/baseplate.py), based on Pyramid. They also have a Go one: https://github.com/reddit/baseplate.go
-
Reddit System Design/Architecture
There's a multitude of services in Reddit's architecture. As far as I can tell, they're mostly using Reddit's Baseplate framework (which has implementations in both Python and Go).
-
Reddit Recap Series: Backend Performance Tuning
Finally, the problem that we didn’t experience directly, but that was mentioned during consultations with another team that had experience with pgBouncer: the Baseplate.py framework that both of us are using sometimes leaked connections, leaving them open after the request instead of returning them to the pool.
-
What is an example of a fully finished Python software product on GitHub?
-
Is the Pyramid framework dead?
Also, the Reddit team is using Pyramid for services: https://github.com/reddit/baseplate.py
-
is flask only for "smaller" projects or it can also be used for large scalable projects ?
Reddit is built on neither Flask nor Django. The old monolith predates Flask and Django and is built on its own framework. Our new microservices are built on our Baseplate.py framework.
-
Evolving Reddit’s ML Model Deployment and Serving Architecture
Minsky is an internal baseplate.py (Reddit’s Python web services framework) Thrift service owned by Reddit’s Machine Learning team that serves data, or derivations of data, related to content relevance heuristics — such as similarity between subreddits, a subreddit’s topic, or a user’s propensity for a given subreddit — from various data stores such as Cassandra or in-process caches. Clients of Minsky use this data to improve Redditors’ experiences with the most relevant content. Over the last few years a set of new ML capabilities, referred to as Gazette, were built into Minsky. Gazette is responsible for serving ML model inferences for personalization tasks, along with configuration-based schema resolution and feature fetching/transformation.
-
Deadline Budget Propagation for Baseplate.py
Baseplate is implemented in Python and Go, and although the two share the same main functionality, smaller features differ between them. One such feature, previously present in the Go implementation but not the Python one, was deadline budget propagation, which passes the remaining timeout from the initial client request all the way through the server and any requests that follow. The lack of this feature in Baseplate.py meant that servers wasted many resources doing unnecessary work even though clients, having timed out, were no longer awaiting the response.
-
Solving The Three Stooges Problem
In order to make this solution work, you’ll need a web stack that can handle many concurrent requests. Reddit’s stack for most microservices is Python 3, Baseplate, and gevent. Django/Flask also work well when run with gevent. gevent is a Python library that transparently enables your microservice to handle high concurrency and I/O without requiring changes to your code. It is the secret sauce that allows you to run tens of thousands of pseudo-threads called greenlets (one per concurrent request) on a small number of instances. It allows for threads handling concurrent duplicate requests to be enqueued while waiting to acquire the lock, and then for those queues to be drained as threads acquire the lock and execute serially, all without exhausting the thread pool.
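A minimal stdlib sketch of that coalescing pattern, using OS threads in place of gevent greenlets (all names here are hypothetical; under gevent's monkey-patching, the same locking code runs on greenlets):

```python
import threading

_cache = {}
_locks = {}
_locks_guard = threading.Lock()

def fetch(key, compute):
    """Coalesce concurrent duplicate requests: the first caller to acquire
    the per-key lock does the expensive work; the rest block on that same
    lock, then serve the now-cached result."""
    if key in _cache:
        return _cache[key]
    with _locks_guard:
        lock = _locks.setdefault(key, threading.Lock())
    with lock:
        if key not in _cache:           # re-check after acquiring the lock
            _cache[key] = compute(key)  # only one caller runs this
        return _cache[key]
```

The double-checked lookup inside the lock is what drains the queue of waiters serially without recomputing, mirroring the enqueue-then-drain behavior described above.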
warehouse
-
Create an AI prototyping environment using Jupyter Lab IDE with Typescript, LangChain.js and Ollama for rapid AI prototyping
pip install PackageName: installs a package (you can browse the available packages in the Python Package Index)
-
Smooth Packaging: Flowing from Source to PyPi with GitLab Pipelines
python3 -m pip install \
    --trusted-host test.pypi.org --trusted-host test-files.pythonhosted.org \
    --index-url https://test.pypi.org/simple/ \
    --extra-index-url https://pypi.org/simple/ \
    piper_whistle==$(python3 -m src.piper_whistle.version)
-
Pickling Python in the Cloud via WebAssembly
In my experience so far, I can use a vast amount of the Python Standard Library to build Wasm-powered serverless applications. The caveat I currently understand is that Python’s implementation of TCP and UDP sockets, as well as Python libraries that use threads, processes, and signal handling behind the scenes, will not compile to Wasm. It is worth noting that a similar caveat exists with libraries that I find on The Python Package Index (PyPI) site. While these caveats might limit what can be compiled to Wasm, there are still a ton of extremely powerful libraries to leverage.
-
Introducing Flama for Robust Machine Learning APIs
We believe that Poetry is currently the best tool for this purpose, besides being the most popular one at the moment. This is why we will use Poetry to manage the dependencies of our project throughout this series of posts. Poetry allows you to declare the libraries your project depends on, and it will manage (install/update) them for you. Poetry also lets you package your project into a distributable format and publish it to a repository, such as PyPI. We strongly recommend learning more about this tool by reading the official documentation.
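Poetry records those dependency declarations in `pyproject.toml`. A minimal sketch (project name, author, and versions are hypothetical, not taken from the Flama post):

```toml
[tool.poetry]
name = "flama-demo"
version = "0.1.0"
description = "Example package managed with Poetry"
authors = ["Jane Dev <jane@example.com>"]

[tool.poetry.dependencies]
python = "^3.10"
flama = "*"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```

With this file in place, `poetry install` resolves and installs the dependencies, and `poetry build` / `poetry publish` produce and upload the distributable artifacts.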
-
PyPI Packaging
From there, I needed to learn a bit about PyPI, the Python Package Index, which is the home for all the wonderful packages you know if you have ever run the handy pip install command. PyPI has a pretty quick and easy onboarding, which requires that a secured account be created and, for the purposes of submitting packages from the CLI, an API token be generated. This can be done in your PyPI profile. Once logged in, just navigate to https://pypi.org/manage/account/ and scroll down to the API tokens section. Click “Add Token” and follow the few steps to generate an API token, which is your access point for uploading packages.

With all this in place, I was able to use twine to handle the package upload. First I needed to install twine, again as simple as pip install twine. In order for twine to access my API token during the package upload process, it needed to read it from a .pypirc file that contains the token info. For some that file may exist already; for me, I was required to create it. Working on Windows, I simply used a text editor to create it in my home user directory ($HOME/.pypirc). The file contents had a TOML-like format and looked like this:
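The excerpt does not reproduce the author's actual file, but a typical token-based .pypirc has this shape (the token value is a placeholder — PyPI API tokens always use the literal username `__token__`):

```ini
[pypi]
username = __token__
password = pypi-XXXXXXXXXXXXXXXX
```

With this in place, `twine upload dist/*` picks up the credentials automatically.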
-
Releasing my Python Project
I have published the package to the Python Package Index, commonly called PyPI, and in this post I'll be sharing the steps I had to follow in the process.
-
Publishing my open source project to PyPI!
Register at PyPI.org
-
Show HN: I mirrored all the code from PyPI to GitHub
According to the stats on the original link, there are over 25,000 identified secret ids/keys/tokens in the data. And it looks like that's just identifiable secrets, e.g. "Google API Keys" that I'm guessing are identifiable because they have a specific pattern, and may be missing other secrets that use less recognizable patterns.
I mean, sure, compared to the 478,876 projects claimed on https://pypi.org/, that's a pretty small minority. On the other hand, I'd guess that many Python packages don't use these particular services, or even need to connect to a remote service at all, so the surface area for this class of mistake should be even smaller.
And mistakes do happen, but that's a pretty big thing to miss if you are knowingly publishing your code with the expectation other people will be reading it.
-
Pezzo v0.5 - Dashboards, Caching, Python Client, and More!
PyPI package
-
Modifying keywords in python package
Does pypi.org display the union of all keywords, the keywords of the most recent release, the keywords of the first release, or some other weird combination like the intersection?
What are some alternatives?
xhtml2pdf - A library for converting HTML into PDFs using ReportLab
devpi
Apache Thrift - Apache Thrift
bandersnatch
Pyramid - Pyramid - A Python web framework
localshop - local pypi server (custom packages and auto-mirroring of pypi)
rtv - Browse Reddit from your terminal
Poe the Poet - A task runner that works well with poetry.
pottery - Redis for humans. 🌎🌍🌏
scribd-downloader
redsync - Distributed mutual exclusion lock using Redis for Go
Python Packages Project Generator - 🚀 Your next Python package needs a bleeding-edge project structure.