| | phpmiko | unasync |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 5 | 82 |
| Growth | - | - |
| Activity | 4.2 | 0.0 |
| Latest commit | 6 months ago | 12 months ago |
| Language | PHP | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
phpmiko
-
PHP 8.1.0 Release Announcement
Not intended as a plug, but you can even use PHP for network automation. I cobbled this together:
https://github.com/epiecs/phpmiko for connecting to devices and https://github.com/epiecs/mikodo for inventories and concurrency
The multi-core coroutines comment is true, but there are ways to (hack) around that.
The way I solved concurrency is just by forking and using sockets:
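The commenter's solution is in PHP, but the fork-and-sockets pattern itself is language-agnostic. Here is a minimal Python sketch of the idea; the helper name `run_forked` and the toy tasks are invented for illustration and are not part of phpmiko or mikodo:

```python
import os
import socket

def run_forked(tasks):
    """Fork one child per task; each child sends its result back
    to the parent over one end of a socket pair."""
    channels = []
    for task in tasks:
        parent_sock, child_sock = socket.socketpair()
        pid = os.fork()
        if pid == 0:
            # Child process: run the task, ship the result, exit.
            parent_sock.close()
            child_sock.sendall(task().encode())
            child_sock.close()
            os._exit(0)
        # Parent keeps its end and remembers which child owns it.
        child_sock.close()
        channels.append((pid, parent_sock))

    results = []
    for pid, sock in channels:
        # Read until the child closes its end, then reap the child.
        data = b""
        while chunk := sock.recv(4096):
            data += chunk
        sock.close()
        os.waitpid(pid, 0)
        results.append(data.decode())
    return results

print(run_forked([lambda: "device-a: ok", lambda: "device-b: ok"]))
```

The tasks run in parallel child processes, so a slow device does not block the others; results come back in task order because the parent drains the sockets sequentially.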
unasync
-
The bane of my existence: Supporting both async and sync code in Rust
Nice! This is similar to the solution here: https://github.com/python-trio/unasync
-
Need advice to design sync version of an async library
Lastly, I found another project named unasync that is pretty interesting and might work for me. Basically, you write the async version, run unasync, and it generates the sync version from the AST. This project is used by the official Elasticsearch Python client.
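To make that idea concrete, here is a deliberately simplified sketch. The real unasync does a careful token-level rewrite of whole files; this toy version (the function name and replacement table are invented for the example) just substitutes the async keywords in a source string:

```python
import re

# Simplified stand-in for unasync's replacement rules: strip the
# async syntax and map dunder names to their sync counterparts.
RULES = [
    (r"\basync def\b", "def"),
    (r"\basync with\b", "with"),
    (r"\basync for\b", "for"),
    (r"\bawait ", ""),
    (r"\b__aenter__\b", "__enter__"),
    (r"\b__aexit__\b", "__exit__"),
]

def unasync_source(src: str) -> str:
    """Rewrite async source text into its sync equivalent."""
    for pattern, repl in RULES:
        src = re.sub(pattern, repl, src)
    return src

async_src = """\
async def fetch(client, url):
    response = await client.get(url)
    return response
"""

print(unasync_source(async_src))
```

You write and maintain only the async version; the sync version is generated output, which is what lets one codebase serve both kinds of callers.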
-
PHP 8.1.0 Release Announcement
Fibers "allow blocking and non-blocking implementations to share the same API"
That's an interesting contrast to Python, where the need to write "value = await fn()" vs. "value = fn()" depending on whether or not that function is awaitable causes all kinds of API design complexity, all the way up to the existence of tools like https://github.com/python-trio/unasync which can code-generate the non-async version of a library from the async version.
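A small self-contained example of the duplication this forces (function names invented for illustration): the same operation needs two entry points, because sync callers cannot `await` and async callers must:

```python
import asyncio

# Two entry points for one operation: Python's calling convention
# differs for sync and async functions, so a library that wants to
# serve both kinds of callers ends up maintaining both.
def read_config_sync() -> dict:
    return {"retries": 3}

async def read_config_async() -> dict:
    await asyncio.sleep(0)  # stand-in for real async I/O
    return {"retries": 3}

# A sync caller just calls:
print(read_config_sync()["retries"])

# An async caller must await; swapping the two forms is an error.
async def main():
    cfg = await read_config_async()
    print(cfg["retries"])

asyncio.run(main())
```

With PHP fibers, by contrast, the blocking and non-blocking implementations can sit behind the same function signature, so the caller never has to know which one it got.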
-
Async Python is not faster
Async Python has proven faster in my uses for IO and non-CPU-related stuff. But I think Python, either as a community or within the language, needs to solve the anti-pattern of maintaining separate sync and async versions of a library. I'm thinking specifically of aioredis and redis-py, both of which I've worked on.
Some people are looking at ways to solve this. I know urllib3, elasticsearch-py, and a few others use unasync (https://github.com/python-trio/unasync) to transform async code into sync code, leaving one codebase supporting both uses in different namespaces. This leaves you with some conditional logic (is_async_mode() -- https://github.com/python-trio/hip/blob/master/src/ahip/util...). I'm seriously considering this approach.
- unasync – transform your asynchronous code into synchronous code
What are some alternatives?
mikodo - Concurrent library on top of phpmiko. Speeds up the process of sending commands. Libraries are bundled (or planned) to be able to use different providers such as Nornir yaml files, PhpIpam, etc.
jigsaw - Simple static sites with Laravel’s Blade.
woocommerce-custom-orders-table - Store WooCommerce order data in a custom table for improved performance.
Amp - A non-blocking concurrency framework for PHP applications. 🐘
blacksmith - REST API Client
Swoole - 🚀 Coroutine-based concurrency library for PHP
create-siler-app - 🧱 Set up a modern Siler app by running one command.
React - Event-driven, non-blocking I/O with PHP.