pocl vs stargz-snapshotter

| | pocl | stargz-snapshotter |
|---|---|---|
| Mentions | 3 | 10 |
| Stars | 60 | 1,048 |
| Growth | - | 1.7% |
| Activity | 0.0 | 8.4 |
| Latest commit | over 8 years ago | 7 days ago |
| Language | - | Go |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pocl
- Tree-shaking, the horticulturally misguided algorithm
-
Web bloat impacts users with slow devices
https://github.com/avodonosov/pocl
The unused JavaScript code can be removed (and loaded on demand). Although I am not sure how valuable that would be for the world. It only saves network traffic, parsing time, and some browser memory for compiled code. But JS traffic on the Internet is negligible compared to, say, video and images. Will the user experience be significantly better if the browser is saved from the unnecessary JS parsing? I don't know of a good way to measure that.
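One way to load rarely used code on demand can be sketched in TypeScript. In a real bundle the loader would be a dynamic `import()`; here `loadCharting` and the charting module are hypothetical stand-ins so the sketch is self-contained:

```typescript
// Sketch of on-demand code loading. A cached promise ensures the
// module is fetched and parsed at most once, on first use.
type Charting = { drawChart: (data: number[]) => string };

let cached: Promise<Charting> | null = null;

// Stand-in for `import("./charting")`; in a real bundler setup this
// would be a dynamic import that triggers a network fetch.
function loadCharting(): Promise<Charting> {
  return Promise.resolve({ drawChart: (d) => `chart(${d.length} points)` });
}

async function renderChart(data: number[]): Promise<string> {
  cached ??= loadCharting(); // load only on the first call
  const { drawChart } = await cached;
  return drawChart(data);
}
```

Until `renderChart` is first called, the charting code costs no network traffic, parsing time, or compiled-code memory, which is exactly the saving discussed above.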
-
Red and blue functions are a good thing
> for such a small piece of work
Don't take the example too literally; some function calls can be there.
Running computations in parallel is often valuable. Or running computations in parallel while waiting for an external resource - why doesn't the code in the article compute something while waiting for a, b and c?
Anyway, if async functions are so good, why not make all functions async?
The article says this is a kind of "documentation" that tells you which functions can wait for some external data and which functions are "pure computation". If that were so, it would be OK. Such documentation could be computed automatically from the implementations of the called functions, and the developer could be hinted: "these two functions you call are both async, consider waiting for both in parallel". In reality, the async / await implementations prevent non-async functions from becoming async without a code change and rebuild. This restriction is just a limitation of how async / await is implemented, not something useful.
As another commenter says, the article "embraces a defect introduced for BC reasons as if it's sound engineering. It really isn't."
When my code is called by a 3rd-party library, I cannot change my code to async. That's the most unpleasant property of today's async / await. What yesterday was a quick computation can tomorrow become a network call. For example, I may want the bodies of rarely used functions to load only when first called (https://github.com/avodonosov/pocl).
The article suggests we have to decide upfront, at the top level of the application / call stack, which parts may be implemented as waiting blocks and which should never wait for anything external. This is not practical.
> It's almost always faster to do them in parallel if possible.
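The parallel-waiting point above can be sketched in TypeScript. `fetchA`/`fetchB`/`fetchC` are hypothetical stand-ins for the a, b and c under discussion; `Promise.all` lets the three waits overlap instead of running back to back:

```typescript
// Simulate three independent slow resources (~30 ms each).
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function fetchA() { await delay(30); return 1; }
async function fetchB() { await delay(30); return 2; }
async function fetchC() { await delay(30); return 3; }

// Sequential awaits: roughly 90 ms total.
async function sequential(): Promise<number> {
  return (await fetchA()) + (await fetchB()) + (await fetchC());
}

// Concurrent awaits: roughly 30 ms total, since the waits overlap.
async function concurrent(): Promise<number> {
  const [a, b, c] = await Promise.all([fetchA(), fetchB(), fetchC()]);
  return a + b + c;
}
```

Both functions return the same value; only the total waiting time differs.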
stargz-snapshotter
-
Tree-shaking, the horticulturally misguided algorithm
A lazy chunked delivery strategy like the one used in the k8s stargz-snapshotter[0] project could be effective here, where it pulls chunks only as needed, but it would probably require wasm platform changes.
[0] https://github.com/containerd/stargz-snapshotter
-
Show HN: depot.ai – easily embed ML / AI models in your Dockerfile
To optimize build speed, cache hits, and registry storage, we're building each image reproducibly and indexing the contents with eStargz[0]. The image is stored on Cloudflare R2, and served via a Cloudflare Worker. Everything is open source[1]!
Compared to alternatives like `git lfs clone` or downloading your model at runtime, embedding it with `COPY` produces layers that are cache-stable, with identical hash digests across rebuilds. This means they can be fully cached, even if your base image or source code changes.
And for Docker builders that enable eStargz, copying single files from the image will download only the requested files. eStargz can be enabled in a variety of image builders[2], and we’ve enabled it by default on Depot[3].
Here’s an announcement post with more details: https://depot.dev/blog/depot-ai.
We’d love to hear any feedback you may have!
[0] https://github.com/containerd/stargz-snapshotter/blob/main/docs/estargz.md
[1] https://github.com/depot/depot.ai
[2] https://github.com/containerd/stargz-snapshotter/blob/main/docs/integration.md#image-builders
[3] https://depot.dev
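For reference, a minimal sketch of producing an eStargz image with BuildKit's `compression=estargz` image-exporter option (the registry and image name are placeholders):

```shell
# Build and push an image in eStargz format via BuildKit's image exporter.
# oci-mediatypes=true is required for the eStargz compression format.
docker buildx build \
  --output type=image,name=registry.example.com/app:latest,push=true,compression=estargz,oci-mediatypes=true \
  .
```

This is the conversion-at-build-time route; consumers that enable lazy pulling can then fetch individual files from the image as described above.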
-
A Hidden Gem: Two Ways to Improve AWS Fargate Container Launch Times
Seekable OCI (SOCI) is a technology open-sourced by AWS that enables containers to launch faster by lazily loading the container image. It’s usually not possible to fetch individual files from gzipped tar files. With SOCI, AWS borrowed some of the design principles from stargz-snapshotter, but took a different approach. A SOCI index is generated separately from the container image and is stored in the registry as an OCI Artifact and linked back to the container image by OCI Reference Types. This means that the container images do not need to be converted, image digests do not change, and image signatures remain valid.
- containerd/stargz-snapshotter: Fast container image distribution plugin with lazy pulling
- EStargz: Lazy pull container images for faster cold starts
- How to optimize the security, size and build speed of Docker images
-
Speeding up LXC container pull by up to 3x
This is interesting and seems general purpose. Not merely for container images.
There’s this option for OCI containers which I don’t pretend to understand: https://github.com/containerd/stargz-snapshotter
It is used by containerd and nerdctl. You do have to build the image with it. Images work in any OCI-compatible registry. By fetching the most-used files first, the container can be started before loading is finished. Or so I gather.
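A hedged sketch of the workflow described above, using `nerdctl` subcommands documented in the stargz-snapshotter project (image names are placeholders):

```shell
# Convert an existing image to the eStargz format and push it.
nerdctl image convert --estargz --oci \
  registry.example.com/app:latest registry.example.com/app:esgz
nerdctl push registry.example.com/app:esgz

# Run it with lazy pulling via the stargz snapshotter plugin;
# the container starts before the full image has been downloaded.
nerdctl --snapshotter=stargz run --rm registry.example.com/app:esgz
```

The `--snapshotter=stargz` flag assumes the stargz-snapshotter daemon is installed and configured alongside containerd.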
-
Optimizing Docker image size and why it matters
stargz is a gamechanger for startup time. You might not need to care about image size at all.
Kubernetes and podman support it, and docker support is likely coming. It lazily loads the filesystem on start-up, making network requests for things as they are needed, and therefore can often start large images very fast.
https://github.com/containerd/stargz-snapshotter
-
FOSS News International #2: November 8-14, 2021
containerd/stargz-snapshotter: Fast container image distribution plugin with lazy pulling (github.com)
-
Introducing GKE image streaming for fast application startup and autoscaling
Yes, see https://github.com/containerd/stargz-snapshotter
What are some alternatives?
unison - A friendly programming language from the future
kube-fledged - A kubernetes operator for creating and managing a cache of container images directly on the cluster worker nodes, so application pods start almost instantly
lawvere - A categorical programming language with effects
acr - Azure Container Registry samples, troubleshooting tips and references
containerd - An open and reliable container runtime
soci-snapshotter - A containerd snapshotter plugin which enables standard OCI images to be lazily loaded without requiring a build-time conversion step.
snoop - Snoop — a reconnaissance tool based on open data (OSINT world)
uChmViewer - A fork of Kchmviewer, the best software for viewing .chm (MS HTML help) and .epub eBooks.
Lean and Mean Docker containers - Slim(toolkit): Don't change anything in your container image and minify it by up to 30x (and for compiled languages even more) making it secure too! (free and open source)
veinmind-tools - A container security toolset developed in-house by Chaitin Tech, built on the veinmind-sdk
depot.ai - Embed machine learning models in your Dockerfile
dive - A tool for exploring each layer in a docker image