skopeo vs distribution-spec

| | skopeo | distribution-spec |
|---|---|---|
| Mentions | 23 | 60 |
| Stars | 8,136 | 826 |
| Growth | 1.8% | 2.2% |
| Activity | 9.1 | 6.9 |
| Latest commit | 7 days ago | 19 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
skopeo
-
Abusing url handling in iTerm2 and Hyper for code execution
I believe skopeo should allow you to: https://github.com/containers/skopeo
-
A better, faster approach to downloading docker images without docker-pull: Skopeo
I decided to go searching for an alternative means to pull a Docker image. In my search I discovered Skopeo, an alternative method for downloading Docker images that proved to be surprisingly effective. Not only did it download the image faster, it also allowed me to save the image in a tar file, which means you can pull an image on one system and share it with another, loading it easily into the Docker instance on that system. This can be very beneficial if you have multiple systems and don't want to download an image multiple times.
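The pull-then-share workflow described above can be sketched with skopeo's `docker-archive` transport (the image name and file paths here are just placeholders):

```shell
# Pull an image straight from the registry into a local tarball -- no Docker daemon needed
skopeo copy docker://docker.io/library/alpine:latest docker-archive:alpine.tar

# Move alpine.tar to the other machine, then load it into Docker there
docker load --input alpine.tar
```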
-
[OC] Update: dockcheck - Checking updates for docker images without pulling - automatically update containers by choice.
But I'd suggest looking into if it's solved by other tools already, like regclient/regclient and their regsync features or something like containers/skopeo.
-
Wrapping Go CLI tools in another CLI?
We have a use case where we have a CLI (built with Cobra) for our dev teams that can execute common tasks. One of those tasks we want to implement is copying Docker images from the internet to our internal registry. A tool such as skopeo can do this and much more. Instead of essentially rewriting the functionality directly into our CLI, we'd like to embed it. This would also remove the need for the dev teams to manage multiple CLI tools.
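As a rough sketch of the copy operation such a wrapper would invoke, skopeo can mirror an image directly between registries without a local daemon (the internal registry hostname below is hypothetical):

```shell
# Copy an upstream image into an internal registry; no intermediate docker pull/push
skopeo copy \
  docker://docker.io/library/nginx:latest \
  docker://registry.internal.example.com/mirror/nginx:latest
```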
-
Rails on Docker · Fly
Self hoisting here, I put this together to make it easier to generate single (extra) layer docker images without needing a docker agent, capabilities, chroot, etc: https://github.com/andrewbaxter/dinker
Caveat: it doesn't work on Fly.io. They seem to be having some issue with OCI manifests: https://github.com/containers/skopeo/issues/1881 . They're also having issues with new docker versions pushing from CI: https://community.fly.io/t/deploying-to-fly-via-github-actio... ... the timing of this post seems weird.
FWIW the article says
> create a Docker image, also known as an OCI image
I don't think this is quite right. From my investigation, Docker and OCI images are both content-addressed trees, starting with a root manifest that points to other files and their hashes (root -> images -> layers -> layer configs + files). The OCI manifests and configs are separate from the Docker manifests and configs, and Docker supports both side by side.
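One way to see that the two formats are distinct is to look at a manifest's `mediaType`: Docker's is `application/vnd.docker.distribution.manifest.v2+json`, while OCI's is `application/vnd.oci.image.manifest.v1+json`. A quick check, assuming skopeo and jq are installed (image name is a placeholder):

```shell
# Print the raw manifest's media type; for multi-arch images this is the
# manifest list / image index rather than a single-arch manifest
skopeo inspect --raw docker://docker.io/library/alpine:latest | jq -r '.mediaType'
```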
-
How are you building docker images for Apple M1?
skopeo is another tool worth looking into. We've started deploying amd64 and arm64 nodes into our k8s clusters, and this tool was incredibly easy to build around for getting multi-arch images into our container registry.
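A sketch of copying a multi-arch image with all of its architectures intact, rather than just the host's (registry names are placeholders):

```shell
# --all copies every architecture in the manifest list, not only the local platform
skopeo copy --all \
  docker://docker.io/library/alpine:latest \
  docker://registry.example.com/mirror/alpine:latest
```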
-
Get list of image architectures
I would use skopeo, the tool is quite handy for working with remote images. https://github.com/containers/skopeo
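For example, listing the architectures in a multi-arch image's manifest list might look like this (assuming skopeo and jq are available; the image name is a placeholder):

```shell
# The raw manifest list contains one entry per platform
skopeo inspect --raw docker://docker.io/library/alpine:latest \
  | jq -r '.manifests[].platform.architecture'
```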
-
Implement DevSecOps to Secure your CI/CD pipeline
Using distroless images not only reduces the size of the container image, it also reduces the attack surface. Container image signing is still needed because, even with distroless images, there is a chance of security threats such as receiving a malicious image. We can use cosign or skopeo for signing and verifying containers. You can read more about securing containers with Cosign and distroless images in this blog.
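A minimal sign-and-verify sketch with cosign (the key files and image reference are placeholders, and the digest is elided):

```shell
# Sign an image by digest and push the signature to the registry
cosign sign --key cosign.key registry.example.com/app@sha256:...

# Verify the signature before deploying
cosign verify --key cosign.pub registry.example.com/app@sha256:...
```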
-
ImagePullPolicy: IfNotPresent - (image doesn’t exist in repo) - Is it possible to pull the micro service image from an EKS node and then push to repo?
Look at using tools like skopeo or crane
-
Monitoring image updates when not using :latest!
You could try some command-line tool like skopeo to fetch the image tags regularly and do some shell magic to notify you on any change you want
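The tag-fetching step could look like this with skopeo's `list-tags` subcommand (image name is a placeholder; the change-detection and notification "shell magic" is left out):

```shell
# Print all tags for a repository, one per line
skopeo list-tags docker://docker.io/library/nginx | jq -r '.Tags[]'
```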
distribution-spec
-
serverless-registry: A Docker registry backed by Workers and R2
If you are a CloudFlare employee reading this, you should get involved with the OCI Distribution group that develops the standards for the registry: https://github.com/opencontainers/distribution-spec
-
Docker Containers | Linux Namespaces | Container Isolation
What makes containers useful is the tooling that surrounds them. For these labs, we will be using Docker, which has been a widely adopted tool for using containers to build applications. Docker provides developers and operators with a friendly interface to build, ship and run containers in any environment with a Docker engine. Because the Docker client requires a Docker engine, an alternative is Podman, a daemonless container engine for developing, managing and running OCI containers, either as root or in rootless mode. For those reasons we recommend Podman, but because of wider adoption, this lab still uses Docker.
-
Using S3 as a Container Registry
The OCI Distribution Spec is not great.
> According to the specification, a layer push must happen sequentially: even if you upload the layer in chunks, each chunk needs to finish uploading before you can move on to the next one.
As far as I've tested with Docker Hub and GHCR, chunked upload is broken anyway, and clients upload the image as a whole. The spec also promotes `Content-Range` value formats that do not match the RFC 7233 format.
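For reference, the mismatch being described: the OCI distribution spec's chunked blob upload uses a bare byte range in `Content-Range`, while RFC 7233 defines the header with a range unit and total length (the byte values below are illustrative):

```
Content-Range: 0-65535                  OCI distribution-spec chunked upload (request)
Content-Range: bytes 0-65535/131072     RFC 7233 byte-range format
```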
Another gripe of mine is that they missed the opportunity to standardize pagination of listing tags, because they accidentally deleted some text from the standard [1]. Now different registries roll their own.
[1] https://github.com/opencontainers/distribution-spec/issues/4...
-
A step-by-step guide to building an MLOps pipeline
One of the main reasons teams struggle to build and maintain their MLOps pipelines is vendor-specific packaging. As a model is handed off between data science, app development, and SRE/DevOps teams, each team is required to repackage the model to work with its own toolset. This is tedious, and stands in contrast to well-adopted development processes where teams have standardized on containers to ensure that project definitions, dependencies, and artifacts are shared in a consistent format. KitOps is a robust and flexible tool that addresses these exact shortcomings in the MLOps pipeline. It packages the entire ML project in an OCI-compliant artifact called a ModelKit, designed with flexible development attributes to accommodate ML workflows, offering more convenient processes for ML development than standard DevOps pipelines. Some of these benefits include:
-
A Brief History Of Serverless
Internally, Google used a platform called Borg which is still used by Google to this day. It also served as the basis for Kubernetes. Borg is a container-based platform whose goal was to allow developers to focus on code, not infrastructure. Google has an entire infrastructure team to manage the datacenters. This system came out circa 2004. This predates the advent of modern OCI Containers by about a decade.
-
The transitory nature of MLOps: Advocating for DevOps/MLOps coalescence
Back in 2013, a little company called Docker made it really easy to start using containers to package up applications. A big key to their success was the OCI (you can learn about that here), an industry wide initiative to have standards around how we package up our applications. Because of OCI standards, we have hundreds (maybe thousands?) of tools that can be combined to manage and deploy applications. So why aren’t we using this for packaging up Notebooks and AI models as well? It would make deploying, sharing, and managing our models easier for everyone involved.
-
The Road To Kubernetes: How Older Technologies Add Up
On the backend, Kubernetes used to rely on Docker for much of its container runtime needs. One of the modular features of Kubernetes is the ability to use a Container Runtime Interface, or CRI. The problem was that Docker didn't really meet the spec properly, so a shim had to be maintained to translate. Instead, users could use the popular containerd or CRI-O runtimes, which follow the Open Container Initiative's (OCI's) guidelines on container formats.
-
Coexistence of containers and Helm charts - OCI based registries
OCI stands for Open Container Initiative, and its goal as an organization is to define specifications for container formats and runtimes.
-
Bazzite – a SteamOS-like OCI image for desktop, living room, and handheld PCs
https://opencontainers.org/
Here is Containerfile from the repo: https://github.com/ublue-os/bazzite/blob/main/Containerfile
-
Distroless images using melange and apko
apko allows us to build OCI container images from .apk packages.
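A hedged sketch of such a build, assuming the common apko invocation shape (the config file name, image tag, and output path are all placeholders):

```shell
# Build an OCI image tarball from a declarative apko config; apko.yaml is hypothetical
apko build apko.yaml registry.example.com/hello:latest hello.tar
```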
What are some alternatives?
go-containerregistry - Go library and CLIs for working with container registries
spin - Spin is the open source developer tool for building and running serverless applications powered by WebAssembly.
kaniko - Build Container Images In Kubernetes
proxmox-lxc-idmapper - Proxmox unprivileged container/host uid/gid mapping syntax tool.
dive - A tool for exploring each layer in a docker image
appleprivacyletter - An open letter against Apple's new privacy-invasive client-side content scanning.
sinker - A tool to sync images from one container registry to another
jib - 🏗 Build container images for your Java applications.
regclient - Docker and OCI Registry Client in Go and tooling using those libraries.
bartholomew - The Micro-CMS for WebAssembly and Spin