Using S3 as a Container Registry

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • distribution-spec

    OCI Distribution Specification

    The OCI Distribution Spec is not great.

    > According to the specification, a layer push must happen sequentially: even if you upload the layer in chunks, each chunk needs to finish uploading before you can move on to the next one.

    As far as I've tested with Docker Hub and GHCR, chunked upload is broken anyway, and clients upload the image as a whole. The spec also promotes `Content-Range` value formats that do not match the RFC 7233 format.

    Another gripe of mine is that they missed the opportunity to standardize pagination when listing tags, because they accidentally deleted some text from the standard [1]. Now different registries roll their own.

    [1] https://github.com/opencontainers/distribution-spec/issues/4...
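
    To make the chunked-upload gripe concrete: the flow the spec prescribes is a POST to open an upload session, one PATCH per chunk (each must complete before the next may start), and a closing PUT carrying the digest of the whole blob. A rough client-side sketch in Go, against a hypothetical registry URL and with error handling trimmed; note the bare `<start>-<end>` `Content-Range` values, versus RFC 7233's `bytes <start>-<end>/<total>`:

    ```go
    package main

    import (
    	"bytes"
    	"crypto/sha256"
    	"fmt"
    	"net/http"
    	"strings"
    )

    func main() {
    	base := "https://registry.example.com/v2/myrepo" // hypothetical registry
    	chunks := [][]byte{[]byte("first-chunk-"), []byte("second-chunk")}

    	// 1. Open an upload session; the registry answers 202 Accepted with a
    	//    Location header naming the session URL. (A real client must also
    	//    resolve relative Location values against the registry host.)
    	resp, err := http.Post(base+"/blobs/uploads/", "application/octet-stream", nil)
    	if err != nil {
    		panic(err)
    	}
    	loc := resp.Header.Get("Location")

    	// 2. PATCH each chunk in order. Per the spec, Content-Range is the bare
    	//    "<start>-<end>" inclusive byte range -- not RFC 7233's
    	//    "bytes <start>-<end>/<total>" form, which is the mismatch the
    	//    comment above complains about.
    	hash := sha256.New()
    	offset := 0
    	for _, c := range chunks {
    		req, _ := http.NewRequest(http.MethodPatch, loc, bytes.NewReader(c))
    		req.Header.Set("Content-Type", "application/octet-stream")
    		req.Header.Set("Content-Range", fmt.Sprintf("%d-%d", offset, offset+len(c)-1))
    		resp, err = http.DefaultClient.Do(req)
    		if err != nil {
    			panic(err)
    		}
    		loc = resp.Header.Get("Location") // each response names the next upload URL
    		offset += len(c)
    		hash.Write(c)
    	}

    	// 3. Finalize with a PUT that passes the digest of the whole blob as a
    	//    query parameter; the server verifies it before committing the blob.
    	sep := "?"
    	if strings.Contains(loc, "?") {
    		sep = "&"
    	}
    	digest := fmt.Sprintf("sha256:%x", hash.Sum(nil))
    	req, _ := http.NewRequest(http.MethodPut, loc+sep+"digest="+digest, nil)
    	resp, err = http.DefaultClient.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("final status:", resp.Status)
    }
    ```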

  • distribution

    The toolkit to pack, ship, store, and deliver container content

    What’s wrong with https://github.com/distribution/distribution?

  • serverless-registry

    A Docker registry backed by Workers and R2.

    Actually, Cloudflare open-sourced a registry server using R2.[1]

    Anyone tried it?

    [1]: https://github.com/cloudflare/serverless-registry

  • buildkit

    concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit

    At the very real risk of talking out of my ass, the new versioned Dockerfile mechanism on top of buildkit should enable you to do that: https://github.com/moby/buildkit/blob/v0.15.0/frontend/docke...

    In true "when all you have is a hammer" fashion, as best I can tell that syntax= directive points to a separate Docker image whose job is to read the file and translate it into buildkit API calls, e.g. https://github.com/moby/buildkit/blob/v0.15.0/frontend/docke...

    But, again for clarity: I've never tried such a stunt; that's just the impression I get from having done mortal kombat with buildkit's other silly parts.
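
    For reference, the directive itself is a single comment line at the top of the Dockerfile. A minimal example, using the stock `docker/dockerfile:1` frontend image (the point of the mechanism being that you can name your own image there instead):

    ```dockerfile
    # syntax=docker/dockerfile:1
    # The line above tells buildkit to pull the named frontend image and let it
    # parse the rest of this file into buildkit (LLB) API calls; pointing it at
    # a custom frontend image changes the Dockerfile dialect per file.
    FROM alpine:3.20
    RUN echo "built via a pinned frontend"
    ```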

  • storage

    Container Storage Library (by containers)

    If $PROGRAMMING_LANGUAGE = go, you might be looking for https://github.com/containers/storage which can create layers, images, and so on. I think `Store` is the main entry: https://pkg.go.dev/github.com/containers/storage#Store

    Buildah uses it: https://github.com/containers/buildah/blob/main/go.mod#L27C2...

  • buildah

    A tool that facilitates building OCI images.


  • keppel

    Regionally federated multi-tenant container image registry

    Source: I have implemented an OCI-compliant registry [1], though for the most part I've been following the behavior of the reference implementation [2] rather than the spec, on account of the spec's convolutedness.

    When the client finalizes a blob upload, they need to supply the digest of the full blob. This requirement evidently serves to enable the server side to validate the integrity of the supplied bytes. If the server only started checking the digest as part of the finalize HTTP request, it would have to read back all the blob contents that had already been written into storage in previous HTTP requests. For large layers, this can introduce an unreasonable delay. (Because of specific client requirements, I have verified my implementation to work with blobs as large as 150 GiB.)

    Instead, my implementation runs the digest computation throughout the entire sequence of requests. As blob data is taken in chunk by chunk, it is simultaneously streamed into the digest computation and into blob storage. Between each request, the state of the digest computation is serialized in the upload URL that is passed back to the client in the Location header. This is roughly the part where it happens in my code: https://github.com/sapcc/keppel/blob/7e43d1f6e77ca72f0020645...

    I believe this is the same approach the reference implementation uses. Because digest computation can only proceed sequentially, the upload has to proceed sequentially as well.
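
    For illustration (a standalone sketch, not keppel's actual code): Go's standard-library hashes have implemented `encoding.BinaryMarshaler` since Go 1.9, so partial SHA-256 state can be snapshotted between requests, much as described above:

    ```go
    package main

    import (
    	"crypto/sha256"
    	"encoding"
    	"fmt"
    )

    func main() {
    	// Request 1: hash the first chunk, then snapshot the hash state.
    	h := sha256.New()
    	h.Write([]byte("first chunk of blob data, "))

    	// Since Go 1.9 the stdlib hashes implement encoding.BinaryMarshaler,
    	// so the partial state can round-trip between HTTP requests (keppel
    	// serializes it into the upload URL returned in the Location header).
    	state, err := h.(encoding.BinaryMarshaler).MarshalBinary()
    	if err != nil {
    		panic(err)
    	}

    	// Request 2: restore the state and keep hashing the next chunk.
    	h2 := sha256.New()
    	if err := h2.(encoding.BinaryUnmarshaler).UnmarshalBinary(state); err != nil {
    		panic(err)
    	}
    	h2.Write([]byte("second chunk"))
    	fmt.Printf("chunked digest:  sha256:%x\n", h2.Sum(nil))

    	// Sanity check: identical to hashing the whole blob at once, which is
    	// why the final digest can be verified without re-reading storage.
    	whole := sha256.Sum256([]byte("first chunk of blob data, second chunk"))
    	fmt.Printf("one-shot digest: sha256:%x\n", whole[:])
    }
    ```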

    [1] https://github.com/sapcc/keppel

    [2] https://github.com/distribution/distribution


Related posts

  • OCI image from dockerfile

    2 projects | /r/docker | 6 Dec 2023
  • Building Docker image from layers from registry v2 API

    3 projects | /r/docker | 12 Mar 2021
  • Docker Containers | Linux Namespaces | Container Isolation

    5 projects | dev.to | 10 Aug 2024
  • Unfashionably secure: why we use isolated VMs

    6 projects | news.ycombinator.com | 25 Jul 2024
  • 5 Alternatives to Docker Desktop

    7 projects | dev.to | 24 Jul 2024
