stargz-snapshotter vs docker-node

| | stargz-snapshotter | docker-node |
|---|---|---|
| Mentions | 10 | 62 |
| Stars | 1,048 | 8,069 |
| Stars growth (month over month) | 1.7% | 0.3% |
| Activity | 8.4 | 8.3 |
| Latest commit | 4 days ago | 10 days ago |
| Language | Go | Shell |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stargz-snapshotter
-
Tree-shaking, the horticulturally misguided algorithm
A lazy chunked delivery strategy like the one used in the k8s stargz-snapshotter[0] project could be effective here: it pulls chunks only as they are needed, though it would probably require wasm platform changes.
[0] https://github.com/containerd/stargz-snapshotter
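The lazy chunked idea above can be sketched in a few lines. This is a minimal, self-contained simulation (not stargz-snapshotter's actual implementation): `RemoteStore` stands in for a registry, and `LazyBlob` pulls and caches fixed-size chunks only when a read touches them, so work can begin before the whole blob is downloaded.

```python
# Minimal sketch of lazy chunked delivery: chunks are fetched from a
# remote store only when a read actually touches them.

CHUNK_SIZE = 4  # tiny chunks to make the demo visible

class RemoteStore:
    """Stands in for a registry; counts how many chunks were fetched."""
    def __init__(self, data: bytes):
        self.data = data
        self.fetches = 0

    def fetch_chunk(self, index: int) -> bytes:
        self.fetches += 1
        start = index * CHUNK_SIZE
        return self.data[start:start + CHUNK_SIZE]

class LazyBlob:
    """Reads byte ranges, pulling and caching only the chunks they overlap."""
    def __init__(self, store: RemoteStore, size: int):
        self.store = store
        self.size = size
        self.cache: dict[int, bytes] = {}

    def read(self, offset: int, length: int) -> bytes:
        end = min(offset + length, self.size)
        first, last = offset // CHUNK_SIZE, (end - 1) // CHUNK_SIZE
        for i in range(first, last + 1):
            if i not in self.cache:
                self.cache[i] = self.store.fetch_chunk(i)
        buf = b"".join(self.cache[i] for i in range(first, last + 1))
        rel = offset - first * CHUNK_SIZE
        return buf[rel : rel + (end - offset)]

store = RemoteStore(b"0123456789abcdefghij")  # 20 bytes = 5 chunks
blob = LazyBlob(store, 20)
print(blob.read(5, 4))   # touches only chunks 1 and 2
print(store.fetches)     # 2 of 5 chunks pulled
```

A real implementation would replace `fetch_chunk` with HTTP Range requests against the registry, which is roughly what lazy-pulling snapshotters do.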
-
Show HN: depot.ai – easily embed ML / AI models in your Dockerfile
To optimize build speed, cache hits, and registry storage, we're building each image reproducibly and indexing the contents with eStargz[0]. The image is stored on Cloudflare R2, and served via a Cloudflare Worker. Everything is open source[1]!
Compared to alternatives like `git lfs clone` or downloading your model at runtime, embedding it with `COPY` produces layers that are cache-stable, with identical hash digests across rebuilds. This means they can be fully cached, even if your base image or source code changes.
And for Docker builders that enable eStargz, copying single files from the image will download only the requested files. eStargz can be enabled in a variety of image builders[2], and we’ve enabled it by default on Depot[3].
Here’s an announcement post with more details: https://depot.dev/blog/depot-ai.
We’d love to hear any feedback you may have!
[0] https://github.com/containerd/stargz-snapshotter/blob/main/docs/estargz.md
[1] https://github.com/depot/depot.ai
[2] https://github.com/containerd/stargz-snapshotter/blob/main/docs/integration.md#image-builders
[3] https://depot.dev
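For builders other than Depot, BuildKit can emit eStargz layers directly. A sketch of the invocation (image name is a placeholder; exact flags depend on your BuildKit version, see the integration docs in [2]):

```shell
# Build and push an image whose layers are compressed as eStargz,
# so lazy-pulling snapshotters can fetch individual files on demand.
buildctl build \
  --frontend dockerfile.v0 \
  --local context=. --local dockerfile=. \
  --output type=image,name=registry.example.com/app:latest,push=true,compression=estargz,oci-mediatypes=true,force-compression=true
```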
-
A Hidden Gem: Two Ways to Improve AWS Fargate Container Launch Times
Seekable OCI (SOCI) is a technology open-sourced by AWS that enables containers to launch faster by lazily loading the container image. It’s usually not possible to fetch individual files from gzipped tar files. With SOCI, AWS borrowed some of the design principles from stargz-snapshotter, but took a different approach. A SOCI index is generated separately from the container image and is stored in the registry as an OCI Artifact and linked back to the container image by OCI Reference Types. This means that the container images do not need to be converted, image digests do not change, and image signatures remain valid.
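Because the index lives beside the image rather than inside it, the SOCI workflow is roughly two commands with the `soci` CLI from soci-snapshotter (image name is a placeholder):

```shell
# Create a SOCI index for an existing, unmodified image...
soci create registry.example.com/app:latest
# ...and push the index to the registry as an OCI artifact,
# linked back to the image. The image digest never changes.
soci push registry.example.com/app:latest
```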
- containerd/stargz-snapshotter: Fast container image distribution plugin with lazy pulling
- EStargz: Lazy pull container images for faster cold starts
- How to optimize the security, size and build speed of Docker images
-
Speeding up LXC container pull by up to 3x
This is interesting and seems general purpose. Not merely for container images.
There’s this option for OCI containers which I don’t pretend to understand: https://github.com/containerd/stargz-snapshotter
It is used by containerd and nerdctl. You do have to build the image with it, but the resulting images work in any OCI-compatible registry. By fetching the most-used files first, the container can be started before loading has finished. Or so I gather.
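The "build the image with it" step the comment mentions can be done by converting an existing image with nerdctl (image names are placeholders):

```shell
# Convert a regular image into an eStargz (lazy-pullable) variant
# and push it to any OCI-compatible registry.
nerdctl image convert --estargz --oci \
  registry.example.com/app:latest registry.example.com/app:esgz
nerdctl push registry.example.com/app:esgz
```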
-
Optimizing Docker image size and why it matters
stargz is a gamechanger for startup time. You might not need to care about image size at all.
Kubernetes and Podman support it, and Docker support is likely coming. It lazy-loads the filesystem on startup, making network requests only for the files that are actually needed, so even large images can often start very fast.
https://github.com/containerd/stargz-snapshotter
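To see the lazy loading in action, you can run a pre-converted sample image with containerd's stargz snapshotter via nerdctl (assumes the stargz-snapshotter daemon is running; the image is one of the eStargz samples published under the stargz-containers org):

```shell
# The container starts while the bulk of the image is still remote;
# files are fetched over the network as they are first read.
nerdctl --snapshotter=stargz run --rm \
  ghcr.io/stargz-containers/node:13.13.0-esgz node --version
```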
-
FOSS News International #2: November 8-14, 2021
containerd/stargz-snapshotter: Fast container image distribution plugin with lazy pulling (github.com)
-
Introducing GKE image streaming for fast application startup and autoscaling
Yes, see https://github.com/containerd/stargz-snapshotter
docker-node
-
Standalone Next.js. When serverless is not an option
```dockerfile
FROM node:16-alpine AS base

FROM base AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci

FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# Production image, copy all the files and run next
FROM base AS runner
WORKDIR /app
ENV NODE_ENV production
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
EXPOSE 3000
ENV PORT 3000
ENV HOSTNAME localhost
CMD ["node", "server.js"]
```
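A typical build-and-run for a multi-stage Dockerfile like this (tag and port mapping are arbitrary):

```shell
# Build the image and run the standalone Next.js server on port 3000.
docker build -t next-standalone .
docker run --rm -p 3000:3000 next-standalone
```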
-
Deploying a Web Service on a Cloud VPS Using Kubernetes MicroK8s: A Comprehensive Guide
This instructs Docker to start building our image from an existing Node image based on Alpine Linux. Alpine is one of the smallest Linux distributions, which makes it well suited to building lightweight images.
-
.dockerignore being ignored by docker-compose? no space left on device
```dockerfile
FROM node:21-alpine AS base

FROM base AS builder
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
RUN apk update
# Set working directory
WORKDIR /app
# Install pnpm with corepack
RUN corepack enable && corepack prepare pnpm@latest --activate
# Enable `pnpm add --global` on Alpine Linux by setting home location
# environment variable to a location already in $PATH
# https://github.com/pnpm/pnpm/issues/784#issuecomment-1518582235
ENV PNPM_HOME=/usr/local/bin
RUN pnpm install turbo --global
COPY . .
RUN turbo prune web --docker

# Add lockfile and package.json's of isolated subworkspace
FROM base AS installer
RUN apk add --no-cache libc6-compat
RUN apk update
WORKDIR /app
# First install the dependencies (as they change less often)
COPY .gitignore .gitignore
COPY --from=builder /app/out/json/ .
COPY --from=builder /app/out/pnpm-workspace.yaml ./pnpm-workspace.yaml
COPY --from=builder /app/out/pnpm-lock.yaml ./pnpm-lock.yaml
RUN pnpm install
# Build the project
COPY --from=builder /app/out/full/ .
RUN pnpm turbo run build --filter=web

FROM base AS runner
WORKDIR /app
# Don't run production as root
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
USER nextjs
COPY --from=installer /app/apps/web/next.config.js .
COPY --from=installer /app/apps/web/package.json .
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=installer --chown=nextjs:nodejs /app/apps/web/.next/standalone ./
COPY --from=installer --chown=nextjs:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=installer --chown=nextjs:nodejs /app/apps/web/public ./apps/web/public
CMD node apps/web/server.js
```
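Since the thread is about `.dockerignore` being ignored, it's worth noting that Docker only honors the `.dockerignore` at the root of the build context. A minimal one for a monorepo build like the above might look like this (entries are illustrative):

```
# Keep local installs and build output out of the build context,
# otherwise COPY . . drags them in and the context can balloon.
node_modules
**/node_modules
.next
**/.next
.git
```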
-
WTF...Next.js app deployed with Docker?
```dockerfile
FROM node:18-alpine AS base

# Install dependencies only when needed
FROM base AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi

# Rebuild the source code only when needed
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build

# Production image, copy all the files and run next
FROM base AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Set the correct permission for prerender cache
RUN mkdir .next
RUN chown nextjs:nodejs .next
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
# Set hostname to 0.0.0.0 so the server listens on all interfaces
ENV HOSTNAME "0.0.0.0"
# server.js is created by next build from the standalone output
# https://nextjs.org/docs/pages/api-reference/next-config-js/output
CMD ["node", "server.js"]
```
-
Node.js built-ins on Deno Deploy
The official Docker image for Node is built from Alpine or Debian [1].
Forgive me if I don't believe that running a full OS on top of a host OS, just to run a single node command, amounts to anything less than running a VM.
[1] https://github.com/nodejs/docker-node/tree/main/20
-
Beginner recommendations
This is Node's Docker image.
-
Dockerize Your App: An Introduction to Docker
Since the project is written in Node.js, we need a Node.js environment; searching Docker Hub for "node" turns up the official Node image.
-
Managing upstream security fixes in uselagoon docker images
This node image is just one of a range published by the Node.js team (https://hub.docker.com/_/node), and they also have the Dockerfile for their build available
-
How can I get a container with the npm command? I can't find it on the internet.
do you mean container image? npm comes in the "node" container image https://hub.docker.com/_/node
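A quick way to confirm that npm ships inside the official image (the tag is arbitrary):

```shell
# No separate install needed: node and npm are both in the image.
docker run --rm node:20-alpine sh -c 'node --version && npm --version'
```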
-
nodejs docker on SCALE?
I'm trying to set up a Docker container for running a Node.js app. I'm a bit of a newbie when it comes to Docker, but from what I've read, all I need to do is "Launch docker image" and enter the image name from the Docker repo, in this case: https://hub.docker.com/_/node. My config looks like this. After that, the UI just says "deploying" forever. I must be missing something obvious; any ideas?
What are some alternatives?
kube-fledged - A kubernetes operator for creating and managing a cache of container images directly on the cluster worker nodes, so application pods start almost instantly
nvm - Node Version Manager - POSIX-compliant bash script to manage multiple active node.js versions
acr - Azure Container Registry samples, troubleshooting tips and references
klipper-web-control-docker - Klipper with Moonraker shipped with Fluidd and/or Mainsail
containerd - An open and reliable container runtime
berry - 📦🐈 Active development trunk for Yarn ⚒
soci-snapshotter - A containerd snapshotter plugin which enables standard OCI images to be lazily loaded without requiring a build-time conversion step.
docker-flutter - flutter docker image with full android sdk
snoop - Snoop, a reconnaissance tool based on open data (OSINT world)
colima - Container runtimes on macOS (and Linux) with minimal setup
uChmViewer - A fork of Kchmviewer, the best software for viewing .chm (MS HTML help) and .epub eBooks.
docker-openresty - Docker tooling for OpenResty