| | containers | kraken |
|---|---|---|
| Mentions | 9 | 14 |
| Stars | 191 | 5,860 |
| Growth | 3.1% | 0.7% |
| Activity | 8.7 | 3.5 |
| Last commit | 4 days ago | 8 days ago |
| Language | Dockerfile | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
containers
-
Need a VM for Java 11 and a specific Program - which distro to choose?
eclipse-temurin:11 https://hub.docker.com/_/eclipse-temurin
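A minimal Dockerfile on that base might look like the following sketch (the `app.jar` name is a placeholder for your own build artifact):

```dockerfile
# Eclipse Temurin 11; the JRE-only variant keeps the image smaller than the full JDK
FROM eclipse-temurin:11-jre

WORKDIR /opt/app
# Copy your pre-built application jar (placeholder name)
COPY app.jar app.jar

ENTRYPOINT ["java", "-jar", "app.jar"]
```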
-
CentOS 7 vs CentOS Stream vs Rocky vs Alma vs Debian vs Ubuntu for server
Then you build the container. That will pull a base image that already has Linux with Java on it, like this one: https://hub.docker.com/_/eclipse-temurin
- First steps in Java development in 2023: a personal guide
-
From Java to Golang and back
You can shrink the docker image greatly by starting with an Alpine based one like this https://hub.docker.com/_/eclipse-temurin
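One sketch of that shrinking approach is a multi-stage build, so only the Alpine-based JRE image ends up in the final layer (the Maven wrapper and jar path are assumptions about the project layout):

```dockerfile
# Build stage: full JDK on the default (larger) base
FROM eclipse-temurin:17-jdk AS build
WORKDIR /src
COPY . .
RUN ./mvnw -q package   # assumes a Maven wrapper in the repo

# Run stage: Alpine-based JRE image, typically much smaller
FROM eclipse-temurin:17-jre-alpine
WORKDIR /opt/app
COPY --from=build /src/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```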
-
MinIO passes 1B cumulative Docker Pulls
> Just imagine the vast number of poorly cached CI jobs pulling gigabytes from Docker Hub on every commit, coupled with naive approaches to CI/CD when doing microservices, prod/dev/test deployments, etc.
I hit the rate limits that others talk of in the comments, which motivated me to use Nexus for both proxying and storing my own container images.
So far, it's been pretty good, I actually wrote about the process on my blog, "Moving from GitLab Registry to Sonatype Nexus": https://blog.kronis.dev/tutorials/moving-from-gitlab-registr...
Another thing that I tried, however, was to only rely upon Docker Hub for the base images that I want (Ubuntu in my case) and then build everything I need on top of that, doing things like installing Java/Node/Python/Ruby/... manually, adding utilities I want across all of the images etc.
Once again, I wrote about it on my blog, "Using Ubuntu as the base for all of my containers": https://blog.kronis.dev/articles/using-ubuntu-as-the-base-fo...
That approach is absolutely more work, but also is something that's underexplored and works really nicely for me. Now I mostly rely on the OS package manager repositories (or mirrors of those), put less load on Docker Hub, don't risk running into its rate limits and also have common base layers across most of the images that I build, which in practice means less data actually needing to be downloaded to any of the servers where I want to utilize my images.
Of course, the downside is that getting something like PHP running was an absolute pain (tried with Apache, didn't work for some reason, then moved over to Nginx), and I technically miss out on some of the more complex space optimizations because if you look at the Dockerfiles for some of the more popular images, like OpenJDK, you'll occasionally see some interesting approaches, like getting the software package as a bunch of files and "installing" them directly, as opposed to using something like apt/yum: https://github.com/adoptium/containers/blob/08dd7d416cee0fe0...
Then again, personally I'd much prefer to rely on packages that I can get from something like apt directly, even if some of those versions can be a bit older (or add the project's official apt repositories as needed).
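A rough sketch of that Ubuntu-as-a-common-base idea (not the blog's actual Dockerfile; the package names are from Ubuntu 22.04's own repositories):

```dockerfile
# Shared base: plain Ubuntu plus common utilities, so all derived images reuse these layers
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates curl \
    && rm -rf /var/lib/apt/lists/*

# Java on top, straight from the distro repositories (may lag upstream releases)
RUN apt-get update && apt-get install -y --no-install-recommends \
        openjdk-17-jre-headless \
    && rm -rf /var/lib/apt/lists/*
```

The payoff is that every image built on this shares the same base layers, so each server only downloads them once.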
-
Question?
The FROM looks incorrect. When I watch the YouTube video, it mentions adoptopenjdk, which is deprecated (https://hub.docker.com/_/adoptopenjdk). You should now use https://hub.docker.com/_/eclipse-temurin/.
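In Dockerfile terms the fix is just swapping the base image; the exact old tag below is illustrative, not taken from the video:

```dockerfile
# Deprecated:
# FROM adoptopenjdk:11-jre-hotspot
# Use the Eclipse Temurin successor image instead:
FROM eclipse-temurin:11-jre
```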
- Uberjar hosting services?
-
Java eclipse-temurin:18.0.1_10-jre-alpine is out! Now what?
Eclipse Temurin maintains a rich collection of Java images.
-
Anyone using the Alpine Musl JDK builds in production?
Initially only 17 had a musl-native variant; 11 was added later, and very recently (6 days ago) 8 as well: https://github.com/adoptium/containers/issues/72
kraken
-
BTFS (BitTorrent Filesystem)
https://github.com/uber/kraken?tab=readme-ov-file#comparison...
"Kraken was initially built with a BitTorrent driver, however, we ended up implementing our P2P driver based on BitTorrent protocol to allow for tighter integration with storage solutions and more control over performance optimizations.
Kraken's problem space is slightly different than what BitTorrent was designed for. Kraken's goal is to reduce global max download time and communication overhead in a stable environment, while BitTorrent was designed for an unpredictable and adversarial environment, so it needs to preserve more copies of scarce data and defend against malicious or bad behaving peers.
Despite the differences, we re-examine Kraken's protocol from time to time, and if it's feasible, we hope to make it compatible with BitTorrent again."
-
Resilient image cache/mirror
Kraken seems unmaintained: https://github.com/uber/kraken/issues/313
-
DockerHub replacement strategy and options
Within your boundary of control, whether that be r/selfhosting, r/homelab, or enterprise, a small registry or something like Uber's Kraken registry makes more sense.
-
Docker is deleting Open Source organisations - what you need to know
First hit on Google is https://github.com/uber/kraken. Did not know such a thing exists.
-
MinIO passes 1B cumulative Docker Pulls
Uber Engineering open-sourced Kraken [1], their peer-to-peer docker registry. I remember it originally using the BitTorrent protocol but in their readme they now say it is "based on BitTorrent" due to different tradeoffs they needed to make.
As far as I know there aren't any projects doing peer-to-peer distribution of container images to servers, probably because it's useful to be able to use a stock docker daemon on your server. The Kraken page references Dragonfly [2] but I haven't grokked it yet, it might be that.
It's also possible that in practice you'd want your CI nodes optimized for compute because they're doing a lot of work, your registry hosts for bandwidth, and your servers again for compute, and having one daemon to rule them all seems elegant but is actually overgeneralized, and specialization is better.
1 https://github.com/uber/kraken
-
Ask HN: Have You Left Kubernetes?
If you're pulling big images you could try kube-fledged (it's the simplest option, a CRD that works like a pre-puller for your images), or if you have a big cluster you can try a p2p distributor, like kraken or dragonfly2.
Also there's that project called Nydus that allows starting up big containers way faster. IIRC, it starts the container before pulling the whole image and fetches data as needed from the registry.
https://github.com/senthilrch/kube-fledged
https://github.com/dragonflyoss/Dragonfly2
https://github.com/uber/kraken
https://nydus.dev/
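For the kube-fledged option mentioned above, pre-pulling is driven by an `ImageCache` custom resource. A hypothetical manifest might look like this (the API group/version is from recent kube-fledged releases and the image/node-selector values are placeholders; check the repo's sample manifests for your installed version):

```yaml
apiVersion: kubefledged.io/v1alpha2
kind: ImageCache
metadata:
  name: big-images
  namespace: kube-fledged
spec:
  cacheSpec:
    - images:
        - registry.example.com/team/big-app:1.0   # placeholder image
      nodeSelector:
        node-role.kubernetes.io/worker: ""        # optional: cache only on these nodes
```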
-
Kube-fledged: Cache Container Images in Kubernetes
Uber Kraken: Kraken is a P2P Docker registry capable of distributing TBs of data in seconds (URL: https://github.com/uber/kraken)
-
How to handle registry outages ? Registry outage contingency plans ?
Might want to consider a private p2p solution like https://github.com/uber/kraken or similar.
-
How to handle locally built container images across nodes? Is a container registry the only way?
Cost, availability, upkeep. Same as any other service. There are alternatives… https://github.com/uber/kraken
- Can Kubernetes pre-pull and cache images?