dagger vs nixpkgs

| | dagger | nixpkgs |
| --- | --- | --- |
| Mentions | 92 | 962 |
| Stars | 9,986 | 15,311 |
| Growth | 4.1% | 6.3% |
| Activity | 9.9 | 10.0 |
| Latest commit | about 3 hours ago | about 13 hours ago |
| Language | Go | Nix |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dagger
-
Nix is a better Docker image builder than Docker's image builder
Since there are a plethora of dagger projects, lazyweb: https://github.com/dagger/dagger#readme
They also recently released their "github actions" replacement <https://news.ycombinator.com/item?id=39550431> but holy hell their documentation is just aggressively bad
I couldn't point to one page in the docs that gives the tl;dr or explains what problem this is solving.
https://docs.dagger.io/quickstart/562821/hello just emits "Hello, world!" which is fantastic if you're writing a programming language but less helpful if you're trying to replace a CI/CD pipeline. Then, https://docs.dagger.io/quickstart/292472/arguments doubles down on that fallacy by going whole hog into "if you need printf in your pipeline, dagger's got your back". The subsequent pages have a lot of English with few concrete examples of what's being shown.
I summarized my complaint in the linked thread as "less cowsay in the examples", but to be honest there are umpteen bazillion GitHub Actions out in the world (not the very least of which: your own GHA pipelines use some https://github.com/dagger/dagger/blob/v0.10.2/.github/workfl... https://github.com/dagger/dagger/blob/v0.10.2/.github/workfl...), so demonstrate to a potential user how they'd run any such pipeline in dagger, locally, or in Jenkins, or whatever, by leveraging reusable CI functions that set up go or run trivy
Related to that, I was going to say "try incorporating some of the dagger that builds dagger", but while digging up an example, it seems that dagger doesn't make use of the functions yet <https://github.com/dagger/dagger/tree/v0.10.2/ci#readme>, which is made worse by the perpetual reference to them by their internal codename of Zenith. So, even if it's not invoked by CI yet, pointing to a WIP PR or branch to give folks who have CI/CD problems in their head something concrete to map onto how GHA or GitLabCI or Jenkins would do it would go a long way
-
Testcontainers
> GHA has "service containers", but unfortunately the feature is too basic to address real-world use cases: it assumes a container image can just … boot! … and only talk to the code via the network. Real-world use cases often require serialized steps between the test & the dependencies (e.g., to create or init database dirs, set up certs, etc.)
My biased recommendation is to write a custom Dagger function, and run it in your GHA workflow. https://dagger.io
If you find me on the Dagger discord, I will gladly write a code snippet summarizing what I have in mind, based on what you explained of your CI stack. We use GHA ourselves and use this pattern to great effect.
Disclaimer: I work there :)
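For a concrete picture, here is a minimal sketch of that pattern in a GHA workflow. The function name `integration-test`, the install location, and the job layout are all assumptions for illustration, not something from this thread:

```yaml
# Hypothetical workflow: install the Dagger CLI, then call a custom
# Dagger function that starts the test's service dependencies itself.
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Dagger CLI
        run: curl -fsSL https://dl.dagger.io/dagger/install.sh | sh
      - name: Run tests with their service dependencies
        run: ./bin/dagger call integration-test --source=.
```

Because the serialized setup (init database dirs, certs, etc.) lives inside the Dagger function rather than in GHA YAML, the same invocation also runs locally.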
-
BuildKit in depth: Docker's build engine explained
Dagger (https://dagger.io) is a great way to use BuildKit through language SDKs. It's such a better paradigm, I cannot imagine going back.
Dagger is by the same folks who brought us Docker. This is their fresh take on solving the problem of container building and much more. BuildKit can do more than build images, and Dagger unlocks that for you.
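As an illustration of what "BuildKit through language SDKs" looks like, here is a minimal Go SDK sketch (not from the comment; it requires the `dagger.io/dagger` module and a running container runtime, and the image tag is arbitrary):

```go
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()

	// Connect starts (or attaches to) a Dagger engine backed by BuildKit.
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Each chained call becomes a cached BuildKit operation.
	out, err := client.Container().
		From("alpine:3.19").
		WithExec([]string{"echo", "hello from BuildKit"}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

The chain reads like a Dockerfile, but it is ordinary Go: you can loop, branch, and factor steps into functions.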
-
Cloud, why so difficult? 🤷‍♀️
And suddenly, it's almost painfully obvious where all the pain came from. Cloud applications today are simply a patchwork of disconnected pieces. I have a compiler for my infrastructure, another for my functions, another for my containers, another for my CI/CD pipelines. Each one takes its job super seriously, and keeps me safe and happy inside each of these machines, but my application is not running on a single machine anymore, my application is running on the cloud.
-
Share your DevOps setups
That said I've been moving my CI/CD to https://dagger.io/ which has been FANTASTIC. It's code based so you can define all your pipelines in Go, Python, or Javascript and they all run on containers so I can run actions locally without any special setup. Highly recommended.
-
What’s with DevOps engineers using `make` of all things?
You are right make is arcane. But it gets the job done. There are new exciting things happening in this area. Check out https://dagger.io.
-
Shellcheck finds bugs in your shell scripts
> but I'm not convinced it's ready to replace Gitlab CI.
The purpose of Dagger is not to replace your entire CI (Gitlab in your case). As you can see from our website (https://dagger.io/engine), it works and integrates with all the current CI providers. Where Dagger really shines is in helping you and your teams move all the artisanal scripts encoded in YAML into actual code, run in containers through a fluent SDK written in your language of choice. This unlocks a lot of benefits which are detailed in our docs (https://docs.dagger.io/).
> Dagger has one very big downside IMO: It does not have native integration with Gitlab, so you end up having to use Docker-in-Docker and just running dagger as a job in your pipeline.
This is not correct. Dagger doesn't depend on Docker. We're just conveniently using Docker (and other container runtimes) as it's generally available pretty much everywhere by default as a way to bootstrap the Dagger Engine. You can read more about the Dagger architecture here: https://github.com/dagger/dagger/blob/main/core/docs/d7yxc-o...
As you can see from our docs (https://docs.dagger.io/759201/gitlab-google-cloud/#step-5-cr...), we're leveraging the *default* Gitlab CI `docker` service to bootstrap the engine. There's no `docker-in-docker` happening there.
> It clumps all your previously separated steps into a single step in the Gitlab pipeline.
This is also not the case, we should definitely improve our docs to reflect that. You can organize your dagger pipelines in multiple functions and call them in separate Gitlab jobs as you're currently doing. For example, you can do the following:
```yaml
# .gitlab-ci.yml (illustrative sketch; the original snippet was lost in
# extraction, and these job and function names are hypothetical)
build:
  script:
    - dagger call build

test:
  script:
    - dagger call test
```
-
Cicada – A FOSS, Cross-Platform Version of GitHub Actions and Gitlab CI
Check out https://dagger.io/. Write declarative pipelines in code, reproducibly run anywhere.
-
Show HN: Togomak – declarative pipeline orchestrator based on HCL and Terraform
Is this similar to Dagger[1]?
nixpkgs
-
Combining Nix with Terraform for better DevOps
We’ve noticed that some users have been asking about how to use older versions of Terraform in their Nix setups [1, 2]. This is an example of the diverse needs of people and the importance of maintaining backward compatibility. We hope that nixpkgs-terraform will be a useful tool for these users.
-
Nix is a better Docker image builder than Docker's image builder
I think what whateveracct was referring to is this link:
https://github.com/NixOS/nixpkgs/blob/master/pkgs/developmen...
What that file does is build a package; it is essentially a combination of what a Makefile and an RPM spec file do.
I don't know if you're familiar with those tools, but if you aren't, it takes some time to learn them well enough to understand what is happening. So why would it be different here?
That doesn't happen in a single thread, but e.g. asynchronous multithreaded code can emit values in arbitrary order, and depending on what you do you can end up with a different result (floating point is just an example). Generally, you can't guarantee reproducibility because there's too much hardware state that can't be isolated even in a VM. Sure, 99% of software doesn't depend on it or do cursed stuff like microarchitecture probing during building, and you won't care until you try to package some automated tests for a game physics engine or something like that. What can happen, inevitably happens.
Actually, we don't need to look for such contrived examples: nixpkgs tracks the packages that aren't reproducible for much more trivial reasons:
https://github.com/NixOS/nixpkgs/issues?q=is%3Aopen+is%3Aiss...
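The floating-point point above can be made concrete. Here is a tiny Go sketch (not from the thread) showing that summation order alone changes the result, which is exactly what arbitrary thread interleaving produces:

```go
// Floating-point addition is not associative, so code that sums values
// in a nondeterministic order can produce different bit-exact results.
package main

import "fmt"

func main() {
	x, y, z := 0.1, 0.2, 0.3
	a := (x + y) + z // one possible accumulation order
	b := x + (y + z) // another possible accumulation order
	fmt.Println(a == b) // prints: false
	fmt.Println(a, b)   // prints: 0.6000000000000001 0.6
}
```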
- trim boto3/botocore, to remove all the stuff I did not use; that sucker on its own is over 100MB
What you need to understand is that the packages primarily target the NixOS operating system, where in a normal situation you have plenty of disk space and you rather want all features to be available (because why not?). So you end up with a bunch of dependencies that you don't need. The Alpine image, for example, was designed for Docker, so the goal with all its packages is to disable extra bells and whistles.
This is why your result is bigger.
To build a small image you will need to use override and disable all that unnecessary shit. Look at zulu for example:
https://github.com/NixOS/nixpkgs/blob/master/pkgs/developmen...
you add alsa, fontconfig (probably comes with entire X11), freetype, xorg (oh, nvm fontconfig, it's added explicitly), cups, gtk, cairo and ffmpeg
Notice how your friend carefully extracts and places only needed files in the container, while you just bundle the entire zulu package with all of its dependencies in your project.
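As a sketch of that `override` pattern (the package and flag names here are hypothetical; read the package's derivation, linked above, to find the real override arguments):

```nix
# Hypothetical example: flag names differ per package, so check the
# derivation's function arguments before relying on any of these.
pkgs.some-jdk.override {
  enableGtk = false;
  enableAlsa = false;
  enableFfmpeg = false;
}
```

Disabling a feature flag drops the corresponding dependency from the closure, which is what ultimately shrinks the image.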
-
Use Ansible to create and start LXD virtual machines
```shell
#!/usr/bin/env nix-shell
#! nix-shell -i bash
#! nix-shell -p sops
#! nix-shell -I https://github.com/NixOS/nixpkgs/archive/refs/tags/23.05.tar.gz
source config.sh "$@"
```
-
What AI assistants are already bundled for Linux?
NixOS just got tabbyml[1], which is built on llama-cpp. I'm working on systemd services this weekend and updating to the latest tabbyml release, which supports ROCm in addition to CUDA.
-
Contributing Scrutiny to Nixpkgs
It's easy to open a PR, but not so easy to get someone to actually review it.
There are currently 165 open PRs by first-time contributors adding a new package, some of which have been sitting there without review comments for years. https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+label%3A%22...
At least they're meticulously labeled so it's easy to find them.
-
I Just Wanted Emacs to Look Nice – Using 24-Bit Color in Terminals
-
Going declarative on macOS with Nix and Nix-Darwin
I'm also using NixOS and working on Go projects, and had to deal with out-of-date Go releases. Nixpkgs generally does get the latest Go versions pretty quickly, but only in the unstable channels, they're not backported to NixOS releases. You can just grab that one package out of nixpkgs-unstable or nixos-unstable, like:
```nix
(import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/nixpkgs-unstable.tar.gz") {}).go_1_21
```
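For a per-project setup, the same fetchTarball trick can be wrapped in a shell.nix. A minimal sketch (the `go_1_21` attribute exists only while that Go version is in the unstable snapshot):

```nix
let
  # Take Go from unstable, everything else from the system's pinned channel.
  unstable = import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/nixpkgs-unstable.tar.gz") { };
  pkgs = import <nixpkgs> { };
in pkgs.mkShell {
  packages = [ unstable.go_1_21 ];
}
```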
-
NixOS: Declarative Builds and Deployments
> What exactly would this "cleaner base" look like?
My interpretation would be something like: the abandonment of software that is so poorly designed that it is difficult to package and/or run under Nix.
This commit message (from one of my commits) details some of the struggles supporting Ruby under Nix:
https://github.com/NixOS/nixpkgs/commit/b6c06e216bb3bface40e...
Each of those problems is due to either:
1. Some unmotivated contrivance in Bundler, where the maintainers refused to make their stuff less needlessly broken, or
2. Ruby programmers in general not programming with packaging in mind (haven't touched Ruby/Rails professionally in a while, but when I did, it was par for the course to rsync/capistrano files around -- no one saw the utility of any sort of packaging)
And the two really reinforce each other. Bundler is the de facto way to declare and pin dependencies at the app level, but then Bundler makes it nearly impossible (see the commit message for details) to package software using Bundler, which reinforces the "fuck it, we'll just rsync files around over SSH", which means no one pressures Bundler to Do The Right Thing.
It's the same thing everywhere else. There are complaints elsewhere in this comment section about the nodejs/npm experience on Nix: same underlying problem. The design behind npm is so unnecessarily shit-tacular that it kinda sorta just barely works on its tier 1 platforms. I don't envy the brave souls that have worked on supporting npm packages on Nix.
What are some alternatives?
earthly - Super simple build framework with fast, repeatable builds and an instantly familiar syntax – like Dockerfile and Makefile had a baby.
asdf - Extendable version manager with support for Ruby, Node.js, Elixir, Erlang & more
Home Manager using Nix - Manage a user environment using Nix [maintainer=@rycee]
pipeline - A cloud-native Pipeline resource.
git-lfs - Git extension for versioning large files
easyeffects - Limiter, compressor, convolver, equalizer and auto volume and many other plugins for PipeWire applications
spack - A flexible package manager that supports multiple versions, configurations, platforms, and compilers.
gitlab-ci-local - Tired of pushing to test your .gitlab-ci.yml?
waydroid - Waydroid uses a container-based approach to boot a full Android system on a regular GNU/Linux system like Ubuntu.
nixos - My NixOS Configurations
youtube-dl-gui - A cross-platform GUI for youtube-dl made in Electron and node.js
devshell - Per project developer environments