iamlive vs dagger

| | iamlive | dagger |
|---|---|---|
| Mentions | 30 | 93 |
| Stars | 2,952 | 10,287 |
| Growth | - | 2.9% |
| Activity | 6.2 | 9.9 |
| Latest Commit | 2 months ago | 2 days ago |
| Language | Go | Go |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
iamlive
-
Why has AWS made IAM Actions impossible to find?
Also things like this (same guy) if you have a sandbox to play in with wider permissions and are trying to build a more scoped profile: https://github.com/iann0036/iamlive
- iann0036/iamlive: Generate an IAM policy from AWS calls using client-side monitoring (CSM) or embedded proxy
-
Why Companies Still Struggle with Least Privilege in the Cloud?
I know there is a tool called iamlive that logs all API calls on your local machine. So you can run commands as an admin user locally while this is running, and find out what permissions were needed. Then you tear down the infra you just deployed, and add those same permissions to a service user of some kind (e.g. a CICD role) to avoid over-privileging it. It's messy but it can be helpful.
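The capture step described above looks roughly like this with iamlive's client-side-monitoring mode (flags per the iamlive README; the output path is illustrative):

```shell
# Terminal 1: start iamlive and write the accumulating policy to a file.
# --set-ini adds csm_enabled = true to ~/.aws/config so the AWS CLI/SDKs
# emit call metadata that iamlive listens for locally.
iamlive --set-ini --output-file policy.json

# Terminal 2: run your admin-level commands as usual
aws s3 ls
terraform apply
```

When you stop iamlive, `policy.json` holds a least-privilege policy covering exactly the actions that were called, ready to attach to the service role.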
-
AWS Creates New Policy-Based Access Control Language Cedar
Actually, Ian (an AWS Hero) has a tool that does exactly this:
https://github.com/iann0036/iamlive
- Permissions Map
- iamlive
-
Show HN: Slauth.io (YC S22) – IAM Policy Auto-Generation
I have used https://github.com/iann0036/iamlive with great success in the past. At a high level, the approach you are describing is iamlive on steroids with improved UX.
Kudos on launch, will check your beta
- IAM Live
-
Pike: Tool to determine your IAM requirements from code
Thanks! Permissions are determined per resource or data source. There's no easy way that I have found, especially if you want this done statically; https://github.com/iann0036/iamlive does it by inspecting your API calls, but there's always a lookup somewhere. Hopefully I'll manage to get a few community contributions and get the ball rolling; I've made it as easy as I could to add support for other resources without you really having to know Go.
-
The End of CI
IAM isn’t fun, but there are lots of options.
https://pypi.org/project/access-undenied-aws/ will allow you to start with least privilege and fix specific issues.
https://github.com/iann0036/iamlive allows an admin to perform the action via CLI and capture the policy.
Access advisor can inspect how you actually use the role and give suggestions on what to remove.
A more helpful suggestion is to experiment with these tools and then find gaps in IAM actions and submit those as feature requests via your TAM.
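The Access Advisor step above can also be scripted with the AWS CLI's last-accessed APIs (the role ARN is a placeholder); services a role has never actually used are candidates to strip from its policy:

```shell
# Ask IAM to compile last-accessed data for a role, then fetch the report
job_id=$(aws iam generate-service-last-accessed-details \
  --arn arn:aws:iam::123456789012:role/my-ci-role \
  --query JobId --output text)

# List service namespaces the role is allowed to use but has never touched
aws iam get-service-last-accessed-details --job-id "$job_id" \
  --query 'ServicesLastAccessed[?TotalAuthenticatedEntities==`0`].ServiceNamespace'
```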
dagger
- Dagger: Programmable open source CI/CD engine that runs pipelines in containers
-
Nix is a better Docker image builder than Docker's image builder
The problem is that I couldn't point to one page in the docs that shows the tl;dr, or "what problem is this solving."
https://docs.dagger.io/quickstart/562821/hello just emits "Hello, world!", which is fantastic if you're writing a programming language but less helpful if you're trying to replace a CI/CD pipeline. Then https://docs.dagger.io/quickstart/292472/arguments doubles down on that fallacy by going whole hog into "if you need printf in your pipeline, Dagger's got your back". The subsequent pages have a lot of English with few concrete examples of what's being shown.
I summarized my complaint in the linked thread as "less cowsay in the examples", but to be honest there are umpteen bazillion GitHub Actions out in the world, not least of which the ones your own GHA pipelines use: https://github.com/dagger/dagger/blob/v0.10.2/.github/workfl... https://github.com/dagger/dagger/blob/v0.10.2/.github/workfl... So demonstrate to a potential user how they'd run any such pipeline in Dagger — locally, or in Jenkins, or whatever — by leveraging reusable CI functions that set up Go or run trivy.
Related to that, I was going to say "try incorporating some of the dagger that builds dagger" but while digging up an example, it seems that dagger doesn't make use of the functions yet <https://github.com/dagger/dagger/tree/v0.10.2/ci#readme> which is made worse by the perpetual reference to them as their internal codename of Zenith. So, even if it's not invoked by CI yet, pointing to a WIP PR or branch or something to give folks who have CI/CD problems in their head something concrete to map into how GHA or GitLabCI or Jenkins or something would go a long way
-
Testcontainers
> GHA has "service containers", but unfortunately the feature is too basic to address real-world use cases: it assumes a container image can just … boot! … and only talk to the code via the network. Real world use cases often require serialized steps between the test & the dependencies, e.g., to create or init database dirs, set up certs, etc.)
My biased recommendation is to write a custom Dagger function, and run it in your GHA workflow. https://dagger.io
If you find me on the Dagger discord, I will gladly write a code snippet summarizing what I have in mind, based on what you explained of your CI stack. We use GHA ourselves and use this pattern to great effect.
Disclaimer: I work there :)
-
BuildKit in depth: Docker's build engine explained
Dagger (https://dagger.io) is a great way to use BuildKit through language SDKs. It's such a better paradigm, I cannot imagine going back.
Dagger is by the same folks who brought us Docker. This is their fresh take on solving the problem of container building and much more. BuildKit can do more than build images, and Dagger unlocks that for you.
-
Cloud, why so difficult? 🤷‍♀️
And suddenly, it's almost painfully obvious where all the pain came from. Cloud applications today are simply a patchwork of disconnected pieces. I have a compiler for my infrastructure, another for my functions, another for my containers, another for my CI/CD pipelines. Each one takes its job super seriously, and keeps me safe and happy inside each of these machines, but my application is not running on a single machine anymore, my application is running on the cloud.
-
Share your DevOps setups
That said I've been moving my CI/CD to https://dagger.io/ which has been FANTASTIC. It's code based so you can define all your pipelines in Go, Python, or Javascript and they all run on containers so I can run actions locally without any special setup. Highly recommended.
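The pattern described here — pipelines as ordinary code that run in containers — looks roughly like this with the Dagger Go SDK; the base image and test command are illustrative, and it assumes a local container runtime is available to bootstrap the engine:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()

	// Connect to (or bootstrap) the Dagger engine
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Run the test suite in a container, mounting the host checkout
	out, err := client.Container().
		From("golang:1.22").
		WithDirectory("/src", client.Host().Directory(".")).
		WithWorkdir("/src").
		WithExec([]string{"go", "test", "./..."}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```

The same program runs unchanged on a laptop or inside a CI job, which is what makes running actions locally without special setup possible.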
-
What’s with DevOps engineers using `make` of all things?
You are right, make is arcane. But it gets the job done. There are new, exciting things happening in this area. Check out https://dagger.io.
-
Shellcheck finds bugs in your shell scripts
> but I'm not convinced it's ready to replace Gitlab CI.
The purpose of Dagger is not to replace your entire CI (GitLab in your case). As you can see from our website (https://dagger.io/engine), it works with and integrates into all the current CI providers. Where Dagger really shines is helping you and your teams move all the artisanal scripts encoded in YAML into actual code, run in containers through a fluent SDK written in your language of choice. This unlocks a lot of benefits, which are detailed in our docs (https://docs.dagger.io/).
> Dagger has one very big downside IMO: It does not have native integration with Gitlab, so you end up having to use Docker-in-Docker and just running dagger as a job in your pipeline.
This is not correct. Dagger doesn't depend on Docker. We're just conveniently using Docker (and other container runtimes) as it's generally available pretty much everywhere by default as a way to bootstrap the Dagger Engine. You can read more about the Dagger architecture here: https://github.com/dagger/dagger/blob/main/core/docs/d7yxc-o...
As you can see from our docs (https://docs.dagger.io/759201/gitlab-google-cloud/#step-5-cr...), we're leveraging the *default* Gitlab CI `docker` service to bootstrap the engine. There's no `docker-in-docker` happening there.
> It clumps all your previously separated steps into a single step in the Gitlab pipeline.
This is also not the case, though we should definitely improve our docs to reflect that. You can organize your Dagger pipelines in multiple functions and call them in separate GitLab jobs, as you're currently doing. For example, you can do the following:
```.gitlab-ci.yml
# Sketch only — job names and the ./ci entrypoint are illustrative.
# Each GitLab job invokes a separate Dagger function, so the steps
# stay separate in the GitLab pipeline UI.
build:
  script:
    - go run ./ci build

test:
  script:
    - go run ./ci test
```
-
Cicada – A FOSS, Cross-Platform Version of GitHub Actions and Gitlab CI
Check out https://dagger.io/. Write declarative pipelines in code, reproducibly run anywhere.
-
Show HN: Togomak – declarative pipeline orchestrator based on HCL and Terraform
Is this similar to Dagger[1] ?
[1] https://dagger.io
What are some alternatives?
aws-leastprivilege - Generates an IAM policy for the CloudFormation service role that adheres to least privilege.
earthly - Super simple build framework with fast, repeatable builds and an instantly familiar syntax – like Dockerfile and Makefile had a baby.
consoleme - A Central Control Plane for AWS Permissions and Access
pipeline - A cloud-native Pipeline resource.
policy_sentry - IAM Least Privilege Policy Generator
gitlab-ci-local - Tired of pushing to test your .gitlab-ci.yml?
iamzero - Identity & Access Management simplified and secure.
act - Run your GitHub Actions locally 🚀
iamlive-lambda-extension - Lambda Extension for iamlive
aws-cdk - The AWS Cloud Development Kit is a framework for defining cloud infrastructure in code
trailscraper - A command-line tool to get valuable information out of AWS CloudTrail
dagster - An orchestration platform for the development, production, and observation of data assets.