| | intents-operator | djinn |
|---|---|---|
| Mentions | 10 | 20 |
| Stars | 278 | 39 |
| Growth | 1.8% | - |
| Activity | 9.3 | 7.1 |
| Last commit | 4 days ago | 6 months ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
intents-operator
-
Otterize launches open-source, declarative IAM permissions for workloads on AWS EKS clusters
No more! The open-source intents-operator and credentials-operator enable you to achieve the same, except without all that work: do it all from Kubernetes, declaratively, and just-in-time, through the magic of IBAC (intent-based access control).
-
Alternative to Network Policies
As you've mentioned, it is not possible to define deny rules using the native NetworkPolicy resource. Instead, you could use your CNI's implementation of network policies: if you use Calico as your CNI, you can use Calico's network policies to create deny rules. You can also take a look at Otterize OSS, an open-source solution my team and I have been working on recently. It simplifies network policies by defining them from the client's perspective in a ClientIntents resource. You can use the network mapper to auto-generate those ClientIntents from the traffic in your cluster, then deploy them and let the intents-operator manage the network policies for you.
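To make that concrete, here is a minimal sketch of a ClientIntents resource as described above: a client declaring which server it intends to call, from which the intents-operator derives network policies. The apiVersion and workload names here are illustrative; check the Otterize docs for the schema your release uses.

```yaml
# Sketch of a ClientIntents resource: the "client" workload declares
# its intent to call "server", and the intents-operator generates the
# matching network policies. apiVersion is illustrative and may
# differ between releases.
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: client-intents
spec:
  service:
    name: client        # the workload making the calls
  calls:
    - name: server      # the workload it intends to call
```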
-
Did I miss something here, regarding network policies and helm templates? (Slightly ranty)
However, if you want to control pod-to-pod communication, you might be better served by managing network policies with ClientIntents, which let you specify which pods should communicate with which, from the client's point of view and without requiring labels beforehand. It's open source; have a look at the intents operator here: https://github.com/otterize/intents-operator
-
Can I create a NetworkPolicy with podSelector that matches a pod name instead of its labels?
You can try it out by installing an open source, standalone Kubernetes operator that implements them using network policies - https://github.com/otterize/intents-operator
-
Monthly 'Shameless Self Promotion' thread - 2022/12
Hi! I'm Tomer, the CEO of Otterize - a cloud-native open-source tool that makes secure access transparent for developers with a declarative approach to service-to-service authorization. Otterize allows you to automate the creation of network policies and Kafka ACLs in a Kubernetes cluster using a human-readable format. Just declare which services your code intends to call using a Kubernetes custom resource, and access will be granted automatically while blocking anything else. Give it a try! It's free and takes 5 min to get started. https://github.com/otterize/intents-operator
-
Creating network policies for pods with services
You can use https://github.com/otterize/intents-operator to easily configure network policies using only pod names by specifying logical connections (a->b, c->b), and the operator configures network policies and labels for cluster resources automatically.
- otterize/intents-operator: Manage network policies and Kafka ACLs in a Kubernetes cluster with ease.
- Show HN: Intents Operator, turns dev intent into K8s netpolicies and Kafka ACLs
-
What's your take on Zero Trust for Kubernetes?
I'm very passionate about this as I think cybersecurity and ops people lean too far into control -- controlling people, that is, not just programs, and they end up shooting themselves in the foot. Instead, I think you should make it easy for devs in your team to create the right access controls, and that this is the only way to achieve zero trust. Zero-trust inherently relies on all access being intentional and authorized, so if other engineers don't declare which access their code needs, it's impossible to achieve. There's an open source Kubernetes operator that aims to get this concept right with network policies and Kafka ACLs - make it easy for one person to declare which access is intentional and start rolling out zero trust using network policies, and have the access control policy live alongside the client code. Check it out at https://github.com/otterize/intents-operator. Full disclosure - I'm one of the contributors, so I'm a bit biased ;) I'm there on the Slack, so feel free to hit me up (Ori).
-
Manage network policies and Kafka ACLs in a Kubernetes cluster with ease
Hi all, I’m Tomer @Otterize. We just launched an open-source tool to easily automate the creation of network policies and Kafka ACLs in a Kubernetes cluster using a human-readable format, via a custom resource. Check it out - https://github.com/otterize/intents-operator
djinn
-
Monthly 'Shameless Self Promotion' thread - 2022/12
Djinn CI is a newly launched CI platform, with the following features:
-
Act: Run your GitHub Actions locally
I've built a CI platform [1] that does support running your CI builds without the server using an offline runner. I wrote about it here before: https://blog.djinn-ci.com/showcase/2022/08/06/running-your-c...
[1] - https://about.djinn-ci.com/
-
Djinn CI – open-source CI platform
Author of Djinn CI here. This is a CI platform I developed; it is open source, but there is also a hosted offering at https://about.djinn-ci.com. Some of the features are detailed below:
* Fully virtualized Linux VMs
* GitHub/GitLab integration
* Variable masking
* Configurable artifact cleanup limits
* Multi-repository builds
* Repeatable builds with cron jobs
* Custom QCOW2 images for builds
I've written some posts demonstrating the features of the platform which I have posted here before:
* https://blog.djinn-ci.com/showcase/2022/08/06/running-your-c...
* https://blog.djinn-ci.com/showcase/2022/08/16/using-multiple...
For further reading there is also the documentation sub-site at https://docs.djinn-ci.com/.
If you have any questions don't hesitate to reach out.
-
Blazing fast CI with MicroVMs
Good article. Firecracker is something that has definitely piqued my interest when it comes to quickly spinning up a throwaway environment for either development or CI. I run a CI platform [1] which currently uses QEMU for the build environments (Docker is also supported, but currently disabled on the hosted offering). Startup times are OK, but a boot time of 1-2s is definitely highly appealing. I will have to investigate Firecracker further to see if I could incorporate it into what I'm doing.
Julia Evans has also written about Firecracker in the past [2][3].
[1] - https://about.djinn-ci.com
[2] - https://jvns.ca/blog/2021/01/23/firecracker--start-a-vm-in-l...
[3] - https://news.ycombinator.com/item?id=25883253
-
From WampServer, to Vagrant, to QEMU
At this point in my hobbyist development I had moved past PHP and started learning Go, and was looking to do some serious development with it for a CI platform I had an idea for. By now I had a firmer grasp of the software stack I wanted to work with, and a better understanding of how everything pieced together. And so I went about developing that CI platform, which would later become Djinn CI. I uninstalled VirtualBox and Vagrant and fully committed to using QEMU; booting up the local machine was as simple as hitting CTRL + R in my terminal, searching for qemu, and hitting enter. An elegant solution, I know.
-
Looking for a mature distributed task queuer/scheduler in go
I use mcmathja/curlyq and found it pretty reliable. This is the queue I use for Djinn CI, an open-source CI platform I developed.
-
Using multiple repositories in your CI builds
Djinn CI makes working with multiple repositories in a build simple via the sources parameter in the build manifest. This allows you to specify multiple Git repositories to clone into your build environment. Each source is a URL that can be cloned via git clone. With most CI platforms, a build's manifest is typically tied to the source code repository itself. With Djinn CI, whilst you can have a build manifest in a source code repository, the CI server itself doesn't really have an understanding of that repository. Instead, it simply looks at the sources specified in the manifest and clones each of them into the build environment.
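As a sketch of what that looks like, here is a hypothetical build manifest cloning two repositories; the driver/sources/stages/jobs layout follows the Djinn CI documentation, while the repository URLs and commands are made up for illustration.

```yaml
# Hypothetical Djinn CI build manifest with multiple sources.
# Each entry under "sources" is a URL cloned into the build
# environment via git clone.
driver:
  type: qemu
  image: debian/stable
sources:
- https://github.com/example/service
- https://github.com/example/library
stages:
- build
jobs:
- stage: build
  commands:
  - ls service library   # both clones are available in the build
```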
-
Running your CI builds without the server
Perhaps the one feature that sets Djinn CI apart from other CI platforms is that it has an offline runner. The offline runner allows CI builds to be run without having to send them to the server. There are some limitations around this, of course, but it provides a useful mechanism for sanity-checking build manifests, testing custom images, and building software without the need for a CI server.
-
Show HN: OneDev – A Lightweight Gitlab Alternative
You mention CI being done in a distributed fashion. Could you elaborate on what you mean by this?
I'm asking as I'm someone who has developed a CI platform [1], and one of its features is the offline runner [2]. The offline runner allows you to run your CI builds on your own computer, and does not communicate with the CI server whatsoever. Is this what you had in mind?
[1] https://about.djinn-ci.com
[2] https://docs.djinn-ci.com/user/offline-runner/
-
Monthly 'Shameless Self Promotion' thread - 2022/06
Djinn CI is a newly launched CI platform, with the following features:
What are some alternatives?
kubelet-csr-approver - Kubernetes controller to enable automatic kubelet CSR validation after a series of (configurable) security checks
gatus - ⛑ Automated developer-oriented status page
certify - :lock: Create private CA and Issue Certificates without hassle
tracetest - 🔭 Tracetest - Build integration and end-to-end tests in minutes, instead of days, using OpenTelemetry and trace-based testing.
network-mapper - Map Kubernetes traffic: in-cluster, to the Internet, and to AWS IAM and export as text, intents, or an image
packj - Packj stops :zap: Solarwinds-, ESLint-, and PyTorch-like attacks by flagging malicious/vulnerable open-source dependencies ("weak links") in your software supply-chain
argocd-example-apps - Example Apps to Demonstrate Argo CD
atuin - ✨ Magical shell history
ziti - The parent project for OpenZiti. Here you will find the executables for a fully zero trust, application embedded, programmable network @OpenZiti
onedev - Git Server with CI/CD, Kanban, and Packages. Seamless integration. Unparalleled experience.
Lux - Lux is a command-line interface for controlling and monitoring Govee lighting, built in Go.
anteon - Anteon (formerly Ddosify) - Effortless Kubernetes Monitoring and Performance Testing. Available on CLI, Self-Hosted, and Cloud