k3s-oci-cluster VS ingress-nginx

Compare k3s-oci-cluster and ingress-nginx to see how they differ.

k3s-oci-cluster

Deploy a Kubernetes cluster for free, using k3s and Oracle always free resources (by garutilorenzo)

ingress-nginx

Ingress-NGINX Controller for Kubernetes (by kubernetes)
                 k3s-oci-cluster                        ingress-nginx
Mentions         5                                      160
Stars            121                                    14,285
Growth           -                                      1.4%
Activity         8.0                                    9.3
Latest commit    5 days ago                             2 days ago
Language         HCL                                    Go
License          GNU General Public License v3.0 only   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

k3s-oci-cluster

Posts with mentions or reviews of k3s-oci-cluster. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-04-14.
  • Deploy Kubernetes (K8s) on Amazon AWS using mixed on-demand and spot instances
    7 projects | dev.to | 14 Apr 2022
    We use the same stack used in this repository. This stack needs Longhorn and the NGINX ingress controller.
  • Deploy a Kubernetes cluster for free, using K3s and Oracle always free resources
    4 projects | dev.to | 22 Feb 2022
    Variables (required first, then optional):
      region (required): set the correct OCI region based on your needs
      availability_domain (required): set the correct availability domain. See how to find the availability domain
      compartment_ocid (required): set the correct compartment ocid. See how to find the compartment ocid
      cluster_name (required): the name of your K3s cluster. Default: k3s-cluster
      k3s_token (required): the token of your K3s cluster. How to generate a random token
      my_public_ip_cidr (required): your public ip in cidr format (Example: 195.102.xxx.xxx/32)
      environment (required): current work environment (Example: staging/dev/prod). This value is used to tag all the deployed resources
      compute_shape (optional): compute shape to use. Default: VM.Standard.A1.Flex. NOTE: this compute shape is mandatory to provision 4 always-free VMs
      os_image_id (optional): image id to use. Default image: Canonical-Ubuntu-20.04-aarch64-2022.01.18-0. See how to list all available OS images
      oci_core_vcn_cidr (optional): VCN CIDR. Default: oci_core_vcn_cidr
      oci_core_subnet_cidr10 (optional): first subnet CIDR. Default: 10.0.0.0/24
      oci_core_subnet_cidr11 (optional): second subnet CIDR. Default: 10.0.1.0/24
      oci_identity_dynamic_group_name (optional): dynamic group name. This dynamic group will contain all the instances of this specific compartment. Default: Compute_Dynamic_Group
      oci_identity_policy_name (optional): policy name. This policy will allow the dynamic group 'oci_identity_dynamic_group_name' to read the OCI API without auth. Default: Compute_To_Oci_Api_Policy
      kube_api_port (optional): kube API port. Default: 6443
      public_lb_shape (optional): LB shape for the public LB. Default: flexible. NOTE: this shape is mandatory to provision two always-free LBs (public and private)
      http_lb_port (optional): HTTP port used by the public LB. Default: 80
      https_lb_port (optional): HTTPS port used by the public LB. Default: 443
      k3s_server_pool_size (optional): number of k3s servers deployed. Default: 2
      k3s_worker_pool_size (optional): number of k3s workers deployed. Default: 2
      install_longhorn (optional): boolean value, install Longhorn ("Cloud native distributed block storage for Kubernetes"). Default: true
      longhorn_release (optional): Longhorn release. Default: v1.2.3
      unique_tag_key (optional): unique tag name used for tagging all the deployed resources. Default: k3s-provisioner
      unique_tag_value (optional): unique value used with unique_tag_key. Default: https://github.com/garutilorenzo/k3s-oci-cluster
      PATH_TO_PUBLIC_KEY (optional): path to your public ssh key. Default: "~/.ssh/id_rsa.pub"
      PATH_TO_PRIVATE_KEY (optional): path to your private ssh key. Default: "~/.ssh/id_rsa"
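The required variables above can be passed to Terraform on the command line. A minimal sketch follows; every value (region, availability domain, OCID, IP) is a placeholder for illustration, and the token generation shown is one common approach rather than the repository's documented one:

```shell
# Sketch: apply the k3s-oci-cluster Terraform module with the required
# variables set. All values below are placeholders, not real resources.
terraform init

# Generate a random k3s token (one possible approach).
K3S_TOKEN=$(head -c 32 /dev/urandom | base64 | tr -d '/+=')

terraform apply \
  -var 'region=eu-milan-1' \
  -var 'availability_domain=AD-1' \
  -var 'compartment_ocid=ocid1.compartment.oc1..example' \
  -var 'cluster_name=k3s-cluster' \
  -var "k3s_token=${K3S_TOKEN}" \
  -var 'my_public_ip_cidr=203.0.113.10/32' \
  -var 'environment=staging'
```

Variables can equally be placed in a terraform.tfvars file; the command-line form just makes the required set explicit.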

ingress-nginx

Posts with mentions or reviews of ingress-nginx. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-03.
  • When is Kubernetes getting HTTP/3?
    2 projects | dev.to | 3 Feb 2023
    For example, ubuntu.com (which I work on during my day job) still doesn't support HTTP/3. This is because getting it into Kubernetes seems to be taking a while. It sounds like it won't land until NGINX merge it into their stable release.
  • How to use ACM public certificate for Nginx ingress controller?
    3 projects | reddit.com/r/kubernetes | 26 Jan 2023
    Also, on a personal note, I highly recommend you use the "ingress-nginx" controller, which has a huge community and is of much higher quality and flexibility than the "nginx-ingress" controller by NGINX Inc. I've had a lot of success with dozens of clients with this controller. It rocks!
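One quick way to tell which of the two controllers a cluster is running is to inspect the controller image: the community controller ships from registry.k8s.io/ingress-nginx, while NGINX Inc.'s uses the nginx/nginx-ingress image. A sketch (the ingress-nginx namespace is an assumption about where the controller was installed):

```shell
# Sketch: print the container images of deployments in the ingress
# namespace. The community controller's image path contains
# "ingress-nginx/controller"; NGINX Inc.'s contains "nginx-ingress".
kubectl get deploy -n ingress-nginx \
  -o jsonpath='{range .items[*]}{.spec.template.spec.containers[0].image}{"\n"}{end}'
```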
  • Create applications directly in Kubernetes with Acorn …
    7 projects | dev.to | 21 Dec 2022
    $ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
    100 11345  100 11345    0     0  45436      0 --:--:-- --:--:-- --:--:-- 45562
    Downloading https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz
    Verifying checksum... Done.
    Preparing to install helm into /usr/local/bin
    helm installed into /usr/local/bin/helm
    $ helm ls
    NAME    NAMESPACE    REVISION    UPDATED    STATUS    CHART    APP VERSION
    $ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    "ingress-nginx" has been added to your repositories
    $ helm repo update
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "ingress-nginx" chart repository
    Update Complete. ⎈Happy Helming!⎈
    $ helm install ingress-nginx ingress-nginx/ingress-nginx \
        --create-namespace \
        --namespace ingress-nginx
    NAME: ingress-nginx
    LAST DEPLOYED: Wed Dec 21 21:39:08 2022
    NAMESPACE: ingress-nginx
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    The ingress-nginx controller has been installed.
    It may take a few minutes for the LoadBalancer IP to be available.
    You can watch the status by running 'kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller'
    $ kubectl get po,svc -A
    NAMESPACE        NAME                                            READY   STATUS    RESTARTS   AGE
    kube-system      pod/local-path-provisioner-79f67d76f8-5rxs8     1/1     Running   0          17m
    kube-system      pod/coredns-597584b69b-g2rbs                    1/1     Running   0          17m
    kube-system      pod/metrics-server-5c8978b444-hp822             1/1     Running   0          17m
    metallb-system   pod/controller-84d6d4db45-j49mj                 1/1     Running   0          10m
    metallb-system   pod/speaker-pnzxw                               1/1     Running   0          10m
    metallb-system   pod/speaker-rs7ds                               1/1     Running   0          10m
    metallb-system   pod/speaker-pr4rq                               1/1     Running   0          10m
    ingress-nginx    pod/ingress-nginx-controller-8574b6d7c9-frtw7   1/1     Running   0          2m47s
    NAMESPACE        NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
    default          service/kubernetes                           ClusterIP      10.43.0.1       <none>           443/TCP                      18m
    kube-system      service/kube-dns                             ClusterIP      10.43.0.10      <none>           53/UDP,53/TCP,9153/TCP       18m
    kube-system      service/metrics-server                       ClusterIP      10.43.249.247   <none>           443/TCP                      18m
    metallb-system   service/webhook-service                      ClusterIP      10.43.4.24      <none>           443/TCP                      10m
    ingress-nginx    service/ingress-nginx-controller-admission   ClusterIP      10.43.125.197   <none>           443/TCP                      2m47s
    ingress-nginx    service/ingress-nginx-controller             LoadBalancer   10.43.110.31    10.124.110.210   80:32381/TCP,443:30453/TCP   2m47s
  • nginx ingress pod not being deployed with using values.yaml
    2 projects | reddit.com/r/kubernetes | 6 Dec 2022
    Verify that the values in your values.yaml file match the values in the example file from the nginx ingress controller's Helm chart (https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml). In particular, check that all required values are present and that they are in the correct format.
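A practical way to do that comparison is to dump the chart's default values and diff them against your own file, which quickly surfaces misnamed or misplaced keys. A sketch, assuming your overrides live in ./values.yaml:

```shell
# Sketch: fetch the chart's default values and diff them against local
# overrides. The ./values.yaml path is an assumption.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm show values ingress-nginx/ingress-nginx > /tmp/ingress-nginx-defaults.yaml
diff /tmp/ingress-nginx-defaults.yaml ./values.yaml
```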
  • How hard is it to deploy kubernetes on bare metal in 2022 ?
    2 projects | reddit.com/r/kubernetes | 28 Nov 2022
    Set up nginx-ingress https://github.com/kubernetes/ingress-nginx - again either via helm or manifest, set to use 'Loadbalancer', which will assign an ip out of the available MetalLB pool.
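Sketched out, the nginx-ingress half of that setup might look like the following; the chart's controller.service.type already defaults to LoadBalancer, so the --set flag below only makes the choice explicit:

```shell
# Sketch: install ingress-nginx with a LoadBalancer service so MetalLB
# assigns it an address from its pool (assumes MetalLB is already
# configured with an address pool).
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=LoadBalancer

# Watch until EXTERNAL-IP changes from <pending> to a MetalLB pool address.
kubectl --namespace ingress-nginx get svc ingress-nginx-controller -w
```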
  • Using Authorizer with DynamoDB and EKS
    5 projects | dev.to | 18 Nov 2022
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-nginx \
      --create-namespace \
      --timeout 600s \
      --debug \
      --set controller.publishService.enabled=true
  • Which nginx ingress you use and recommend? Community or Opensource?
    4 projects | reddit.com/r/kubernetes | 18 Nov 2022
    Community version - based on NGINX Open Source, maintained by the Kubernetes community with a commitment from NGINX teams github - https://github.com/kubernetes/ingress-nginx docs - https://kubernetes.github.io/ingress-nginx/
  • A Brief Interview with Common Lisp Creator Dr. Scott Fahlman
    3 projects | news.ycombinator.com | 12 Nov 2022
  • Deploy and globally expose a multi-cluster application via K8GB and Liqo …
    11 projects | dev.to | 11 Nov 2022
    make[3]: Leaving directory '/root/k8gb'
    Deploy Ingress
    helm repo add --force-update nginx-stable https://kubernetes.github.io/ingress-nginx
    "nginx-stable" has been added to your repositories
    helm repo update
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "k8gb" chart repository
    ...Successfully got an update from the "nginx-stable" chart repository
    Update Complete. ⎈Happy Helming!⎈
    helm -n k8gb upgrade -i nginx-ingress nginx-stable/ingress-nginx \
      --version 4.0.15 -f deploy/ingress/nginx-ingress-values.yaml
    Release "nginx-ingress" does not exist. Installing it now.
    NAME: nginx-ingress
    LAST DEPLOYED: Fri Nov 11 19:54:37 2022
    NAMESPACE: k8gb
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    The ingress-nginx controller has been installed.
    It may take a few minutes for the LoadBalancer IP to be available.
    You can watch the status by running 'kubectl --namespace k8gb get services -o wide -w nginx-ingress-ingress-nginx-controller'
    An example Ingress that makes use of the controller:
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: example
        namespace: foo
      spec:
        ingressClassName: nginx
        rules:
          - host: www.example.com
            http:
              paths:
                - backend:
                    service:
                      name: exampleService
                      port:
                        number: 80
                  path: /
        # This section is only required if TLS is to be enabled for the Ingress
        tls:
          - hosts:
              - www.example.com
            secretName: example-tls
    If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
      apiVersion: v1
      kind: Secret
      metadata:
        name: example-tls
        namespace: foo
      data:
        tls.crt: <base64 encoded cert>
        tls.key: <base64 encoded key>
      type: kubernetes.io/tls
    make[3]: Entering directory '/root/k8gb'
    Deploy GSLB cr
    kubectl apply -f deploy/crds/test-namespace.yaml
    namespace/test-gslb created
    sed -i 's/cloud\.example\.com/cloud.example.com/g' "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr.yaml"
    kubectl apply -f "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr.yaml"
    gslb.k8gb.absa.oss/test-gslb created
    git checkout -- "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr.yaml"
    sed -i 's/cloud\.example\.com/cloud.example.com/g' "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr_failover.yaml"
    kubectl apply -f "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr_failover.yaml"
    gslb.k8gb.absa.oss/test-gslb-failover created
    git checkout -- "deploy/crds/k8gb.absa.oss_v1beta1_gslb_cr_failover.yaml"
    Deploy podinfo
    kubectl apply -f deploy/test-apps
    service/unhealthy-app created
    deployment.apps/unhealthy-app created
    helm repo add podinfo https://stefanprodan.github.io/podinfo
    "podinfo" has been added to your repositories
    helm upgrade --install frontend --namespace test-gslb -f deploy/test-apps/podinfo/podinfo-values.yaml \
      --set ui.message="`kubectl -n k8gb describe deploy k8gb | awk '/CLUSTER_GEO_TAG/ { printf $2 }'`" \
      --set image.repository="ghcr.io/stefanprodan/podinfo" \
      podinfo/podinfo \
      --version 5.1.1
    Release "frontend" does not exist. Installing it now.
    NAME: frontend
    LAST DEPLOYED: Fri Nov 11 19:54:41 2022
    NAMESPACE: test-gslb
    STATUS: deployed
    REVISION: 1
    NOTES:
    1. Get the application URL by running these commands:
      echo "Visit http://127.0.0.1:8080 to use your application"
      kubectl -n test-gslb port-forward deploy/frontend-podinfo 8080:9898
    make[3]: Leaving directory '/root/k8gb'
    Wait until Ingress controller is ready
    kubectl -n k8gb wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=600s
    pod/nginx-ingress-ingress-nginx-controller-7cs7g condition met
    pod/nginx-ingress-ingress-nginx-controller-qx5db condition met
    test-gslb1 deployed!
    make[2]: Leaving directory '/root/k8gb'
    make[2]: Entering directory '/root/k8gb'
    Deploy local cluster test-gslb2
    kubectl config use-context k3d-test-gslb2
    Switched to context "k3d-test-gslb2".
    Create namespace
    kubectl apply -f deploy/namespace.yaml
    namespace/k8gb created
    Deploy GSLB operator from v0.10.0
    make deploy-k8gb-with-helm
    make[3]: Entering directory '/root/k8gb'
    # create rfc2136 secret
    kubectl -n k8gb create secret generic rfc2136 --from-literal=secret=96Ah/a2g0/nLeFGK+d/0tzQcccf9hCEIy34PoXX2Qg8= || true
    secret/rfc2136 created
    helm repo add --force-update k8gb https://www.k8gb.io
    "k8gb" has been added to your repositories
    cd chart/k8gb && helm dependency update
    walk.go:74: found symbolic link in path: /root/k8gb/chart/k8gb/LICENSE resolves to /root/k8gb/LICENSE. Contents of linked file included and used
    Getting updates for unmanaged Helm repositories...
    ...Successfully got an update from the "https://absaoss.github.io/coredns-helm" chart repository
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "k8gb" chart repository
    ...Successfully got an update from the "podinfo" chart repository
    ...Successfully got an update from the "nginx-stable" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading coredns from repo https://absaoss.github.io/coredns-helm
    Deleting outdated charts
    helm -n k8gb upgrade -i k8gb k8gb/k8gb -f "" \
      --set k8gb.clusterGeoTag='us' --set k8gb.extGslbClustersGeoTags='eu' \
      --set k8gb.reconcileRequeueSeconds=10 \
      --set k8gb.dnsZoneNegTTL=10 \
      --set k8gb.imageTag=v0.10.0 \
      --set k8gb.log.format=simple \
      --set k8gb.log.level=debug \
      --set rfc2136.enabled=true \
      --set k8gb.edgeDNSServers[0]=172.18.0.1:1053 \
      --set externaldns.image=absaoss/external-dns:rfc-ns1 \
      --wait --timeout=2m0s
    Release "k8gb" does not exist. Installing it now.
    NAME: k8gb
    LAST DEPLOYED: Fri Nov 11 19:55:03 2022
    NAMESPACE: k8gb
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    done
    [k8gb ASCII-art banner]
    & all dependencies are installed
    1. Check if your DNS Zone is served by K8GB CoreDNS
      $ kubectl -n k8gb run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools --command -- /usr/bin/dig @k8gb-coredns SOA . +short
    If everything is fine then you are expected to see similar output:

What are some alternatives?

When comparing k3s-oci-cluster and ingress-nginx you can also consider the following projects:

traefik - The Cloud Native Application Proxy

metallb - A network load-balancer implementation for Kubernetes using standard routing protocols

emissary - open source Kubernetes-native API gateway for microservices built on the Envoy Proxy

haproxy-ingress - HAProxy Ingress

application-gateway-kubernetes-ingress - This is an ingress controller that can be run on Azure Kubernetes Service (AKS) to allow an Azure Application Gateway to act as the ingress for an AKS cluster.

external-dns - Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services

oauth2-proxy - A reverse proxy that provides authentication with Google, Azure, OpenID Connect and many more identity providers.

k8s-helm-helmfile - Project which compares 3 approaches to deploy apps on Kubernetes cluster (using kubectl, helm & helmfile)

skaffold - Easy and Repeatable Kubernetes Development

Harbor - An open source trusted cloud native registry project that stores, signs, and scans content.

cilium-cli - CLI to install, manage & troubleshoot Kubernetes clusters running Cilium

etcd - Distributed reliable key-value store for the most critical data of a distributed system