Deploy JHipster Microservices to GCP with Kubernetes

This page summarizes the projects mentioned and recommended in the original post on dev.to

  • spring-cloud-gateway

    An API Gateway built on Spring Framework and Spring Boot providing routing and more.

  • CAUTION: Spring Cloud no longer supports Netflix Zuul. An open issue proposes adding Spring MVC/Servlet support to Spring Cloud Gateway; it's scheduled for implementation before the end of 2021.

  • java-microservices-examples

    Java Microservices: Spring Boot, Spring Cloud, JHipster, Spring Cloud Config, and Spring Cloud Gateway

The `jhipster k8s` sub-generator asks a series of questions:

```
Which applications?
Set up monitoring? No
Which applications with clustered databases? select store
Admin password for JHipster Registry:
Kubernetes namespace: demo
Docker repository name:
Command to push Docker image: docker push
Enable Istio? No
Kubernetes service type? LoadBalancer
Use dynamic storage provisioning? Yes
Use a specific storage class?
```

NOTE: If you don't want to publish your images on Docker Hub, leave the Docker repository name blank.

After I answered these questions, my `k8s/.yo-rc.json` file had the following contents:

```json
{
  "generator-jhipster": {
    "appsFolders": ["blog", "gateway", "store"],
    "directoryPath": "../",
    "clusteredDbApps": ["store"],
    "serviceDiscoveryType": "eureka",
    "jwtSecretKey": "NDFhMGY4NjF...",
    "dockerRepositoryName": "mraible",
    "dockerPushCommand": "docker push",
    "kubernetesNamespace": "demo",
    "kubernetesServiceType": "LoadBalancer",
    "kubernetesUseDynamicStorage": true,
    "kubernetesStorageClassName": "",
    "ingressDomain": "",
    "monitoring": "no",
    "istio": false
  }
}
```

I already showed you how to get everything working with Docker Compose in the previous tutorial. So today, I'd like to show you how to run things locally with Minikube.

## Install Minikube to Run Kubernetes Locally

If you have Docker installed, you can run Kubernetes locally with Minikube. Run `minikube start` to begin.

```shell
minikube --cpus 8 start
```

CAUTION: If this doesn't work, use `brew install minikube`, or see Minikube's installation instructions.

This command will start Minikube with 16 GB of RAM and 8 CPUs. Unfortunately, the default, which is 16 GB of RAM and two CPUs, did not work for me.

You can skip ahead to creating your Docker images while you wait for this to complete. After this command executes, it'll print out a message and notify you which cluster and namespace are being used.

```
🏄 Done!
```
```
kubectl is now configured to use "minikube" cluster and "default" namespace by default
```

TIP: You can stop Minikube with `minikube stop` and start over with `minikube delete`.

## Create Docker Images with Jib

Now, you need to build Docker images for each app. In the gateway, blog, and store directories, run the following Gradle command, where `<image-name>` is gateway, store, or blog. These commands should also be in the window where you ran `jhipster k8s`, so you can copy them from there.

```shell
./gradlew bootJar -Pprod jib -Djib.to.image=<docker-repo-name>/<image-name>
```

## Create Private Docker Images

You can also build your images locally and publish them to your Docker daemon. This is the default if you didn't specify a base Docker repository name.

```shell
# this command exposes Docker images to minikube
eval $(minikube docker-env)
./gradlew -Pprod bootJar jibDockerBuild
```

Because this publishes your images locally to Docker, you'll need to modify your Kubernetes deployment files to use `imagePullPolicy: IfNotPresent`.

```yaml
- name: gateway-app
  image: gateway
  imagePullPolicy: IfNotPresent
```

Make sure to add this `imagePullPolicy` to the following files:

- `k8s/gateway-k8s/gateway-deployment.yml`
- `k8s/blog-k8s/blog-deployment.yml`
- `k8s/store-k8s/store-deployment.yml`

## Register an OIDC App for Auth

You've now built Docker images for your microservices, but you haven't seen them running. First, you'll need to configure Okta for authentication and authorization.

Before you begin, you'll need a free Okta developer account. Install the Okta CLI and run `okta register` to sign up for a new account. If you already have an account, run `okta login`. Then, run `okta apps create jhipster`. Select the default app name, or change it as you see fit. Accept the default Redirect URI values provided for you.

JHipster ships with JHipster Registry.
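Looping back to the image builds for a moment: the per-app Jib commands can be scripted. Below is a minimal sketch; the Docker repository name `mraible` is the one from my `.yo-rc.json` above and is an assumption for your setup, and the actual Gradle invocation is left commented out so you can run it per app.

```shell
# Build the Jib image name for each app and (optionally) build/push it.
# DOCKER_REPO is an assumption -- substitute your own Docker Hub username.
DOCKER_REPO=mraible
IMAGES=""
for app in gateway blog store; do
  image="${DOCKER_REPO}/${app}"
  IMAGES="${IMAGES}${IMAGES:+ }${image}"
  echo "would build ${image}"
  # Uncomment to actually build and push from each app directory:
  # (cd "../${app}" && ./gradlew bootJar -Pprod jib -Djib.to.image="${image}")
done
echo "${IMAGES}"
```

The same loop works for `jibDockerBuild` if you're publishing to your local Docker daemon instead.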
The registry acts as a Eureka server for service discovery and contains a Spring Cloud Config server for distributing your configuration settings. Update `k8s/registry-k8s/application-configmap.yml` to contain your OIDC settings from the `.okta.env` file the Okta CLI just created. The Spring Cloud Config server reads from this file and shares the values with the gateway and microservices.

```yaml
data:
  application.yml: |-
    ...
    spring:
      security:
        oauth2:
          client:
            provider:
              oidc:
                issuer-uri: https://<your-okta-domain>/oauth2/default
            registration:
              oidc:
                client-id: <client-id>
                client-secret: <client-secret>
```

To configure the JHipster Registry to use OIDC for authentication, modify `k8s/registry-k8s/jhipster-registry.yml` to enable the `oauth2` profile.

```yaml
- name: SPRING_PROFILES_ACTIVE
  value: prod,k8s,oauth2
```

Now that you've configured everything, it's time to see it in action.

## Start Your Spring Boot Microservices with K8s

In the `k8s` directory, start your engines!

```shell
./kubectl-apply.sh -f
```

You can see if everything starts up using the following command.

```shell
kubectl get pods -n demo
```

You can use the name of a pod with `kubectl logs` to tail its logs.

```shell
kubectl logs <pod-name> --tail=-1 -n demo
```

You can use port-forwarding to see the JHipster Registry.

```shell
kubectl port-forward svc/jhipster-registry -n demo 8761
```

Open a browser and navigate to http://localhost:8761. You'll need to sign in with your Okta credentials.

Once all is green, use port-forwarding to see the gateway app.

```shell
kubectl port-forward svc/gateway -n demo 8080
```

Then, go to http://localhost:8080, and you should be able to add blogs, posts, tags, and products.

You can also automate testing to ensure that everything works.
Set your Okta credentials as environment variables and run end-to-end tests using Cypress (from the gateway directory).

```shell
export CYPRESS_E2E_USERNAME=<your-username>
export CYPRESS_E2E_PASSWORD=<your-password>
npm run e2e
```

## Plain Text Secrets? Uggh!

You may notice that I used a secret in plain text in the `application-configmap.yml` file. Secrets in plain text are a bad practice! I hope you didn't check everything into source control yet!!

## Encrypt Your Secrets with Spring Cloud Config

The JHipster Registry has an encryption mechanism you can use to encrypt your secrets. That way, it's safe to store them in public repositories. Add an `ENCRYPT_KEY` to the environment variables in `k8s/registry-k8s/jhipster-registry.yml`.

```yaml
- name: ENCRYPT_KEY
  value: really-long-string-of-random-characters-that-you-can-keep-safe
```

TIP: You can use JShell to generate a UUID for your encrypt key: run `jshell`, type `UUID.randomUUID()`, and quit by typing `/exit`.

Restart your JHipster Registry containers from the `k8s` directory.

```shell
./kubectl-apply.sh -f
```

## Encrypt Your OIDC Client Secret

You can encrypt your client secret by logging into http://localhost:8761 and going to Configuration > Encryption. If this address doesn't resolve, you'll need to port-forward again.

```shell
kubectl port-forward svc/jhipster-registry -n demo 8761
```

Copy and paste your client secret from `application-configmap.yml` (or `.okta.env`) and click Encrypt. Then, copy the encrypted value back into `application-configmap.yml`. Make sure to wrap it in quotes! You can also use curl:

```shell
curl -X POST http://admin:<password>@localhost:8761/config/encrypt -d your-client-secret
```

If you use curl, make sure to add `{cipher}` to the beginning of the string. For example:

```yaml
client-secret: "{cipher}1b12934716c32d360c85f651a0793df2777090c..."
```
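If you'd rather not fire up JShell for the encrypt key, a random key can also be generated directly in the shell. This is a sketch, not from the original post; it assumes `/dev/urandom` and `od`, both standard on Linux and macOS.

```shell
# Generate 32 random bytes and print them as 64 hex characters --
# a reasonable stand-in for the JShell UUID tip above.
ENCRYPT_KEY=$(od -An -tx1 -N32 /dev/urandom | tr -d ' \n')
echo "${ENCRYPT_KEY}"
```

Paste the resulting value into the `ENCRYPT_KEY` environment variable shown earlier.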
Apply these changes and restart all deployments.

```shell
./kubectl-apply.sh -f
kubectl rollout restart deploy -n demo
```

Verify everything still works at http://localhost:8080.

TIP: If you don't want to restart the Spring Cloud Config server when you update its configuration, see Refresh the Configuration in Your Spring Cloud Config Server.

## Change Spring Cloud Config to use Git

You might want to store your app's configuration externally. That way, you don't have to redeploy everything to change values. Good news! Spring Cloud Config makes it easy to switch to Git instead of the filesystem to store your configuration.

In `k8s/registry-k8s/jhipster-registry.yml`, find the following variables:

```yaml
- name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_TYPE
  value: native
- name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_SEARCH_LOCATIONS
  value: file:./central-config
```

Below these values, add a second lookup location.

```yaml
- name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_TYPE
  value: git
- name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_URI
  value: https://github.com/mraible/reactive-java-ms-config/
- name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_SEARCH_PATHS
  value: config
- name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_LABEL
  value: main
```

Create a GitHub repo that matches the URI, path, and branch you entered. In my case, I created `reactive-java-ms-config` and added a `config/application.yml` file in the `main` branch. Then, I added my `spring.security.*` values to it and removed them from `k8s/registry-k8s/application-configmap.yml`. See Spring Cloud Config's Git Backend docs for more information.

## Deploy Spring Boot Microservices to Google Cloud (aka GCP)

It's nice to see things running locally on your machine, but it's even better to get to production! In this section, I'll show you how to deploy your containers to Google Cloud.
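Before moving on, here's a runnable sketch of the repository layout the Git-backed composite configuration above expects. The repo name, the `config` search path, and the `main` label all come from the environment variables shown earlier; the YAML contents are illustrative placeholders.

```shell
# Recreate, locally, the shape of the config repo the Git backend will clone.
mkdir -p reactive-java-ms-config/config
cat > reactive-java-ms-config/config/application.yml <<'EOF'
# values moved out of k8s/registry-k8s/application-configmap.yml
spring:
  security:
    oauth2:
      client:
        registration:
          oidc:
            client-secret: "{cipher}..."
EOF
ls reactive-java-ms-config/config
```

Push this structure to the repository named in `SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_URI` and the config server will pick it up on the `main` branch.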
First, stop Minikube if you were running it previously.

```shell
minikube stop
```

You can also use `kubectl` commands to switch clusters.

```shell
kubectl config get-contexts
kubectl config use-context XXX
```

The cool kids use `kubectx` and `kubens` to set the default context and namespace. You can learn how to install and use them via the kubectx GitHub project.

## Create a Container Registry on Google Cloud

Before the JHipster 7.0.0 release, I tested this microservice example with Kubernetes and Google Cloud. I found many solutions in Ray Tsang's Spring Boot on GCP Guides. Thanks, Ray!

To start with Google Cloud, you'll need an account and a project. Sign up for Google Cloud Platform (GCP), log in, and create a project. Open a console in your browser. A GCP project contains all cloud services and resources--such as virtual machines, network, load balancers--that you might use.

TIP: You can also download and install the gcloud CLI if you want to run things locally.

Enable the Google Kubernetes Engine API and Container Registry:

```shell
gcloud services enable container.googleapis.com containerregistry.googleapis.com
```

## Create a Kubernetes Cluster

Run the following command to create a cluster for your apps.

```shell
gcloud container clusters create CLUSTER_NAME \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --enable-autorepair \
  --enable-autoupgrade
```

I called my cluster `reactive-ms`. See GCP's zones and machine-types for other options. I found the `n1-standard-4` to be the minimum for JHipster.

You created Docker images earlier to run with Minikube. Then, those images were deployed to Docker Hub or your local Docker registry. If you deployed to Docker Hub, you can use your deployment files as-is. For Google Cloud and its Kubernetes engine (GKE), you can also publish your images to your project's registry.
Thankfully, publishing to your project's registry is easy with Jib. Navigate to the gateway directory and run:

```shell
./gradlew bootJar -Pprod jib -Djib.to.image=gcr.io/<your-project-id>/gateway
```

You can get your project ID by running `gcloud projects list`.

Repeat the process for blog and store. You can run these processes in parallel to speed things up.

```shell
cd ../blog
./gradlew bootJar -Pprod jib -Djib.to.image=gcr.io/<your-project-id>/blog
cd ../store
./gradlew bootJar -Pprod jib -Djib.to.image=gcr.io/<your-project-id>/store
```

TIP: You might have to run `gcloud auth configure-docker` for Jib to publish to your GCP container registry.

Then, in your `k8s/**/*-deployment.yml` files, add `gcr.io/<your-project-id>/` as a prefix. Remove the `imagePullPolicy` if you specified it earlier. For example:

```yaml
containers:
  - name: gateway-app
    image: gcr.io/jhipster7/gateway
    env:
```

In the `k8s` directory, apply all the deployment descriptors to run all your images.

```shell
./kubectl-apply.sh -f
```

You can monitor the progress of your deployments with `kubectl get pods -n demo`.

TIP: If you make a mistake configuring JHipster Registry and need to redeploy it, you can do so with the following commands:

```shell
kubectl apply -f registry-k8s/jhipster-registry.yml -n demo
kubectl rollout restart statefulset/jhipster-registry -n demo
```

You'll need to restart all your deployments if you changed any configuration settings that services need to retrieve.

```shell
kubectl rollout restart deploy -n demo
```

## Access Your Gateway on Google Cloud

Once everything is up and running, get the external IP of your gateway.

```shell
kubectl get svc gateway -n demo
```

You'll need to add the external IP address as a valid redirect to your Okta OIDC app. Run `okta login`, open the returned URL in your browser, and sign in to the Okta Admin Console. Go to the Applications section, find your application, and edit it.
Add the standard JHipster redirect URIs using the IP address. For example, http://34.71.48.244:8080/login/oauth2/code/oidc for the login redirect URI, and http://34.71.48.244:8080 for the logout redirect URI.

You can use the following commands to set your gateway's IP address as a variable you can curl.

```shell
EXTERNAL_IP=$(kubectl get svc gateway -ojsonpath="{.status.loadBalancer.ingress[0].ip}" -n demo)
curl $EXTERNAL_IP:8080
```

Run `open http://$EXTERNAL_IP:8080`, and you should be able to sign in.

Great! Now that you know things work, let's integrate better security, starting with HTTPS.

## Add HTTPS to Your Reactive Gateway

You should always use HTTPS. It's one of the easiest ways to secure things, especially with the free certificates offered these days. Ray Tsang's External Load Balancing docs were a big help in figuring out all these steps.

You'll need a static IP to assign to your TLS (the official name for HTTPS) certificate.

```shell
gcloud compute addresses create gateway-ingress-ip --global
```

You can run the following command to make sure it worked.

```shell
gcloud compute addresses describe gateway-ingress-ip --global --format='value(address)'
```

Then, create a `k8s/ingress.yml` file:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "gateway-ingress-ip"
spec:
  rules:
    - http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: gateway
                port:
                  number: 8080
```

Deploy it and make sure it worked.

```shell
kubectl apply -f ingress.yml -n demo

# keep running this command until it displays an IP address
# (hint: up arrow recalls the last command)
kubectl get ingress gateway -n demo
```

To use a TLS certificate, you must have a fully qualified domain name and configure it to point to the IP address.
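If you do own a domain, pointing it at the static IP is a single DNS A record. An illustrative zone-file line (the hostname is a placeholder; the IP is the example address used above):

```
gateway.example.com.  300  IN  A  34.71.48.244
```

Most registrars expose the same thing through a "host records" form rather than a raw zone file.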
If you don't have a real domain, you can use nip.io. Set the IP in a variable, as well as the domain.

```shell
EXTERNAL_IP=$(kubectl get ingress gateway -ojsonpath="{.status.loadBalancer.ingress[0].ip}" -n demo)
DOMAIN="${EXTERNAL_IP}.nip.io"

# Prove it works
echo $DOMAIN
curl $DOMAIN
```

To create a certificate, create a `k8s/certificate.yml` file.

```shell
cat << EOF > certificate.yml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: gateway-certificate
spec:
  domains:
    # Replace the value with your domain name
    - ${DOMAIN}
EOF
```

Add the certificate to `ingress.yml`:

```yaml
...
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "gateway-ingress-ip"
    networking.gke.io/managed-certificates: "gateway-certificate"
...
```

Deploy both files:

```shell
kubectl apply -f certificate.yml -f ingress.yml -n demo
```

Check your certificate's status until it prints `Status: ACTIVE`:

```shell
kubectl describe managedcertificate gateway-certificate -n demo
```

While you're waiting, you can proceed to forcing HTTPS in the next step.

## Force HTTPS with Spring Security

Spring Security's WebFlux support makes it easy to redirect to HTTPS. However, if you redirect all requests to HTTPS, the Kubernetes health checks will fail because they receive a 302 instead of a 200.

Crack open `SecurityConfiguration.java` in the gateway project and add the following code to the `springSecurityFilterChain()` method.

```java
http.redirectToHttps(redirect -> redirect
    .httpsRedirectWhen(e -> e.getRequest().getHeaders().containsKey("X-Forwarded-Proto"))
);
```

Rebuild the Docker image for the gateway project.
```shell
./gradlew bootJar -Pprod jib -Djib.to.image=gcr.io/<your-project-id>/gateway
```

Run the following command to start a rolling restart of gateway instances:

```shell
kubectl rollout restart deployment gateway -n demo
```

TIP: Run `kubectl get deployments` to see your deployment names.

Now you should get a 302 when you access your domain. HTTPie is a useful alternative to curl.

Update your Okta OIDC app to have `https://${DOMAIN}/login/oauth2/code/oidc` as a valid redirect URI. Add `https://${DOMAIN}` to the sign-out redirect URIs too.

## Encrypt Your Kubernetes Secrets

Congratulations! Now you have everything running on GKE, using HTTPS! However, you have a lot of plain-text secrets in your K8s YAML files.

"But, wait!" you might say. Doesn't Kubernetes Secrets solve everything? In my opinion, no. They're just unencrypted base64-encoded strings stored in YAML files. There's a good chance you'll want to check in the `k8s` directory you created. Having secrets in your source code is a bad idea!

I asked my Twitter followers what their favorite way to protect secrets in Kubernetes YAML files is. The good news is most people (where most people = my followers) manage secrets externally.

NOTE: Watch Kubernetes Secrets in 5 Minutes if you want to learn more about Kubernetes Secrets.

## The Current State of Secret Management in Kubernetes

I recently noticed a tweet from Daniel Jacob Bilar that links to a talk from FOSDEM 2021 on the current state of secret management within Kubernetes. It's an excellent overview of the various options.

## Store Secrets in Git with Sealed Secrets and Kubeseal

Bitnami has a Sealed Secrets Apache-licensed open source project. Its README explains how it works:

> Problem: "I can manage all my K8s config in git, except Secrets."
>
> Solution: Encrypt your Secret into a SealedSecret, which is safe to store - even to a public repository.
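The distinction matters because, as noted above, a plain Kubernetes Secret is only base64-encoded, not encrypted. A quick demonstration you can run anywhere with the standard `base64` tool (the password is a made-up example):

```shell
# A Kubernetes Secret value round-trips through base64 with no key involved --
# anyone holding the YAML can recover the original.
encoded=$(printf 'my-db-password' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "encoded: $encoded"
echo "decoded: $decoded"
```

Sealed Secrets closes exactly this gap by encrypting the value with a key only the in-cluster controller holds.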
The README adds that the SealedSecret can be decrypted only by the controller running in the target cluster; nobody else (not even the original author) can obtain the original Secret from the SealedSecret. Store your Kubernetes Secrets in Git thanks to Kubeseal. Hello SealedSecret! by Aurélie Vache provides an excellent overview of how to use it.

First, you'll need to install the Sealed Secrets CRD (Custom Resource Definition).

```shell
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.16.0/controller.yaml
```

Retrieve the certificate keypair that this controller generates.

```shell
kubectl get secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key -o yaml
```

Copy the raw value of `tls.crt` and decode it. You can use the command line, or learn more about base64 encoding/decoding in our documentation.

```shell
echo -n <tls.crt value> | base64 --decode
```

Put the decoded value in a `tls.crt` file.

Next, install Kubeseal. On macOS, you can use Homebrew. For other platforms, see the release notes.

```shell
brew install kubeseal
```

The major item you need to encrypt in this example is the `ENCRYPT_KEY` you used to encrypt the OIDC client secret. Run the following command to do this, where the value comes from your `k8s/registry-k8s/jhipster-registry.yml` file.

```shell
kubectl create secret generic encrypt-key \
  --from-literal=ENCRYPT_KEY='your-value-here' \
  --dry-run=client -o yaml > secrets.yml
```

Next, use `kubeseal` to convert the secrets to encrypted secrets.

```shell
kubeseal --cert tls.crt --format=yaml -n demo < secrets.yml > sealed-secrets.yml
```

Remove the original secrets file and deploy your sealed secrets.
```shell
rm secrets.yml
kubectl apply -n demo -f sealed-secrets.yml && kubectl get -n demo sealedsecret encrypt-key
```

## Configure JHipster Registry to use the Sealed Secret

In `k8s/registry-k8s/jhipster-registry.yml`, change the `ENCRYPT_KEY` to use your new secret.

```yaml
...
- name: ENCRYPT_KEY
  valueFrom:
    secretKeyRef:
      name: encrypt-key
      key: ENCRYPT_KEY
...
```

TIP: You should be able to encrypt other secrets, like your database passwords, using a similar technique.

Now, redeploy JHipster Registry and restart all your deployments.

```shell
./kubectl-apply.sh -f
kubectl rollout restart deployment -n demo
```

You can use port-forwarding to see the JHipster Registry locally.

```shell
kubectl port-forward svc/jhipster-registry -n demo 8761
```

## Google Cloud Secret Manager

Google Cloud has a Secret Manager you can use to store your secrets. There's even a Spring Boot starter to make it convenient to retrieve these values in your app. For example, you could store your database password in a properties file.

```properties
spring.datasource.password=${sm://my-db-password}
```

This is pretty slick, but I like to remain cloud-agnostic. Also, I like how the JHipster Registry allows me to store encrypted secrets in Git.

## Use Spring Vault for External Secrets

Using an external key management solution like HashiCorp Vault is also recommended. The JHipster Registry will have Vault support in its next release. In the meantime, I recommend reading Secure Secrets With Spring Cloud Config and Vault.

## Scale Your Reactive Java Microservices

You can scale your instances using the `kubectl scale` command.

```shell
kubectl scale deployments/store --replicas=2 -n demo
```

Scaling will work just fine for the microservice apps because they're set up as OAuth 2.0 resource servers and are therefore stateless.
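If you'd rather make the scaling above permanent instead of applying it with `kubectl scale`, the replica count can live in the deployment descriptor itself. A minimal excerpt using the standard Deployment field (not from the original post):

```yaml
# k8s/store-k8s/store-deployment.yml (excerpt)
spec:
  replicas: 2  # same effect as `kubectl scale deployments/store --replicas=2`
```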
The gateway, however, uses Spring Security's OIDC login feature and stores the access tokens in the session. So if you scale it, sessions won't be shared. Single sign-on should still work; you'll just have to do the OAuth dance to get tokens if you hit a different instance. To synchronize sessions, you can use Spring Session and Redis with JHipster.

CAUTION: If you leave everything running on Google Cloud, you will be charged for usage. Therefore, I recommend removing your cluster or deleting your namespace (`kubectl delete ns demo`) to reduce your cost.

```shell
gcloud container clusters delete CLUSTER_NAME --zone=us-central1-a
```

You can delete your Ingress IP address too:

```shell
gcloud compute addresses delete gateway-ingress-ip --global
```

## Monitor Your Kubernetes Cluster with K9s

Using `kubectl` to monitor your Kubernetes cluster can get tiresome. That's where K9s can be helpful. It provides a terminal UI to interact with your Kubernetes clusters. K9s was created by my good friend Fernand Galiana. He's also created a commercial version called K9sAlpha. To install it on macOS, run `brew install k9s`. Then run `k9s -n demo` to start it. You can navigate to your pods, select them with Return, and navigate back up with Esc.

There's also KDash, from JHipster co-lead Deepu K Sasidharan. It's a simple K8s terminal dashboard built with Rust. Deepu recently released an MVP of the project. If for some reason you don't like CLIs, you can try Kubernetic.

## Continuous Integration and Delivery of JHipster Microservices

This tutorial doesn't mention continuous integration and delivery of your reactive microservice architecture. I plan to cover that in a future post. If you have a solution you like, please leave a comment.

## Spring on Google Cloud Platform

JHipster uses Docker containers to run all its databases in this example. However, there are a number of Google Cloud services you can use as alternatives. See the Spring Cloud GCP project on GitHub for more information.

I didn't mention Testcontainers in this post.
However, JHipster does support using them. Testcontainers also has a GCloud Module.

## Why Not Istio?

I didn't use Istio in this example because I didn't want to complicate things. Learning Kubernetes is hard enough without learning another system on top of it. Istio acts as a network between your containers that can do networky things like authentication, authorization, monitoring, and retries. I like to think of it as AOP for containers.

If you'd like to see how to use JHipster with Istio, see How to set up Java microservices with Istio service mesh on Kubernetes by JHipster co-lead Deepu K Sasidharan.

Fernand Galiana recommends checking out BPF (Berkeley Packet Filter) and Cilium. Cilium is open source software for transparently providing and securing the network and API connectivity between application services deployed using Linux container management platforms such as Kubernetes.

## Learn More About Kubernetes, Spring Boot, and JHipster

This blog post showed you how to deploy your reactive Java microservices to production using Kubernetes. JHipster did much of the heavy lifting for you since it generated all the YAML-based deployment descriptors. Since no one really likes writing YAML, I'm calling that a win!

You learned how to use JHipster Registry to encrypt your secrets and configure Git as a configuration source for Spring Cloud Config. Bitnami's Sealed Secrets is a nice companion to encrypt the secrets in your Kubernetes deployment descriptors.

For more information about storing your secrets externally, these additional resources might help:

- Kelsey Hightower's Vault on Cloud Run Tutorial
- James Strachan's Helm Post Renderer

You can find the source code for this example on GitHub in our Java microservices examples repository.
```shell
git clone https://github.com/oktadev/java-microservices-examples.git
cd java-microservices-examples/jhipster-k8s
```

See JHipster's documentation on Kubernetes and GCP if you'd like more concise instructions.

If you enjoyed this post, I think you'll like these others as well:

- Reactive Java Microservices with Spring Boot and JHipster
- Build a Secure Micronaut and Angular App with JHipster
- Fast Java Made Easy with Quarkus and JHipster
- How to Docker with Spring Boot
- Security Patterns for Microservice Architectures
- Build a Microservice Architecture with Spring Boot and Kubernetes (uses Spring Boot 2.1)

If you have any questions, please ask them in the comments below. To be notified when we publish new blog posts, follow us on Twitter or LinkedIn. We frequently publish videos to our YouTube channel too. Subscribe today!

A huge thanks goes to Fernand Galiana for his review and detailed feedback.

  • JHipster

    JHipster, much like Spring Initializr, is a generator that creates a boilerplate backend application, but with an integrated front-end implementation in React, Vue, or Angular. In their own words, it "is a development platform to quickly generate, develop, & deploy modern web applications & microservice architectures."

  • NOTE: The SPA app on the gateway is currently a monolith. The JHipster team is still working on micro frontends support.

  • SDKMAN!

    The SDKMAN! Command Line Interface

  • Java 11+

  • Which applications? Set up monitoring? No Which applications with clustered databases? select store Admin password for JHipster Registry: Kubernetes namespace: demo Docker repository name: Command to push Docker image: docker push Enable Istio? No Kubernetes service type? LoadBalancer Use dynamic storage provisioning? Yes Use a specific storage class? NOTE: If you don't want to publish your images on Docker Hub, leave the Docker repository name blank. After I answered these questions, my k8s/.yo-rc.json file had the following contents: { "generator-jhipster": { "appsFolders": ["blog", "gateway", "store"], "directoryPath": "../", "clusteredDbApps": ["store"], "serviceDiscoveryType": "eureka", "jwtSecretKey": "NDFhMGY4NjF...", "dockerRepositoryName": "mraible", "dockerPushCommand": "docker push", "kubernetesNamespace": "demo", "kubernetesServiceType": "LoadBalancer", "kubernetesUseDynamicStorage": true, "kubernetesStorageClassName": "", "ingressDomain": "", "monitoring": "no", "istio": false } } Enter fullscreen mode Exit fullscreen mode I already showed you how to get everything working with Docker Compose in the previous tutorial. So today, I'd like to show you how to run things locally with Minikube. Install Minikube to Run Kubernetes Locally If you have Docker installed, you can run Kubernetes locally with Minikube. Run minikube start to begin. minikube --cpus 8 start Enter fullscreen mode Exit fullscreen mode CAUTION: If this doesn't work, use brew install minikube, or see Minikube's installation instructions. This command will start Minikube with 16 GB of RAM and 8 CPUs. Unfortunately, the default, which is 16 GB RAM and two CPUs, did not work for me. You can skip ahead to creating your Docker images while you wait for this to complete. After this command executes, it'll print out a message and notify you which cluster and namespace are being used. 🏄 Done! 
kubectl is now configured to use "minikube" cluster and "default" namespace by default Enter fullscreen mode Exit fullscreen mode TIP: You can stop Minikube with minikube stop and start over with minikube delete. Create Docker Images with Jib Now, you need to build Docker images for each app. In the { gateway, blog, store } directories, run the following Gradle command (where is gateway, store, or blog). This command should also be in the window where you ran jhipster k8s, so you can copy them from there. ./gradlew bootJar -Pprod jib -Djib.to.image=/ Enter fullscreen mode Exit fullscreen mode Create Private Docker Images You can also build your images locally and publish them to your Docker daemon. This is the default if you didn't specify a base Docker repository name. # this command exposes Docker images to minikube eval $(minikube docker-env) ./gradlew -Pprod bootJar jibDockerBuild Enter fullscreen mode Exit fullscreen mode Because this publishes your images locally to Docker, you'll need to make modifications to your Kubernetes deployment files to use imagePullPolicy: IfNotPresent. - name: gateway-app image: gateway imagePullPolicy: IfNotPresent Enter fullscreen mode Exit fullscreen mode Make sure to add this imagePullPolicy to the following files: k8s/gateway-k8s/gateway-deployment.yml k8s/blog-k8s/blog-deployment.yml k8s/store-k8s/store-deployment.yml Register an OIDC App for Auth You've now built Docker images for your microservices, but you haven't seen them running. First, you'll need to configure Okta for authentication and authorization. Before you begin, you’ll need a free Okta developer account. Install the Okta CLI and run okta register to sign up for a new account. If you already have an account, run okta login. Then, run okta apps create jhipster. Select the default app name, or change it as you see fit. Accept the default Redirect URI values provided for you. JHipster ships with JHipster Registry. 
It acts as a Eureka server for service discovery and contains a Spring Cloud Config server for distributing your configuration settings. Update `k8s/registry-k8s/application-configmap.yml` to contain your OIDC settings from the `.okta.env` file the Okta CLI just created. The Spring Cloud Config server reads from this file and shares the values with the gateway and microservices.

```yaml
data:
  application.yml: |-
    ...
    spring:
      security:
        oauth2:
          client:
            provider:
              oidc:
                issuer-uri: https://<your-okta-domain>/oauth2/default
            registration:
              oidc:
                client-id: <client-id>
                client-secret: <client-secret>
```

To configure the JHipster Registry to use OIDC for authentication, modify `k8s/registry-k8s/jhipster-registry.yml` to enable the `oauth2` profile.

```yaml
- name: SPRING_PROFILES_ACTIVE
  value: prod,k8s,oauth2
```

Now that you've configured everything, it's time to see it in action.

## Start Your Spring Boot Microservices with K8s

In the `k8s` directory, start your engines!

```shell
./kubectl-apply.sh -f
```

You can see if everything starts up using the following command.

```shell
kubectl get pods -n demo
```

You can use the name of a pod with `kubectl logs` to tail its logs.

```shell
kubectl logs <pod-name> --tail=-1 -n demo
```

You can use port-forwarding to see the JHipster Registry.

```shell
kubectl port-forward svc/jhipster-registry -n demo 8761
```

Open a browser and navigate to http://localhost:8761. You'll need to sign in with your Okta credentials.

Once all is green, use port-forwarding to see the gateway app.

```shell
kubectl port-forward svc/gateway -n demo 8080
```

Then, go to http://localhost:8080, and you should be able to add blogs, posts, tags, and products.

You can also automate testing to ensure that everything works.
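If you find yourself repeating the port-forward commands above, a tiny shell helper keeps them handy. This is a convenience sketch of my own, not something JHipster generates; the service names and the `demo` namespace are the ones used in this tutorial.

```shell
# Convenience wrapper for the port-forwards used in this section.
# Assumes kubectl is configured to point at the cluster running the demo namespace.
forward() {
  kubectl port-forward "svc/$1" -n demo "$2"
}

# Usage:
#   forward jhipster-registry 8761
#   forward gateway 8080
```

Defining it in your shell profile means one short command per service instead of retyping the full `kubectl port-forward` invocation.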
Set your Okta credentials as environment variables and run end-to-end tests using Cypress (from the gateway directory).

```shell
export CYPRESS_E2E_USERNAME=<your-username>
export CYPRESS_E2E_PASSWORD=<your-password>
npm run e2e
```

Proof it worked for me:

## Plain Text Secrets? Uggh!

You may notice that I used a secret in plain text in the `application-configmap.yml` file. Secrets in plain text are a bad practice! I hope you didn't check everything into source control yet!

### Encrypt Your Secrets with Spring Cloud Config

The JHipster Registry has an encryption mechanism you can use to encrypt your secrets. That way, it's safe to store them in public repositories. Add an `ENCRYPT_KEY` to the environment variables in `k8s/registry-k8s/jhipster-registry.yml`.

```yaml
- name: ENCRYPT_KEY
  value: really-long-string-of-random-characters-that-you-can-keep-safe
```

TIP: You can use JShell to generate a UUID you can use for your encrypt key. Run `jshell`, then enter `UUID.randomUUID()`. You can quit by typing `/exit`.

Restart your JHipster Registry containers from the `k8s` directory.

```shell
./kubectl-apply.sh -f
```

### Encrypt Your OIDC Client Secret

You can encrypt your client secret by logging in to http://localhost:8761 and going to Configuration > Encryption. If this address doesn't resolve, you'll need to port-forward again.

```shell
kubectl port-forward svc/jhipster-registry -n demo 8761
```

Copy and paste your client secret from `application-configmap.yml` (or `.okta.env`) and click Encrypt. Then, copy the encrypted value back into `application-configmap.yml`. Make sure to wrap it in quotes!

You can also use curl:

```shell
curl -X POST http://admin:<password>@localhost:8761/config/encrypt -d your-client-secret
```

If you use curl, make sure to add `{cipher}` to the beginning of the string. For example:

```yaml
client-secret: "{cipher}1b12934716c32d360c85f651a0793df2777090c..."
```
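If you don't have JShell handy, any good source of randomness works for the `ENCRYPT_KEY`. Here's a sketch using OpenSSL; the 48-character length is my arbitrary choice, not a JHipster requirement.

```shell
# Generate a 48-character hex string to use as an ENCRYPT_KEY.
# Assumes the openssl CLI is installed (24 random bytes -> 48 hex characters).
ENCRYPT_KEY=$(openssl rand -hex 24)
echo "$ENCRYPT_KEY"
echo "${#ENCRYPT_KEY}"   # prints 48
```

Paste the generated value into `k8s/registry-k8s/jhipster-registry.yml` as shown above, and keep it out of source control.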
Apply these changes and restart all deployments.

```shell
./kubectl-apply.sh -f
kubectl rollout restart deploy -n demo
```

Verify everything still works at http://localhost:8080.

TIP: If you don't want to restart the Spring Cloud Config server when you update its configuration, see Refresh the Configuration in Your Spring Cloud Config Server.

## Change Spring Cloud Config to Use Git

You might want to store your app's configuration externally. That way, you don't have to redeploy everything to change values. Good news! Spring Cloud Config makes it easy to switch from the filesystem to Git for storing your configuration.

In `k8s/registry-k8s/jhipster-registry.yml`, find the following variables:

```yaml
- name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_TYPE
  value: native
- name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_SEARCH_LOCATIONS
  value: file:./central-config
```

Below these values, add a second lookup location.

```yaml
- name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_TYPE
  value: git
- name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_URI
  value: https://github.com/mraible/reactive-java-ms-config/
- name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_SEARCH_PATHS
  value: config
- name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_LABEL
  value: main
```

Create a GitHub repo that matches the URI, path, and branch you entered. In my case, I created reactive-java-ms-config and added a `config/application.yml` file in the `main` branch. Then, I added my `spring.security.*` values to it and removed them from `k8s/registry-k8s/application-configmap.yml`.

See Spring Cloud Config's Git Backend docs for more information.

## Deploy Spring Boot Microservices to Google Cloud (aka GCP)

It's nice to see things running locally on your machine, but it's even better to get to production! In this section, I'll show you how to deploy your containers to Google Cloud.
First, stop Minikube if you were running it previously.

```shell
minikube stop
```

You can also use `kubectl` commands to switch clusters.

```shell
kubectl config get-contexts
kubectl config use-context XXX
```

The cool kids use kubectx and kubens to set the default context and namespace. You can learn how to install and use them via the kubectx GitHub project.

### Create a Container Registry on Google Cloud

Before the JHipster 7.0.0 release, I tested this microservice example with Kubernetes and Google Cloud. I found many solutions in Ray Tsang's Spring Boot on GCP Guides. Thanks, Ray!

To get started with Google Cloud, you'll need an account and a project. Sign up for Google Cloud Platform (GCP), log in, and create a project. Open a console in your browser. A GCP project contains all the cloud services and resources (such as virtual machines, networks, and load balancers) that you might use.

TIP: You can also download and install the gcloud CLI if you want to run things locally.

Enable the Google Kubernetes Engine API and Container Registry:

```shell
gcloud services enable container.googleapis.com containerregistry.googleapis.com
```

### Create a Kubernetes Cluster

Run the following command to create a cluster for your apps.

```shell
gcloud container clusters create CLUSTER_NAME \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --enable-autorepair \
  --enable-autoupgrade
```

I called my cluster `reactive-ms`. See GCP's zones and machine-types for other options. I found n1-standard-4 to be the minimum machine type for JHipster.

You created Docker images earlier to run with Minikube. Those images were deployed to Docker Hub or your local Docker registry. If you deployed to Docker Hub, you can use your deployment files as-is. For Google Cloud and its Kubernetes Engine (GKE), you can also publish your images to your project's registry.
Thankfully, this is easy to do with Jib. Navigate to the gateway directory and run:

```shell
./gradlew bootJar -Pprod jib -Djib.to.image=gcr.io/<project-id>/gateway
```

You can get your project ID by running `gcloud projects list`.

Repeat the process for blog and store. You can run these processes in parallel to speed things up.

```shell
cd ../blog
./gradlew bootJar -Pprod jib -Djib.to.image=gcr.io/<project-id>/blog
cd ../store
./gradlew bootJar -Pprod jib -Djib.to.image=gcr.io/<project-id>/store
```

TIP: You might have to run `gcloud auth configure-docker` for Jib to publish to your GCP container registry.

Then, in your `k8s/**/*-deployment.yml` files, add `gcr.io/<project-id>/` as a prefix to the image names. Remove the `imagePullPolicy` if you specified it earlier. For example:

```yaml
containers:
  - name: gateway-app
    image: gcr.io/jhipster7/gateway
    env:
```

In the `k8s` directory, apply all the deployment descriptors to run all your images.

```shell
./kubectl-apply.sh -f
```

You can monitor the progress of your deployments with `kubectl get pods -n demo`.

TIP: If you make a mistake configuring the JHipster Registry and need to redeploy it, you can do so with the following commands:

```shell
kubectl apply -f registry-k8s/jhipster-registry.yml -n demo
kubectl rollout restart statefulset/jhipster-registry -n demo
```

You'll need to restart all your deployments if you changed any configuration settings that services need to retrieve.

```shell
kubectl rollout restart deploy -n demo
```

### Access Your Gateway on Google Cloud

Once everything is up and running, get the external IP of your gateway.

```shell
kubectl get svc gateway -n demo
```

You'll need to add the external IP address as a valid redirect URI to your Okta OIDC app. Run `okta login`, open the returned URL in your browser, and sign in to the Okta Admin Console. Go to the Applications section, find your application, and edit it.
Add the standard JHipster redirect URIs using the IP address. For example, http://34.71.48.244:8080/login/oauth2/code/oidc for the login redirect URI, and http://34.71.48.244:8080 for the logout redirect URI.

You can use the following commands to set your gateway's IP address in a variable and `curl` it.

```shell
EXTERNAL_IP=$(kubectl get svc gateway -ojsonpath="{.status.loadBalancer.ingress[0].ip}" -n demo)
curl $EXTERNAL_IP:8080
```

Run `open http://$EXTERNAL_IP:8080`, and you should be able to sign in.

Great! Now that you know things work, let's integrate better security, starting with HTTPS.

## Add HTTPS to Your Reactive Gateway

You should always use HTTPS. It's one of the easiest ways to secure things, especially with the free certificates offered these days. Ray Tsang's External Load Balancing docs were a big help in figuring out all these steps.

You'll need a static IP to assign to your TLS certificate (TLS is the protocol behind HTTPS).

```shell
gcloud compute addresses create gateway-ingress-ip --global
```

You can run the following command to make sure it worked.

```shell
gcloud compute addresses describe gateway-ingress-ip --global --format='value(address)'
```

Then, create a `k8s/ingress.yml` file:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "gateway-ingress-ip"
spec:
  rules:
    - http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: gateway
                port:
                  number: 8080
```

Deploy it and make sure it worked.

```shell
kubectl apply -f ingress.yml -n demo

# keep running this command until it displays an IP address
# (hint: up arrow recalls the last command)
kubectl get ingress gateway -n demo
```

To use a TLS certificate, you must have a fully qualified domain name and configure it to point to the IP address.
If you don't have a real domain, you can use nip.io. Set the IP in a variable, as well as the domain.

```shell
EXTERNAL_IP=$(kubectl get ingress gateway -ojsonpath="{.status.loadBalancer.ingress[0].ip}" -n demo)
DOMAIN="${EXTERNAL_IP}.nip.io"

# Prove it works
echo $DOMAIN
curl $DOMAIN
```

To create a certificate, create a `k8s/certificate.yml` file.

```shell
cat << EOF > certificate.yml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: gateway-certificate
spec:
  domains:
    # Replace the value with your domain name
    - ${DOMAIN}
EOF
```

Add the certificate to `ingress.yml`:

```yaml
...
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "gateway-ingress-ip"
    networking.gke.io/managed-certificates: "gateway-certificate"
...
```

Deploy both files:

```shell
kubectl apply -f certificate.yml -f ingress.yml -n demo
```

Check your certificate's status until it prints `Status: ACTIVE`:

```shell
kubectl describe managedcertificate gateway-certificate -n demo
```

While you're waiting, you can proceed to forcing HTTPS in the next step.

### Force HTTPS with Spring Security

Spring Security's WebFlux support makes it easy to redirect to HTTPS. However, if you redirect all requests to HTTPS, the Kubernetes health checks will fail because they receive a 302 instead of a 200.

Crack open `SecurityConfiguration.java` in the gateway project and add the following code to the `springSecurityFilterChain()` method.

```java
http.redirectToHttps(redirect -> redirect
    .httpsRedirectWhen(e -> e.getRequest().getHeaders().containsKey("X-Forwarded-Proto"))
);
```

Rebuild the Docker image for the gateway project.
```shell
./gradlew bootJar -Pprod jib -Djib.to.image=gcr.io/<project-id>/gateway
```

Run the following command to start a rolling restart of gateway instances:

```shell
kubectl rollout restart deployment gateway -n demo
```

TIP: Run `kubectl get deployments` to see your deployment names.

Now you should get a 302 when you access your domain. HTTPie is a useful alternative to curl.

Update your Okta OIDC app to have `https://${DOMAIN}/login/oauth2/code/oidc` as a valid redirect URI. Add `https://${DOMAIN}` to the sign-out redirect URIs too.

## Encrypt Your Kubernetes Secrets

Congratulations! Now you have everything running on GKE, using HTTPS! However, you have a lot of plain-text secrets in your K8s YAML files.

"But, wait!" you might say. Doesn't Kubernetes Secrets solve everything?

In my opinion, no. They're just unencrypted base64-encoded strings stored in YAML files. There's a good chance you'll want to check in the `k8s` directory you created. Having secrets in your source code is a bad idea!

The good news is most people (where most people = my followers) manage secrets externally.

> Matt Raible @mraible
> What's your favorite way to protect secrets in your @kubernetesio YAML files?
> 16:13 - 28 Apr 2021

NOTE: Watch Kubernetes Secrets in 5 Minutes if you want to learn more about Kubernetes Secrets.

### The Current State of Secret Management in Kubernetes

I recently noticed a tweet from Daniel Jacob Bilar that links to a talk from FOSDEM 2021 on the current state of secret management within Kubernetes. It's an excellent overview of the various options.

### Store Secrets in Git with Sealed Secrets and Kubeseal

Bitnami has a Sealed Secrets Apache-licensed open source project. Its README explains how it works:

> Problem: "I can manage all my K8s config in git, except Secrets."
>
> Solution: Encrypt your Secret into a SealedSecret, which is safe to store - even to a public repository.
> The SealedSecret can be decrypted only by the controller running in the target cluster, and nobody else (not even the original author) is able to obtain the original Secret from the SealedSecret.

Store your Kubernetes Secrets in Git thanks to Kubeseal. Hello SealedSecret! by Aurélie Vache provides an excellent overview of how to use it.

First, you'll need to install the Sealed Secrets CRD (Custom Resource Definition).

```shell
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.16.0/controller.yaml
```

Retrieve the certificate keypair that this controller generates.

```shell
kubectl get secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key -o yaml
```

Copy the raw value of `tls.crt` and decode it. You can use the command line, or learn more about base64 encoding/decoding in our documentation.

```shell
echo -n <tls.crt-value> | base64 --decode
```

Put the decoded value in a `tls.crt` file.

Next, install Kubeseal. On macOS, you can use Homebrew. For other platforms, see the release notes.

```shell
brew install kubeseal
```

The main item you need to encrypt in this example is the `ENCRYPT_KEY` you used to encrypt the OIDC client secret. Run the following command, where the value comes from your `k8s/registry-k8s/jhipster-registry.yml` file.

```shell
kubectl create secret generic encrypt-key \
  --from-literal=ENCRYPT_KEY='your-value-here' \
  --dry-run=client -o yaml > secrets.yml
```

Next, use `kubeseal` to convert the secrets to encrypted secrets.

```shell
kubeseal --cert tls.crt --format=yaml -n demo < secrets.yml > sealed-secrets.yml
```

Remove the original secrets file and deploy your sealed secrets.
```shell
rm secrets.yml
kubectl apply -n demo -f sealed-secrets.yml && kubectl get -n demo sealedsecret encrypt-key
```

### Configure JHipster Registry to Use the Sealed Secret

In `k8s/registry-k8s/jhipster-registry.yml`, change the `ENCRYPT_KEY` to use your new secret.

```yaml
...
- name: ENCRYPT_KEY
  valueFrom:
    secretKeyRef:
      name: encrypt-key
      key: ENCRYPT_KEY
```

TIP: You should be able to encrypt other secrets, like your database passwords, using a similar technique.

Now, redeploy the JHipster Registry and restart all your deployments.

```shell
./kubectl-apply.sh -f
kubectl rollout restart deployment -n demo
```

You can use port-forwarding to see the JHipster Registry locally.

```shell
kubectl port-forward svc/jhipster-registry -n demo 8761
```

### Google Cloud Secret Manager

Google Cloud has a Secret Manager you can use to store your secrets. There's even a Spring Boot starter to make it convenient to retrieve these values in your app. For example, you could store your database password in a properties file.

```properties
spring.datasource.password=${sm://my-db-password}
```

This is pretty slick, but I like to remain cloud-agnostic. Also, I like how the JHipster Registry allows me to store encrypted secrets in Git.

### Use Spring Vault for External Secrets

Using an external key management solution like HashiCorp Vault is also recommended. The JHipster Registry will have Vault support in its next release. In the meantime, I recommend reading Secure Secrets With Spring Cloud Config and Vault.

## Scale Your Reactive Java Microservices

You can scale your instances using the `kubectl scale` command.

```shell
kubectl scale deployments/store --replicas=2 -n demo
```

Scaling will work just fine for the microservice apps because they're set up as OAuth 2.0 resource servers and are therefore stateless.
However, the gateway uses Spring Security's OIDC login feature and stores the access tokens in the session. So if you scale it, sessions won't be shared. Single sign-on should still work; you'll just have to do the OAuth dance to get tokens if you hit a different instance. To synchronize sessions, you can use Spring Session and Redis with JHipster.

CAUTION: If you leave everything running on Google Cloud, you will be charged for usage. Therefore, I recommend removing your cluster or deleting your namespace (`kubectl delete ns demo`) to reduce your costs.

```shell
gcloud container clusters delete <cluster-name> --zone=us-central1-a
```

You can delete your Ingress IP address too:

```shell
gcloud compute addresses delete gateway-ingress-ip --global
```

## Monitor Your Kubernetes Cluster with K9s

Using `kubectl` to monitor your Kubernetes cluster can get tiresome. That's where K9s can be helpful. It provides a terminal UI to interact with your Kubernetes clusters. K9s was created by my good friend Fernand Galiana. He's also created a commercial version called K9sAlpha. To install it on macOS, run `brew install k9s`. Then run `k9s -n demo` to start it. You can navigate to your pods, select them with Return, and navigate back up with Esc.

There's also KDash, from JHipster co-lead Deepu K Sasidharan. It's a simple K8s terminal dashboard built with Rust. Deepu recently released an MVP of the project.

If for some reason you don't like CLIs, you can try Kubernetic.

## Continuous Integration and Delivery of JHipster Microservices

This tutorial doesn't cover continuous integration and delivery of your reactive microservice architecture. I plan to cover that in a future post. If you have a solution you like, please leave a comment.

## Spring on Google Cloud Platform

JHipster uses Docker containers to run all its databases in this example. However, there are a number of Google Cloud services you can use as alternatives. See the Spring Cloud GCP project on GitHub for more information.

I didn't mention Testcontainers in this post.
However, JHipster does support using them. Testcontainers also has a GCloud module.

## Why Not Istio?

I didn't use Istio in this example because I didn't want to complicate things. Learning Kubernetes is hard enough without learning another system on top of it. Istio acts as a network between your containers that can do networky things like authentication, authorization, monitoring, and retries. I like to think of it as AOP for containers. If you'd like to see how to use JHipster with Istio, see How to set up Java microservices with Istio service mesh on Kubernetes by JHipster co-lead Deepu K Sasidharan.

Fernand Galiana recommends checking out BPF (Berkeley Packet Filter) and Cilium. Cilium is open source software for transparently providing and securing the network and API connectivity between application services deployed using Linux container management platforms such as Kubernetes.

## Learn More About Kubernetes, Spring Boot, and JHipster

This blog post showed you how to deploy your reactive Java microservices to production using Kubernetes. JHipster did much of the heavy lifting for you since it generated all the YAML-based deployment descriptors. Since no one really likes writing YAML, I'm calling that a win!

You learned how to use the JHipster Registry to encrypt your secrets and configure Git as a configuration source for Spring Cloud Config. Bitnami's Sealed Secrets is a nice companion for encrypting the secrets in your Kubernetes deployment descriptors.

For more information about storing your secrets externally, these additional resources might help:

- Kelsey Hightower's Vault on Cloud Run Tutorial
- James Strachan's Helm Post Renderer

You can find the source code for this example on GitHub in our Java microservices examples repository.
```shell
git clone https://github.com/oktadev/java-microservices-examples.git
cd java-microservices-examples/jhipster-k8s
```

See JHipster's documentation on Kubernetes and GCP if you'd like more concise instructions.

If you enjoyed this post, I think you'll like these others as well:

- Reactive Java Microservices with Spring Boot and JHipster
- Build a Secure Micronaut and Angular App with JHipster
- Fast Java Made Easy with Quarkus and JHipster
- How to Docker with Spring Boot
- Security Patterns for Microservice Architectures
- Build a Microservice Architecture with Spring Boot and Kubernetes (uses Spring Boot 2.1)

If you have any questions, please ask them in the comments below. To be notified when we publish new blog posts, follow us on Twitter or LinkedIn. We frequently publish videos to our YouTube channel too. Subscribe today!

A huge thanks goes to Fernand Galiana for his review and detailed feedback.

  • node

    Node.js JavaScript runtime ✨🐢🚀✨

  • Node.js

  • sealed-secrets

    A Kubernetes controller and tool for one-way encrypted Secrets

  • Which applications? Set up monitoring? No Which applications with clustered databases? select store Admin password for JHipster Registry: Kubernetes namespace: demo Docker repository name: Command to push Docker image: docker push Enable Istio? No Kubernetes service type? LoadBalancer Use dynamic storage provisioning? Yes Use a specific storage class? NOTE: If you don't want to publish your images on Docker Hub, leave the Docker repository name blank. After I answered these questions, my k8s/.yo-rc.json file had the following contents: { "generator-jhipster": { "appsFolders": ["blog", "gateway", "store"], "directoryPath": "../", "clusteredDbApps": ["store"], "serviceDiscoveryType": "eureka", "jwtSecretKey": "NDFhMGY4NjF...", "dockerRepositoryName": "mraible", "dockerPushCommand": "docker push", "kubernetesNamespace": "demo", "kubernetesServiceType": "LoadBalancer", "kubernetesUseDynamicStorage": true, "kubernetesStorageClassName": "", "ingressDomain": "", "monitoring": "no", "istio": false } } Enter fullscreen mode Exit fullscreen mode I already showed you how to get everything working with Docker Compose in the previous tutorial. So today, I'd like to show you how to run things locally with Minikube. Install Minikube to Run Kubernetes Locally If you have Docker installed, you can run Kubernetes locally with Minikube. Run minikube start to begin. minikube --cpus 8 start Enter fullscreen mode Exit fullscreen mode CAUTION: If this doesn't work, use brew install minikube, or see Minikube's installation instructions. This command will start Minikube with 16 GB of RAM and 8 CPUs. Unfortunately, the default, which is 16 GB RAM and two CPUs, did not work for me. You can skip ahead to creating your Docker images while you wait for this to complete. After this command executes, it'll print out a message and notify you which cluster and namespace are being used. 🏄 Done! 
kubectl is now configured to use "minikube" cluster and "default" namespace by default Enter fullscreen mode Exit fullscreen mode TIP: You can stop Minikube with minikube stop and start over with minikube delete. Create Docker Images with Jib Now, you need to build Docker images for each app. In the { gateway, blog, store } directories, run the following Gradle command (where is gateway, store, or blog). This command should also be in the window where you ran jhipster k8s, so you can copy them from there. ./gradlew bootJar -Pprod jib -Djib.to.image=/ Enter fullscreen mode Exit fullscreen mode Create Private Docker Images You can also build your images locally and publish them to your Docker daemon. This is the default if you didn't specify a base Docker repository name. # this command exposes Docker images to minikube eval $(minikube docker-env) ./gradlew -Pprod bootJar jibDockerBuild Enter fullscreen mode Exit fullscreen mode Because this publishes your images locally to Docker, you'll need to make modifications to your Kubernetes deployment files to use imagePullPolicy: IfNotPresent. - name: gateway-app image: gateway imagePullPolicy: IfNotPresent Enter fullscreen mode Exit fullscreen mode Make sure to add this imagePullPolicy to the following files: k8s/gateway-k8s/gateway-deployment.yml k8s/blog-k8s/blog-deployment.yml k8s/store-k8s/store-deployment.yml Register an OIDC App for Auth You've now built Docker images for your microservices, but you haven't seen them running. First, you'll need to configure Okta for authentication and authorization. Before you begin, you’ll need a free Okta developer account. Install the Okta CLI and run okta register to sign up for a new account. If you already have an account, run okta login. Then, run okta apps create jhipster. Select the default app name, or change it as you see fit. Accept the default Redirect URI values provided for you. JHipster ships with JHipster Registry. 
It acts as a Eureka service for service discovery and contains a Spring Cloud Config server for distributing your configuration settings. Update k8s/registry-k8s/application-configmap.yml to contain your OIDC settings from the .okta.env file the Okta CLI just created. The Spring Cloud Config server reads from this file and shares the values with the gateway and microservices. data: application.yml: |- ... spring: security: oauth2: client: provider: oidc: issuer-uri: https:///oauth2/default registration: oidc: client-id: client-secret: Enter fullscreen mode Exit fullscreen mode To configure the JHipster Registry to use OIDC for authentication, modify k8s/registry-k8s/jhipster-registry.yml to enable the oauth2 profile. - name: SPRING_PROFILES_ACTIVE value: prod,k8s,oauth2 Enter fullscreen mode Exit fullscreen mode Now that you've configured everything, it's time to see it in action. Start Your Spring Boot Microservices with K8s In the k8s directory, start your engines! ./kubectl-apply.sh -f Enter fullscreen mode Exit fullscreen mode You can see if everything starts up using the following command. kubectl get pods -n demo Enter fullscreen mode Exit fullscreen mode You can use the name of a pod with kubectl logs to tail its logs. kubectl logs --tail=-1 -n demo Enter fullscreen mode Exit fullscreen mode You can use port-forwarding to see the JHipster Registry. kubectl port-forward svc/jhipster-registry -n demo 8761 Enter fullscreen mode Exit fullscreen mode Open a browser and navigate to http://localhost:8761. You'll need to sign in with your Okta credentials. Once all is green, use port-forwarding to see the gateway app. kubectl port-forward svc/gateway -n demo 8080 Enter fullscreen mode Exit fullscreen mode Then, go to http://localhost:8080, and you should be able to add blogs, posts, tags, and products. You can also automate testing to ensure that everything works. 
Set your Okta credentials as environment variables and run end-to-end tests using Cypress (from the gateway directory). export CYPRESS_E2E_USERNAME= export CYPRESS_E2E_PASSWORD= npm run e2e Enter fullscreen mode Exit fullscreen mode Proof it worked for me: Plain Text Secrets? Uggh! You may notice that I used a secret in plain text in the application-configmap.yml file. Secrets in plain text are a bad practice! I hope you didn't check everything into source control yet!! Encrypt Your Secrets with Spring Cloud Config The JHipster Registry has an encryption mechanism you can use to encrypt your secrets. That way, it's safe to store them in public repositories. Add an ENCRYPT_KEY to the environment variables in k8s/registry-k8s/jhipster-registry.yml. - name: ENCRYPT_KEY value: really-long-string-of-random-charters-that-you-can-keep-safe Enter fullscreen mode Exit fullscreen mode TIP: You can use JShell to generate a UUID you can use for your encrypt key. jshell UUID.randomUUID() You can quit by typing /exit. Restart your JHipster Registry containers from the k8s directory. ./kubectl-apply.sh -f Enter fullscreen mode Exit fullscreen mode Encrypt Your OIDC Client Secret You can encrypt your client secret by logging into http://localhost:8761 and going to Configuration > Encryption. If this address doesn't resolve, you'll need to port-forward again. kubectl port-forward svc/jhipster-registry -n demo 8761 Enter fullscreen mode Exit fullscreen mode Copy and paste your client secret from application-configmap.yml (or .okta.env) and click Encrypt. Then, copy the encrypted value back into application-configmap.yml. Make sure to wrap it in quotes! You can also use curl: curl -X POST http://admin:@localhost:8761/config/encrypt -d your-client-secret Enter fullscreen mode Exit fullscreen mode If you use curl, make sure to add {cipher} to the beginning of the string. For example: client-secret: "{cipher}1b12934716c32d360c85f651a0793df2777090c..." 
Enter fullscreen mode Exit fullscreen mode Apply these changes and restart all deployments. ./kubectl-apply.sh -f kubectl rollout restart deploy -n demo Enter fullscreen mode Exit fullscreen mode Verify everything still works at http://localhost:8080. TIP: If you don't want to restart the Spring Cloud Config server when you update its configuration, see Refresh the Configuration in Your Spring Cloud Config Server. Change Spring Cloud Config to use Git You might want to store your app's configuration externally. That way, you don't have to redeploy everything to change values. Good news! Spring Cloud Config makes it easy to switch to Git instead of the filesystem to store your configuration. In k8s/registry-k8s/jhipster-registry.yml, find the following variables: - name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_TYPE value: native - name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_SEARCH_LOCATIONS value: file:./central-config Enter fullscreen mode Exit fullscreen mode Below these values, add a second lookup location. - name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_TYPE value: git - name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_URI value: https://github.com/mraible/reactive-java-ms-config/ - name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_SEARCH_PATHS value: config - name: SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_1_LABEL value: main Enter fullscreen mode Exit fullscreen mode Create a GitHub repo that matches the URI, path, and branch you entered. In my case, I created reactive-java-ms-config and added a config/application.yml file in the main branch. Then, I added my spring.security.* values to it and removed them from k8s/registry-k8s/application-configmap.yml. See Spring Cloud Config's Git Backend docs for more information. Deploy Spring Boot Microservices to Google Cloud (aka GCP) It's nice to see things running locally on your machine, but it's even better to get to production! In this section, I'll show you how to deploy your containers to Google Cloud. 
First, stop Minikube if you were running it previously.

```shell
minikube stop
```

You can also use `kubectl` commands to switch clusters.

```shell
kubectl config get-contexts
kubectl config use-context XXX
```

The cool kids use `kubectx` and `kubens` to set the default context and namespace. You can learn how to install and use them via the kubectx GitHub project.

## Create a Container Registry on Google Cloud

Before the JHipster 7.0.0 release, I tested this microservice example with Kubernetes and Google Cloud. I found many solutions in Ray Tsang's Spring Boot on GCP Guides. Thanks, Ray!

To start with Google Cloud, you'll need an account and a project. Sign up for Google Cloud Platform (GCP), log in, and create a project. Open a console in your browser. A GCP project contains all the cloud services and resources (such as virtual machines, networks, and load balancers) that you might use.

**TIP:** You can also download and install the gcloud CLI if you want to run things locally.

Enable the Google Kubernetes Engine API and Container Registry:

```shell
gcloud services enable container.googleapis.com containerregistry.googleapis.com
```

## Create a Kubernetes Cluster

Run the following command to create a cluster for your apps.

```shell
gcloud container clusters create CLUSTER_NAME \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --enable-autorepair \
  --enable-autoupgrade
```

I called my cluster `reactive-ms`. See GCP's zones and machine-types for other options. I found n1-standard-4 to be the minimum machine type for JHipster.

You created Docker images earlier to run with Minikube. Then, those images were deployed to Docker Hub or your local Docker registry. If you deployed to Docker Hub, you can use your deployment files as-is. For Google Cloud and its Kubernetes Engine (GKE), you can also publish your images to your project's registry.
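Images in Google Container Registry are addressed as `gcr.io/PROJECT_ID/IMAGE_NAME`. A quick sketch of how the gateway's image coordinates are assembled (`jhipster7` is a placeholder project ID; substitute your own):

```shell
# gcr.io image coordinates: registry host / GCP project ID / image name
PROJECT_ID=jhipster7
IMAGE="gcr.io/${PROJECT_ID}/gateway"
echo "$IMAGE"
```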
Thankfully, this is easy to do with Jib. Navigate to the `gateway` directory and run:

```shell
./gradlew bootJar -Pprod jib -Djib.to.image=gcr.io/<your-project-id>/gateway
```

You can get your project ID by running `gcloud projects list`.

Repeat the process for `blog` and `store`. You can run these processes in parallel to speed things up.

```shell
cd ../blog
./gradlew bootJar -Pprod jib -Djib.to.image=gcr.io/<your-project-id>/blog
cd ../store
./gradlew bootJar -Pprod jib -Djib.to.image=gcr.io/<your-project-id>/store
```

**TIP:** You might have to run `gcloud auth configure-docker` for Jib to publish to your GCP container registry.

Then, in your `k8s/**/*-deployment.yml` files, add `gcr.io/<your-project-id>/` as a prefix to the image names. Remove the `imagePullPolicy` if you specified it earlier. For example:

```yaml
containers:
  - name: gateway-app
    image: gcr.io/jhipster7/gateway
    env:
```

In the `k8s` directory, apply all the deployment descriptors to run all your images.

```shell
./kubectl-apply.sh -f
```

You can monitor the progress of your deployments with `kubectl get pods -n demo`.

**TIP:** If you make a mistake configuring JHipster Registry and need to redeploy it, you can do so with the following commands:

```shell
kubectl apply -f registry-k8s/jhipster-registry.yml -n demo
kubectl rollout restart statefulset/jhipster-registry -n demo
```

You'll need to restart all your deployments if you changed any configuration settings that services need to retrieve.

```shell
kubectl rollout restart deploy -n demo
```

## Access Your Gateway on Google Cloud

Once everything is up and running, get the external IP of your gateway.

```shell
kubectl get svc gateway -n demo
```

You'll need to add the external IP address as a valid redirect to your Okta OIDC app. Run `okta login`, open the returned URL in your browser, and sign in to the Okta Admin Console. Go to the **Applications** section, find your application, and edit it.
Add the standard JHipster redirect URIs using the IP address. For example, http://34.71.48.244:8080/login/oauth2/code/oidc for the login redirect URI, and http://34.71.48.244:8080 for the logout redirect URI.

You can use the following commands to set your gateway's IP address as a variable and curl it.

```shell
EXTERNAL_IP=$(kubectl get svc gateway -ojsonpath="{.status.loadBalancer.ingress[0].ip}" -n demo)
curl $EXTERNAL_IP:8080
```

Run `open http://$EXTERNAL_IP:8080`, and you should be able to sign in.

Great! Now that you know things work, let's integrate better security, starting with HTTPS.

## Add HTTPS to Your Reactive Gateway

You should always use HTTPS. It's one of the easiest ways to secure things, especially with the free certificates offered these days. Ray Tsang's External Load Balancing docs were a big help in figuring out all these steps.

You'll need a static IP to assign to your TLS certificate (TLS being the protocol behind HTTPS).

```shell
gcloud compute addresses create gateway-ingress-ip --global
```

You can run the following command to make sure it worked.

```shell
gcloud compute addresses describe gateway-ingress-ip --global --format='value(address)'
```

Then, create a `k8s/ingress.yml` file:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "gateway-ingress-ip"
spec:
  rules:
    - http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: gateway
                port:
                  number: 8080
```

Deploy it and make sure it worked.

```shell
kubectl apply -f ingress.yml -n demo

# keep running this command until it displays an IP address
# (hint: up arrow recalls the last command)
kubectl get ingress gateway -n demo
```

To use a TLS certificate, you must have a fully qualified domain name and configure it to point to the IP address.
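With a real domain, "pointing it at the IP address" means creating a DNS A record for the static IP at your DNS provider. For example, in BIND zone-file syntax (the host name and IP below are placeholders):

```
gateway.example.com.   300   IN   A   34.120.0.10
```

The TTL (300 seconds here) is up to you; any record that resolves your chosen name to the static IP will do.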
If you don't have a real domain, you can use nip.io. Set the IP in a variable, as well as the domain.

```shell
EXTERNAL_IP=$(kubectl get ingress gateway -ojsonpath="{.status.loadBalancer.ingress[0].ip}" -n demo)
DOMAIN="${EXTERNAL_IP}.nip.io"

# Prove it works
echo $DOMAIN
curl $DOMAIN
```

To create a certificate, create a `k8s/certificate.yml` file.

```shell
cat << EOF > certificate.yml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: gateway-certificate
spec:
  domains:
    # Replace the value with your domain name
    - ${DOMAIN}
EOF
```

Add the certificate to `ingress.yml`:

```yaml
...
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "gateway-ingress-ip"
    networking.gke.io/managed-certificates: "gateway-certificate"
...
```

Deploy both files:

```shell
kubectl apply -f certificate.yml -f ingress.yml -n demo
```

Check your certificate's status until it prints `Status: ACTIVE`:

```shell
kubectl describe managedcertificate gateway-certificate -n demo
```

While you're waiting, you can proceed to forcing HTTPS in the next step.

## Force HTTPS with Spring Security

Spring Security's WebFlux support makes it easy to redirect to HTTPS. However, if you redirect all HTTP requests, the Kubernetes health checks will fail because they receive a 302 instead of a 200. Crack open `SecurityConfiguration.java` in the gateway project and add the following code to the `springSecurityFilterChain()` method.

```java
http.redirectToHttps(redirect -> redirect
    .httpsRedirectWhen(e -> e.getRequest().getHeaders().containsKey("X-Forwarded-Proto"))
);
```

Rebuild the Docker image for the gateway project.
```shell
./gradlew bootJar -Pprod jib -Djib.to.image=gcr.io/<your-project-id>/gateway
```

Run the following command to start a rolling restart of gateway instances:

```shell
kubectl rollout restart deployment gateway -n demo
```

**TIP:** Run `kubectl get deployments -n demo` to see your deployment names.

Now you should get a 302 when you access your domain. HTTPie is a useful alternative to curl.

Update your Okta OIDC app to have `https://${DOMAIN}/login/oauth2/code/oidc` as a valid redirect URI. Add `https://${DOMAIN}` to the sign-out redirect URIs too.

## Encrypt Your Kubernetes Secrets

Congratulations! Now you have everything running on GKE, using HTTPS! However, you have a lot of plain-text secrets in your K8s YAML files.

"But, wait!" you might say. Doesn't Kubernetes Secrets solve everything? In my opinion, no. They're just unencrypted base64-encoded strings stored in YAML files. There's a good chance you'll want to check in the `k8s` directory you created. Having secrets in your source code is a bad idea!

The good news is most people (where most people = my followers) manage secrets externally.

> Matt Raible (@mraible): What's your favorite way to protect secrets in your @kubernetesio YAML files?
> 4:13 PM · 28 Apr 2021

**NOTE:** Watch Kubernetes Secrets in 5 Minutes if you want to learn more about Kubernetes Secrets.

## The Current State of Secret Management in Kubernetes

I recently noticed a tweet from Daniel Jacob Bilar that links to a talk from FOSDEM 2021 on the current state of secret management within Kubernetes. It's an excellent overview of the various options.

## Store Secrets in Git with Sealed Secrets and Kubeseal

Bitnami has a Sealed Secrets Apache-licensed open source project. Its README explains how it works:

> **Problem:** "I can manage all my K8s config in git, except Secrets."
>
> **Solution:** Encrypt your Secret into a SealedSecret, which is safe to store, even in a public repository.
The SealedSecret can be decrypted only by the controller running in the target cluster. Nobody else (not even the original author) can recover the original Secret from the SealedSecret. In other words, you can store your Kubernetes Secrets in Git, thanks to Kubeseal. Hello SealedSecret! by Aurélie Vache provides an excellent overview of how to use it.

First, you'll need to install the Sealed Secrets CRD (Custom Resource Definition).

```shell
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.16.0/controller.yaml
```

Retrieve the certificate keypair that this controller generates.

```shell
kubectl get secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key -o yaml
```

Copy the raw value of `tls.crt` and decode it. You can use the command line, or learn more about base64 encoding/decoding in our documentation.

```shell
echo -n <tls.crt-value> | base64 --decode
```

Put the decoded value in a `tls.crt` file.

Next, install Kubeseal. On macOS, you can use Homebrew. For other platforms, see the release notes.

```shell
brew install kubeseal
```

The major item you need to encrypt in this example is the `ENCRYPT_KEY` you used to encrypt the OIDC client secret. Run the following command to do this, where the value comes from your `k8s/registry-k8s/jhipster-registry.yml` file.

```shell
kubectl create secret generic encrypt-key \
  --from-literal=ENCRYPT_KEY='your-value-here' \
  --dry-run=client -o yaml > secrets.yml
```

Next, use `kubeseal` to convert the secrets to encrypted secrets.

```shell
kubeseal --cert tls.crt --format=yaml -n demo < secrets.yml > sealed-secrets.yml
```

Remove the original secrets file and deploy your sealed secrets.
```shell
rm secrets.yml
kubectl apply -n demo -f sealed-secrets.yml && \
  kubectl get -n demo sealedsecret encrypt-key
```

## Configure JHipster Registry to Use the Sealed Secret

In `k8s/registry-k8s/jhipster-registry.yml`, change the `ENCRYPT_KEY` to use your new secret.

```yaml
...
- name: ENCRYPT_KEY
  valueFrom:
    secretKeyRef:
      name: encrypt-key
      key: ENCRYPT_KEY
```

**TIP:** You should be able to encrypt other secrets, like your database passwords, using a similar technique.

Now, redeploy JHipster Registry and restart all your deployments.

```shell
./kubectl-apply.sh -f
kubectl rollout restart deployment -n demo
```

You can use port-forwarding to see the JHipster Registry locally.

```shell
kubectl port-forward svc/jhipster-registry -n demo 8761
```

## Google Cloud Secret Manager

Google Cloud has a Secret Manager you can use to store your secrets. There's even a Spring Boot starter to make it convenient to retrieve these values in your app. For example, you could store your database password in a properties file.

```properties
spring.datasource.password=${sm://my-db-password}
```

This is pretty slick, but I like to remain cloud-agnostic. Also, I like how the JHipster Registry allows me to store encrypted secrets in Git.

## Use Spring Vault for External Secrets

Using an external key management solution like HashiCorp Vault is also recommended. The JHipster Registry will have Vault support in its next release. In the meantime, I recommend reading Secure Secrets With Spring Cloud Config and Vault.

## Scale Your Reactive Java Microservices

You can scale your instances using the `kubectl scale` command.

```shell
kubectl scale deployments/store --replicas=2 -n demo
```

Scaling will work just fine for the microservice apps because they're set up as OAuth 2.0 resource servers and are therefore stateless.
However, the gateway uses Spring Security's OIDC login feature and stores the access tokens in the session. So if you scale it, sessions won't be shared. Single sign-on should still work; you'll just have to do the OAuth dance to get tokens if you hit a different instance. To synchronize sessions, you can use Spring Session and Redis with JHipster.

**CAUTION:** If you leave everything running on Google Cloud, you will be charged for usage. Therefore, I recommend removing your cluster or deleting your namespace (`kubectl delete ns demo`) to reduce your cost.

```shell
gcloud container clusters delete reactive-ms --zone=us-central1-a
```

You can delete your Ingress IP address too:

```shell
gcloud compute addresses delete gateway-ingress-ip --global
```

## Monitor Your Kubernetes Cluster with K9s

Using `kubectl` to monitor your Kubernetes cluster can get tiresome. That's where K9s can be helpful. It provides a terminal UI to interact with your Kubernetes clusters. K9s was created by my good friend Fernand Galiana. He's also created a commercial version called K9sAlpha. To install it on macOS, run `brew install k9s`. Then run `k9s -n demo` to start it. You can navigate to your pods, select them with Return, and navigate back up with Esc.

There's also KDash, from JHipster co-lead Deepu K Sasidharan. It's a simple K8s terminal dashboard built with Rust. Deepu recently released an MVP of the project. If for some reason you don't like CLIs, you can try Kubernetic.

## Continuous Integration and Delivery of JHipster Microservices

This tutorial doesn't cover continuous integration and delivery of your reactive microservice architecture. I plan to cover that in a future post. If you have a solution you like, please leave a comment.

## Spring on Google Cloud Platform

JHipster uses Docker containers to run all its databases in this example. However, there are a number of Google Cloud services you can use as alternatives. See the Spring Cloud GCP project on GitHub for more information.

I didn't mention Testcontainers in this post.
However, JHipster does support using them. Testcontainers also has a GCloud module.

## Why Not Istio?

I didn't use Istio in this example because I didn't want to complicate things. Learning Kubernetes is hard enough without learning another system on top of it. Istio acts as a network between your containers that can do networky things like authentication, authorization, monitoring, and retries. I like to think of it as AOP for containers. If you'd like to see how to use JHipster with Istio, see How to set up Java microservices with Istio service mesh on Kubernetes by JHipster co-lead Deepu K Sasidharan.

Fernand Galiana recommends checking out BPF (Berkeley Packet Filter) and Cilium. Cilium is open source software for transparently providing and securing the network and API connectivity between application services deployed using Linux container management platforms such as Kubernetes.

## Learn More About Kubernetes, Spring Boot, and JHipster

This blog post showed you how to deploy your reactive Java microservices to production using Kubernetes. JHipster did much of the heavy lifting for you since it generated all the YAML-based deployment descriptors. Since no one really likes writing YAML, I'm calling that a win!

You learned how to use JHipster Registry to encrypt your secrets and configure Git as a configuration source for Spring Cloud Config. Bitnami's Sealed Secrets is a nice companion to encrypt the secrets in your Kubernetes deployment descriptors.

For more information about storing your secrets externally, these additional resources might help:

- Kelsey Hightower's Vault on Cloud Run Tutorial
- James Strachan's Helm Post Renderer

You can find the source code for this example on GitHub in our Java microservices examples repository.
```shell
git clone https://github.com/oktadev/java-microservices-examples.git
cd java-microservices-examples/jhipster-k8s
```

See JHipster's documentation on Kubernetes and GCP if you'd like more concise instructions.

If you enjoyed this post, I think you'll like these others as well:

- Reactive Java Microservices with Spring Boot and JHipster
- Build a Secure Micronaut and Angular App with JHipster
- Fast Java Made Easy with Quarkus and JHipster
- How to Docker with Spring Boot
- Security Patterns for Microservice Architectures
- Build a Microservice Architecture with Spring Boot and Kubernetes (uses Spring Boot 2.1)

If you have any questions, please ask them in the comments below. To be notified when we publish new blog posts, follow us on Twitter or LinkedIn. We frequently publish videos to our YouTube channel too. Subscribe today!

A huge thanks goes to Fernand Galiana for his review and detailed feedback.
