The repository contains a small Rust application that can be used to send batches of messages to the SQS queue (see src/loader). You can run the app with the cargo run command, passing the required --message-count and --queue-url flags.
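A rough invocation could look like this (the queue URL is a placeholder, and the command assumes it is run from the loader’s directory):

```bash
# Send 100 test messages to the target SQS queue (values are placeholders)
cd src/loader
cargo run --release -- \
  --message-count 100 \
  --queue-url https://sqs.eu-central-1.amazonaws.com/123456789012/spin-transformer-queue
```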
As mentioned earlier, the transformer is built using the SQS trigger for Spin. Looking at the implementation, you can see that incoming data is validated, transformed, and loaded into the target system (a Redis channel provided through Valkey).
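A minimal sketch of that flow might look like the following; the message shape, Valkey address, and channel name are illustrative placeholders, and the actual entry point is generated by Spin’s SQS trigger:

```rust
use anyhow::{bail, Result};
use serde::{Deserialize, Serialize};

// Placeholders -- in the real app the address and channel would come from the
// Spin manifest / component configuration.
const VALKEY_ADDRESS: &str = "redis://valkey.valkey.svc.cluster.local:6379";
const CHANNEL: &str = "transformed-orders";

// Illustrative message shapes; the actual payload schema will differ.
#[derive(Deserialize)]
struct IncomingOrder {
    id: String,
    amount_cents: i64,
}

#[derive(Serialize)]
struct TransformedOrder {
    id: String,
    amount: f64,
}

/// Validate: parse the raw SQS message body and reject malformed input.
fn validate(body: &[u8]) -> Result<IncomingOrder> {
    let order: IncomingOrder = serde_json::from_slice(body)?;
    if order.amount_cents < 0 {
        bail!("amount must not be negative");
    }
    Ok(order)
}

/// Transform: reshape the record into what the target system expects.
fn transform(order: IncomingOrder) -> TransformedOrder {
    TransformedOrder {
        id: order.id,
        amount: order.amount_cents as f64 / 100.0,
    }
}

/// Load: publish the result to a Valkey (Redis-compatible) channel via
/// Spin's outbound Redis support.
fn load(order: &TransformedOrder) -> Result<()> {
    let conn = spin_sdk::redis::Connection::open(VALKEY_ADDRESS)
        .map_err(|e| anyhow::anyhow!("opening Valkey connection: {e:?}"))?;
    conn.publish(CHANNEL, &serde_json::to_vec(order)?)
        .map_err(|e| anyhow::anyhow!("publishing to {CHANNEL}: {e:?}"))?;
    Ok(())
}

/// Invoked once per message delivered by the SQS trigger.
fn handle_message(body: &[u8]) -> Result<()> {
    let transformed = transform(validate(body)?);
    load(&transformed)
}
```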
For the sake of this article, we’ll focus on scaling the transformation part of the ETL application. We’ll use an Amazon SQS (Simple Queue Service) queue as the ingestion layer and a simple Valkey channel as the target system; Valkey is a flexible, distributed key-value datastore optimized for caching and other realtime workloads, deployed here into the Kubernetes cluster.
Spin Apps are packaged and distributed as OCI artifacts. By leveraging OCI artifacts, Spin Apps can be distributed using any registry that implements the Open Container Initiative Distribution Specification (a.k.a. “OCI Distribution Spec”).
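For example, building the transformer and pushing it to a registry could look roughly like this (the registry reference below is a placeholder; any registry that implements the OCI Distribution Spec works):

```bash
# Build the Spin app and push it as an OCI artifact (reference is a placeholder)
spin build
spin registry push ttl.sh/sqs-transformer:0.1.0
```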
KEDA (Kubernetes Event-Driven Autoscaling) extends Kubernetes’ scaling capabilities by allowing workloads to scale based on event-driven metrics such as message queue length, HTTP requests, or custom Prometheus queries. Unlike the traditional Horizontal Pod Autoscaler (HPA), which relies solely on CPU or memory metrics, KEDA provides fine-grained control and adaptability to diverse application needs. For developers using SpinKube, KEDA enables efficient scaling of Spin apps based on application-specific metrics, making it easier to handle event-driven workloads in a Kubernetes environment. KEDA ships with a large number of built-in scalers that simplify integration with services running both inside and outside of Kubernetes.
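To sketch what this looks like in practice, a ScaledObject that scales the transformer’s Deployment based on SQS queue length might resemble the following; the resource names, queue URL, region, and thresholds are placeholders, and the AWS credential wiring (via a TriggerAuthentication) is left out:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-transformer-scaler
spec:
  scaleTargetRef:
    name: sqs-transformer            # Deployment backing the Spin app (placeholder)
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.eu-central-1.amazonaws.com/123456789012/spin-transformer-queue
        queueLength: "5"             # target number of messages per replica
        awsRegion: eu-central-1
      # authenticationRef: would reference a TriggerAuthentication holding AWS credentials
```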
Setting up the Kubernetes cluster and the AWS SQS queue is outside the scope of this article, but you can deploy an Amazon EKS cluster by following this guide, or use k3s as a lightweight, local alternative. For setting up an SQS queue, refer to this tutorial.