Knative switchboard series, part 1. Set up Knative Eventing with Kafka from scratch, scale based on event volume, and monitor it

This page summarizes the projects mentioned and recommended in the original post on dev.to

  • poc-files

    Follow the Strimzi quickstart to install Kafka in the knative-eventing namespace, but use a different Kafka cluster definition (see below). Knative workloads expect to run in the knative-eventing namespace, otherwise issues arise, and it's easier to keep Knative and Kafka in one namespace. Use kafka-cluster.yaml as the Kafka cluster resource instead of the one used in the Strimzi quickstart (kafka-single-persistent.yaml). If you're not limited on disk, it's best to set the storage size to 50Gi or 100Gi in kafka-cluster.yaml, and at least 25Gi for the ZooKeeper storage; a sketch of where these sizes go is shown below. On a trial quota you're limited to 20Gi for Kafka and 10Gi for ZooKeeper (if running two Kafka clusters; with one, you can allocate more).
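
    A minimal sketch of what such a Kafka cluster resource can look like, based on the standard Strimzi Kafka custom resource. The cluster name, replica counts and listener settings here are illustrative; the actual kafka-cluster.yaml in poc-files may differ:

    ```yaml
    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster                  # illustrative name
      namespace: knative-eventing       # keep Kafka next to Knative Eventing
    spec:
      kafka:
        replicas: 1
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false
        config:
          offsets.topic.replication.factor: 1
          transaction.state.log.replication.factor: 1
          transaction.state.log.min.isr: 1
        storage:
          type: persistent-claim
          size: 50Gi                    # 20Gi if you are on a trial quota
          deleteClaim: false
      zookeeper:
        replicas: 1
        storage:
          type: persistent-claim
          size: 25Gi                    # 10Gi if you are on a trial quota
          deleteClaim: false
      entityOperator:
        topicOperator: {}
        userOperator: {}
    ```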

  • eventing-autoscaler-keda

    KEDA support for Knative Event Sources Autoscaling

    Install the scaling controller for Kafka sources - the KEDA autoscaler. Scaling (HPA) parameters are controlled by annotations on the KafkaSource YAML definition, as in the sketch below.
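
    The annotation names below follow the eventing-autoscaler-keda README (verify them against the version you install); the source spec itself - topic, consumer group, bootstrap address and sink - is a placeholder to adjust to your setup:

    ```yaml
    apiVersion: sources.knative.dev/v1beta1
    kind: KafkaSource
    metadata:
      name: my-kafka-source             # placeholder name
      namespace: knative-eventing
      annotations:
        autoscaling.knative.dev/class: keda.autoscaling.knative.dev
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "5"
        keda.autoscaling.knative.dev/pollingInterval: "30"
        keda.autoscaling.knative.dev/cooldownPeriod: "30"
        keda.autoscaling.knative.dev/kafkaLagThreshold: "10"
    spec:
      consumerGroup: my-consumer-group
      bootstrapServers:
        - my-cluster-kafka-bootstrap.knative-eventing.svc:9092
      topics:
        - my-topic
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display
    ```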

  • grafana-dashboard-from-metric-list

    Create grafana dashboard from metric list and uid of datasource

    Knative exposes a few metrics of its own (like processing delays) and also a huge number of Kafka metrics for its consumers/producers. I ended up curl-ing the Knative services on the metrics port and scripting a tool that builds a primitive Grafana dashboard from a list of metric names and a datasource UID. See the readme for how to use the tool. Alternatively, replace the datasource UID in dashboard-*.json with your own datasource UID, and make sure the job selectors in the dashboard JSON match the service name that sends the metrics.
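
    For illustration only, this is the kind of panel fragment such a dashboard JSON contains; the datasource UID, metric name and job label are placeholders to replace with your own, and the real dashboard-*.json files in the repo are more involved:

    ```json
    {
      "panels": [
        {
          "title": "event_dispatch_latencies p95",
          "type": "timeseries",
          "datasource": { "type": "prometheus", "uid": "YOUR_DATASOURCE_UID" },
          "targets": [
            {
              "refId": "A",
              "expr": "histogram_quantile(0.95, sum(rate(event_dispatch_latencies_bucket{job=\"kafka-source-dispatcher\"}[5m])) by (le))"
            }
          ]
        }
      ]
    }
    ```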

  • strimzi-kafka-operator

    Apache Kafka® running on Kubernetes

    The Knative dashboards together with Kafka's own dashboards shed light on almost any aspect of what's going on in the system.

NOTE: The number of mentions on this list indicates mentions on common posts plus user suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • How to self host Apache Kafka?

    1 project | /r/selfhosted | 22 Feb 2023
  • Prometheus Additional Scrape Config node metrics limitation

    1 project | /r/PrometheusMonitoring | 21 Jul 2022
  • How to renew Certificate in Strimzi Kafka entity operator.

    1 project | /r/sysadmin | 14 Jul 2022
  • Kafka visualization tool

    1 project | /r/apachekafka | 15 Feb 2023
  • Local app to debug pub/sub?

    1 project | /r/googlecloud | 6 Feb 2023