Kubernetes Dashboard supports integration with third-party tools and services to enhance monitoring, logging, and observability capabilities. This guide covers available integrations and how to configure them.
Overview
Dashboard provides an integration framework that allows it to work seamlessly with external tools and services:
Metrics Providers : CPU and memory metrics from various sources
Logging Systems : Log aggregation and analysis platforms
Service Meshes : Integration with Istio, Linkerd, and others
Monitoring Tools : Prometheus, Grafana, and observability platforms
The integration framework is located in modules/api/pkg/integration/manager.go and supports pluggable integrations for metrics and other services.
Metrics Integration
Dashboard integrates with metrics providers to display resource utilization data.
metrics-server (Default)
Dashboard uses metrics-server as the default metrics provider:
Install metrics-server
Deploy metrics-server to your cluster:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Verify Installation
Check that metrics-server is running:
kubectl get deployment metrics-server -n kube-system
kubectl top nodes
kubectl top pods -A
Deploy metrics-scraper
The dashboard-metrics-scraper is deployed automatically with Dashboard:
kubectl get pods -n kubernetes-dashboard | grep metrics-scraper
How It Works
The metrics integration flow:
metrics-server collects metrics from kubelets
dashboard-metrics-scraper queries the Metrics API every 60 seconds
Metrics are stored in a SQLite database
Dashboard API serves metrics to the frontend
UI displays graphs, sparklines, and usage data
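Each stage of this flow can be checked from the command line. For example, the Metrics API that the scraper polls can be queried directly (the label selector below assumes the standard Dashboard manifests):

```shell
# Query the raw Metrics API that dashboard-metrics-scraper consumes
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

# If graphs stay empty, confirm the scraper pod itself is healthy
kubectl get pods -n kubernetes-dashboard -l k8s-app=dashboard-metrics-scraper
```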
Configuration
Configure metrics integration via Dashboard flags:
args:
  - --metrics-scraper-service-name=kubernetes-dashboard-metrics-scraper
  - --namespace=kubernetes-dashboard
  - --metric-client-check-period=30s
Flags:
--metrics-scraper-service-name: Service name for the metrics scraper
--namespace: Namespace where scraper is deployed
--metric-client-check-period: Health check interval (default: 30s)
Dashboard automatically disables metrics if the provider becomes unavailable, ensuring resilience to metric provider crashes.
Monitoring Integrations
Prometheus
Integrate Dashboard with Prometheus for advanced monitoring:
Scraping Dashboard Metrics
Configure Prometheus to scrape Dashboard metrics:
scrape_configs:
  - job_name: 'kubernetes-dashboard'
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - kubernetes-dashboard
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_k8s_app]
        regex: kubernetes-dashboard
        action: keep
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        regex: "true"
        action: keep
Adding Prometheus Annotations
Annotate Dashboard resources for Prometheus discovery:
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
    prometheus.io/path: "/metrics"
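The same annotations can be applied to an already-deployed Service without editing manifests (the Service name assumes a standard Dashboard install):

```shell
kubectl annotate service kubernetes-dashboard -n kubernetes-dashboard \
  prometheus.io/scrape="true" \
  prometheus.io/port="9090" \
  prometheus.io/path="/metrics" \
  --overwrite
```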
Grafana
Visualize Dashboard metrics in Grafana:
Add Prometheus Data Source
Configure Grafana to use your Prometheus instance:
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus:9090
    access: proxy
Import Dashboard
Create a Grafana dashboard for Kubernetes Dashboard metrics
Configure Panels
Add panels for:
API request rates
Response times
Error rates
Resource utilization
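Example PromQL queries for such panels; the metric names below are illustrative assumptions — check the Dashboard /metrics endpoint for the names it actually exports:

```promql
# API request rate (assumed metric name)
sum(rate(http_requests_total{job="kubernetes-dashboard"}[5m]))

# 95th-percentile response time (assumed histogram name)
histogram_quantile(0.95,
  sum(rate(http_request_duration_seconds_bucket{job="kubernetes-dashboard"}[5m])) by (le))

# Error rate (5xx responses only)
sum(rate(http_requests_total{job="kubernetes-dashboard",code=~"5.."}[5m]))
```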
Use Grafana’s Kubernetes dashboards as templates and extend them with Dashboard-specific metrics.
Logging Integrations
ELK Stack (Elasticsearch, Logstash, Kibana)
Integrate Dashboard with ELK for advanced log analysis:
Fluentd Configuration
Deploy Fluentd to collect Dashboard logs:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/kubernetes-dashboard-*.log
      pos_file /var/log/dashboard.log.pos
      tag kubernetes.dashboard
      format json
    </source>
    <match kubernetes.dashboard>
      @type elasticsearch
      host elasticsearch
      port 9200
      logstash_format true
      logstash_prefix dashboard
    </match>
Kibana Dashboards
Create Kibana visualizations for:
Error rate trends
Request volume by endpoint
Authentication failures
Resource access patterns
Grafana Loki
Use Loki for lightweight log aggregation:
apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-config
data:
  promtail.yaml: |
    server:
      http_listen_port: 9080
    clients:
      - url: http://loki:3100/loki/api/v1/push
    scrape_configs:
      - job_name: kubernetes-dashboard
        kubernetes_sd_configs:
          - role: pod
            namespaces:
              names:
                - kubernetes-dashboard
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_label_k8s_app]
            target_label: app
          - source_labels: [__meta_kubernetes_namespace]
            target_label: namespace
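With the relabeling above in place, Dashboard logs can be queried from Grafana's Explore view using LogQL, for example:

```logql
{app="kubernetes-dashboard", namespace="kubernetes-dashboard"} |= "error"
```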
Service Mesh Integration
Istio
Integrate Dashboard with Istio service mesh:
Enable Sidecar Injection
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
  labels:
    istio-injection: enabled
Or annotate specific pods:
metadata:
  annotations:
    sidecar.istio.io/inject: "true"
Virtual Service Configuration
Route traffic through Istio:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
spec:
  hosts:
    - dashboard.example.com
  gateways:
    - dashboard-gateway
  http:
    - route:
        - destination:
            host: kubernetes-dashboard
            port:
              number: 443
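The dashboard-gateway referenced above must also exist. A minimal sketch (the hostname, selector, and TLS secret name are assumptions for your environment):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: dashboard-gateway
  namespace: kubernetes-dashboard
spec:
  selector:
    istio: ingressgateway   # assumes the default Istio ingress gateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: dashboard-tls   # assumed TLS secret
      hosts:
        - dashboard.example.com
```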
Observability
Istio provides automatic metrics, traces, and logs for Dashboard:
Kiali : Service graph and topology
Jaeger : Distributed tracing
Grafana : Istio dashboards
Linkerd
Integrate with Linkerd service mesh:
# Inject Linkerd proxy
kubectl get deploy -n kubernetes-dashboard -o yaml | linkerd inject - | kubectl apply -f -
# Verify injection
linkerd -n kubernetes-dashboard check --proxy
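Once the proxy is injected, live traffic metrics for Dashboard can be inspected with the Linkerd viz extension (assuming it is installed in the cluster):

```shell
# Install the viz extension if it is not already present
linkerd viz install | kubectl apply -f -

# Success rate, request rate, and latency for Dashboard workloads
linkerd viz stat deploy -n kubernetes-dashboard
```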
Authentication Integration
OIDC (OpenID Connect)
Dashboard supports OIDC authentication through the Kubernetes API server.
Enable OIDC on your cluster:
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
apiServer:
  extraArgs:
    oidc-issuer-url: https://accounts.google.com
    oidc-client-id: kubernetes
    oidc-username-claim: email
    oidc-groups-claim: groups
Login Flow
Obtain OIDC Token
User authenticates with identity provider
Present Token to Dashboard
Paste the ID token in Dashboard login screen
API Server Validation
Kubernetes API server validates the token
Access Granted
Dashboard proxies requests with the authenticated identity
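One way to obtain the ID token in step 1 is the kubelogin (kubectl oidc-login) plugin — an assumption here, any OIDC client works. The issuer URL and client ID must match the API server flags above:

```shell
kubectl oidc-login get-token \
  --oidc-issuer-url=https://accounts.google.com \
  --oidc-client-id=kubernetes
```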
OAuth2 Proxy
Use oauth2-proxy for SSO integration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth2-proxy
spec:
  template:
    spec:
      containers:
        - name: oauth2-proxy
          image: quay.io/oauth2-proxy/oauth2-proxy:latest
          args:
            - --provider=oidc
            - --email-domain=*
            - --upstream=https://kubernetes-dashboard:443
            - --http-address=0.0.0.0:4180
            - --redirect-url=https://dashboard.example.com/oauth2/callback
          env:
            - name: OAUTH2_PROXY_CLIENT_ID
              value: your-client-id
            - name: OAUTH2_PROXY_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oauth2-proxy
                  key: client-secret
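The proxy still needs a Service so your ingress can reach it. A minimal sketch (the `app: oauth2-proxy` label assumes the Deployment's pod template carries it):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: oauth2-proxy
spec:
  selector:
    app: oauth2-proxy
  ports:
    - port: 4180
      targetPort: 4180
```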
Alert Manager Integration
Integrate with Prometheus AlertManager:
Alert Rules
Define alerts for Dashboard issues:
groups:
  - name: dashboard
    interval: 30s
    rules:
      - alert: DashboardDown
        expr: up{job="kubernetes-dashboard"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Kubernetes Dashboard is down"
          description: "Dashboard has been down for more than 5 minutes"
      - alert: DashboardHighErrorRate
        expr: rate(dashboard_errors_total[5m]) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High error rate in Dashboard"
          description: "Error rate is {{ $value }} errors/sec"
Notification Routing
Configure alert routing:
route:
  group_by: ['alertname', 'cluster']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 12h
  receiver: 'team-dashboard'
  routes:
    - match:
        severity: critical
      receiver: 'pagerduty'
receivers:
  - name: 'team-dashboard'
    slack_configs:
      - channel: '#kubernetes-dashboard'
        text: '{{ range .Alerts }}{{ .Annotations.description }}{{ end }}'
  - name: 'pagerduty'
    pagerduty_configs:
      - service_key: your-pagerduty-key
Custom Integrations
Integration Framework
Dashboard’s integration framework supports custom providers:
type IntegrationManager interface {
	// Metric returns metric integration
	Metric() MetricIntegration
	// RegisterMetricIntegration registers new metric integration
	RegisterMetricIntegration(integration MetricIntegration)
}
Creating a Custom Integration
Implement custom metric providers or integrations by extending the framework.
Future versions of Dashboard may support additional integration types beyond metrics, such as custom logging providers or workflow integrations.
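A minimal, self-contained sketch of the registration pattern in Go. Note this is illustrative only: `MetricIntegration`'s real method set lives in `modules/api/pkg/integration`, and the `ID`/`HealthCheck` methods and `customProvider` type below are assumptions, not the actual Dashboard API.

```go
package main

import "fmt"

// MetricIntegration is a simplified stand-in for Dashboard's real
// metric integration interface (assumed methods, not the actual API).
type MetricIntegration interface {
	ID() string
	HealthCheck() error
}

// customProvider is a hypothetical third-party metrics provider.
type customProvider struct{ name string }

func (c *customProvider) ID() string         { return c.name }
func (c *customProvider) HealthCheck() error { return nil }

// integrationManager mirrors the register/lookup pattern described above.
type integrationManager struct {
	metric MetricIntegration
}

// RegisterMetricIntegration stores the provider for later lookup.
func (m *integrationManager) RegisterMetricIntegration(i MetricIntegration) {
	m.metric = i
}

// Metric returns the currently registered metric integration.
func (m *integrationManager) Metric() MetricIntegration { return m.metric }

func main() {
	mgr := &integrationManager{}
	mgr.RegisterMetricIntegration(&customProvider{name: "my-metrics-provider"})
	fmt.Println(mgr.Metric().ID())
}
```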
Best Practices
Use dedicated namespaces for integrations
Deploy monitoring and logging tools in separate namespaces:
kubectl create namespace monitoring
kubectl create namespace logging
Implement resource limits
Set resource requests/limits for integration components:
resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
    cpu: "200m"
Enable TLS for metrics endpoints
Secure metrics scraping with TLS:
scheme: https
tls_config:
  ca_file: /etc/prometheus/ca.crt
Configure retention policies
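For example, cap Prometheus's on-disk retention so integration metrics don't grow unbounded (server flags from Prometheus 2.x; the values are illustrative):

```yaml
# Prometheus server startup flags
--storage.tsdb.retention.time=15d
--storage.tsdb.retention.size=10GB
```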
Monitor integration health
Set up alerts for integration failures:
- alert: MetricsScraperDown
  expr: up{job="dashboard-metrics-scraper"} == 0
Troubleshooting
Metrics not showing
Check metrics-server:
kubectl get pods -n kube-system | grep metrics-server
kubectl logs -n kube-system deployment/metrics-server
Verify metrics-scraper:
kubectl logs -n kubernetes-dashboard deployment/dashboard-metrics-scraper
Test the Metrics API:
kubectl top nodes
kubectl top pods -A
Prometheus not scraping Dashboard
Check service discovery:
kubectl get servicemonitor -n kubernetes-dashboard
Verify annotations:
kubectl get pods -n kubernetes-dashboard -o yaml | grep prometheus.io
Check Prometheus targets:
Navigate to Prometheus UI → Status → Targets
Istio sidecar not injecting
Verify the namespace label:
kubectl get namespace kubernetes-dashboard --show-labels
Check injection status:
kubectl get pods -n kubernetes-dashboard -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}'
Look for an istio-proxy container in the output.
Next Steps
Monitoring Metrics Learn about metrics collection and visualization
Viewing Logs Access and analyze container logs