Kubernetes Dashboard integrates with metrics-server to provide real-time insights into cluster resource utilization. This guide covers how to view and interpret metrics for nodes, pods, and other resources.

Overview

Dashboard uses the dashboard-metrics-scraper sidecar to collect and store metrics data, providing:
  • CPU and memory usage graphs
  • Historical sparklines in list views
  • Resource utilization trends
  • Per-pod and per-node metrics
Metrics collection requires metrics-server to be running in your cluster. The metrics-scraper is deployed by default with Kubernetes Dashboard.

Prerequisites

Installing metrics-server

Verify metrics-server is running:
kubectl top nodes
If the command fails, install metrics-server:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Verify installation:
kubectl get deployment metrics-server -n kube-system
kubectl top pod

Dashboard Metrics Scraper

The metrics scraper is automatically deployed with Dashboard. It:
  • Queries the Metrics API every 60 seconds
  • Stores data points in a SQLite database
  • Serves metrics to the Dashboard frontend
  • Maintains historical data for sparklines and graphs

Metrics Architecture

The metrics flow: kubelets report resource usage to metrics-server, which exposes it through the Metrics API; the dashboard-metrics-scraper polls that API, persists the samples to its SQLite database, and serves them to the Dashboard frontend for graphs and sparklines.

kubelet → metrics-server → Metrics API → dashboard-metrics-scraper → SQLite → Dashboard frontend

Metrics Data Structure

The scraper stores metrics in a structured format (modules/metrics-scraper/pkg/api/dashboard/types.go:28-75):
type SidecarMetric struct {
    DataPoints   []DataPoint
    MetricPoints []MetricPoint
    MetricName   string
    UIDs         []types.UID
}

type MetricPoint struct {
    Timestamp time.Time
    Value     uint64
}

type DataPoint struct {
    X int64  // Timestamp
    Y int64  // Value
}

Database Schema

Metrics are stored in two tables.
Pods Table:
CREATE TABLE pods (
    namespace TEXT,
    name TEXT,
    uid TEXT,
    time TEXT,
    cpu INTEGER,
    memory INTEGER
)
Nodes Table:
CREATE TABLE nodes (
    name TEXT,
    uid TEXT,
    time TEXT,
    cpu INTEGER,
    memory INTEGER
)
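Given this schema, the most recent reading for a pod could be fetched with a query like the following. This is an illustrative query against the tables above, not code taken from the scraper, and it assumes the `time` column holds sortable ISO-8601 timestamps:

```sql
SELECT cpu, memory, time
FROM pods
WHERE namespace = 'default' AND name = 'nginx-abc'
ORDER BY time DESC
LIMIT 1;
```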

Viewing Node Metrics

Access node metrics at Cluster → Nodes.

Node Metrics Display

CPU Usage

Shows CPU utilization in millicores and percentage

Memory Usage

Displays memory consumption in bytes and percentage

Pod Count

Number of pods running on the node

Allocation

Resource requests vs. capacity

Node Detail Metrics

Click on a node to view detailed metrics:
  • CPU Chart: Historical CPU usage over time
  • Memory Chart: Historical memory consumption
  • Resource Allocation: Visual breakdown of allocated resources
  • Capacity: Total node capacity vs. requests vs. limits
Hover over chart data points to see exact values and timestamps.

Viewing Pod Metrics

Access pod metrics at Workloads → Pods.

List View Sparklines

The pod list displays mini sparkline graphs showing:
  • Recent CPU usage trend
  • Recent memory usage trend
  • Visual indication of resource consumption patterns

Pod Detail Metrics

The pod detail page shows:
type PodMetrics struct {
    CPUUsage      int64 // Current CPU in millicores
    MemoryUsage   int64 // Current memory in bytes
    CPUHistory    []DataPoint
    MemoryHistory []DataPoint
}
  • Current Usage: CPU (millicores) and memory (bytes) currently consumed
  • Requests: CPU and memory requests defined in the pod spec
  • Limits: CPU and memory limits defined in the pod spec
  • Usage Graphs: Historical CPU and memory consumption
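Relating current usage to the spec's limits is a simple ratio. As a sketch (a hypothetical helper, not Dashboard source code), with both values in millicores:

```go
package main

import "fmt"

// cpuUtilization reports current usage as a percentage of the pod's CPU
// limit, both in millicores. ok is false when no limit is set (limit == 0).
func cpuUtilization(usageMillicores, limitMillicores int64) (pct float64, ok bool) {
	if limitMillicores == 0 {
		return 0, false // no limit defined in the pod spec
	}
	return 100 * float64(usageMillicores) / float64(limitMillicores), true
}

func main() {
	if pct, ok := cpuUtilization(250, 500); ok {
		fmt.Printf("%.0f%% of limit\n", pct) // 250m of a 500m limit → 50%
	}
}
```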

Metrics API Endpoints

The metrics scraper exposes REST endpoints (modules/metrics-scraper/pkg/api/dashboard/dashboard.go:33-36):

Node Metrics

GET /nodes/{nodeName}/metrics/{metricName}/{whatever}
Example:
curl http://dashboard-metrics-scraper/nodes/node-1/metrics/cpu/
Response:
{
  "items": [
    {
      "metricName": "cpu",
      "metricPoints": [
        {"timestamp": "2024-03-05T10:30:00Z", "value": 450000000},
        {"timestamp": "2024-03-05T10:31:00Z", "value": 520000000}
      ],
      "dataPoints": [{"x": 1709637000, "y": 450}],
      "uids": ["node-1-uid"]
    }
  ]
}
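A client can decode this payload with a struct that mirrors the fields in the example response. This is a minimal sketch assuming the field names shown above; `parseMetrics` is an illustrative helper, not part of the scraper:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// metricsResponse is a minimal mirror of the scraper's JSON payload
// (field names taken from the example response above).
type metricsResponse struct {
	Items []struct {
		MetricName   string `json:"metricName"`
		MetricPoints []struct {
			Timestamp string `json:"timestamp"`
			Value     uint64 `json:"value"`
		} `json:"metricPoints"`
	} `json:"items"`
}

// parseMetrics decodes a scraper response body.
func parseMetrics(body []byte) (metricsResponse, error) {
	var r metricsResponse
	err := json.Unmarshal(body, &r)
	return r, err
}

func main() {
	body := []byte(`{"items":[{"metricName":"cpu","metricPoints":[{"timestamp":"2024-03-05T10:30:00Z","value":450000000}]}]}`)
	r, err := parseMetrics(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(r.Items[0].MetricName, r.Items[0].MetricPoints[0].Value)
}
```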

Pod Metrics

GET /namespaces/{namespace}/pod-list/{podName}/metrics/{metricName}/{whatever}
Example:
curl http://dashboard-metrics-scraper/namespaces/default/pod-list/nginx-abc/metrics/memory/

Deployment Metrics

View aggregated metrics for deployments at Workloads → Deployments. The deployment view shows (modules/api/pkg/resource/deployment/list.go:31-44):
type DeploymentList struct {
    ListMeta          types.ListMeta
    CumulativeMetrics []metricapi.Metric
    Status            common.ResourceStatus
    Deployments       []Deployment
    Errors            []error
}
Cumulative metrics include:
  • Aggregate CPU usage across all pods
  • Aggregate memory usage across all pods
  • Per-deployment sparklines
  • Resource utilization trends
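Aggregation across a deployment's pods is a straightforward sum of the latest per-pod readings. The sketch below uses an assumed `podSample` type for illustration; it is not the Dashboard's actual aggregation code:

```go
package main

import "fmt"

// podSample is one pod's most recent reading (assumed shape for illustration).
type podSample struct {
	CPUMillicores int64
	MemoryBytes   int64
}

// cumulative sums per-pod readings into deployment-level totals.
func cumulative(pods []podSample) (cpu, mem int64) {
	for _, p := range pods {
		cpu += p.CPUMillicores
		mem += p.MemoryBytes
	}
	return cpu, mem
}

func main() {
	pods := []podSample{{100, 64 << 20}, {150, 128 << 20}}
	cpu, mem := cumulative(pods)
	fmt.Println(cpu, mem) // 250 millicores, 192 MiB in bytes
}
```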

Metrics Configuration

Customize metrics behavior with Dashboard flags:

API Container Flags

--metrics-scraper-service-name=kubernetes-dashboard-metrics-scraper
--namespace=kubernetes-dashboard
--metric-client-check-period=30s

Metrics Scraper Flags

--db-file=/tmp/metrics.db
--metric-resolution=1m
--metric-duration=15m
The --metric-client-check-period flag controls health check frequency. Dashboard disables metrics if the scraper becomes unavailable.
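These scraper flags are passed as container arguments. The fragment below is a hypothetical excerpt of the metrics-scraper Deployment spec, shown only to illustrate where the flags go; the container and image names are assumptions:

```yaml
# Hypothetical excerpt of the metrics-scraper Deployment spec.
containers:
  - name: kubernetes-dashboard-metrics-scraper
    image: kubernetesui/dashboard-metrics-scraper   # image name assumed
    args:
      - --db-file=/tmp/metrics.db
      - --metric-resolution=1m
      - --metric-duration=15m
```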

Understanding Metrics Data

CPU Metrics

CPU is measured in millicores:
  • 1000m = 1 full CPU core
  • 500m = 0.5 CPU cores
  • 100m = 10% of one CPU core

Memory Metrics

Memory is measured in bytes:
  • 1024 bytes = 1 KiB
  • 1048576 bytes = 1 MiB
  • 1073741824 bytes = 1 GiB
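The unit conversions above reduce to two divisions. A small sketch (illustrative helpers, not Dashboard code):

```go
package main

import "fmt"

// millicoresToCores and bytesToMiB convert the raw units Dashboard
// displays into human-friendly numbers.
func millicoresToCores(m int64) float64 { return float64(m) / 1000 }
func bytesToMiB(b int64) float64        { return float64(b) / (1 << 20) }

func main() {
	fmt.Printf("%.1f cores\n", millicoresToCores(500)) // 500m = 0.5 cores
	fmt.Printf("%.0f MiB\n", bytesToMiB(1073741824))   // 1 GiB = 1024 MiB
}
```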

Time Windows

Default retention:
  • Scrape Interval: 60 seconds
  • Retention Period: 15 minutes
  • Database Size: Automatically managed

Troubleshooting

Check metrics-server installation:
kubectl get pods -n kube-system | grep metrics-server
kubectl top nodes
Verify metrics-scraper is running:
kubectl get pods -n kubernetes-dashboard | grep metrics-scraper
kubectl logs -n kubernetes-dashboard deployment/kubernetes-dashboard-metrics-scraper
Check the scrape interval and database health:
kubectl logs -n kubernetes-dashboard deployment/kubernetes-dashboard-metrics-scraper --tail=50
Restart the metrics scraper:
kubectl rollout restart deployment/kubernetes-dashboard-metrics-scraper -n kubernetes-dashboard
Reduce retention period or scrape interval:
--metric-duration=10m  # Reduce from 15m

Metrics Integration

Dashboard integrates with the metrics ecosystem:

Supported Metrics Providers

metrics-server

Default metrics provider using the Metrics API

Custom Providers

Extend via the integration framework

Integration Framework

The integration framework (modules/api/pkg/integration/manager.go) supports:
  • Multiple metric providers
  • Health checking and failover
  • Provider registration and discovery
Dashboard currently integrates metrics-server by default. The integration framework allows for future expansion to providers like Prometheus or custom metrics APIs.

Best Practices

  • Check metrics weekly to identify resource bottlenecks and optimize allocations.
  • Use metrics data to determine realistic CPU and memory limits for your workloads.
  • Integrate external monitoring systems for alerting on unusual metric patterns.
  • Use HorizontalPodAutoscaler (HPA) to automatically scale based on CPU/memory metrics.

Next Steps

Viewing Logs

Access container logs for debugging

Integrations

Learn about third-party monitoring integrations
