A Fleet is a high-level abstraction for managing a group of identical GameServers. Similar to Kubernetes Deployments, Fleets provide declarative configuration, rolling updates, and scaling for game server workloads (automatic scaling is handled by the separate FleetAutoscaler resource).

Fleet Resource

Fleets use a two-level hierarchy to manage GameServers: a Fleet owns one or more GameServerSets, and each GameServerSet owns the individual GameServers. (Source: pkg/apis/agones/v1/fleet.go:114-152)
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: simple-fleet
spec:
  replicas: 5
  template:
    spec:
      ports:
        - name: default
          containerPort: 7654
          portPolicy: Dynamic
      template:
        spec:
          containers:
            - name: simple-game-server
              image: us-docker.pkg.dev/agones-images/examples/simple-game-server:0.41

Scaling Operations

Manual Scaling

Scale a Fleet by updating the replicas field:
# Via kubectl
kubectl scale fleet simple-fleet --replicas=10

# Via patch
kubectl patch fleet simple-fleet --type=merge -p '{"spec":{"replicas":10}}'

# Via edit
kubectl edit fleet simple-fleet

Fleet Status

Fleet status aggregates information from the underlying GameServers (source: pkg/apis/agones/v1/fleet.go:88-112):
status:
  replicas: 10              # Total GameServers
  readyReplicas: 7          # Ready state
  reservedReplicas: 1       # Reserved state  
  allocatedReplicas: 2      # Allocated state
  players:                  # Aggregated player tracking (Alpha)
    count: 45
    capacity: 1000
  counters:                 # Aggregated counters (Beta)
    rooms:
      count: 12
      capacity: 100
      allocatedCount: 8
      allocatedCapacity: 80
  lists:                    # Aggregated lists (Beta)
    players:
      count: 45
      capacity: 1000
      allocatedCount: 30
      allocatedCapacity: 800
Query Fleet status:
# Get ready count
kubectl get fleet simple-fleet -o jsonpath='{.status.readyReplicas}'

# Get all status fields (use -o json rather than jsonpath, so jq receives valid JSON)
kubectl get fleet simple-fleet -o json | jq '.status'

# Watch status changes
kubectl get fleet simple-fleet -w

Deployment Strategies

Fleets support two deployment strategies for updating GameServers:

RollingUpdate (Default)

Gradually replaces old GameServers with new ones (source: fleet.go:156-177):
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # Max new GameServers to create
      maxUnavailable: 25%  # Max old GameServers to terminate
The rollout loop proceeds in iterations:
  1. Calculate bounds: based on maxSurge and maxUnavailable, determine how many GameServers to create/delete per iteration.
  2. Create new GameServers: spin up new GameServers with the updated configuration (up to maxSurge).
  3. Wait for Ready: wait for the new GameServers to become Ready before proceeding.
  4. Delete old GameServers: remove old non-allocated GameServers (up to maxUnavailable).
  5. Repeat: continue until all GameServers are updated.
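The bound calculation in step 1 can be sketched as plain shell arithmetic (an illustration of the rounding behavior, not Agones source code):

```shell
# Illustrative only: translate maxSurge / maxUnavailable percentages into
# absolute per-iteration bounds, rounding up as the validation rules describe.
replicas=10
max_surge_pct=25        # maxSurge: 25%
max_unavailable_pct=25  # maxUnavailable: 25%

# Integer ceiling division: ceil(a*b/100) == (a*b + 99) / 100
surge=$(( (replicas * max_surge_pct + 99) / 100 ))
unavailable=$(( (replicas * max_unavailable_pct + 99) / 100 ))

echo "create up to $surge new GameServers per iteration"       # 3 (ceil of 2.5)
echo "delete up to $unavailable old GameServers per iteration" # 3
```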
Configuration options:
rollingUpdate:
  maxSurge: 25%        # 25% of replicas
  maxUnavailable: 25%  # Must be 1-99%
Validation (fleet.go:190-201):
  • Percentage values must be between 1% and 99%
  • Absolute counts are derived from percentages by rounding up
Allocated GameServers are never deleted during a rolling update. Only Ready and Reserved GameServers are replaced.

Recreate

Deletes all non-allocated GameServers before creating new ones:
spec:
  strategy:
    type: Recreate
Recreate strategy causes temporary unavailability of Ready GameServers. Use only when you need to completely replace the Fleet configuration.
Best for:
  • Major version updates
  • Breaking configuration changes
  • When zero Ready GameServers is acceptable

Scheduling Strategies

Control how GameServers are distributed across nodes (source: fleet.go:154-162).

Packed (Default)

Goal: Minimize infrastructure usage
spec:
  scheduling: Packed
Behavior:
  • Concentrates GameServers on fewer nodes
  • Uses pod affinity to prefer nodes with existing GameServers
  • Allows cluster autoscaler to scale down unused nodes
Best for: Cloud environments with node autoscaling
Implementation: gameserver.go:889-912

Allocation Overflow

Handle situations where allocated GameServers exceed the desired replica count (source: common.go:113-176):
spec:
  replicas: 10
  allocationOverflow:
    labels:
      overflow: "true"
      tier: "spot-instances"
    annotations:
      reason: "excess-allocation"
      timestamp: "auto"
Use case: you scale a Fleet down from 20 to 10 replicas, but 15 GameServers are Allocated:
  1. The Fleet never deletes the 15 Allocated GameServers
  2. It creates no new GameServers while the Allocated count exceeds the desired 10 replicas
  3. The 5 "overflow" GameServers get the configured labels/annotations applied
  4. As overflow GameServers return to a non-allocated state, they are deleted until the Fleet converges on 10 replicas
AllocationOverflow helps identify GameServers that exist beyond your desired fleet size, useful for cost tracking and monitoring.
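The overflow count itself is simple arithmetic; here is a small jq sketch (assuming jq is installed, with numbers matching the example above):

```shell
# Overflow = max(allocatedReplicas - desired replicas, 0)
fleet='{"spec":{"replicas":10},"status":{"allocatedReplicas":15}}'

overflow=$(echo "$fleet" | jq '[.status.allocatedReplicas - .spec.replicas, 0] | max')
echo "overflow GameServers: $overflow"  # 5
```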

Priorities (Beta)

Define which GameServers are most important to keep during scale-down: Source: fleet.go:71-83
spec:
  priorities:
    - type: Counter      # Sort by Counter or List
      key: rooms         # Name of the Counter/List
      order: Ascending   # Ascending or Descending
    - type: List
      key: players
      order: Descending
How it works:
  • Position 0 has highest priority (checked first)
  • Compares available capacity: Capacity - Count (Counters) or Capacity - len(Values) (Lists)
  • Ascending: GameServers with less available capacity are removed first
  • Descending: GameServers with more available capacity are removed first
Impact with Packed scheduling: priorities act as a tie-breaker within a node:
  1. Choose the least utilized node (fewest GameServers)
  2. Within that node, use priorities to pick which GameServer to delete
Feature Gate: CountsAndLists
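The comparison above can be illustrated with jq (hypothetical GameServers carrying a Counter named rooms; under order: Ascending, the entries that sort first are deleted first):

```shell
# Three hypothetical GameServers with a Counter named "rooms"
gameservers='[
  {"name":"gs-a","rooms":{"count":8,"capacity":10}},
  {"name":"gs-b","rooms":{"count":2,"capacity":10}},
  {"name":"gs-c","rooms":{"count":10,"capacity":10}}
]'

# Sort by available capacity (capacity - count), ascending:
# gs-c (0 free) sorts first, so it is removed first under order: Ascending
order=$(echo "$gameservers" | jq -r 'sort_by(.rooms.capacity - .rooms.count) | .[].name')
echo "$order"
```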

Fleet Updates

Update GameServer Template

Changing the Fleet template triggers a rollout:
# Update image (Fleet templates are nested, so use a JSON patch rather than kubectl set image)
kubectl patch fleet simple-fleet --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/template/spec/containers/0/image",
   "value": "myimage:v2"}
]'

# Append an environment variable (the "/env/-" path assumes the container already defines an env array)
kubectl patch fleet simple-fleet --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/template/spec/containers/0/env/-", 
   "value": {"name": "VERSION", "value": "v2"}}
]'

Rollback

# kubectl rollout does not support Fleet resources; revert by re-applying
# the previous Fleet manifest
kubectl apply -f previous-fleet.yaml

# Or patch the template back to the previous values
kubectl patch fleet simple-fleet --type=merge -p "$(cat previous-template.yaml)"
GameServerSet resources are not automatically deleted. Old GameServerSets remain with 0 replicas for rollback purposes.

Fleet Metrics

Fleets expose metrics for monitoring:

Prometheus Metrics

# Fleet replica counts are exposed as a single gauge with a type label
agones_fleets_replicas_count{name="simple-fleet", type="ready"}
agones_fleets_replicas_count{name="simple-fleet", type="allocated"}
agones_fleets_replicas_count{name="simple-fleet", type="total"}
agones_fleets_replicas_count{name="simple-fleet", type="desired"}

Querying Status

# Get GameServers in Fleet
kubectl get gs -l agones.dev/fleet=simple-fleet

# Count by state
kubectl get gs -l agones.dev/fleet=simple-fleet -o json | \
  jq '.items | group_by(.status.state) | map({state: .[0].status.state, count: length})'

# Get allocation rate
kubectl get fleet simple-fleet -o json | \
  jq '.status.allocatedReplicas / .status.replicas'

Common Patterns

Blue-Green Deployments

Run two Fleets side-by-side:
# Blue fleet (current version)
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: game-blue
spec:
  replicas: 10
  template:
    metadata:
      labels:
        version: blue
    spec:
      # ... game server v1 config
---
# Green fleet (new version)
apiVersion: agones.dev/v1
kind: Fleet  
metadata:
  name: game-green
spec:
  replicas: 0  # Scale up when ready to switch
  template:
    metadata:
      labels:
        version: green
    spec:
      # ... game server v2 config
Switch traffic:
# Start green fleet
kubectl scale fleet game-green --replicas=10

# Wait for ready
kubectl wait --for=jsonpath='{.status.readyReplicas}'=10 fleet/game-green

# Point new allocations at green: GameServerAllocations are created per
# request (not patched), so update your allocation specs to select version=green

# Scale down blue when no allocations remain
kubectl scale fleet game-blue --replicas=0
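Since allocations are created per request rather than patched, switching traffic means submitting GameServerAllocations that select the green label (a sketch using the version labels defined above):

```yaml
apiVersion: allocation.agones.dev/v1
kind: GameServerAllocation
metadata:
  generateName: game-allocation-
spec:
  selectors:
    - matchLabels:
        version: green
```

Submit each allocation with kubectl create -f allocation.yaml.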

Canary Deployments

Gradually shift traffic to new version:
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: game-stable
spec:
  replicas: 9  # 90% of traffic
  template:
    metadata:
      labels:
        version: stable
---
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: game-canary
spec:
  replicas: 1  # 10% of traffic
  template:
    metadata:
      labels:
        version: canary
Increase canary percentage:
kubectl scale fleet game-canary --replicas=3  # 30%
kubectl scale fleet game-stable --replicas=7  # 70%
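For a given total fleet size, the stable/canary split is simple arithmetic; a shell sketch (illustration only, assuming a fixed total of 10):

```shell
# Compute stable/canary replica counts for a target canary percentage
total=10
canary_pct=30

canary=$(( total * canary_pct / 100 ))  # 3
stable=$(( total - canary ))            # 7

echo "kubectl scale fleet game-canary --replicas=$canary"
echo "kubectl scale fleet game-stable --replicas=$stable"
```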

Best Practices

Set resource limits

Define CPU and memory limits in the GameServer template to ensure predictable scheduling.

Use health checks

Configure appropriate health check settings to detect and replace unhealthy GameServers.

Monitor allocation ratio

Track allocatedReplicas / readyReplicas to right-size your Fleet or configure autoscaling.

Use priorities for cost optimization

Configure priorities to preferentially delete less utilized GameServers first.

Troubleshooting

Fleet not scaling

# Check Fleet events
kubectl describe fleet <name>

# Check GameServerSet status
kubectl get gameserverset -l agones.dev/fleet=<name>

# Check for resource constraints
kubectl get events --field-selector reason=FailedScheduling

GameServers not updating

# Verify strategy configuration
kubectl get fleet <name> -o jsonpath='{.spec.strategy}'

# Check if GameServers are allocated (won't be deleted)
kubectl get gs -l agones.dev/fleet=<name> -o jsonpath='{.items[*].status.state}'

# Force delete old GameServerSet if needed (careful!)
kubectl delete gameserverset <old-gss-name>

Next Steps

Allocation

Learn how to allocate GameServers from Fleets

Autoscaling

Automatically scale Fleets based on demand

Multi-Cluster Fleets

Distribute Fleets across multiple clusters

Fleet Metrics

Monitor Fleet health and performance
