Pprof middleware exposes runtime profiling endpoints for analysis with Go’s pprof tool. It registers handlers under /debug/pprof/ to help diagnose performance issues, memory leaks, and goroutine problems.

Installation

go get -u github.com/gofiber/fiber/v3
go get -u github.com/gofiber/fiber/v3/middleware/pprof

Signatures

func New(config ...Config) fiber.Handler

Usage

Basic Usage

package main

import (
    "github.com/gofiber/fiber/v3"
    "github.com/gofiber/fiber/v3/middleware/pprof"
)

func main() {
    app := fiber.New()

    // Register pprof handlers
    app.Use(pprof.New())

    app.Get("/", func(c fiber.Ctx) error {
        return c.SendString("Hello, World!")
    })

    app.Listen(":3000")
}

Custom Prefix

// Add URL prefix for multi-ingress systems
app.Use(pprof.New(pprof.Config{
    Prefix: "/myapp",
}))

// Endpoints will be available at:
// /myapp/debug/pprof/
// /myapp/debug/pprof/heap
// /myapp/debug/pprof/goroutine
// etc.

Conditional Registration

app.Use(pprof.New(pprof.Config{
    Next: func(c fiber.Ctx) bool {
        // Returning true skips the middleware, so pprof is
        // disabled in production and enabled everywhere else
        return os.Getenv("ENV") == "production"
    },
}))

Configuration

| Property | Type | Description | Default |
|----------|------|-------------|---------|
| Next | `func(fiber.Ctx) bool` | Function to skip this middleware when it returns true. | `nil` |
| Prefix | `string` | URL prefix added before `/debug/pprof`. Must start with `/` and must not end with `/`. Example: `/federated-fiber` | `""` |

Default Configuration

var ConfigDefault = Config{
    Next:   nil,
    Prefix: "",
}

Available Endpoints

Once registered, the following endpoints are available:
| Endpoint | Description |
|----------|-------------|
| /debug/pprof/ | Index page with all available profiles |
| /debug/pprof/cmdline | Command line that started the program |
| /debug/pprof/profile | CPU profile (30 seconds by default) |
| /debug/pprof/symbol | Symbol lookup |
| /debug/pprof/trace | Execution trace |
| /debug/pprof/heap | Heap memory allocations |
| /debug/pprof/goroutine | Stack traces of all goroutines |
| /debug/pprof/threadcreate | Stack traces of thread creation |
| /debug/pprof/block | Stack traces of blocking operations |
| /debug/pprof/mutex | Stack traces of mutex contention |
| /debug/pprof/allocs | All past memory allocations |

Usage Examples

View Index Page

curl http://localhost:3000/debug/pprof/

CPU Profiling

# Collect 30-second CPU profile
curl "http://localhost:3000/debug/pprof/profile?seconds=30" > cpu.prof

# Analyze with pprof
go tool pprof cpu.prof

# Or analyze directly from URL
go tool pprof "http://localhost:3000/debug/pprof/profile?seconds=30"

Heap Profiling

# Collect heap profile
curl http://localhost:3000/debug/pprof/heap > heap.prof

# Analyze memory usage
go tool pprof -http=:8080 heap.prof

# Or directly
go tool pprof -http=:8080 http://localhost:3000/debug/pprof/heap

Goroutine Analysis

# Get goroutine dump
curl http://localhost:3000/debug/pprof/goroutine > goroutine.prof

# Analyze goroutines
go tool pprof goroutine.prof

# Or view as text
curl "http://localhost:3000/debug/pprof/goroutine?debug=1"

Trace Collection

# Collect 5-second trace
curl "http://localhost:3000/debug/pprof/trace?seconds=5" > trace.out

# View trace in browser
go tool trace trace.out

Block Profiling

import (
    "runtime"

    "github.com/gofiber/fiber/v3"
    "github.com/gofiber/fiber/v3/middleware/pprof"
)

func main() {
    // Enable block profiling; rate 1 records every blocking event
    runtime.SetBlockProfileRate(1)

    app := fiber.New()
    app.Use(pprof.New())
    app.Listen(":3000")
}

# Collect block profile
go tool pprof http://localhost:3000/debug/pprof/block

Mutex Profiling

import (
    "runtime"

    "github.com/gofiber/fiber/v3"
    "github.com/gofiber/fiber/v3/middleware/pprof"
)

func main() {
    // Enable mutex profiling; fraction 1 samples every contention event
    runtime.SetMutexProfileFraction(1)

    app := fiber.New()
    app.Use(pprof.New())
    app.Listen(":3000")
}

# Collect mutex profile
go tool pprof http://localhost:3000/debug/pprof/mutex

Best Practices

Restrict Access in Production

app.Use(pprof.New(pprof.Config{
    Next: func(c fiber.Ctx) bool {
        // Only allow from localhost in production
        if os.Getenv("ENV") == "production" {
            return c.IP() != "127.0.0.1"
        }
        return false
    },
}))

Use with Authentication

import "github.com/gofiber/fiber/v3/middleware/keyauth"

// Protect with API key
app.Use("/debug/pprof", keyauth.New(keyauth.Config{
    Validator: validateKey,
}))

app.Use(pprof.New())

Development Only

if os.Getenv("ENV") != "production" {
    app.Use(pprof.New())
}

Common Patterns

Debug Container in Kubernetes

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: app
    image: myapp:latest
    ports:
    - containerPort: 3000
      name: http
    - containerPort: 6060
      name: pprof

// Pprof on a separate debug port, not exposed through the main ingress
debugApp := fiber.New()
debugApp.Use(pprof.New())
go debugApp.Listen(":6060")

// Main application on :3000 (blocking call keeps the process alive)
app.Listen(":3000")

Profile Specific Routes

// Only enable pprof for /debug routes
app.Get("/debug/*", pprof.New())

With Custom Prefix for Service Mesh

// For services behind Envoy/Istio
app.Use(pprof.New(pprof.Config{
    Prefix: "/myservice",
}))

// Access at: /myservice/debug/pprof/

Analysis Commands

Interactive Mode

# CPU profile
go tool pprof "http://localhost:3000/debug/pprof/profile?seconds=30"

# Common commands in pprof:
# top - Show top functions
# list <function> - Show source code
# web - Open in browser (requires graphviz)
# pdf - Generate PDF

Web Interface

# Start web UI
go tool pprof -http=:8080 http://localhost:3000/debug/pprof/heap

Compare Profiles

# Take baseline
curl http://localhost:3000/debug/pprof/heap > baseline.prof

# ... run some operations ...

# Take another snapshot
curl http://localhost:3000/debug/pprof/heap > current.prof

# Compare
go tool pprof -base=baseline.prof current.prof

Security Considerations

  • Never expose pprof publicly in production - it reveals sensitive application internals
  • Use authentication/authorization for pprof endpoints
  • Restrict access by IP address or network
  • Consider running pprof on a separate port
  • Use the Next function to conditionally enable/disable
  • Monitor access to pprof endpoints

Profiling Best Practices

  1. CPU profiling: Use for identifying hot code paths
  2. Heap profiling: Use for finding memory leaks
  3. Goroutine profiling: Use for detecting goroutine leaks
  4. Block profiling: Use for identifying synchronization bottlenecks
  5. Mutex profiling: Use for finding lock contention

Notes

  • CPU profiles are collected over a time period (default 30 seconds)
  • Heap profiles show current allocations
  • Most profiles are cheap to collect on demand, but CPU profiling and execution tracing add overhead while they run; use them sparingly in production
  • The Prefix must start with / and must not end with /
  • Profiling data can be large - ensure adequate bandwidth
