LiquidBounce integrates deep learning capabilities through the Deep Java Library (DJL) framework, enabling AI-powered combat features and adaptive behavior.

Overview

The deep learning system provides:
  • AI-powered combat prediction and aim assistance
  • Neural network model management
  • Training capabilities for custom models
  • PyTorch backend integration
  • CPU-optimized inference

Architecture

DeepLearningEngine

The core engine manages DJL initialization and model storage (DeepLearningEngine.kt:31).
suspend fun init(task: Task) {
    this.task = task
    
    logger.info("Initializing engine...")
    val engine = withContext(Dispatchers.IO) {
        Engine.getInstance()
    }
    val name = engine.engineName
    val version = engine.version
    val deviceType = engine.defaultDevice().deviceType
    logger.info("Using deep learning engine $name $version on $deviceType.")
    
    isInitialized = true
    this.task = null
}

Directory Structure

The engine maintains organized directories:
  • deeplearning/ — Root directory for all AI-related files

Model Manager

The ModelManager handles loading, training, and managing AI models (ModelManager.kt:33).

Base Models

Pre-trained combat models included with LiquidBounce:
val combatModels = arrayOf(
    "21KC11KP",  // 1.21 KillAura Combat (11K Parameters)
    "19KC8KP"    // 1.19 KillAura Combat (8K Parameters)
)
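The names encode the target Minecraft version and approximate parameter count. A small sketch that unpacks this convention; the helper is illustrative and not part of LiquidBounce:

```kotlin
// Decodes the base-model naming convention, e.g. "21KC11KP" ->
// Minecraft 1.21, KillAura Combat, ~11K parameters.
data class ModelInfo(val mcVersion: String, val approxParams: Int)

fun parseModelName(name: String): ModelInfo? {
    // Pattern: <version>KC<parameters>KP (KC = KillAura Combat, KP = kilo-parameters)
    val match = Regex("""(\d+)KC(\d+)KP""").matchEntire(name) ?: return null
    val (version, kiloParams) = match.destructured
    return ModelInfo("1.$version", kiloParams.toInt() * 1000)
}

fun main() {
    println(parseModelName("21KC11KP")) // ModelInfo(mcVersion=1.21, approxParams=11000)
}
```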

Loading Models

  1. Scan Available Models: Searches the models folder for custom trained models
  2. Load Base Models: Loads embedded models from resources
  3. Initialize Each Model: Creates model instances with proper translators
  4. Sync GUI: Updates the ClickGUI with available models

fun load() {
    logger.info("Loading models...")
    val choices = allCombatModels.mapToArray { name ->
        TwoDimensionalRegressionModel(name, models)
    }
    
    for (model in choices) {
        runCatching {
            measureTime {
                model.load()
            }
        }.onFailure { error ->
            logger.error("Failed to load model '${model.name}'.", error)
        }.onSuccess { time ->
            logger.info("Loaded model '${model.name}' in ${time.inWholeMilliseconds}ms.")
        }
    }
}

Model Architecture

Multi-Layer Perceptron (MLP)

Models use a standard MLP architecture (ModelWrapper.kt:146):
  1. Input Layer: 128 units with Xavier initialization
  2. Hidden Layer 1: 64 units with Batch Normalization and ReLU activation
  3. Hidden Layer 2: 32 units with Batch Normalization and ReLU activation
  4. Output Layer: Variable units based on task (2 for 2D regression)

private fun createMlpBlock(outputs: Long) = SequentialBlock()
    .add(Linear.builder().setUnits(128).build())
    .add(Blocks.batchFlattenBlock())
    .add(BatchNorm.builder().build())
    .add(Activation.reluBlock())
    
    .add(Linear.builder().setUnits(64).build())
    .add(Blocks.batchFlattenBlock())
    .add(BatchNorm.builder().build())
    .add(Activation.reluBlock())
    
    .add(Linear.builder().setUnits(32).build())
    .add(Blocks.batchFlattenBlock())
    .add(BatchNorm.builder().build())
    .add(Activation.reluBlock())
    
    .add(Linear.builder().setUnits(outputs).build())
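For sizing intuition, the trainable parameters of this stack can be counted by hand: a Linear layer holds in × out weights plus out biases, and each BatchNorm holds a gamma and beta per feature. Assuming a 2-dimensional input (a guess based on the 2D regression task, not confirmed by the source), the total lands near the 11K figure in the base-model names:

```kotlin
// Counts trainable parameters of the 128 -> 64 -> 32 -> outputs MLP.
// Linear: in*out weights + out biases; BatchNorm: gamma + beta per feature.
fun mlpParamCount(inputDim: Long, outputs: Long): Long {
    val layers = listOf(inputDim to 128L, 128L to 64L, 64L to 32L, 32L to outputs)
    var total = 0L
    for ((index, layer) in layers.withIndex()) {
        val (inDim, outDim) = layer
        total += inDim * outDim + outDim             // Linear weights + bias
        if (index < layers.size - 1) total += 2 * outDim // BatchNorm gamma + beta
    }
    return total
}

fun main() {
    println(mlpParamCount(inputDim = 2, outputs = 2)) // 11234, i.e. ~11K
}
```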

Making Predictions

2D Regression Model

Used for aim prediction and targeting (TwoDimensionalRegressionModel.kt:24):
class TwoDimensionalRegressionModel(
    name: String,
    parent: ModeValueGroup<*>
) : ModelWrapper<FloatArray, FloatArray>(
    name,
    FloatArrayInAndOutTranslator(),
    2, // X, Y outputs
    parent
)

Prediction API

@Throws(TranslateException::class)
fun predict(input: I): O {
    require(DeepLearningEngine.isInitialized) { 
        "DeepLearningEngine is not initialized" 
    }
    
    return predictor.predict(input)
}
// Get active model
val model = ModelManager.models.activeMode

// Prepare input features
val input = floatArrayOf(playerX, playerY)

// Make prediction
val output = model.predict(input)
val targetX = output[0]
val targetY = output[1]
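Raw model outputs are usually post-processed before being applied. A stdlib sketch (hypothetical, not LiquidBounce code) that limits how far a predicted aim value may move in a single step:

```kotlin
import kotlin.math.abs

// Limits a predicted delta to a maximum per-step change so the
// applied aim movement stays smooth (illustrative post-processing).
fun clampStep(current: Float, target: Float, maxStep: Float): Float {
    val delta = target - current
    if (abs(delta) <= maxStep) return target
    return current + if (delta > 0) maxStep else -maxStep
}

fun main() {
    println(clampStep(0f, 10f, 3f)) // 3.0 (capped)
    println(clampStep(0f, 2f, 3f))  // 2.0 (within the cap)
}
```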

Training Models

Training Configuration

fun train(features: Array<FloatArray>, labels: Array<FloatArray>) {
    require(features.size == labels.size) { 
        "Features and labels must have the same size" 
    }
    require(features.isNotEmpty()) { 
        "Features and labels must not be empty" 
    }
    
    val trainingConfig = DefaultTrainingConfig(Loss.l2Loss())
        .optInitializer(XavierInitializer(), "weight")
        .optOptimizer(
            Adam.builder()
                .optLearningRateTracker(Tracker.fixed(0.001f))
                .build()
        )
        .addTrainingListeners(
            LoggingTrainingListener(), 
            OverlayTrainingListener(NUM_EPOCH)
        )
    // ... (excerpt: the config is then used to build a Trainer and fit the dataset)
}

Training Parameters

  • NUM_EPOCH (int, default: 100): Number of training epochs
  • BATCH_SIZE (int, default: 32): Batch size for training
  • learning_rate (float, default: 0.001): Adam optimizer learning rate
  • loss (Loss, default: L2Loss): Mean squared error loss function
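These parameters combine in the usual way: each epoch visits the whole dataset in batches of BATCH_SIZE. A stdlib sketch (not LiquidBounce code) of the resulting batch counts:

```kotlin
const val NUM_EPOCH = 100
const val BATCH_SIZE = 32

fun main() {
    val samples = 1000
    // chunked() yields full batches plus one trailing partial batch
    val batches = (0 until samples).chunked(BATCH_SIZE)
    println(batches.size)             // 32 batches per epoch (31 full + 1 of 8)
    println(batches.size * NUM_EPOCH) // 3200 optimizer steps over training
}
```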

Model Persistence

Saving Models

// Save to models folder with default name
model.save()

// Save with custom name
model.save("my-custom-model")

// Save to specific path
model.save(Path.of("/custom/path/model"))

Loading Models

// Load from models folder
model.load("custom-model")

Performance Optimization

CPU-Only Configuration

LiquidBounce uses CPU-optimized PyTorch to avoid CUDA conflicts and reduce download size.
// Enforce CPU pytorch flavor
System.setProperty("PYTORCH_FLAVOR", "cpu")

Resource Management

All models implement Closeable for proper resource cleanup:
override fun close() {
    predictor.close()
    model.close()
}
Always close models when done to prevent memory leaks. Use ModelManager.unload() to close all models.
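Because the wrapper implements Closeable, Kotlin's use extension guarantees cleanup even if the block throws. A minimal stdlib sketch with a stand-in model:

```kotlin
import java.io.Closeable

// Stand-in for a model wrapper; the real close() releases the DJL
// predictor and model as shown above.
class FakeModel : Closeable {
    var closed = false
        private set
    fun predict(x: Float) = x * 2
    override fun close() { closed = true }
}

fun main() {
    val model = FakeModel()
    val result = model.use { it.predict(3f) } // close() runs when the block exits
    println(result)       // 6.0
    println(model.closed) // true
}
```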

Model Management Commands

// Load all models
ModelManager.load()

// Unload all models
ModelManager.unload()

// Reload models (unload + load)
ModelManager.reload()

// Delete specific model
model.delete()

Integration with Combat Modules

AI models are typically used in combat modules:
  1. Feature Extraction: Collect combat data (positions, velocities, distances)
  2. Prediction: Feed features to the model
  3. Application: Apply predicted aim adjustments
  4. Training: Optionally collect data for model improvement
Models can be selected dynamically through the module configuration system using ModeValueGroup.
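The feature-extraction step above can be sketched as a plain feature vector; the field names here are illustrative and not LiquidBounce's actual model inputs:

```kotlin
import kotlin.math.sqrt

// Illustrative feature extraction: packs relative position and velocity
// into the flat FloatArray a 2D regression model would consume.
fun extractFeatures(
    dx: Float, dy: Float, // target position relative to the player
    vx: Float, vy: Float  // target velocity
): FloatArray {
    val distance = sqrt(dx * dx + dy * dy)
    return floatArrayOf(dx, dy, vx, vy, distance)
}

fun main() {
    val features = extractFeatures(3f, 4f, 0.1f, -0.2f)
    println(features.joinToString()) // 3.0, 4.0, 0.1, -0.2, 5.0
}
```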

Best Practices

  • Initialize Once: Only initialize the DeepLearningEngine once during startup
  • Lazy Loading: Models use lazy initialization to reduce startup time
  • Error Handling: Always wrap predictions in try-catch blocks
  • Resource Cleanup: Close models and predictors when no longer needed

Troubleshooting

  • Predictions fail with "DeepLearningEngine is not initialized": Ensure DeepLearningEngine.init() is called before using models.
  • A model fails to load: Check that the model files exist in the correct directory and are not corrupted.
  • predict() throws a TranslateException: Verify that input dimensions match the model's expected input shape.
  • Inference hurts frame rate: Consider reducing model complexity or prediction frequency.

  • DJL Documentation: Official Deep Java Library documentation
  • Combat Modules: Modules that utilize AI features
