Overview
Voxy World Gen V2 is designed to generate massive amounts of terrain without impacting server performance. The system uses multiple layers of throttling, TPS monitoring, and intelligent resource allocation to maintain stable operation even under heavy load.
TPS-Aware Throttling
The TpsMonitor tracks server performance and automatically pauses generation when TPS drops:
```java
// TpsMonitor.java:6-14
public class TpsMonitor {
    private final long[] recentTickTimes = new long[20];
    private int tickTimeIndex = 0;
    private long lastTickNanos = 0;
    private final AtomicBoolean throttled = new AtomicBoolean(false);

    // standard for high performance: 18 tps (55.5ms)
    // aggressively pause if server truly struggles
    private static final double MSPT_THRESHOLD = 1000.0 / 18.0;
}
```
MSPT Calculation
Every server tick, the monitor calculates average milliseconds per tick (MSPT):
```java
// TpsMonitor.java:16-45
public void tick() {
    long now = System.nanoTime();
    long delta = 0;
    if (lastTickNanos > 0) {
        delta = now - lastTickNanos;
        recentTickTimes[tickTimeIndex] = delta;
        tickTimeIndex = (tickTimeIndex + 1) % recentTickTimes.length;
    }
    lastTickNanos = now;

    long totalTickTime = 0;
    int count = 0;
    for (long tickNanos : recentTickTimes) {
        if (tickNanos > 0) {
            totalTickTime += tickNanos;
            count++;
        }
    }

    float mspt = 0.0f;
    if (count > 0) {
        mspt = (float) (totalTickTime / count) / 1_000_000.0f;
    }

    if (mspt > MSPT_THRESHOLD) {
        throttled.set(true);
    } else {
        throttled.set(false);
    }
}
```
Rolling average:
- Samples last 20 ticks
- Smooths out temporary spikes
- Prevents thrashing (rapid on/off)
Threshold: 55.5ms (18 TPS)
- Normal tick: 50ms (20 TPS)
- Buffer: 5.5ms headroom
- Ensures generation pauses before serious lag
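The threshold check reduces to averaging the non-zero tick samples and comparing against ~55.5ms. A standalone sketch (the class and method names here are illustrative, not from the mod):

```java
// Illustrative sketch of the rolling-average MSPT check described above.
public class MsptDemo {
    static final double MSPT_THRESHOLD = 1000.0 / 18.0; // ~55.5 ms (18 TPS)

    // Average the non-zero samples (nanoseconds) and convert to milliseconds.
    static double averageMspt(long[] tickNanos) {
        long total = 0;
        int count = 0;
        for (long t : tickNanos) {
            if (t > 0) { total += t; count++; }
        }
        return count == 0 ? 0.0 : (total / (double) count) / 1_000_000.0;
    }

    static boolean throttled(long[] tickNanos) {
        return averageMspt(tickNanos) > MSPT_THRESHOLD;
    }

    public static void main(String[] args) {
        long[] healthy = {50_000_000, 50_000_000, 50_000_000}; // 50 ms ticks -> 20 TPS
        long[] lagging = {60_000_000, 60_000_000, 60_000_000}; // 60 ms ticks -> ~16.7 TPS
        System.out.println(throttled(healthy)); // false: under the 55.5 ms threshold
        System.out.println(throttled(lagging)); // true: over the threshold
    }
}
```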
Worker Thread Check
The worker loop checks throttle state before processing batches:
```java
// ChunkGenerationManager.java:168-171
if (tpsMonitor.isThrottled() || pauseCheck.getAsBoolean()) {
    Thread.sleep(500);
    continue;
}
```
When throttled:
- No new batches dispatched
- Active tasks continue to completion
- Worker sleeps for 500ms
- Resumes when TPS recovers
TPS throttling is reactive, not predictive. It responds to actual server load, not estimated cost. This allows generation to maximize throughput without manual tuning.
Task Parallelism Control
The maxActiveTasks config limits concurrent chunk generation:
```java
// Config.java:51-58
public static class ConfigData {
    public boolean enabled = true;
    public boolean showF3MenuStats = true;
    public int generationRadius = 128;
    public int update_interval = 20;
    public int maxQueueSize = 20000;
    public int maxActiveTasks = 20; // Default: 20 concurrent tasks
    public boolean saveNormalChunks = true;
}
```
Semaphore-Based Limiting
A semaphore controls the number of active generation tasks:
```java
// ChunkGenerationManager.java:69
private Semaphore throttle;

// ChunkGenerationManager.java:109
this.throttle = new Semaphore(Config.DATA.maxActiveTasks);
```
How it works:
- Acquire Permit Before Task Dispatch
```java
// ChunkGenerationManager.java:258-267
boolean acquired = false;
try {
    acquired = throttle.tryAcquire(50, TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    break;
}
if (!acquired) break;

processedCount++;
if (finalState.trackedChunks.add(pos.toLong())) {
    activeTaskCount.incrementAndGet();
    stats.incrementQueued();
    // ... dispatch task ...
}
```
tryAcquire() with a 50ms timeout:
- Bounded wait: blocks at most 50ms, so the worker is never stuck indefinitely
- Breaks out of the batch loop if no permit becomes available
- Prevents runaway task accumulation
- Release Permit After Task Completes
```java
// ChunkGenerationManager.java:559-564
private void completeTask(DimensionState state, ChunkPos pos) {
    if (state.trackedChunks.remove(pos.toLong())) {
        activeTaskCount.decrementAndGet();
        throttle.release();
    }
}
```
- Called after chunk generation finishes
- Releases a permit for the next task
- Decrements the activeTaskCount counter
- Dynamic Capacity Updates
```java
// ChunkGenerationManager.java:490-497
private void updateThrottleCapacity() {
    int target = Config.DATA.maxActiveTasks;
    int available = throttle.availablePermits();
    int maxPossible = available + activeTaskCount.get();
    if (target > maxPossible) {
        throttle.release(target - maxPossible);
    }
}
```
Called when config reloads (ChunkGenerationManager.java:357-361), allowing live adjustment without restart.
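The three steps above can be sketched as one self-contained unit. `ThrottleDemo` and its method names are ours, but the acquire/release/resize pattern mirrors the excerpts:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of semaphore-based task limiting (names are ours, not the mod's).
public class ThrottleDemo {
    private final Semaphore throttle = new Semaphore(2); // pretend maxActiveTasks = 2
    private int active = 0;

    // Bounded wait, mirroring the manager's tryAcquire(50, MILLISECONDS).
    boolean tryDispatch() {
        try {
            if (!throttle.tryAcquire(50, TimeUnit.MILLISECONDS)) return false;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        active++;
        return true;
    }

    // Mirrors completeTask(): release the permit when a chunk finishes.
    void complete() {
        active--;
        throttle.release();
    }

    // Mirrors updateThrottleCapacity(): grow when target exceeds permits + active.
    void updateCapacity(int target) {
        int maxPossible = throttle.availablePermits() + active;
        if (target > maxPossible) throttle.release(target - maxPossible);
    }

    public static void main(String[] args) {
        ThrottleDemo d = new ThrottleDemo();
        System.out.println(d.tryDispatch()); // true: permit 1
        System.out.println(d.tryDispatch()); // true: permit 2
        System.out.println(d.tryDispatch()); // false: both permits held
        d.updateCapacity(3);                 // live config bump to 3 tasks
        System.out.println(d.tryDispatch()); // true again
    }
}
```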
Tuning Guidelines
maxActiveTasks controls CPU and memory usage:
| Value | Use Case | CPU Usage | Memory Usage |
|---|---|---|---|
| 5-10 | Weak servers, low RAM | Low | ~1-2 GB |
| 20 | Default, balanced | Medium | ~2-4 GB |
| 50+ | High-end servers, fast generation | High | ~4-8 GB |
| 100+ | Dedicated machine, extreme throughput | Very High | ~8-16 GB |
Factors to consider:
- Server CPU core count (1-2 tasks per core)
- Available RAM (each task ~50-200 MB)
- Player count (leave headroom for gameplay)
- World generator complexity (modded gens cost more)
Setting maxActiveTasks too high can cause:
- Out of memory errors
- Thread contention
- Reduced player performance
- Chunk loading delays
Start with default (20) and increase gradually while monitoring TPS and memory.
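As a rough starting point, those factors can be folded into a sizing heuristic. The formula below is our own back-of-envelope, not part of the mod:

```java
// A rough sizing heuristic based on the guidelines above (the formula is ours):
// 1-2 tasks per core, and a worst-case ~200 MB of heap budgeted per task.
public class TaskSizing {
    static int suggestedMaxActiveTasks(int cpuCores, long freeHeapMb) {
        int byCpu = cpuCores * 2;              // upper end of 1-2 tasks per core
        int byRam = (int) (freeHeapMb / 200);  // worst-case ~200 MB per task
        return Math.max(1, Math.min(byCpu, byRam));
    }

    public static void main(String[] args) {
        System.out.println(suggestedMaxActiveTasks(4, 4096));  // 8  (CPU-bound)
        System.out.println(suggestedMaxActiveTasks(16, 2048)); // 10 (RAM-bound)
    }
}
```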
Memory Management
Chunk State Tracking
Memory usage for tracking completed chunks:
```java
// ChunkGenerationManager.java:45-46
final LongSet completedChunks = LongSets.synchronize(new LongOpenHashSet());
final LongSet trackedChunks = LongSets.synchronize(new LongOpenHashSet());
```
Per-dimension overhead:
- Each chunk position: 8 bytes (long)
- 100,000 chunks: ~800 KB
- 1,000,000 chunks: ~8 MB
LongOpenHashSet uses open addressing with ~50% fill factor, so actual memory is ~2x the theoretical minimum.
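The arithmetic behind those figures, with the ~2x open-addressing factor the text assumes applied:

```java
// Back-of-envelope sketch of the tracking-set estimate above. The 2x factor is
// the approximation the text uses for hash-table overhead, not a measurement.
public class TrackingMemory {
    static long estimatedBytes(long chunkCount) {
        return chunkCount * 8 * 2; // 8 bytes per packed position, ~2x table overhead
    }

    public static void main(String[] args) {
        System.out.println(estimatedBytes(100_000));   // 1600000: ~1.6 MB vs ~800 KB theoretical
        System.out.println(estimatedBytes(1_000_000)); // 16000000: ~16 MB vs ~8 MB theoretical
    }
}
```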
DistanceGraph Memory
Hierarchical spatial index memory:
```java
// DistanceGraph.java:21-34
private static class Node {
    final int level;
    final int x, z;
    volatile long fullMask = 0;
    final Map<Integer, Object> children = new ConcurrentHashMap<>();
}
```
Node sizes:
- L3 (root): 1 per 2048x2048 chunk region (~32 bytes)
- L2: 64 per L3 node (~2 KB per L3)
- L1: 64 per L2 node (~128 KB per L3)
- L0 (batches): stored as Integer masks in L1
Example: 256 chunk radius
- Area: ~200,000 chunks
- L3 nodes: 1
- L2 nodes: ~64
- L1 nodes: ~4,096
- Total memory: ~150-300 KB
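The per-level counts above follow from the 64-way fan-out of the hierarchy; a one-method check:

```java
// Worked node-count sketch for the hierarchy above: each level fans out 64x.
public class GraphNodes {
    static long nodesAtDepth(int depth) {
        long n = 1;
        for (int i = 0; i < depth; i++) n *= 64;
        return n;
    }

    public static void main(String[] args) {
        System.out.println(nodesAtDepth(0)); // 1: the L3 root
        System.out.println(nodesAtDepth(1)); // 64: L2 nodes under one L3
        System.out.println(nodesAtDepth(2)); // 4096: L1 nodes under one L3
    }
}
```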
The graph aggressively prunes complete subtrees:
```java
// DistanceGraph.java:71-76
if (child.isFull()) {
    synchronized (node) {
        node.fullMask |= (1L << idx);
        node.children.remove(idx); // Free memory!
    }
}
```
Once a region is fully generated, all child nodes are freed.
Player Tracking
```java
// PlayerTracker.java:12-13
private final Set<ServerPlayer> players;
private final Map<UUID, LongSet> syncedChunks;
```
Per-player overhead:
- Player reference: 8 bytes
- Synced chunks set: 8 bytes + (8 bytes × chunk count)
- 100 synced chunks: ~800 bytes
- 10,000 synced chunks: ~80 KB
Typical memory: ~1-5 MB per player (depends on generation radius)
Network Payload Size
LOD data transmission size:
```java
// NetworkHandler.java:45-84
public record LODDataPayload(
    ChunkPos pos,              // 8 bytes
    int minY,                  // 4 bytes
    List<SectionData> sections // Variable
) { /* ... */ }
```
SectionData per section:
- Block states: ~1-4 KB (compressed palette)
- Biomes: ~100-500 bytes
- Block light: 2048 bytes (nullable)
- Sky light: 2048 bytes (nullable)
Typical chunk (16 sections):
- Empty sections: 0 bytes
- Populated terrain sections: typically 10-15 of the 16
- Total: ~30-80 KB per chunk
For 64 chunks (8x8 area): ~2-5 MB network traffic
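Those section sizes combine into a rough per-chunk estimate; the helper below and its input ranges are illustrative, not mod API:

```java
// Rough per-chunk payload estimate from the section sizes above (helper is ours).
public class PayloadEstimate {
    // perSectionKb covers block states + biomes; light adds 2 KB each for block
    // and sky light per populated section when present.
    static int chunkKb(int populatedSections, double perSectionKb, boolean light) {
        double kb = populatedSections * perSectionKb;
        if (light) kb += populatedSections * 4; // block + sky light, 2 KB each
        return (int) Math.round(kb);
    }

    public static void main(String[] args) {
        System.out.println(chunkKb(12, 2.0, false)); // 24: palette data only
        System.out.println(chunkKb(12, 3.0, true));  // 84: with both light arrays
    }
}
```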
CPU Considerations
Worker Thread
Single background thread for coordination:
- Minimal CPU usage (~1-5%)
- Mostly sleeping/waiting
- No heavy computation
- Just dispatches work
Server Thread
Chunk generation runs on the server thread:
- Each task: 50-500ms (depends on generator)
- Tasks dispatched via server.execute()
- CompletableFuture for async completion
- Does not block other server operations
Tellus Worker Pool
For Earth-scale terrain (optional):
```java
// TellusIntegration.java:41-57
int threadCount = Math.max(1, Runtime.getRuntime().availableProcessors() / 2);
workerPool = new ThreadPoolExecutor(
    threadCount, threadCount, 0L, TimeUnit.MILLISECONDS,
    new LinkedBlockingQueue<>(10000),
    r -> {
        Thread t = new Thread(r, "tellus-voxy-worker-" + threadCounter.getAndIncrement());
        t.setDaemon(true);
        t.setPriority(Thread.NORM_PRIORITY - 1);
        return t;
    },
    new ThreadPoolExecutor.CallerRunsPolicy()
);
```
Pool configuration:
- Thread count: CPU cores / 2
- Priority: below normal (doesn’t starve server)
- Queue: 10,000 tasks
- Policy: CallerRunsPolicy (backpressure)
When queue fills, caller (worker thread) executes task directly, providing automatic throttling.
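That backpressure behavior is easy to demonstrate in isolation with a deliberately tiny pool. The demo below is ours; the policy is the same CallerRunsPolicy the Tellus pool configures:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Minimal demonstration of CallerRunsPolicy backpressure: when the queue is
// full, the submitting thread runs the task itself instead of dropping it.
public class BackpressureDemo {
    static boolean callerRanTask() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>(1), // tiny queue to force rejection
                new ThreadPoolExecutor.CallerRunsPolicy());
        CountDownLatch block = new CountDownLatch(1);
        boolean[] ranOnCaller = {false};
        Thread caller = Thread.currentThread();

        pool.execute(() -> { try { block.await(); } catch (InterruptedException ignored) {} });
        pool.execute(() -> {}); // fills the single queue slot
        // Rejected by the full queue -> runs synchronously on the calling thread.
        pool.execute(() -> ranOnCaller[0] = Thread.currentThread() == caller);

        block.countDown();
        pool.shutdown();
        return ranOnCaller[0];
    }

    public static void main(String[] args) {
        System.out.println(callerRanTask()); // true: the caller executed the overflow task
    }
}
```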
Generation Statistics
The GenerationStats class tracks performance metrics:
```java
// GenerationStats.java:5-15
public class GenerationStats {
    private final AtomicLong chunksQueued = new AtomicLong(0);
    private final AtomicLong chunksCompleted = new AtomicLong(0);
    private final AtomicLong chunksFailed = new AtomicLong(0);
    private final AtomicLong chunksSkipped = new AtomicLong(0);

    // rolling average over 10s
    private final long[] rollingHistory = new long[10];
    private int historyIndex = 0;
    private long lastCompletedCount = 0;
}
```
Chunks Per Second
Rolling 10-second average:
```java
// GenerationStats.java:28-62
public synchronized void tick() {
    long now = System.currentTimeMillis();
    if (lastTickTime == 0) {
        lastTickTime = now;
        lastCompletedCount = chunksCompleted.get() + chunksSkipped.get();
        return;
    }
    long secondsPassed = (now - lastTickTime) / 1000;
    if (secondsPassed < 1) return;

    long currentTotal = chunksCompleted.get() + chunksSkipped.get();
    long delta = currentTotal - lastCompletedCount;
    int updateCount = (int) Math.min(secondsPassed, rollingHistory.length);
    long perSlot = delta / updateCount;
    long remainder = delta % updateCount;
    for (int i = 0; i < updateCount; i++) {
        long val = perSlot + (i < remainder ? 1 : 0);
        rollingHistory[historyIndex] = val;
        historyIndex = (historyIndex + 1) % rollingHistory.length;
    }
    lastCompletedCount = currentTotal;
    lastTickTime += secondsPassed * 1000;
}

public synchronized double getChunksPerSecond() {
    long sum = 0;
    for (long val : rollingHistory) {
        sum += val;
    }
    return sum / 10.0;
}
```
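Stripped of the wall-clock bookkeeping, the windowing logic reduces to a 10-slot ring buffer. This simplified sketch (ours, not the mod's class) shows the averaging behavior:

```java
// Simplified version of the rolling window above: push per-second completion
// counts into a 10-slot ring and average across all slots.
public class RateWindow {
    private final long[] history = new long[10];
    private int index = 0;

    void recordSecond(long completedThisSecond) {
        history[index] = completedThisSecond;
        index = (index + 1) % history.length;
    }

    double chunksPerSecond() {
        long sum = 0;
        for (long v : history) sum += v;
        return sum / 10.0;
    }

    public static void main(String[] args) {
        RateWindow w = new RateWindow();
        for (int i = 0; i < 10; i++) w.recordSecond(15); // steady 15 chunks/sec
        System.out.println(w.chunksPerSecond());         // 15.0
        w.recordSecond(150);                             // one burst second
        System.out.println(w.chunksPerSecond());         // 28.5: spike smoothed into the window
    }
}
```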
Typical performance:
- Vanilla generation: 5-20 chunks/sec
- Modded generation: 2-10 chunks/sec
- Tellus generation: 50-200 chunks/sec
Tellus is faster because it bypasses Minecraft’s feature placement and builds voxel data directly.
Config Tuning
- Start Conservative
```json
{
  "enabled": true,
  "generationRadius": 64,
  "maxActiveTasks": 10,
  "saveNormalChunks": false
}
```
- Monitor TPS
- Use the F3 menu or a /tps command
- Watch for throttling messages in logs
- Increase maxActiveTasks if TPS stays stable
- Scale Up Gradually
```json
{
  "generationRadius": 128,
  "maxActiveTasks": 20
}
```
- High-End Setup
```json
{
  "generationRadius": 256,
  "maxActiveTasks": 50,
  "saveNormalChunks": false
}
```
JVM Tuning
Allocate sufficient heap memory:
```shell
# Minimum for radius 128
java -Xms4G -Xmx4G -jar server.jar

# Recommended for radius 256
java -Xms8G -Xmx8G -jar server.jar

# High-end setup
java -Xms16G -Xmx16G -jar server.jar
```
G1GC tuning (optional):
```shell
java -Xms8G -Xmx8G \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=50 \
  -XX:G1HeapRegionSize=16M \
  -jar server.jar
```
Dimension Priorities
The system automatically switches to the dimension with the most players:
```java
// ChunkGenerationManager.java:410-423
ServerLevel majorLevel = currentLevel;
int maxCount = levelCounts.getOrDefault(currentLevel, 0);
for (var entry : levelCounts.entrySet()) {
    if (entry.getValue() > maxCount) {
        maxCount = entry.getValue();
        majorLevel = entry.getKey();
    }
}
if (majorLevel != currentLevel && majorLevel != null) {
    setupLevel(majorLevel);
    return;
}
```
This ensures generation focuses where players are active.
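The selection rule, restated over plain strings as a standalone sketch. Note that ties keep the current dimension, since only a strictly greater count wins:

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch of the "most players wins" selection above, over plain strings.
public class MajorDimension {
    static String majorDimension(Map<String, Integer> playerCounts, String current) {
        String major = current;
        int max = playerCounts.getOrDefault(current, 0);
        for (var e : playerCounts.entrySet()) {
            // Strictly greater only: ties keep the current dimension.
            if (e.getValue() > max) { max = e.getValue(); major = e.getKey(); }
        }
        return major;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("overworld", 3);
        counts.put("nether", 5);
        System.out.println(majorDimension(counts, "overworld")); // nether
    }
}
```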
Pause Check
Plugins can pause generation during heavy operations:
```java
// ChunkGenerationManager.java:582-584
public void setPauseCheck(BooleanSupplier check) {
    this.pauseCheck = check;
}
```
Example use:
```java
ChunkGenerationManager.getInstance().setPauseCheck(() -> {
    return bossFightActive || eventRunning;
});
```
The worker checks this every iteration:
```java
// ChunkGenerationManager.java:168-171
if (tpsMonitor.isThrottled() || pauseCheck.getAsBoolean()) {
    Thread.sleep(500);
    continue;
}
```
The pause check is evaluated on the worker thread every ~100ms, providing fine-grained control over generation scheduling.
Monitoring and Debugging
Active Task Count
```java
// ChunkGenerationManager.java:571
public int getActiveTaskCount() { return activeTaskCount.get(); }
```
Shows how many chunks are currently being generated. If this stays at maxActiveTasks for extended periods, increase the limit.
Remaining in Radius
```java
// ChunkGenerationManager.java:572-576
public int getRemainingInRadius() {
    if (currentDimensionKey == null) return 0;
    DimensionState state = dimensionStates.get(currentDimensionKey);
    return state != null ? state.remainingInRadius.get() : 0;
}
```
Tracks how many chunks are missing within the configured radius. Decreases as generation progresses.
Throttle Status
```java
// ChunkGenerationManager.java:577
public boolean isThrottled() { return tpsMonitor.isThrottled(); }
```
Returns true when TPS drops below threshold. If frequently throttled, reduce maxActiveTasks or optimize other server operations.
LOD Skip Count
```java
// LodChunkTracker.java:51-53
public long getSkippedSaveCount() {
    return savedSkipCount.get();
}
```
Tracks how many chunk saves were skipped due to LOD-only status. High values indicate effective storage savings.
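These getters lend themselves to a periodic progress line. The formatting helper below is hypothetical and the ETA math is ours; it would be fed values from getActiveTaskCount(), getRemainingInRadius(), and getChunksPerSecond():

```java
// Hypothetical progress-line formatter built on the getters described above.
public class ProgressReport {
    static String report(int active, int remaining, double chunksPerSecond, boolean throttled) {
        // ETA is a naive remaining/rate estimate; -1 when no rate data yet.
        long etaSeconds = chunksPerSecond > 0 ? Math.round(remaining / chunksPerSecond) : -1;
        return String.format("active=%d remaining=%d eta=%ds%s",
                active, remaining, etaSeconds, throttled ? " [THROTTLED]" : "");
    }

    public static void main(String[] args) {
        System.out.println(report(20, 9000, 15.0, false)); // active=20 remaining=9000 eta=600s
        System.out.println(report(5, 100, 0.0, true));
    }
}
```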
Enable showF3MenuStats in config to display generation statistics in the F3 debug menu.