Overview
The NBT implementation (src/nbt_utils.h/cpp) handles:
- Reading compressed chunk data from disk
- Writing chunk data with proper structure and compression
- Coordinate remapping between engine and Minecraft formats
- Big-endian encoding for compatibility
API
File Structure
Chunk files use the following NBT hierarchy:

- Array length: 16 * 128 * 16 = 32768 bytes
- Each byte represents one block ID (0-255)
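As a sketch, the hierarchy likely looks like the following (only the “Blocks” tag name and the byte-array size are confirmed by this document; the unnamed root compound is an assumption typical of NBT chunk files):

```
TAG_Compound("")             // root compound (name assumed empty)
  TAG_Byte_Array("Blocks")   // 16 * 128 * 16 = 32768 block IDs
TAG_End                      // closes the root compound
```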
Coordinate Remapping
The engine uses blocks[x][y][z] internally, but NBT uses Minecraft’s standard format, where Y is the fastest-changing index.
Minecraft NBT Order
From src/nbt_utils.cpp:81-90:
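The index order described above can be sketched as a small helper (the function name `nbt_index` is illustrative, not from the source):

```cpp
// Minecraft-style flat index for a 16x128x16 chunk: Y varies fastest,
// then Z, then X. Sketch; the real code inlines this in its loops.
int nbt_index(int x, int y, int z) {
    return x * (16 * 128) + z * 128 + y;
}
```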
Engine to NBT
When writing (src/nbt_utils.cpp:193-199):
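The write-side remapping can be sketched as the following loop (a sketch under the layout described above; the function name and array shapes are assumptions):

```cpp
#include <cstdint>

// Copy engine-order blocks[x][y][z] into NBT order, where Y is the
// fastest-changing index. 16 * 128 * 16 = 32768 bytes total.
void engine_to_nbt(const uint8_t blocks[16][128][16], uint8_t out[32768]) {
    for (int x = 0; x < 16; ++x)
        for (int z = 0; z < 16; ++z)
            for (int y = 0; y < 128; ++y)
                out[x * 2048 + z * 128 + y] = blocks[x][y][z];
}
```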
Big-Endian Encoding
NBT uses big-endian byte order for integers. Helper functions (src/nbt_utils.cpp:8-21):
Reading
Writing
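Helpers of this kind typically look like the following (a sketch; the real helper names in src/nbt_utils.cpp may differ):

```cpp
#include <cstdint>

// Read a 4-byte big-endian integer from a buffer.
uint32_t read_be32(const uint8_t* p) {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
}

// Write a 4-byte big-endian integer into a buffer.
void write_be32(uint8_t* p, uint32_t v) {
    p[0] = uint8_t(v >> 24); p[1] = uint8_t(v >> 16);
    p[2] = uint8_t(v >> 8);  p[3] = uint8_t(v);
}
```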
Reading NBT Files
The read_blocks_from_gzip() function (src/nbt_utils.cpp:23-145) implements:
1. Decompression
2. Tag Parsing
The parser skips the root compound and searches for the “Blocks” tag.
3. Fallback Brute-Force Search
If structured parsing fails, the reader searches for the “Blocks” signature (src/nbt_utils.cpp:117-142):
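A byte-signature scan of this kind can be sketched as follows (a sketch, assuming the standard NBT tag header: tag ID 0x07, a 2-byte big-endian name length, then the name; the function name is illustrative):

```cpp
#include <cstdint>
#include <cstddef>
#include <cstring>

// Scan a decompressed buffer for a TAG_Byte_Array (0x07) named "Blocks".
// Returns the offset of the tag header, or -1 if not found. The payload
// (4-byte big-endian length, then the data) follows the 9-byte header.
long find_blocks_tag(const uint8_t* buf, size_t len) {
    static const uint8_t sig[] = {0x07, 0x00, 0x06,
                                  'B', 'l', 'o', 'c', 'k', 's'};
    if (len < sizeof sig) return -1;
    for (size_t i = 0; i + sizeof sig <= len; ++i)
        if (std::memcmp(buf + i, sig, sizeof sig) == 0)
            return (long)i;
    return -1;
}
```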
Writing NBT Files
The write_blocks_to_gzip() function (src/nbt_utils.cpp:166-213) constructs the NBT structure:
1. Build NBT Structure
2. Gzip Compression
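The structure-building step can be sketched as serializing the tags into a byte vector before compression (a sketch; the unnamed root compound is an assumption, and the real code writes directly through the gzip stream rather than via a `std::vector`):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Serialize a root compound containing one ByteArray named "Blocks",
// terminated by TAG_End. Lengths and names use big-endian encoding.
std::vector<uint8_t> build_nbt(const std::vector<uint8_t>& blocks) {
    std::vector<uint8_t> out;
    auto put_name = [&](const std::string& s) {
        out.push_back(uint8_t(s.size() >> 8));   // 2-byte BE name length
        out.push_back(uint8_t(s.size() & 0xFF));
        out.insert(out.end(), s.begin(), s.end());
    };
    out.push_back(0x0A);              // TAG_Compound
    put_name("");                     // root name (assumed empty)
    out.push_back(0x07);              // TAG_Byte_Array
    put_name("Blocks");
    uint32_t n = uint32_t(blocks.size());
    out.push_back(uint8_t(n >> 24));  // 4-byte BE payload length
    out.push_back(uint8_t(n >> 16));
    out.push_back(uint8_t(n >> 8));
    out.push_back(uint8_t(n));
    out.insert(out.end(), blocks.begin(), blocks.end());
    out.push_back(0x00);              // TAG_End closes the root compound
    return out;
}
```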
Tag Types
The implementation uses these NBT tag types:

| Type | ID | Description |
|---|---|---|
| End | 0 | Marks end of compound |
| ByteArray | 7 | Array of bytes (block data) |
| Compound | 10 (0x0A) | Named container for tags |
Usage Example
From src/save.cpp:58-62 (saving):
src/save.cpp:114-117 (loading):
Error Handling
Both functions return bool to indicate success/failure:
- Read failures: File not found, decompression error, wrong size, parse error
- Write failures: Cannot open file, gzwrite error
Callers check these return values (see src/save.cpp:121-130).
Compression Details
Using zlib’s gzip interface:

- Read: gzopen(path, "rb") + gzread()
- Write: gzopen(path, "wb") + gzwrite()
- Buffer size: 512KB for decompression
- Compression level: Default (zlib automatic)
Performance Considerations
- Synchronous I/O: All file operations are blocking
  - Chunk saves during unload can cause brief hitches
  - Consider async I/O for production use
- Memory allocation: 512KB decompression buffer per read
  - A reusable buffer could reduce allocations
- Coordinate remapping: Inner loop executes 32,768 times per chunk
  - Could be optimized with memcpy + an in-place swizzle
- Fallback search: Scans the entire decompressed buffer if parsing fails
  - Usually not needed for well-formed files