The kate_blockLength method retrieves the dimensions and configuration parameters of the Kate commitment data availability matrix for a specific block.

Method Signature

async fn query_block_length(
    &self,
    at: Option<HashOf<Block>>
) -> RpcResult<BlockLength>

Parameters

at
Hash
default:"null"
Block hash at which to query the block length parameters. If not provided, the current best block is used.
Format: 32-byte hexadecimal hash prefixed with 0x
Example: "0xa1b2c3d4e5f6789012345678901234567890123456789012345678901234567890"

Returns

result
BlockLength
Block length configuration containing matrix dimensions and chunk size.
Structure:
pub struct BlockLength {
    pub rows: BlockLengthRows,
    pub cols: BlockLengthColumns,
    pub chunk_size: u32,
}
Fields:
rows
BlockLengthRows
Number of rows in the data availability matrix.
Type: BlockLengthRows(u32)
Typical Value: 256
This is the maximum number of rows that can be used in the block’s Kate commitment matrix.
cols
BlockLengthColumns
Number of columns in the data availability matrix.
Type: BlockLengthColumns(u32)
Typical Value: 256
This is the maximum number of columns in the block’s Kate commitment matrix.
chunk_size
u32
Size of each data chunk in bytes.
Typical Value: 32
Each cell in the matrix contains a chunk of this size. The total block capacity is rows × cols × chunk_size bytes.

Example Request

curl -X POST http://localhost:9944 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "kate_blockLength",
    "params": [
      "0xa1b2c3d4e5f6789012345678901234567890123456789012345678901234567890"
    ],
    "id": 1
  }'

Example Response

{
  "jsonrpc": "2.0",
  "result": {
    "rows": 256,
    "cols": 256,
    "chunk_size": 32
  },
  "id": 1
}

Query Latest Block

Query block length for the current best block by omitting the block hash:
{
  "jsonrpc": "2.0",
  "method": "kate_blockLength",
  "params": [],
  "id": 1
}

Understanding Block Dimensions

Matrix Size

The data availability matrix is a 2D grid where:
  • Rows: Typically 256 rows maximum
  • Columns: Typically 256 columns maximum
  • Total Cells: rows × cols = 256 × 256 = 65,536 cells

Block Capacity

The theoretical maximum data capacity of a block is:
capacity = rows × cols × chunk_size
         = 256 × 256 × 32 bytes
         = 2,097,152 bytes
         = 2 MB
Note: Actual capacity may be lower due to:
  • Erasure coding overhead
  • Header and metadata
  • Inherent extrinsics
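The capacity arithmetic above maps directly to code. A minimal sketch that computes the theoretical maximum from an object shaped like the example JSON-RPC response (field names `rows`, `cols`, `chunk_size` as shown on this page):

```javascript
// Compute the theoretical maximum block capacity in bytes from a
// kate_blockLength result. Actual usable capacity is lower, as noted above.
function blockCapacity(blockLength) {
  const { rows, cols, chunk_size } = blockLength;
  return rows * cols * chunk_size; // bytes
}

// 256 × 256 × 32 = 2,097,152 bytes (2 MB)
console.log(blockCapacity({ rows: 256, cols: 256, chunk_size: 32 })); // → 2097152
```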

Chunk Size

Each cell in the matrix contains a 32-byte chunk. These chunks are:
  • Field elements in the KZG polynomial commitment scheme
  • Erasure-coded for data availability guarantees
  • Individually verifiable with KZG proofs
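Because chunks are interpreted as field elements, a chunk typically cannot carry a full 32 bytes of payload: the encoded value must stay below the scalar field modulus, which in many BLS12-381-based KZG schemes leaves 31 usable bytes per 32-byte chunk. That packing rule is an assumption here, not something stated on this page; check the runtime's actual encoding before relying on it.

```javascript
// Sketch: estimate how many matrix cells a payload occupies.
// ASSUMPTION: each 32-byte chunk carries at most chunkSize - 1 payload
// bytes so the encoded value fits below the field modulus. Verify this
// against the runtime's real chunk encoding.
function chunksNeeded(payloadBytes, chunkSize = 32) {
  const usable = chunkSize - 1; // assumed usable bytes per chunk
  return Math.ceil(payloadBytes / usable);
}

console.log(chunksNeeded(1024)); // → 34 chunks for a 1 KiB payload
```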

Error Responses

Invalid Block Hash

{
  "jsonrpc": "2.0",
  "error": {
    "code": 1,
    "message": "Length of best block(0xa1b2...7890): ..."
  },
  "id": 1
}

Missing Block

{
  "jsonrpc": "2.0",
  "error": {
    "code": 1,
    "message": "Missing block 0xa1b2...7890"
  },
  "id": 1
}

Implementation Details

Source Code Reference

From /rpc/kate-rpc/src/lib.rs:302-312:
async fn query_block_length(&self, at: Option<HashOf<Block>>) -> RpcResult<BlockLength> {
    let _metric_observer = MetricObserver::new(ObserveKind::KateQueryBlockLength);

    let at = self.at_or_best(at);
    let api = self.client.runtime_api();
    let block_length = api
        .block_length(at)
        .map_err(|e| internal_err!("Length of best block({at:?}): {e:?}"))?;

    Ok(block_length)
}

Key Points

  1. No finalization check: Unlike other Kate RPC methods, kate_blockLength does not require the block to be finalized
  2. Fast query: This is a lightweight operation that only queries block metadata
  3. Metrics: Records query performance if metrics are enabled

Use Cases

Determine Sampling Strategy

Use block dimensions to plan data availability sampling:
// Get block dimensions
const blockLength = await rpc('kate_blockLength', [blockHash]);
const { rows, cols } = blockLength;

// Calculate number of cells to sample (e.g., 10%)
const sampleSize = Math.floor((rows * cols) * 0.1);

// Generate random cell coordinates
const cells = [];
for (let i = 0; i < sampleSize; i++) {
  cells.push({
    row: Math.floor(Math.random() * rows),
    col: Math.floor(Math.random() * cols)
  });
}

// Query proofs for sampled cells
const proofs = await rpc('kate_queryProof', [cells, blockHash]);

Calculate Block Utilization

Determine how much of the block capacity is being used:
const blockLength = await rpc('kate_blockLength', [blockHash]);
const header = await rpc('chain_getHeader', [blockHash]);

// From header extension (V3)
const actualRows = header.extension.commitment.rows;
const actualCols = header.extension.commitment.cols;

const maxCapacity = blockLength.rows * blockLength.cols * blockLength.chunk_size;
const usedCapacity = actualRows * actualCols * blockLength.chunk_size;

const utilization = (usedCapacity / maxCapacity) * 100;
console.log(`Block utilization: ${utilization.toFixed(2)}%`);

Validate Cell Coordinates

Ensure cell coordinates are within valid bounds:
async function validateCellCoordinates(cell, blockHash) {
  const blockLength = await rpc('kate_blockLength', [blockHash]);

  if (cell.row >= blockLength.rows || cell.col >= blockLength.cols) {
    throw new Error(
      `Invalid cell coordinates: (${cell.row}, ${cell.col}). ` +
      `Valid range: rows [0-${blockLength.rows - 1}], cols [0-${blockLength.cols - 1}]`
    );
  }

  return true;
}

Estimate Data Size

Calculate how much data can fit in a block:
const blockLength = await rpc('kate_blockLength', []);

// Maximum theoretical capacity
const maxBytes = blockLength.rows * blockLength.cols * blockLength.chunk_size;

// Accounting for overhead (erasure coding uses ~50% for redundancy)
const effectiveCapacity = maxBytes / 2;

console.log(`Max block capacity: ${maxBytes} bytes (${maxBytes / 1024 / 1024} MB)`);
console.log(`Effective capacity: ${effectiveCapacity} bytes (${effectiveCapacity / 1024 / 1024} MB)`);

Matrix Dimensions Over Time

Track how block dimensions change (if dynamically adjusted):
async function trackBlockDimensions(startBlock, endBlock) {
  const dimensions = [];
  
  for (let i = startBlock; i <= endBlock; i++) {
    const hash = await rpc('chain_getBlockHash', [i]);
    const length = await rpc('kate_blockLength', [hash]);
    
    dimensions.push({
      blockNumber: i,
      blockHash: hash,
      rows: length.rows,
      cols: length.cols,
      capacity: length.rows * length.cols * length.chunk_size
    });
  }
  
  return dimensions;
}

Response Format Details

BlockLength Structure

The response uses wrapped integer types for type safety:
// Type definitions from runtime
pub struct BlockLengthRows(pub u32);
pub struct BlockLengthColumns(pub u32);

pub struct BlockLength {
    pub rows: BlockLengthRows,
    pub cols: BlockLengthColumns,
    pub chunk_size: u32,
}
In JSON-RPC responses, these are serialized as plain numbers:
{
  "rows": 256,        // BlockLengthRows(256)
  "cols": 256,        // BlockLengthColumns(256)
  "chunk_size": 32    // u32
}
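Since the wrapped Rust types arrive as plain JSON numbers, a client can parse the response defensively. A sketch (the `parseBlockLength` helper is illustrative, not part of any SDK):

```javascript
// Parse a kate_blockLength JSON-RPC response into a plain object,
// surfacing RPC errors and rejecting malformed results.
function parseBlockLength(response) {
  if (response.error) {
    throw new Error(`RPC error ${response.error.code}: ${response.error.message}`);
  }
  const { rows, cols, chunk_size } = response.result;
  for (const [name, value] of Object.entries({ rows, cols, chunk_size })) {
    if (!Number.isInteger(value) || value <= 0) {
      throw new Error(`Unexpected value for ${name}: ${value}`);
    }
  }
  return { rows, cols, chunk_size };
}
```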

Performance Characteristics

  • Query Speed: Very fast (< 1ms typically)
  • No Computation: Simply reads block metadata
  • No Finalization Wait: Can query any block immediately
  • No Rate Limits: Can be called frequently without performance impact
