Version 0.15 of the ScyllaDB Rust Driver introduced a new deserialization API that significantly improves type safety and performance. This guide will help you migrate from the old API to the new one.

Overview

The new deserialization API addresses two main limitations of the old approach:
  1. Performance: The old API parsed all data into CqlValue enums first, then converted them to target types. The new API deserializes directly from raw bytes.
  2. Type Safety: The new API has full type information during deserialization, enabling better compile-time and runtime type checking.
Starting with version 1.0, the old deserialization API has been completely removed. You must migrate to the new API to use driver version 1.0 and later.

Old Traits

The legacy API worked by deserializing query responses to a sequence of Rows, where Row is just a Vec<Option<CqlValue>>, and CqlValue is an enum representing any CQL value.

FromRow

pub trait FromRow: Sized {
    fn from_row(row: Row) -> Result<Self, FromRowError>;
}
Used to convert a Row into a custom type.

FromCqlVal

// T is either CqlValue or Option<CqlValue>
pub trait FromCqlVal<T>: Sized {
    fn from_cql(cql_val: T) -> Result<Self, FromCqlValError>;
}
Used to convert a single CQL value into a Rust type.

Derive Macros

  • #[derive(FromRow)]: Generated FromRow implementations for structs
  • #[derive(FromUserType)]: Generated FromCqlVal implementations for UDT deserialization

New Traits

The new API introduces two analogous traits that work with raw serialized data instead of pre-parsed values.

DeserializeRow<'frame, 'metadata>

pub trait DeserializeRow<'frame, 'metadata>
where
    Self: Sized,
{
    fn type_check(specs: &[ColumnSpec]) -> Result<(), TypeCheckError>;
    fn deserialize(row: ColumnIterator<'frame, 'metadata>) -> Result<Self, DeserializationError>;
}
Key differences:
  • type_check: Validates types before deserialization
  • Receives raw data via ColumnIterator instead of parsed Row
  • Lifetime parameters enable zero-copy deserialization

DeserializeValue<'frame, 'metadata>

pub trait DeserializeValue<'frame, 'metadata>
where
    Self: Sized,
{
    fn type_check(typ: &ColumnType) -> Result<(), TypeCheckError>;
    fn deserialize(
        typ: &'metadata ColumnType<'metadata>,
        v: Option<FrameSlice<'frame>>,
    ) -> Result<Self, DeserializationError>;
}
Key differences:
  • Separate type_check method for validation
  • Works with raw FrameSlice instead of CqlValue
  • Supports zero-copy deserialization for borrowed types

Derive Macros

  • #[derive(DeserializeRow)]: Generates DeserializeRow implementations
  • #[derive(DeserializeValue)]: Generates DeserializeValue implementations for UDTs
The new macros have different default behavior: they match struct fields to column/UDT field names by default, rather than relying solely on position.
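As a sketch of this default name matching, a struct's field order can differ from the query's column order:

```rust
use scylla::DeserializeRow;

#[derive(DeserializeRow)]
struct Person {
    // Matched to the `age` and `name` columns by name,
    // regardless of their position in the SELECT clause:
    age: i32,
    name: String,
}

// Both of these queries deserialize into `Person`:
//   SELECT name, age FROM people
//   SELECT age, name FROM people
```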

Migration Examples

Basic Query Results

Before (0.14 and earlier):
use scylla::FromRow;

#[derive(FromRow)]
struct Person {
    name: String,
    age: i32,
}

let iter = session
    .query_unpaged("SELECT name, age FROM people", &[])
    .await?
    .rows_typed::<Person>()?;

for row in iter {
    let person = row?;
    println!("{} is {} years old", person.name, person.age);
}
After (0.15+):
use scylla::DeserializeRow;

#[derive(DeserializeRow)]
struct Person {
    name: String,
    age: i32,
}

let result = session
    .query_unpaged("SELECT name, age FROM people", &[])
    .await?
    .into_rows_result()?;  // Convert to rows result first

// Use .rows() instead of .rows_typed()
for row in result.rows::<Person>()? {
    let person = row?;
    println!("{} is {} years old", person.name, person.age);
}
Key changes:
  1. Call .into_rows_result()? on the query result
  2. Use .rows() instead of .rows_typed()
  3. The iterator borrows from the result instead of consuming it

Zero-Copy Deserialization

The new API supports borrowing directly from the result frame to avoid allocations.
Before:
let iter = session
    .query_unpaged("SELECT name, age FROM people", &[])
    .await?
    .rows_typed::<(String, i32)>()?;

for row in iter {
    let (name, age) = row?;
    println!("{} is {} years old", name, age);
}
After (with zero-copy optimization):
let result = session
    .query_unpaged("SELECT name, age FROM people", &[])
    .await?
    .into_rows_result()?;

// Use &str instead of String to avoid allocations
for row in result.rows::<(&str, i32)>()? {
    let (name, age) = row?;
    println!("{} is {} years old", name, age);
}
Use borrowed types (&str, &[u8], etc.) when possible to avoid unnecessary allocations and improve performance.
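One caveat worth noting (a minimal sketch): borrowed rows reference bytes owned by the rows result, so the result must stay alive while you use them. Convert to owned types before the result is dropped if the data must escape its scope:

```rust
let result = session
    .query_unpaged("SELECT name, age FROM people", &[])
    .await?
    .into_rows_result()?;

// The `&str`s borrow from `result`, so `result` must remain in scope here.
let mut names: Vec<String> = Vec::new();
for row in result.rows::<(&str, i32)>()? {
    let (name, _age) = row?;
    names.push(name.to_owned()); // copy out before `result` is dropped
}
```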

Paged Query Iterator

Before:
use futures::stream::StreamExt;

let mut rows_stream = session
    .query_iter("SELECT name, age FROM people", &[])
    .await?
    .into_typed::<(String, i32)>();

while let Some(next_row_res) = rows_stream.next().await {
    let (name, age) = next_row_res?;
    println!("{} is {} years old", name, age);
}
After:
use futures::stream::StreamExt;

let mut rows_stream = session
    .query_iter("SELECT name, age FROM people", &[])
    .await?
    .rows_stream()?;  // Convert to typed stream

// Type can be inferred or specified with turbofish:
// .rows_stream::<(String, i32)>()?;

while let Some(next_row_res) = rows_stream.next().await {
    let (name, age): (String, i32) = next_row_res?;
    println!("{} is {} years old", name, age);
}
Key changes:
  1. Use .rows_stream() instead of .into_typed()
  2. Type can be inferred from usage or specified with turbofish
Currently, QueryPager/TypedRowStream do not support deserialization of borrowed types due to Rust limitations with lending streams. For zero-copy deserialization, use the single-page API (query_single_page).
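A manual paging loop with the single-page API might look like the following sketch. It assumes the 0.15-era signatures, where query_single_page takes a PagingState and returns a PagingStateResponse alongside the result; the import path for PagingState may differ between driver versions, so check your version's docs:

```rust
use std::ops::ControlFlow;
use scylla::statement::PagingState; // path may differ by driver version

let mut paging_state = PagingState::start();
loop {
    let (result, paging_state_response) = session
        .query_single_page("SELECT name, age FROM people", &[], paging_state)
        .await?;

    // Borrowed types work here because each page's frame is fully owned:
    let rows_result = result.into_rows_result()?;
    for row in rows_result.rows::<(&str, i32)>()? {
        let (name, age) = row?;
        println!("{} is {} years old", name, age);
    }

    match paging_state_response.into_paging_control_flow() {
        ControlFlow::Continue(new_state) => paging_state = new_state,
        ControlFlow::Break(()) => break,
    }
}
```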

User-Defined Types (UDTs)

Before:
use scylla::FromUserType;

#[derive(FromUserType)]
struct Address {
    street: String,
    city: String,
    zip: i32,
}
After:
use scylla::DeserializeValue;

#[derive(DeserializeValue)]
struct Address {
    // Fields are now matched by name, not just position!
    city: String,   // Can be in any order
    street: String,
    zip: i32,
}
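For reference, the struct above would map onto a UDT defined like the following (the keyspace name ks is a placeholder; field names must match the struct's, but their order need not):

```cql
CREATE TYPE ks.address (
    street text,
    city text,
    zip int
);
```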

Macro Attribute Changes

FromRow vs. DeserializeRow

The old FromRow macro expected columns in the same order as struct fields. To replicate this behavior with DeserializeRow:
use scylla::DeserializeRow;

#[derive(DeserializeRow)]
#[scylla(flavor = "enforce_order", skip_name_checks)]
struct Person {
    name: String,  // Must match column order in query
    age: i32,
}

FromUserType vs. DeserializeValue

The old FromUserType expected UDT fields in the same order as struct fields. To replicate this behavior:
use scylla::DeserializeValue;

#[derive(DeserializeValue)]
#[scylla(flavor = "enforce_order")]  // Note: skip_name_checks not needed for UDTs
struct Address {
    street: String,  // Must match UDT field order
    city: String,
    zip: i32,
}

Available Attributes

For DeserializeRow:
  • flavor = "enforce_order": Columns must appear in the same order as struct fields
  • skip_name_checks: Don’t validate field names against column names
For DeserializeValue:
  • flavor = "enforce_order": UDT fields must appear in the same order as struct fields

QueryResult API Changes

Removed Direct Field Access

The QueryResult::rows field is no longer publicly accessible.
Before:
let result = session.query_unpaged("SELECT * FROM people", &[]).await?;
if let Some(rows) = result.rows {
    for row in rows {
        // Process row...
    }
}
After:
let result = session
    .query_unpaged("SELECT * FROM people", &[])
    .await?
    .into_rows_result()?;

for row in result.rows::<Person>()? {
    let person = row?;
    // Process person...
}

Helper Methods

All helper methods now require type parameters:
Old Method                 New Method
.rows()                    .rows::<T>() where T: DeserializeRow
.rows_typed::<T>()         .rows::<T>()
.first_row()               .first_row::<T>()
.first_row_typed::<T>()    .first_row::<T>()
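For instance, fetching a single row with the new helpers might look like this sketch (reusing the Person struct from the earlier examples):

```rust
let person = session
    .query_unpaged("SELECT name, age FROM people WHERE name = 'Alice'", &[])
    .await?
    .into_rows_result()?
    .first_row::<Person>()?;

println!("{} is {} years old", person.name, person.age);
```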

Custom Trait Implementations

If you have hand-written implementations of the old traits, you’ll need to write new implementations for the new traits.
Example - custom FromRow implementation:
// Old implementation
impl FromRow for CustomType {
    fn from_row(row: Row) -> Result<Self, FromRowError> {
        // Manual parsing logic...
    }
}
New implementation:
use scylla::deserialize::row::ColumnIterator;
use scylla::deserialize::{DeserializationError, TypeCheckError};
use scylla::frame::response::result::ColumnSpec;

impl<'frame, 'metadata> DeserializeRow<'frame, 'metadata> for CustomType {
    fn type_check(specs: &[ColumnSpec]) -> Result<(), TypeCheckError> {
        // Validate column types...
        Ok(())
    }
    
    fn deserialize(
        mut row: ColumnIterator<'frame, 'metadata>
    ) -> Result<Self, DeserializationError> {
        // Deserialize from raw data...
    }
}
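If the custom type simply wraps a few columns, one low-risk migration path is to delegate to an existing tuple implementation instead of parsing raw frame data by hand. The sketch below assumes CustomType holds a String and an i32; the import paths follow the 0.15 module layout and may differ in later versions:

```rust
use scylla::deserialize::row::{ColumnIterator, DeserializeRow};
use scylla::deserialize::{DeserializationError, TypeCheckError};
use scylla::frame::response::result::ColumnSpec;

struct CustomType {
    name: String,
    age: i32,
}

impl<'frame, 'metadata> DeserializeRow<'frame, 'metadata> for CustomType {
    fn type_check(specs: &[ColumnSpec]) -> Result<(), TypeCheckError> {
        // Reuse the tuple implementation's type checking:
        <(String, i32) as DeserializeRow<'frame, 'metadata>>::type_check(specs)
    }

    fn deserialize(
        row: ColumnIterator<'frame, 'metadata>,
    ) -> Result<Self, DeserializationError> {
        // Deserialize as a tuple, then construct the custom type:
        let (name, age) =
            <(String, i32) as DeserializeRow<'frame, 'metadata>>::deserialize(row)?;
        Ok(CustomType { name, age })
    }
}
```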

Performance Benefits

The new deserialization API provides significant performance improvements:
  1. Zero-copy deserialization: Borrowed types (&str, &[u8]) can reference data directly from the response frame
  2. No intermediate allocations: Data is deserialized directly to the target type without creating CqlValue instances
  3. Better type checking: Type validation happens before deserialization, avoiding wasted work on type mismatches
Performance comparison:
// Old API: String allocated from CqlValue
let rows = result.rows_typed::<(String, i32)>()?;

// New API: String is still allocated, but no intermediate CqlValue is created
let rows = result.rows::<(String, i32)>()?;

// New API: Zero-copy - &str references frame data directly (faster!)
let rows = result.rows::<(&str, i32)>()?;

Summary

The new deserialization API in version 0.15 provides:
  • Better performance through zero-copy deserialization
  • Improved type safety with explicit type checking
  • More ergonomic API with name-based field matching
  • Clearer separation between type validation and deserialization
Key migration steps:
  1. Replace FromRow with DeserializeRow
  2. Replace FromCqlVal with DeserializeValue
  3. Update query result handling to use .into_rows_result() and .rows::<T>()
  4. Update iterator queries to use .rows_stream()
  5. Consider using borrowed types (&str) for better performance
  6. Add #[scylla(flavor = "enforce_order")] if you need position-based matching
While migration requires updating trait names and result handling code, the improvements in performance and type safety make it a valuable upgrade.
