Tokenizes multiple fields in an identity record in a single operation. This is more efficient than tokenizing fields individually when you need to protect multiple PII fields.
Path Parameters
The unique identifier of the identity
Request Body
Array of field names to tokenize. Each field must be one of the tokenizable fields. Example: ["first_name", "last_name", "email_address", "phone_number"]
Tokenizable Fields
Only the following fields can be tokenized:
first_name (or FirstName)
last_name (or LastName)
other_names (or OtherNames)
email_address (or EmailAddress)
phone_number (or PhoneNumber)
street (or Street)
post_code (or PostCode)
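As a sketch (not part of the API), requested field names can be validated client-side before calling the endpoint. The helper names below are illustrative; the allowed list and the snake_case/PascalCase aliases come from the table above.

```javascript
// Allowed tokenizable fields (canonical snake_case names).
const TOKENIZABLE_FIELDS = [
  'first_name', 'last_name', 'other_names',
  'email_address', 'phone_number', 'street', 'post_code'
];

// Convert a PascalCase alias (e.g. "FirstName") to snake_case.
function toSnakeCase(field) {
  return field.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase();
}

// Return the entries in a requested field list that are not tokenizable.
function invalidFields(fields) {
  return fields.filter(f => !TOKENIZABLE_FIELDS.includes(toSnakeCase(f)));
}
```

Rejecting invalid names before the request avoids a 400 - Bad Request (Invalid Field) round trip.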
Response
Success message confirming the fields were tokenized
Tokenize Basic PII
Tokenize Contact Information
Tokenize Address Information
Tokenize All PII Fields
curl -X POST https://YOUR_BLNK_INSTANCE_URL/identities/idt_1234567890/tokenize \
  -H "Content-Type: application/json" \
  -H "X-Blnk-Key: YOUR_API_KEY" \
  -d '{
    "fields": ["first_name", "last_name", "email_address"]
  }'
200 - OK
400 - Bad Request (Missing ID)
400 - Bad Request (No Fields)
400 - Bad Request (Invalid Field)
400 - Bad Request (Already Tokenized)
{
  "message": "Fields tokenized successfully"
}
Behavior
Sequential Processing
Fields are tokenized sequentially in the order provided. If any field fails to tokenize:
The operation stops immediately
An error is returned
Previously tokenized fields in the same request remain tokenized
Fields not yet processed remain unchanged
Example: If you request ["first_name", "last_name", "email_address"] and last_name is already tokenized:
first_name is tokenized successfully
An error is returned for last_name (already tokenized)
email_address is not processed
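The stop-on-first-error behavior above can be sketched as a small client-side simulation. This is illustrative only: the server performs this logic, and `simulateTokenize` is not part of the API.

```javascript
// Simulate sequential tokenization that stops at the first failure.
// Returns which fields were tokenized, which one errored, and which were skipped.
function simulateTokenize(fields, alreadyTokenized) {
  const done = [];
  for (let i = 0; i < fields.length; i++) {
    const field = fields[i];
    if (alreadyTokenized.includes(field)) {
      return { done, error: `${field} already tokenized`, skipped: fields.slice(i + 1) };
    }
    done.push(field); // this field stays tokenized even if a later one fails
  }
  return { done, error: null, skipped: [] };
}
```

Running it on the example above yields `done: ['first_name']`, an error for `last_name`, and `skipped: ['email_address']`, matching the partial-completion behavior described.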
Atomicity Consideration
This operation is not atomic. If it fails partway through, some fields may be tokenized while others are not. Check the Get Tokenized Fields endpoint to verify the current state.
Use Cases
New Customer Onboarding
Tokenize all PII immediately after customer registration:
// After creating identity
const response = await fetch(
  `https://YOUR_BLNK_INSTANCE_URL/identities/${identityId}/tokenize`,
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Blnk-Key': 'YOUR_API_KEY'
    },
    body: JSON.stringify({
      fields: [
        'first_name',
        'last_name',
        'email_address',
        'phone_number',
        'street',
        'post_code'
      ]
    })
  }
);
Compliance Migration
Migrate existing records to tokenized format:
// Bulk tokenize existing identities
const identities = await fetchAllIdentities();

for (const identity of identities) {
  // Check which fields need tokenization
  const untokenizedFields = getUntokenizedPII(identity);
  if (untokenizedFields.length > 0) {
    await tokenizeFields(identity.identity_id, untokenizedFields);
    await sleep(100); // Rate limiting
  }
}
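The snippets on this page call small helpers (`tokenizeFields`, `sleep`) that are not shown. A minimal sketch, assuming a global `fetch` (Node 18+) and that `YOUR_BLNK_INSTANCE_URL` and `YOUR_API_KEY` are replaced with real values:

```javascript
const BASE_URL = 'https://YOUR_BLNK_INSTANCE_URL'; // replace with your instance URL

// POST /identities/:id/tokenize with the given field list.
async function tokenizeFields(identityId, fields) {
  const res = await fetch(`${BASE_URL}/identities/${identityId}/tokenize`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Blnk-Key': 'YOUR_API_KEY'
    },
    body: JSON.stringify({ fields })
  });
  if (!res.ok) throw new Error(`Tokenize failed: ${res.status}`);
  return res.json();
}

// Simple delay helper used for rate limiting between calls.
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```

`fetchAllIdentities` and `getUntokenizedPII` depend on your own data access layer and pagination strategy, so they are left to the reader.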
Selective Protection
Tokenize only contact information while keeping name fields accessible:
{
  "fields": ["email_address", "phone_number"]
}
Best Practices
1. Verify Before Tokenizing
Check the current tokenization state before attempting to tokenize:
const { tokenized_fields } = await getTokenizedFields(identityId);
const fieldsToTokenize = ['email_address', 'phone_number']
  .filter(field => !tokenized_fields.includes(field));

if (fieldsToTokenize.length > 0) {
  await tokenizeFields(identityId, fieldsToTokenize);
}
2. Handle Errors Gracefully
Implement retry logic with exponential backoff:
async function tokenizeWithRetry(identityId, fields, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await tokenizeFields(identityId, fields);
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      // Note: because the operation is not atomic, filter out fields that
      // already succeeded (see practice 1) before retrying, or the retry
      // will fail with 400 - Bad Request (Already Tokenized).
      await sleep(Math.pow(2, i) * 1000);
    }
  }
}
3. Batch Processing
When tokenizing many identities, process in batches with rate limiting:
const BATCH_SIZE = 10;
const DELAY_MS = 100;

for (let i = 0; i < identities.length; i += BATCH_SIZE) {
  const batch = identities.slice(i, i + BATCH_SIZE);
  await Promise.all(
    batch.map(identity =>
      tokenizeFields(identity.identity_id, ['email_address', 'phone_number'])
    )
  );
  await sleep(DELAY_MS);
}
4. Audit and Logging
Log all tokenization operations:
await auditLog.create({
  action: 'TOKENIZE_FIELDS',
  identityId: identityId,
  fields: fields,
  userId: currentUser.id,
  timestamp: new Date(),
  reason: 'compliance_requirement'
});

await tokenizeFields(identityId, fields);
Processing Time: ~50-100ms per field
Database Updates: one update per field
Recommended Batch Size: 5-10 fields at a time
Rate Limiting: consider limiting to 10 requests/second
Error Recovery
If tokenization fails partway through:
Check which fields were successfully tokenized:
GET /identities/:id/tokenized-fields
Retry with only the failed fields:
{
  "fields": ["remaining_field_1", "remaining_field_2"]
}
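The two recovery steps above can be combined in a small helper that diffs the originally requested fields against the server's reported state. The function name is illustrative, and `tokenized` stands in for the `tokenized_fields` array returned by GET /identities/:id/tokenized-fields:

```javascript
// Given the fields you originally requested and the tokenized_fields array
// from GET /identities/:id/tokenized-fields, compute what still needs tokenizing.
function remainingFields(requested, tokenized) {
  return requested.filter(field => !tokenized.includes(field));
}
```

The result can be sent directly as the `fields` array in a retry request.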
Comparison with Single Field Tokenization
| Aspect | Multiple Fields | Single Field |
| --- | --- | --- |
| API Calls | 1 request | N requests (one per field) |
| Performance | Faster for multiple fields | Better for single field |
| Error Handling | Stops on first error | Independent failures |
| Atomicity | Not atomic | Each field independent |
| Use Case | Bulk operations | Selective tokenization |
Recommendation: Use this endpoint when tokenizing 2+ fields. For single fields, use Tokenize Field for better error isolation.
Security Notes
Ensure BLNK_TOKENIZATION_SECRET is set to a secure 32-byte value before using tokenization features.
Log all tokenization operations (recommended)
Tokenization is irreversible without the encryption key
Back up your encryption key securely
Rotate encryption keys periodically (requires re-tokenization)