GET /identities/:id/tokenized-fields
curl -X GET https://YOUR_BLNK_INSTANCE_URL/identities/idt_1234567890/tokenized-fields \
  -H "X-Blnk-Key: YOUR_API_KEY"
{
  "tokenized_fields": [
    "FirstName",
    "LastName",
    "EmailAddress",
    "PhoneNumber",
    "Street",
    "PostCode"
  ]
}
Returns the list of field names that are currently tokenized for a specific identity. Use this endpoint to check the current tokenization state before performing tokenization or detokenization operations.
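The use-case examples below call a `getTokenizedFields` helper. A minimal sketch of that helper using `fetch` (the instance URL and API key are placeholders for your own deployment; error handling is kept deliberately minimal):

```javascript
// Minimal helper used throughout the examples below.
// YOUR_BLNK_INSTANCE_URL and YOUR_API_KEY are placeholders for your deployment.
async function getTokenizedFields(identityId) {
  const response = await fetch(
    `https://YOUR_BLNK_INSTANCE_URL/identities/${identityId}/tokenized-fields`,
    { headers: { 'X-Blnk-Key': 'YOUR_API_KEY' } }
  );

  if (!response.ok) {
    throw new Error(`Failed to fetch tokenized fields: ${response.status}`);
  }

  return response.json(); // { tokenized_fields: [...] }
}
```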

Path Parameters

id
string
required
The unique identifier of the identity

Response

tokenized_fields
array
Array of field names that are currently tokenized
debug_info
object
Optional debug information (only included if there are parsing issues)

Understanding the Response

Field Name Format

Field names are returned in PascalCase format (Go struct field names):
  • FirstName (not first_name)
  • LastName (not last_name)
  • EmailAddress (not email_address)
  • PhoneNumber (not phone_number)
  • PostCode (not post_code)
However, when using tokenization and detokenization endpoints, you can use either format (case-insensitive).
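One simple way to mirror this case-insensitive behavior in client code is to normalize both sides before comparing. A sketch (`normalizeFieldName` and `isTokenized` are illustrative helpers, not part of the Blnk API):

```javascript
// Normalize a field name by dropping underscores and lowercasing,
// so "email_address", "EmailAddress", and "emailaddress" all compare equal.
function normalizeFieldName(name) {
  return name.replace(/_/g, '').toLowerCase();
}

// Check whether a field (in either naming convention) appears in the
// tokenized_fields array returned by the endpoint.
function isTokenized(tokenizedFields, field) {
  const target = normalizeFieldName(field);
  return tokenizedFields.some(f => normalizeFieldName(f) === target);
}

isTokenized(['EmailAddress', 'PhoneNumber'], 'email_address'); // true
isTokenized(['EmailAddress'], 'street');                       // false
```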

Empty Array

If no fields are tokenized, the response contains an empty array:
{
  "tokenized_fields": []
}
This means:
  • The identity exists
  • No fields have been tokenized yet
  • All PII is stored in plain text

Use Cases

Pre-Tokenization Check

Before tokenizing fields, check which ones are already tokenized:
// Get current tokenization state
const { tokenized_fields } = await getTokenizedFields('idt_1234567890');

// Determine which fields need tokenization
const fieldsToTokenize = ['first_name', 'last_name', 'email_address']
  .filter(field => {
    const pascalCase = field.charAt(0).toUpperCase() + 
                      field.slice(1).replace(/_([a-z])/g, (m, c) => c.toUpperCase());
    return !tokenized_fields.includes(pascalCase);
  });

// Tokenize only untokenized fields
if (fieldsToTokenize.length > 0) {
  await tokenizeFields('idt_1234567890', fieldsToTokenize);
}

Compliance Audit

Verify that required fields are tokenized for compliance:
// Check if all required PII fields are tokenized
const { tokenized_fields } = await getTokenizedFields('idt_1234567890');

const requiredTokenizedFields = [
  'EmailAddress',
  'PhoneNumber',
  'Street',
  'PostCode'
];

const missingTokenization = requiredTokenizedFields
  .filter(field => !tokenized_fields.includes(field));

if (missingTokenization.length > 0) {
  console.warn(`Missing tokenization for: ${missingTokenization.join(', ')}`);
  // Alert compliance team
}

Detokenization Preparation

Verify fields are tokenized before attempting to detokenize:
async function safeDetokenize(identityId, fieldsToDetokenize) {
  // Get current tokenization state
  const { tokenized_fields } = await getTokenizedFields(identityId);
  
  // Build a lookup set of the tokenized (PascalCase) field names
  const tokenizedSet = new Set(tokenized_fields);
  
  // Check each requested field
  const invalidFields = [];
  const validFields = [];
  
  for (const field of fieldsToDetokenize) {
    const pascalCase = toPascalCase(field);
    if (tokenizedSet.has(pascalCase)) {
      validFields.push(field);
    } else {
      invalidFields.push(field);
    }
  }
  
  if (invalidFields.length > 0) {
    console.warn(`Fields not tokenized: ${invalidFields.join(', ')}`);
  }
  
  if (validFields.length > 0) {
    return await detokenizeFields(identityId, validFields);
  }
  
  return { fields: {} };
}

Bulk Tokenization Status

Check tokenization status across multiple identities:
async function auditTokenizationStatus(identityIds) {
  const results = await Promise.all(
    identityIds.map(async (id) => {
      const { tokenized_fields } = await getTokenizedFields(id);
      return {
        identity_id: id,
        tokenized_count: tokenized_fields.length,
        tokenized_fields: tokenized_fields,
        is_fully_protected: tokenized_fields.length >= 4 // Example threshold
      };
    })
  );
  
  return results;
}

// Generate report
const report = await auditTokenizationStatus(identityIds);
const unprotectedIdentities = report.filter(r => !r.is_fully_protected);

console.log(`${unprotectedIdentities.length} identities need tokenization`);

Field Name Conversion

Helper function to convert between naming conventions:
function toPascalCase(snakeCase) {
  return snakeCase
    .split('_')
    .map(part => part.charAt(0).toUpperCase() + part.slice(1))
    .join('');
}

function toSnakeCase(pascalCase) {
  return pascalCase
    .replace(/([A-Z])/g, '_$1')
    .toLowerCase()
    .replace(/^_/, '');
}

// Examples
toPascalCase('email_address');  // "EmailAddress"
toPascalCase('phone_number');   // "PhoneNumber"
toSnakeCase('EmailAddress');    // "email_address"
toSnakeCase('PhoneNumber');     // "phone_number"

Metadata Structure

The tokenized fields list is stored in the identity’s metadata:
{
  "identity_id": "idt_1234567890",
  "email_address": "FPT:[email protected]:aGVsbG8=",
  "phone_number": "FPT:+9876543210:d29ybGQ=",
  "meta_data": {
    "customer_tier": "premium",
    "tokenized_fields": {
      "EmailAddress": true,
      "PhoneNumber": true
    }
  }
}
The endpoint extracts the tokenized_fields object and returns only the field names where the value is true.
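The extraction step can be pictured as a filter over that map. A sketch of the equivalent logic in JavaScript (Blnk's actual implementation is server-side; this is for illustration only):

```javascript
// Given an identity's meta_data, return the names of fields marked true.
// Anything other than a plain object in tokenized_fields yields an empty list.
function extractTokenizedFields(metaData) {
  const raw = metaData && metaData.tokenized_fields;
  if (!raw || typeof raw !== 'object' || Array.isArray(raw)) {
    return [];
  }
  return Object.keys(raw).filter(key => raw[key] === true);
}

extractTokenizedFields({
  customer_tier: 'premium',
  tokenized_fields: { EmailAddress: true, PhoneNumber: true }
}); // ["EmailAddress", "PhoneNumber"]
```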

Debug Information

If the endpoint encounters an unexpected data structure in meta_data.tokenized_fields, it returns debug information:
{
  "tokenized_fields": [],
  "debug_info": {
    "has_metadata": true,
    "tokenized_field_type": "string",
    "raw_metadata": {
      "tokenized_fields": "invalid_format"
    }
  }
}
This helps diagnose issues with metadata structure. If you see debug info:
  1. Check the identity’s metadata structure
  2. Verify tokenization operations completed successfully
  3. Ensure the metadata wasn’t manually modified
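Client code can treat `debug_info` as a signal to log and investigate rather than a hard failure. A sketch of a response handler (`handleTokenizedFieldsResponse` is an illustrative helper, not part of the Blnk API):

```javascript
// Surface debug_info for investigation while still returning a usable result.
function handleTokenizedFieldsResponse(data) {
  if (data.debug_info) {
    console.warn('Unexpected tokenized_fields metadata:', data.debug_info);
    // e.g. forward to your monitoring or alerting pipeline here
  }
  return data.tokenized_fields || [];
}
```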

Integration Examples

React Component

import { useState, useEffect } from 'react';

function TokenizationStatus({ identityId }) {
  const [tokenizedFields, setTokenizedFields] = useState([]);
  const [loading, setLoading] = useState(true);
  
  useEffect(() => {
    async function fetchStatus() {
      const response = await fetch(
        `https://YOUR_BLNK_INSTANCE_URL/identities/${identityId}/tokenized-fields`,
        {
          headers: { 'X-Blnk-Key': 'YOUR_API_KEY' }
        }
      );
      
      const data = await response.json();
      setTokenizedFields(data.tokenized_fields || []);
      setLoading(false);
    }
    
    fetchStatus();
  }, [identityId]);
  
  if (loading) return <div>Loading...</div>;
  
  return (
    <div>
      <h3>Tokenization Status</h3>
      {tokenizedFields.length > 0 ? (
        <ul>
          {tokenizedFields.map(field => (
            <li key={field}>
              <span className="badge badge-success">{field}</span>
              <span> - Protected</span>
            </li>
          ))}
        </ul>
      ) : (
        <p>No fields are currently tokenized</p>
      )}
    </div>
  );
}

CLI Tool

#!/bin/bash
# check-tokenization.sh

IDENTITY_ID=$1

if [ -z "$IDENTITY_ID" ]; then
  echo "Usage: ./check-tokenization.sh <identity_id>"
  exit 1
fi

RESPONSE=$(curl -s -X GET \
  "https://YOUR_BLNK_INSTANCE_URL/identities/$IDENTITY_ID/tokenized-fields" \
  -H "X-Blnk-Key: $BLNK_API_KEY")

TOKENIZED_COUNT=$(echo "$RESPONSE" | jq '.tokenized_fields | length')

echo "Identity: $IDENTITY_ID"
echo "Tokenized fields: $TOKENIZED_COUNT"
echo "$RESPONSE" | jq -r '.tokenized_fields[]'

Best Practices

  1. Check Before Operations: Always check tokenization status before tokenizing or detokenizing
  2. Cache Wisely: Cache the result for short periods (e.g., 5 minutes) to reduce API calls
  3. Handle Both Formats: Support both PascalCase and snake_case in your application
  4. Monitor Changes: Track changes to tokenization status over time
  5. Audit Regularly: Periodically audit all identities to ensure compliance
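Practice 2 can be implemented with a small in-memory TTL cache wrapped around the fetch. A sketch (the `fetchFn` parameter is injected so the wrapper stays self-contained; in practice you would pass the `getTokenizedFields` helper used in the examples above):

```javascript
// Simple in-memory TTL cache around a tokenized-fields fetch (5-minute default).
const tokenizedFieldsCache = new Map();

async function getTokenizedFieldsCached(identityId, fetchFn, ttlMs = 5 * 60 * 1000) {
  const entry = tokenizedFieldsCache.get(identityId);
  if (entry && Date.now() - entry.fetchedAt < ttlMs) {
    return entry.data; // still fresh, skip the API call
  }
  const data = await fetchFn(identityId);
  tokenizedFieldsCache.set(identityId, { data, fetchedAt: Date.now() });
  return data;
}
```

For multi-instance deployments, a shared store such as Redis with the same TTL would replace the `Map`; remember to invalidate the entry after tokenizing or detokenizing fields.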

Performance

  • Response Time: <50ms typically
  • Database Queries: 1 query to fetch identity
  • Caching: Safe to cache for short periods (5-15 minutes)
  • Rate Limiting: No special limits (subject to general API rate limits)
