In some situations, you may need to modify the observability level of your tracing. For instance, you may want to keep sensitive information from being logged for security reasons, or you may want to limit the size of the base64 encoded images logged to reduce payload size.

Environment Variables

The OpenInference Specification defines a set of environment variables you can configure to suit your observability needs:
| Environment Variable Name | Effect | Type | Default |
| --- | --- | --- | --- |
| OPENINFERENCE_HIDE_LLM_INVOCATION_PARAMETERS | Hides LLM invocation parameters (independent of input/output hiding) | bool | False |
| OPENINFERENCE_HIDE_INPUTS | Hides input.value and all input messages (input messages are hidden if either HIDE_INPUTS or HIDE_INPUT_MESSAGES is true) | bool | False |
| OPENINFERENCE_HIDE_OUTPUTS | Hides output.value and all output messages (output messages are hidden if either HIDE_OUTPUTS or HIDE_OUTPUT_MESSAGES is true) | bool | False |
| OPENINFERENCE_HIDE_INPUT_MESSAGES | Hides all input messages (independent of HIDE_INPUTS) | bool | False |
| OPENINFERENCE_HIDE_OUTPUT_MESSAGES | Hides all output messages (independent of HIDE_OUTPUTS) | bool | False |
| OPENINFERENCE_HIDE_INPUT_IMAGES | Hides images from input messages (only applies when input messages are not already hidden) | bool | False |
| OPENINFERENCE_HIDE_INPUT_TEXT | Hides text from input messages (only applies when input messages are not already hidden) | bool | False |
| OPENINFERENCE_HIDE_PROMPTS | Hides LLM prompts (completions API) | bool | False |
| OPENINFERENCE_HIDE_OUTPUT_TEXT | Hides text from output messages (only applies when output messages are not already hidden) | bool | False |
| OPENINFERENCE_HIDE_CHOICES | Hides LLM choices (completions API outputs) | bool | False |
| OPENINFERENCE_HIDE_EMBEDDING_VECTORS | Deprecated: use OPENINFERENCE_HIDE_EMBEDDINGS_VECTORS | bool | False |
| OPENINFERENCE_HIDE_EMBEDDINGS_VECTORS | Replaces embedding.embeddings.*.embedding.vector values with "__REDACTED__" | bool | False |
| OPENINFERENCE_HIDE_EMBEDDINGS_TEXT | Replaces embedding.embeddings.*.embedding.text values with "__REDACTED__" | bool | False |
| OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH | Limits characters of a base64 encoding of an image | int | 32,000 |
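For example, a deployment that wants to hide user inputs and cap image payloads could export the variables before starting the application. The specific values here are illustrative, not recommendations:

```shell
export OPENINFERENCE_HIDE_INPUTS=true
export OPENINFERENCE_HIDE_OUTPUT_MESSAGES=true
export OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH=10000
```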

Redacted Content

When content is hidden due to privacy configuration settings, the value "__REDACTED__" is used as a placeholder. This constant value allows consumers of the trace data to identify that content was intentionally hidden rather than missing or empty.
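A consumer of trace data can therefore distinguish redacted attributes from genuinely empty ones by comparing against this constant. The sketch below is a hypothetical consumer-side check, not part of the OpenInference API; the attribute names and values are illustrative:

```python
REDACTED_VALUE = "__REDACTED__"

def is_redacted(value):
    """Return True if a span attribute was intentionally hidden."""
    return value == REDACTED_VALUE

# Hypothetical span attributes as a trace consumer might receive them
attributes = {
    "input.value": "__REDACTED__",
    "output.value": "The capital of France is Paris.",
}

hidden = [key for key, value in attributes.items() if is_redacted(value)]
# hidden == ["input.value"]
```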

Usage

To set up this configuration you can either:
  • Set environment variables as specified above
  • Define the configuration in code as shown below
  • Do nothing and fall back to the default values
  • Use a combination of the three; the order of precedence is:
    • Values set in the TraceConfig in code
    • Environment variables
    • Default values
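The precedence rule can be sketched as a small resolver. Note that `resolve` is an illustrative helper written for this example, not a function exposed by the OpenInference library:

```python
import os

def resolve(code_value, env_var, default):
    """Resolve one boolean setting: code value wins, then the
    environment variable, then the default."""
    if code_value is not None:
        return code_value
    env = os.environ.get(env_var)
    if env is not None:
        return env.lower() == "true"
    return default

os.environ["OPENINFERENCE_HIDE_INPUTS"] = "true"

# An explicit code value takes precedence over the environment variable
in_code = resolve(False, "OPENINFERENCE_HIDE_INPUTS", False)   # False

# With no code value, the environment variable applies
from_env = resolve(None, "OPENINFERENCE_HIDE_INPUTS", False)   # True
```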

Python Configuration

If you are working in Python and want a configuration different from the default, define it in code as shown below and pass it to the instrument() method of your instrumentor:
from openinference.instrumentation import TraceConfig

config = TraceConfig(
    hide_llm_invocation_parameters=False,
    hide_inputs=False,
    hide_outputs=False,
    hide_input_messages=False,
    hide_output_messages=False,
    hide_input_images=False,
    hide_input_text=False,
    hide_output_text=False,
    hide_embeddings_vectors=False,
    hide_embeddings_text=False,
    base64_image_max_length=32000,
    hide_prompts=False,  # Hides LLM prompts (completions API)
    hide_choices=False,  # Hides LLM choices (completions API outputs)
)

from openinference.instrumentation.openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument(
    tracer_provider=tracer_provider,
    config=config,
)

JavaScript Configuration

If you are working in JavaScript and want a configuration different from the default, define it as shown below and pass it into any OpenInference instrumentation:
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai"

/**
 * Everything left out of here will fallback to
 * environment variables then defaults
 */
const traceConfig = { 
  hideInputs: true,
  hideOutputs: false,
  hideInputMessages: false,
  hideOutputMessages: false,
  hideInputImages: false,
  hideInputText: false,
  hideOutputText: false,
  hideEmbeddingsVectors: false,
  hideEmbeddingsText: false,
  base64ImageMaxLength: 32000
}

const instrumentation = new OpenAIInstrumentation({ traceConfig })

Privacy Best Practices

When working with sensitive data, always configure privacy controls before instrumenting your application to prevent accidental exposure of personal information.

Common Privacy Scenarios

Hide all user inputs and outputs:
from openinference.instrumentation import TraceConfig

config = TraceConfig(
    hide_inputs=True,
    hide_outputs=True,
)
Hide only message content, keep metadata:
config = TraceConfig(
    hide_input_messages=True,
    hide_output_messages=True,
)
Hide embeddings but keep model parameters:
config = TraceConfig(
    hide_embeddings_vectors=True,
    hide_embeddings_text=True,
)
Limit image data size:
config = TraceConfig(
    base64_image_max_length=10000,  # Smaller limit for bandwidth
)
