RF-DETR supports integration with popular experiment tracking and visualization platforms. You can enable one or more loggers by passing boolean flags to model.train(). A CSV logger is always active regardless of any flags. It requires no extra packages and writes all metrics to {output_dir}/metrics.csv on every validation step.
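Because the CSV log is always present, it is easy to inspect programmatically. A minimal sketch using only the standard library (the exact column names depend on which metrics were logged; check your own metrics.csv):

```python
import csv

def read_metrics(path):
    """Read the always-on CSV log into a list of row dicts.

    Each row maps column names (a step counter plus metric keys)
    to their string values as written by the CSV logger.
    """
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# rows = read_metrics("output/metrics.csv")
```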

Installation

TensorBoard, W&B, and MLflow all require the optional loggers extra:
pip install "rfdetr[loggers]"

Loggers

TensorBoard is a toolkit for visualizing and tracking training metrics locally. TensorBoard logging is enabled by default; pass tensorboard=False to disable it.
If the tensorboard package is not installed, training continues without error: a UserWarning is emitted and TensorBoard logging is skipped. Install rfdetr[loggers] to avoid this.
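The warn-and-continue behavior can be pictured with a small standalone sketch; maybe_enable_tensorboard is a hypothetical helper for illustration, not part of the RF-DETR API:

```python
import warnings

def maybe_enable_tensorboard(installed: bool) -> bool:
    # Mirrors the documented behavior: warn and fall back instead of raising.
    if not installed:
        warnings.warn(
            "tensorboard is not installed; TensorBoard logging is skipped",
            UserWarning,
        )
        return False
    return True
```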

Usage

from rfdetr import RFDETRMedium

model = RFDETRMedium()

model.train(
    dataset_dir="path/to/dataset",
    epochs=100,
    batch_size=4,
    grad_accum_steps=4,
    lr=1e-4,
    output_dir="output",
    # tensorboard=True is the default; pass tensorboard=False to disable
)

Viewing logs

Local environment:
tensorboard --logdir output
Then open http://localhost:6006/ in your browser.
Google Colab:
%load_ext tensorboard
%tensorboard --logdir output
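To confirm that event files were actually written before launching TensorBoard, a quick standard-library check can help (the default path is an assumption; match it to your output_dir):

```python
from pathlib import Path

def find_event_files(logdir="output"):
    # TensorBoard writers name their files events.out.tfevents.*
    return sorted(Path(logdir).rglob("events.out.tfevents.*"))
```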

Using multiple loggers

You can enable multiple logging systems simultaneously:
model.train(
    dataset_dir="path/to/dataset",
    epochs=100,
    tensorboard=True,
    wandb=True,
    mlflow=True,
    project="my-project",
    run="experiment-001",
)
This lets you leverage the strengths of different platforms:
  • TensorBoard: Local visualization and debugging
  • W&B: Cloud-based collaboration and experiment comparison
  • MLflow: Model registry and deployment tracking
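If you are unsure which backends are installed in a given environment, the flags can be derived at runtime. logger_flags below is a hypothetical helper, not part of RF-DETR:

```python
import importlib.util

def logger_flags():
    # Enable each backend only if its package can be imported.
    return {
        name: importlib.util.find_spec(name) is not None
        for name in ("tensorboard", "wandb", "mlflow")
    }

# model.train(dataset_dir="path/to/dataset", epochs=100, **logger_flags())
```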
clearml=True is accepted but has no effect in the current version. Use the ClearML SDK workaround shown above instead.

Attaching custom loggers

To attach a logger not supported by TrainConfig (for example Neptune or Comet), build it yourself and append it to trainer.loggers before calling trainer.fit:
from rfdetr.config import RFDETRMediumConfig, TrainConfig
from rfdetr.training import RFDETRModelModule, RFDETRDataModule, build_trainer

model_config = RFDETRMediumConfig(num_classes=10)
train_config = TrainConfig(
    dataset_dir="path/to/dataset",
    epochs=100,
    output_dir="output",
    tensorboard=True,  # built-in loggers still work
)

module = RFDETRModelModule(model_config, train_config)
datamodule = RFDETRDataModule(model_config, train_config)
trainer = build_trainer(train_config, model_config)

# Attach any additional PTL-compatible logger
from pytorch_lightning.loggers import CSVLogger  # example — use any PTL logger

trainer.loggers.append(CSVLogger(save_dir="output", name="extra"))

trainer.fit(module, datamodule)

Logged metrics reference

All active loggers receive the same set of metric keys:
| Key | When logged | Description |
| --- | --- | --- |
| train/loss | Every step / epoch | Total weighted training loss |
| train/&lt;term&gt; | Every step / epoch | Individual loss terms (e.g. train/loss_bbox) |
| val/loss | Each epoch | Validation loss (if compute_val_loss=True) |
| val/mAP_50_95 | Each eval epoch | COCO box mAP@[.50:.05:.95] |
| val/mAP_50 | Each eval epoch | COCO box mAP@0.50 |
| val/mAP_75 | Each eval epoch | COCO box mAP@0.75 |
| val/mAR | Each eval epoch | COCO mean average recall |
| val/ema_mAP_50_95 | Each eval epoch | EMA-model mAP@[.50:.05:.95] (if EMA active) |
| val/F1 | Each eval epoch | Macro F1 at best confidence threshold |
| val/precision | Each eval epoch | Precision at best F1 threshold |
| val/recall | Each eval epoch | Recall at best F1 threshold |
| val/AP/&lt;class&gt; | Each eval epoch | Per-class AP (if log_per_class_metrics=True) |
| val/segm_mAP_50_95 | Each eval epoch | Segmentation mAP (segmentation models only) |
| val/segm_mAP_50 | Each eval epoch | Segmentation mAP@0.50 (segmentation models only) |
| test/* | After test run | Mirror of val/* keys |
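Because every key carries a train/, val/, or test/ prefix, post-processing a flat metrics dict is straightforward. A small sketch:

```python
def split_by_prefix(metrics):
    # Group a flat {"val/mAP_50": ...} dict by its leading prefix.
    groups = {}
    for key, value in metrics.items():
        prefix = key.split("/", 1)[0]
        groups.setdefault(prefix, {})[key] = value
    return groups
```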
