Train with LibAUC Trainer
Authors: Siqi Guo, Gang Li, Tianbao Yang
Introduction
This tutorial shows a two-stage workflow using the built-in LibAUC Trainer:
Pretrain a ResNet18 model with binary cross-entropy (BCE) training.
Load that checkpoint and optimize AUROC with AUCMLoss + PESG.
The Trainer CLI entry point is:
python -m libauc.trainer.run_trainer --config_file <path_to_yaml>
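Before launching a run, it can help to confirm that the YAML parses and contains the sections the configs in this tutorial use. A minimal PyYAML check (this assumes only the four top-level sections shown below, nothing libauc-specific):

import yaml

with open("ce_config.yaml") as f:  # or any path passed to --config_file
    cfg = yaml.safe_load(f)

# The configs in this tutorial use four top-level sections.
for section in ("dataset", "model", "metrics", "training"):
    assert section in cfg, f"missing section: {section}"
print(cfg["training"]["loss"], cfg["training"]["optimizer"])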
Step 1: CE pretraining config
Create a YAML file named ce_config.yaml:
dataset:
  name: cifar10
  eval_splits: [val, test]
  kwargs:
    imratio: 0.1
model:
  name: resnet18
  pretrained: false
  num_classes: 1
  in_channels: 3
metrics:
  - AUROC
training:
  project_name: libauc
  experiment_name: resnet18_ce_cifar10
  SEED: 2026
  epochs: 100
  batch_size: 128
  eval_batch_size: 256
  sampling_rate: 0.5
  num_workers: 0
  decay_epochs: [0.5, 0.75]
  loss: BCELoss
  optimizer: Adam
  optimizer_kwargs:
    lr: 1.0e-3
    weight_decay: 1.0e-4
  output_path: ./output
  resume_from_checkpoint: false
  save_checkpoint_every: 5
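A note on decay_epochs: the fractional values appear to mark learning-rate decay points as fractions of the total epoch budget (an interpretation based on the values, not a documented guarantee). Under that reading, with epochs: 100:

epochs = 100
decay_epochs = [0.5, 0.75]
# If the fractions index into the epoch budget, decays land at epochs 50 and 75.
milestones = [int(f * epochs) for f in decay_epochs]
print(milestones)  # [50, 75]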
Run CE pretraining
python -m libauc.trainer.run_trainer --config_file ce_config.yaml
After training finishes, the final checkpoint should appear at:
./output/resnet18_ce_cifar10/epoch_100.pt
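Before moving on to Step 2, you can verify the file loads with PyTorch. The exact payload layout (a bare state_dict vs. a wrapper dict with extra fields) is not specified here, so the sketch inspects keys rather than assuming a schema:

import torch

ckpt = torch.load("./output/resnet18_ce_cifar10/epoch_100.pt", map_location="cpu")
# The payload layout is an assumption: print top-level keys to see what was saved.
keys = list(ckpt.keys()) if isinstance(ckpt, dict) else [type(ckpt).__name__]
print(keys[:10])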
Step 2: AUCMLoss optimization config
Create a second YAML file named aucmloss_config.yaml:
dataset:
  name: cifar10
  eval_splits: [val, test]
  kwargs:
    imratio: 0.1
model:
  name: resnet18
  pretrained: true
  pretrained_path: "./output/resnet18_ce_cifar10/epoch_100.pt"
  num_classes: 1
  in_channels: 3
metrics:
  - AUROC
training:
  project_name: libauc
  experiment_name: resnet18_AUCMLoss_cifar10
  SEED: 2026
  epochs: 100
  batch_size: 128
  eval_batch_size: 256
  sampling_rate: 0.2
  num_workers: 0
  decay_epochs: [0.5, 0.75]
  loss: AUCMLoss
  optimizer: PESG
  output_path: ./output
  resume_from_checkpoint: false
  save_checkpoint_every: 5
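For intuition, the loss/optimizer pair in this config mirrors LibAUC's standard AUCM recipe. A minimal standalone sketch of that pairing follows; the torchvision backbone stands in for whatever model the Trainer builds, and PESG's keyword arguments have changed across LibAUC releases, so treat these values as representative rather than as the Trainer's internals:

import torch
from torchvision.models import resnet18
from libauc.losses import AUCMLoss
from libauc.optimizers import PESG

model = resnet18(num_classes=1)  # stand-in backbone, not the Trainer's exact model
loss_fn = AUCMLoss()
# Keyword arguments vary by LibAUC version; these are representative values.
optimizer = PESG(model.parameters(), loss_fn=loss_fn, lr=0.1,
                 margin=1.0, epoch_decay=0.003, weight_decay=1e-4)

# One training step: AUCMLoss expects scores in [0, 1], hence the sigmoid.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 2, (8, 1)).float()
loss = loss_fn(torch.sigmoid(model(x)), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()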
Run AUROC optimization
python -m libauc.trainer.run_trainer --config_file aucmloss_config.yaml
Expected outputs
CE stage checkpoints: ./output/resnet18_ce_cifar10/
AUCMLoss stage checkpoints: ./output/resnet18_AUCMLoss_cifar10/
Validation/test AUROC is logged by the Trainer's evaluation callbacks.
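The log format itself is not specified here, but if you export raw scores you can recompute the same metric offline with scikit-learn (the arrays below are placeholders, not Trainer output):

import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder labels and scores standing in for exported predictions.
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])
print("AUROC:", roc_auc_score(y_true, y_score))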
Tips
If you train on CPU, use a smaller batch_size (a quick device check is sketched after this list).
You can override selected training arguments from the CLI, e.g.:
python -m libauc.trainer.run_trainer --config_file aucmloss_config.yaml --epochs 20 --batch_size 64
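The device check referenced in the first tip, using plain PyTorch:

import torch

# CPU-only runs usually need smaller batches than the 128 used above.
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No GPU detected; consider lowering batch_size.")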