libauc.trainer
The trainer package provides a high-level training interface
for classification tasks, including YAML-driven training entry points,
callback-based loops, and dataset/model/loss/optimizer orchestration.
libauc.trainer package
libauc.trainer.helpers
libauc.trainer.run_trainer
libauc.trainer.run_gnn
libauc.trainer.core package
libauc.trainer.core.callbacks
- class CLICallback[source]
Console and Weights & Biases logging callback.
On on_train_begin it initialises a W&B run (silently falls back to console-only when W&B is not installed) and pretty-prints the full TrainingArguments config.
On on_epoch_end it:
- appends a structured entry to state.train_log;
- renders a progress bar (verbose=1) or a per-epoch line (verbose=2) to stdout;
- ships the flat log dict to W&B via wandb.log.
On on_train_end it prints a training summary (best validation and test scores) and calls wandb.finish().
- Parameters:
none — all configuration is read from TrainingArguments at runtime.
Note
W&B logging is silently disabled when wandb is not installed or when wandb.log raises an exception.
Example:
>>> trainer = Trainer(..., callbacks=[CLICallback()])
>>> trainer.train()
============================================================
{'batch_size': 128, 'epochs': 50, ...}
============================================================
Epoch [██████████████████············] 20/50 | Loss: 0.3241 | AUROC: 0.8712 | LR: 0.100000
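The silent fallback described in the note can be realised with a guarded import plus a guarded log call. A minimal sketch of that pattern (illustrative only, not the actual CLICallback source; safe_wandb_log is a hypothetical helper name):

try:
    import wandb  # optional dependency
except ImportError:
    wandb = None

def safe_wandb_log(log_dict):
    # Ship a flat metrics dict to W&B; degrade silently to console-only
    # when wandb is missing or wandb.log fails, as documented above.
    if wandb is None:
        return
    try:
        wandb.log(log_dict)
    except Exception:
        pass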
- on_epoch_begin(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the beginning of an epoch.
- on_epoch_end(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the end of an epoch.
- on_step_end(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the end of a training step.
- on_train_begin(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the beginning of training.
- on_train_end(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the end of training.
- class CallbackHandler(callbacks: List[TrainerCallback], model, optimizer, loss_fn)[source]
Multiplexer that owns a list of TrainerCallback instances and fans out every lifecycle event to each of them in registration order. CallbackHandler itself inherits from TrainerCallback so it can be used polymorphically, but its primary role is orchestration rather than providing hook implementations of its own.
- Parameters:
callbacks (list[TrainerCallback]) – Initial callback list.
model – The model being trained (forwarded to every callback via kwargs["model"]).
optimizer – The active optimizer (forwarded via kwargs["optimizer"]).
loss_fn – The active loss function (forwarded via kwargs["loss_fn"]).
Example:
>>> handler = CallbackHandler(
...     [CLICallback()],
...     model=model, optimizer=optimizer, loss_fn=loss_fn,
... )
>>> handler.on_train_begin(args, state)
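The fan-out itself amounts to a loop over the registered callbacks. A hypothetical sketch of how a single event is dispatched (not the actual CallbackHandler source):

def fire_event(callbacks, event, args, state, model, optimizer, loss_fn, **kwargs):
    # Invoke `event` on every callback in registration order, forwarding
    # the shared model/optimizer/loss_fn keyword arguments described above.
    for cb in callbacks:
        hook = getattr(cb, event)
        hook(args, state, model=model, optimizer=optimizer,
             loss_fn=loss_fn, **kwargs)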
- property callback_list
Get a string representation of all callbacks.
- on_epoch_begin(args: TrainingArguments, state: TrainerState)[source]
Event called at the beginning of an epoch.
- on_epoch_end(args: TrainingArguments, state: TrainerState, metrics, **kwargs)[source]
Event called at the end of an epoch.
- on_evaluate(args: TrainingArguments, state: TrainerState)[source]
Event called after an evaluation phase.
- on_init_end(args: TrainingArguments, state: TrainerState)[source]
Event called at the end of trainer initialization.
- on_log(args: TrainingArguments, state: TrainerState, logs)[source]
Event called after logging the last logs.
- on_predict(args: TrainingArguments, state: TrainerState, metrics)[source]
Event called after a successful prediction.
- on_prediction_step(args: TrainingArguments, state: TrainerState)[source]
Event called after a prediction step.
- on_save(args: TrainingArguments, state: TrainerState)[source]
Event called after a checkpoint save.
- on_step_begin(args: TrainingArguments, state: TrainerState)[source]
Event called at the beginning of a training step.
- on_step_end(args: TrainingArguments, state: TrainerState)[source]
Event called at the end of a training step.
- on_substep_end(args: TrainingArguments, state: TrainerState)[source]
Event called at the end of a substep during gradient accumulation.
- on_train_begin(args: TrainingArguments, state: TrainerState)[source]
Event called at the beginning of training.
- on_train_end(args: TrainingArguments, state: TrainerState)[source]
Event called at the end of training.
- class DefaultCallback[source]
Default callback with basic functionality.
- on_epoch_begin(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the beginning of an epoch.
- on_epoch_end(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the end of an epoch.
- on_step_end(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the end of a training step.
- class TrainerCallback[source]
Base class for training lifecycle callbacks.
Every method is a no-op by default, so subclasses only need to override the hooks they care about. Instances are registered with CallbackHandler, which calls each hook in registration order and forwards a consistent set of keyword arguments (model, optimizer, loss_fn, plus any extra kwargs the Trainer supplies for that event).
Lifecycle order during a typical training run:

on_init_end
on_train_begin
for each epoch:
    on_epoch_begin
    for each step:
        on_step_begin
        on_step_end
    on_evaluate
    on_epoch_end
    [on_save — called periodically inside the epoch loop]
on_train_end

All callback methods are optional and can be overridden in subclasses.
Example:
>>> class MyCallback(TrainerCallback):
...     def on_epoch_end(self, args, state, **kwargs):
...         print(f"Epoch {state.epoch} done, loss={kwargs['train_loss']:.4f}")
...
>>> trainer = Trainer(..., callbacks=[MyCallback()])
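The lifecycle order above corresponds to a driver loop of roughly the following shape. This is an illustrative sketch of the hook sequence only, not the Trainer's actual implementation (train_step and evaluate are stand-in callables):

def run(handler, args, state, train_loader, train_step, evaluate):
    # Illustrative hook order only; the real training loop does more.
    handler.on_init_end(args, state)
    handler.on_train_begin(args, state)
    for epoch in range(args.epochs):
        handler.on_epoch_begin(args, state)
        for batch in train_loader:
            handler.on_step_begin(args, state)
            train_step(batch)  # forward / backward / optimizer step
            handler.on_step_end(args, state)
        metrics = evaluate()
        handler.on_evaluate(args, state)
        handler.on_epoch_end(args, state, metrics)
        if (epoch + 1) % args.save_checkpoint_every == 0:
            handler.on_save(args, state)  # periodic checkpointing
    handler.on_train_end(args, state)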
- on_epoch_begin(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the beginning of an epoch.
- on_epoch_end(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the end of an epoch.
- on_evaluate(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called after an evaluation phase.
- on_init_end(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the end of trainer initialization.
- on_log(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called after logging the last logs.
- on_predict(args: TrainingArguments, state: TrainerState, metrics, **kwargs)[source]
Event called after a successful prediction.
- on_prediction_step(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called after a prediction step.
- on_save(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called after a checkpoint save.
- on_step_begin(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the beginning of a training step.
- on_step_end(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the end of a training step.
- on_substep_end(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the end of a substep during gradient accumulation.
- on_train_begin(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the beginning of training.
- on_train_end(args: TrainingArguments, state: TrainerState, **kwargs)[source]
Event called at the end of training.
libauc.trainer.core.trainer
- class Trainer(train_args: TrainingArguments, model_cfg: dict, train_dataset: Dataset, eval_dataset: List[Dataset] | None = None, metric: Callable[[Tensor, Tensor], Mapping[str, float]] | None = None, callbacks: List[TrainerCallback] | None = None)[source]
Full training loop for image-classification models supported by libauc.
Trainer wires together a model, an AUC-aware loss function, a libauc optimizer, dual/tri-sampled data loaders, and an optional evaluation pipeline behind a unified train() entry point. Progress is surfaced through a CallbackHandler so any number of TrainerCallback subclasses can observe or alter the training loop without touching Trainer internals.
The class is intentionally thin: heavy lifting (data loading, model construction, loss/optimizer instantiation) is delegated to private helpers so subclasses like GNNTrainer can override only the parts they need.
- Parameters:
train_args (TrainingArguments) – Fully populated training configuration produced by TrainingArguments.
model_cfg (dict) – Architecture config forwarded to _build_model(). Must contain at least a "name" key matching one of the registered architectures (resnet20, resnet18, densenet121).
train_dataset (Dataset) – PyTorch Dataset for the training split. Must expose a .targets attribute (list or array of labels).
eval_dataset (list[Dataset], optional) – One or more evaluation datasets. None disables evaluation (default: None).
metric (callable, optional) – (y_true, y_pred) -> dict[str, float] function returned by build_metric(). None disables metric computation (default: None).
callbacks (list[TrainerCallback], optional) – Callbacks invoked at every lifecycle hook. When None the handler is created with an empty list (default: None).
Example:
>>> from trainer.config.args import TrainingArguments
>>> from trainer.core.trainer import Trainer
>>> from trainer.core.callbacks import CLICallback
>>> train_args = TrainingArguments(
...     optimizer="PESG", optimizer_kwargs={"lr": 0.1},
...     loss="AUCMLoss", loss_kwargs={"margin": 1.0},
...     SEED=42, batch_size=128, eval_batch_size=128,
...     sampling_rate=0.5, epochs=50, decay_epochs=[],
...     num_workers=2, output_path="./output", num_tasks=1,
...     resume_from_checkpoint=False, save_checkpoint_every=5,
...     project_name="libauc", experiment_name="demo", verbose=1,
... )
>>> trainer = Trainer(
...     train_args=train_args,
...     model_cfg={"name": "resnet18", "num_classes": 1},
...     train_dataset=train_ds,
...     eval_dataset=[val_ds],
...     metric=metric_fn,
...     callbacks=[CLICallback()],
... )
>>> log = trainer.train()
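build_metric() is the documented way to obtain the metric callable used above; as a minimal hand-rolled substitute satisfying the same (y_true, y_pred) -> dict[str, float] contract, one could write the following sketch (the "auroc" key name and the sklearn dependency are assumptions, not mandated by the API):

from sklearn.metrics import roc_auc_score

def metric_fn(y_true, y_pred):
    # Flat dict of named scores, as Trainer expects.
    # "auroc" is an illustrative key name.
    return {"auroc": float(roc_auc_score(y_true, y_pred))}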
- evaluate(loader, model)[source]
Evaluate model on a given data loader.
- Parameters:
loader – Data loader for evaluation
model – Model to evaluate
- Returns:
Tuple of (dictionary of evaluation metrics, test_true, test_pred)
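A hedged usage sketch, assuming val_loader is a DataLoader you construct yourself and model is the (trained) model you want scored:

>>> metrics, y_true, y_pred = trainer.evaluate(val_loader, model)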
libauc.trainer.core.gnn_trainer
- class GNNTrainer(train_args: TrainingArguments, model_cfg: dict, train_dataset, eval_dataset: List | None = None, metric: Callable[[...], Mapping[str, float]] | None = None, callbacks: List[TrainerCallback] | None = None, decay_epochs: List[int] | None = None, decay_factor: float = 10.0, train_eval_dataset=None)[source]
Training loop for graph neural networks built with libauc’s GNN model zoo.
GNNTrainer extends Trainer with graph-aware overrides:
- _build_model() — looks up the requested GNN architecture in _GNN_REGISTRY and constructs it via libauc.models, then infers whether the model expects edge features (supports_edge_attr).
- _get_train_dataloader() / _get_eval_dataloader() — use torch_geometric.loader.DataLoader instead of the standard PyTorch one, while keeping the same DualSampler for positive/negative balancing.
- _forward() — dispatches to the correct GNN forward signature (with or without edge_attr).
- train() — adds optional learning-rate decay at specified epochs via optimizer.update_lr (see the sketch after this entry's example).
Supported GNN architectures: gcn, gin, gine, graphsage, gat, mpnn, deepergcn, pna
- Parameters:
train_args (TrainingArguments) – Training configuration.
model_cfg (dict) – GNN model configuration. Required key: name (one of the architectures listed above). Optional keys: num_tasks (default 1), emb_dim (default 256), num_layers (default 5), graph_pooling, dropout, atom_features_dims, bond_features_dims, act, norm, jk, v2 (GAT-only), aggr / t / learn_t / p / learn_p / block (DeeperGCN-only), pretrained (bool), pretrained_path (str).
train_dataset – PyG-compatible graph dataset (train split).
eval_dataset (list, optional) – PyG-compatible graph datasets for evaluation splits (default: None).
metric (callable, optional) – (y_true, y_pred) -> dict[str, float].
callbacks (list[TrainerCallback], optional) – Training callbacks.
decay_epochs (list[int], optional) – Epoch indices at which optimizer.update_lr(decay_factor=decay_factor) is called (default: no decay).
decay_factor (float) – LR divisor at each decay epoch (default: 10.0).
train_eval_dataset – Optional dataset for an unbiased train-split evaluation; falls back to train_dataset when None.
Example:
>>> trainer = GNNTrainer(
...     train_args=train_args,
...     model_cfg={"name": "gin", "num_tasks": 1, "emb_dim": 300},
...     train_dataset=train_ds,
...     eval_dataset=[val_ds, test_ds],
...     metric=metric_fn,
...     callbacks=[CLICallback()],
...     decay_epochs=[100, 150],
...     decay_factor=10.0,
... )
>>> log = trainer.train()
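The decay schedule reduces to a per-epoch membership check. An illustrative sketch of the rule (not GNNTrainer's exact code; maybe_decay_lr is a hypothetical helper name):

def maybe_decay_lr(optimizer, epoch, decay_epochs, decay_factor=10.0):
    # Divide the learning rate by decay_factor whenever the current
    # epoch index appears in decay_epochs (no decay when the list is empty).
    if decay_epochs and epoch in decay_epochs:
        optimizer.update_lr(decay_factor=decay_factor)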
- evaluate(loader, model)[source]
Override base Trainer.evaluate() to use the GNN forward pass.
- Parameters:
loader (PyG DataLoader)
model (GNN model)
- Return type:
(metrics_dict, y_true, y_pred)
- train()[source]
GNN training loop.
- Steps each epoch:
Optional LR decay if epoch is in decay_epochs.
Forward / backward over the training loader.
Evaluation on the training split (unbiased loader).
Evaluation on all registered eval loaders.
Callbacks and periodic checkpointing.
- Returns:
Training log produced by the state / callback system.
- Return type:
list
libauc.trainer.config package
libauc.trainer.config.args
- class TrainingArguments(**kwargs)[source]
Container for all hyperparameters and settings that govern a single training run.
All fields map one-to-one to keys in the training section of a YAML config file and can be overridden from the CLI via apply_cli_overrides.
- Parameters:
optimizer (str) – Name of the libauc optimizer class, e.g. "PESG", "PDSCA", "SOAP".
optimizer_kwargs (dict) – Extra keyword arguments forwarded verbatim to the optimizer constructor (e.g. lr, momentum, weight_decay).
loss (str) – Name of the loss-function class, e.g. "AUCMLoss", "CompositionalAUCLoss". Looked up first in libauc.losses, then torch.nn.
loss_kwargs (dict) – Extra keyword arguments forwarded verbatim to the loss constructor.
SEED (int) – Global random seed for NumPy, PyTorch and cuDNN (default: 42).
batch_size (int) – Mini-batch size for training (default: 128).
eval_batch_size (int) – Mini-batch size for evaluation (default: 128).
sampling_rate (float) – Positive-class sampling rate passed to DualSampler / TriSampler (default: 0.5).
epochs (int) – Total number of training epochs (default: 50).
decay_epochs (list) – Epoch indices (or fractional multiples of epochs) at which the learning rate / regulariser is decayed. Floats are converted to int(f * epochs) at construction time.
num_workers (int) – Number of DataLoader worker processes (default: 2).
output_path (str) – Root directory for checkpoints and logs (default: "./output").
num_tasks (int) – Number of output tasks / classes. 1 → binary; ≥ 3 → multi-label with TriSampler.
resume_from_checkpoint (bool) – Whether to resume from the latest checkpoint found in output_path/experiment_name (default: True).
save_checkpoint_every (int) – Save a checkpoint every N epochs (default: 5).
project_name (str) – Weights & Biases project name (default: "libauc").
experiment_name (str) – Weights & Biases run name; also used as the checkpoint sub-directory.
verbose (int) – Verbosity level. 0 = silent; 1 = progress bar; 2 = one line per epoch (default: 1).
Example:
>>> args = TrainingArguments(
...     optimizer="PESG",
...     optimizer_kwargs={"lr": 0.1, "momentum": 0.9},
...     loss="AUCMLoss",
...     loss_kwargs={"margin": 1.0},
...     SEED=42,
...     batch_size=128,
...     eval_batch_size=128,
...     sampling_rate=0.5,
...     epochs=50,
...     decay_epochs=[],
...     num_workers=2,
...     output_path="./output",
...     num_tasks=1,
...     resume_from_checkpoint=True,
...     save_checkpoint_every=5,
...     project_name="libauc",
...     experiment_name="my_experiment",
...     verbose=1,
... )
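The float handling in decay_epochs is worth a concrete illustration. A sketch of the documented conversion rule (floats become int(f * epochs); integers pass through unchanged):

epochs = 50
decay_epochs = [0.5, 0.75]
# Resolve fractional entries against the total epoch count.
resolved = [int(f * epochs) if isinstance(f, float) else f for f in decay_epochs]
assert resolved == [25, 37]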
- parse_defaultconfig(type_name: str, multilabel: bool = False, kwargs: dict = {})[source]
Resolve a loss or optimizer name to its canonical {optimizer, loss} configuration dict by looking up the corresponding spaces class.
The mapping covers every loss/optimizer pair supported by libauc (type_name → Space class):
- AUCMLoss / PESG → AUCMLossSpace (MultiLabelAUCMLossSpace when multilabel=True)
- CompositionalAUCLoss / PDSCA → CompositionalAUCLossSpace
- APLoss / SOAP → APLossSpace (mAPLossSpace when multilabel=True)
- pAUC_CVaR_Loss / SOPA / pAUCLoss mode SOPA → pAUC_CVaR_LossSpace (MultiLabel… variant)
- pAUC_DRO_Loss / SOPAs / pAUCLoss mode 1w → pAUC_DRO_LossSpace (MultiLabel… variant)
- tpAUC_KL_Loss / SOTAs / pAUCLoss mode 2w → tpAUC_KL_LossSpace (MultiLabel… variant)
- tpAUC_CVaR_loss / STACO → tpAUC_CVaR_lossSpace
- NDCGLoss / SONG → NDCGLossSpace
- CrossEntropyLoss / SGD → SGDSpace
- Adam → AdamSpace
- BCELoss → BCELossSpace
- Parameters:
type_name (str) – Loss or optimizer name to resolve.
multilabel (bool) – Use the multi-label space variant where one exists (default: False).
kwargs (dict) – Extra keyword arguments (default: {}).
- Returns:
{"optimizer": <optimizer_cfg>, "loss": <loss_cfg>} where each config is a dict with at least a "type" key and a "space" key containing the hyperparameter search space.
- Return type:
dict
- Raises:
ValueError – If type_name is not recognised.
Example:
>>> cfg = parse_defaultconfig("AUCMLoss", multilabel=False)
>>> cfg["optimizer"]["type"]
'PESG'
>>> cfg["loss"]["type"]
'AUCMLoss'
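Each returned config carries a "space" of hyperparameter specs with "default" values (see the space classes below), so a common pattern is to collapse a space to its defaults. The output shown assumes the AUCMLossSpace values listed below; specs without a "default" key (such as fixed 'mode' entries in some spaces) would need a guard:

>>> cfg = parse_defaultconfig("AUCMLoss")
>>> {k: v["default"] for k, v in cfg["optimizer"]["space"].items()}
{'epoch_decay': 0.002, 'lr': 0.1, 'momentum': 0.9, 'weight_decay': 1e-05}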
libauc.trainer.config.spaces
- class APLossSpace[source]
- loss = {'space': {'gamma': {'default': 0.9, 'val': (0.0, 1.0)}, 'margin': {'default': 1.0, 'val': [0.6, 0.8, 1.0]}}, 'type': 'APLoss'}
- optimizer = {'space': {'lr': {'default': 0.001, 'log': True, 'val': (0.0001, 0.1)}, 'momentum': {'default': 0.9, 'val': (0.8, 0.99)}, 'weight_decay': {'default': 1e-05, 'val': (0.0, 0.0002)}}, 'type': 'SOAP'}
- class AUCMLossSpace[source]
- loss = {'space': {'margin': {'default': 1.0, 'val': [0.6, 0.8, 1.0]}}, 'type': 'AUCMLoss'}
- optimizer = {'space': {'epoch_decay': {'default': 0.002, 'val': (0.0, 0.01)}, 'lr': {'default': 0.1, 'log': True, 'val': (0.0001, 0.1)}, 'momentum': {'default': 0.9, 'val': (0.8, 0.99)}, 'weight_decay': {'default': 1e-05, 'val': (0.0, 0.0002)}}, 'type': 'PESG'}
- class AdamSpace[source]
- loss = {'space': {}, 'type': 'CrossEntropyLoss'}
- optimizer = {'space': {'lr': {'default': 0.001, 'log': True, 'val': (0.0001, 0.1)}, 'weight_decay': {'default': 0, 'val': (0.0, 0.0002)}}, 'type': 'Adam'}
- class CompositionalAUCLossSpace[source]
- loss = {'space': {'k': {'default': 1, 'val': [1, 2, 4]}, 'margin': {'default': 1.0, 'val': [0.6, 0.8, 1.0]}}, 'type': 'CompositionalAUCLoss'}
- optimizer = {'space': {'epoch_decay': {'default': 0.002, 'val': (0.0, 0.01)}, 'lr': {'default': 0.1, 'log': True, 'val': (0.0001, 0.1)}, 'weight_decay': {'default': 1e-05, 'val': (0.0, 0.0002)}}, 'type': 'PDSCA'}
- class MultiLabelAUCMLossSpace[source]
- loss = {'space': {'margin': {'default': 1.0, 'val': [0.6, 0.8, 1.0]}}, 'type': 'MultiLabelAUCMLoss'}
- optimizer = {'space': {'epoch_decay': {'default': 0.002, 'val': (0.0, 0.01)}, 'lr': {'default': 0.1, 'log': True, 'val': (0.0001, 0.1)}, 'momentum': {'default': 0.9, 'val': (0.8, 0.99)}, 'weight_decay': {'default': 1e-05, 'val': (0.0, 0.0002)}}, 'type': 'PESG'}
- class MultiLabelpAUC_CVaR_LossSpace[source]
- loss = {'space': {'beta': {'val': 0.2}, 'eta': {'default': 0.1, 'log': True, 'val': (0.01, 10)}, 'margin': {'default': 1.0, 'val': [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]}, 'mode': {'val': 'SOPA'}}, 'type': 'MultiLabelpAUCLoss'}
- optimizer = {'space': {'lr': {'default': 0.001, 'log': True, 'val': (0.0001, 0.1)}, 'momentum': {'default': 0.9, 'val': (0.8, 0.99)}, 'weight_decay': {'default': 0, 'val': (0.0, 0.0002)}}, 'type': 'SOPA'}
- class MultiLabelpAUC_DRO_LossSpace[source]
- loss = {'space': {'Lambda': {'default': 1.0, 'log': True, 'val': (0.1, 10.0)}, 'gamma': {'default': 0.9, 'val': (0.0, 1.0)}, 'margin': {'default': 1.0, 'val': [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]}, 'mode': {'val': 'SOPAs'}}, 'type': 'MultiLabelpAUCLoss'}
- optimizer = {'space': {'lr': {'default': 0.001, 'log': True, 'val': (0.0001, 0.1)}, 'momentum': {'default': 0.9, 'val': (0.8, 0.99)}, 'weight_decay': {'default': 1e-05, 'val': (0.0, 0.0002)}}, 'type': 'SOPAs'}
- class MultiLabeltpAUC_KL_LossSpace[source]
- loss = {'space': {'Lambda': {'default': 1.0, 'log': True, 'val': (0.1, 10.0)}, 'gammas': {'default': (0.9, 0.9), 'val': [(0.1, 0.1), (0.5, 0.5), (0.9, 0.9)]}, 'margin': {'default': 1.0, 'val': [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]}, 'mode': {'val': 'SOTAs'}, 'tau': {'default': 1.0, 'log': True, 'val': (0.1, 10.0)}}, 'type': 'MultiLabelpAUCLoss'}
- optimizer = {'space': {'lr': {'default': 0.001, 'log': True, 'val': (0.0001, 0.1)}, 'momentum': {'default': 0.9, 'val': (0.8, 0.99)}, 'weight_decay': {'default': 0, 'val': (0.0, 0.0002)}}, 'type': 'SOTAs'}
- class NDCGLossSpace[source]
- loss = {'space': {'eta0': {'default': 0.01, 'log': True, 'val': (0.001, 0.1)}, 'gamma0': {'default': 0.9, 'val': (0.0, 1.0)}, 'gamma1': {'val': 0.9}, 'margin': {'default': 1.0, 'val': [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]}, 'sigmoid_alpha': {'default': 2.0, 'val': (1.0, 2.0)}}, 'type': 'NDCGLoss'}
- optimizer = {'space': {'lr': {'default': 0.1, 'log': True, 'val': (0.0001, 0.1)}, 'momentum': {'default': 0.9, 'val': (0.8, 0.99)}, 'weight_decay': {'default': 0, 'val': (0.0, 0.0002)}}, 'type': 'SONG'}
- class SGDSpace[source]
- loss = {'space': {}, 'type': 'CrossEntropyLoss'}
- optimizer = {'space': {'lr': {'default': 0.1, 'log': True, 'val': (0.0001, 0.1)}, 'momentum': {'default': 0, 'val': [0, 0.9]}, 'weight_decay': {'default': 0, 'val': (0.0, 0.0002)}}, 'type': 'SGD'}
- class mAPLossSpace[source]
- loss = {'space': {'gamma': {'default': 0.9, 'val': (0.0, 1.0)}, 'margin': {'default': 1.0, 'val': [0.6, 0.8, 1.0]}}, 'type': 'mAPLoss'}
- optimizer = {'space': {'lr': {'default': 0.001, 'log': True, 'val': (0.0001, 0.1)}, 'momentum': {'default': 0.9, 'val': (0.8, 0.99)}, 'weight_decay': {'default': 1e-05, 'val': (0.0, 0.0002)}}, 'type': 'SOAP'}
- class pAUC_CVaR_LossSpace[source]
- loss = {'space': {'beta': {'val': 0.2}, 'eta': {'default': 0.1, 'log': True, 'val': (0.01, 10)}, 'margin': {'default': 1.0, 'val': [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]}, 'mode': {'val': 'SOPA'}}, 'type': 'pAUCLoss'}
- optimizer = {'space': {'lr': {'default': 0.001, 'log': True, 'val': (0.0001, 0.1)}, 'momentum': {'default': 0.9, 'val': (0.8, 0.99)}, 'weight_decay': {'default': 0, 'val': (0.0, 0.0002)}}, 'type': 'SOPA'}
- class pAUC_DRO_LossSpace[source]
- loss = {'space': {'Lambda': {'default': 1.0, 'log': True, 'val': (0.1, 10.0)}, 'gamma': {'default': 0.9, 'val': (0.0, 1.0)}, 'margin': {'default': 1.0, 'val': [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]}, 'mode': {'val': 'SOPAs'}}, 'type': 'pAUCLoss'}
- optimizer = {'space': {'lr': {'default': 0.001, 'log': True, 'val': (0.0001, 0.1)}, 'momentum': {'default': 0.9, 'val': (0.8, 0.99)}, 'weight_decay': {'default': 1e-05, 'val': (0.0, 0.0002)}}, 'type': 'SOPAs'}
- class tpAUC_CVaR_lossSpace[source]
- loss = {'space': {'alpha': {'default': 0.1, 'log': True, 'val': (0.0001, 0.1)}, 'beta_0': {'default': 0.1, 'log': True, 'val': (0.0001, 0.1)}, 'beta_1': {'default': 0.1, 'log': True, 'val': (0.0001, 0.1)}, 'theta_0': {'default': 0.5, 'val': [0.3, 0.5, 0.7]}, 'theta_1': {'default': 0.5, 'val': [0.3, 0.5, 0.7]}, 'threshold': {'default': 0.5, 'val': [0.3, 0.5, 0.7]}}, 'type': 'tpAUC_CVaR_loss'}
- optimizer = {'space': {'lr': {'default': 0.001, 'log': True, 'val': (0.0001, 0.01)}, 'momentum': {'default': 0.9, 'val': (0.8, 0.99)}, 'weight_decay': {'default': 0, 'val': (0.0, 0.0002)}}, 'type': 'STACO'}
- class tpAUC_KL_LossSpace[source]
- loss = {'space': {'Lambda': {'default': 1.0, 'log': True, 'val': (0.1, 10.0)}, 'gammas': {'default': (0.9, 0.9), 'val': [(0.1, 0.1), (0.5, 0.5), (0.9, 0.9)]}, 'margin': {'default': 1.0, 'val': [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]}, 'mode': {'val': 'SOTAs'}, 'tau': {'default': 1.0, 'log': True, 'val': (0.1, 10.0)}}, 'type': 'pAUCLoss'}
- optimizer = {'space': {'lr': {'default': 0.001, 'log': True, 'val': (0.0001, 0.1)}, 'momentum': {'default': 0.9, 'val': (0.8, 0.99)}, 'weight_decay': {'default': 0, 'val': (0.0, 0.0002)}}, 'type': 'SOTAs'}
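The encoding appears consistent across all the space classes above: a tuple "val" is a continuous range (log-scaled when "log" is true), a list "val" is a categorical choice, and a scalar "val" is fixed. Assuming that reading (it is inferred from the literals above, not a documented contract), a sketch of a random sampler over such a space:

import math
import random

def sample_space(space):
    # Draw one configuration from a {name: spec} space dict.
    # tuple 'val' -> continuous range, list -> categorical, scalar -> fixed;
    # 'log': True -> log-uniform sampling over the range.
    cfg = {}
    for name, spec in space.items():
        val = spec["val"]
        if isinstance(val, tuple):
            lo, hi = val
            if spec.get("log"):
                cfg[name] = math.exp(random.uniform(math.log(lo), math.log(hi)))
            else:
                cfg[name] = random.uniform(lo, hi)
        elif isinstance(val, list):
            cfg[name] = random.choice(val)
        else:
            cfg[name] = val
    return cfg

>>> sample_space(AUCMLossSpace.loss["space"])  # e.g. {'margin': 0.8}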
libauc.trainer.data package
libauc.trainer.data.datasets
- class GraphDataset(name, root='dataset', transform=None, pre_transform=None, meta_dict=None)[source]
- class MedicalImageCSVDataset(csv_path: str, image_root: str, image_col: str, label_col: str, transform)[source]
General-purpose CSV-backed medical image dataset.
Expects a CSV with at least an image path column and a binary label column. Image paths in the CSV may be relative (resolved against image_root) or absolute.
- Parameters:
csv_path – Path to the metadata CSV.
image_root – Directory that image paths are resolved against when they are not absolute. Ignored for absolute paths.
image_col – Column name containing the image filename / path.
label_col – Column name containing the binary label (0 / 1).
transform – torchvision transform applied to each PIL image.
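A hedged usage sketch; the CSV path, image root, and column names here are hypothetical, and the transform is an arbitrary torchvision pipeline:

from torchvision import transforms

# Hypothetical CSV layout: "path" (image path) and "label" (0/1) columns.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
ds = MedicalImageCSVDataset(
    csv_path="metadata.csv",
    image_root="/data/images",
    image_col="path",
    label_col="label",
    transform=tfm,
)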
- load_dataset(name: str, splits: List[str], **kwargs) → Dataset[source]
Load a dataset by name and split.
- Parameters:
name – Dataset identifier (e.g. “catvsdog”, “chexpert”).
splits – Evaluation splits.
**kwargs – Extra dataset-specific keyword arguments from the config.
- Returns:
A torch.utils.data.Dataset whose __getitem__ yields (data, label, index) tuples, as expected by the Trainer.
TODO: Implement each dataset branch below.
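The (data, label, index) contract matters because libauc's sample-indexed losses (e.g. APLoss) typically consume the example index at each step. A minimal wrapper that adapts an ordinary (data, label) dataset to this convention (a sketch, not part of load_dataset):

from torch.utils.data import Dataset

class IndexedDataset(Dataset):
    # Wrap a (data, label) dataset so __getitem__ yields (data, label, index).
    def __init__(self, base):
        self.base = base
        # Forward .targets when present, as the Trainer requires it.
        if hasattr(base, "targets"):
            self.targets = base.targets

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        data, label = self.base[idx]
        return data, label, idx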