The bag-level prediction aggregated from all the instances in the input bag. Note that MIL_aggregation is not recommended for back-propagation, as processing every instance at once may exceed GPU memory limits.
Parameters:
bag_X (array-like, required) – data features for all instances from a bag with shape [number_of_instance, …].
model (pytorch model, required) – model that generates predictions (or more generally related outputs) from instance-level.
mode (str, required) – the stochastic pooling mode for MIL. Default: 'mean'.
tau (float, optional) – the temperature parameter for stochastic softmax (smoothed-max) pooling, default: 0.1.
device (torch.device, optional) – device for running the code. Default: None (use GPU if available).
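A minimal sketch of what such bag-level aggregation could look like; the function name, the set of pooling modes, and the smoothed-max formula here are illustrative assumptions, not the library's exact implementation:

```python
import torch

def mil_aggregation(bag_X, model, mode="mean", tau=0.1, device=None):
    # Illustrative sketch: run the model on every instance in the bag and
    # pool the instance-level outputs into a single bag-level prediction.
    if device is None:
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    X = torch.as_tensor(bag_X, dtype=torch.float32, device=device)
    preds = model(X)                       # shape: [number_of_instance, ...]
    if mode == "mean":
        return preds.mean(dim=0)
    if mode == "max":
        return preds.max(dim=0).values
    if mode == "softmax":                  # smoothed-max pooling, temperature tau
        weights = torch.softmax(preds / tau, dim=0)
        return (weights * preds).sum(dim=0)
    raise ValueError(f"unknown pooling mode: {mode}")
```

Because the whole bag passes through the model in one forward call, memory scales with the number of instances, which is why back-propagating through it is discouraged.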
Multiple-instance sampling for the stochastic pooling operations. It samples instances uniformly at random from each bag and applies the pooling calculation corresponding to the chosen pooling method.
Parameters:
bag_X (array-like, required) – data features for all instances from a bag with shape [number_of_instance, …].
model (pytorch model, required) – model that generates predictions (or more generally related outputs) from instance-level.
instance_batch_size (int, required) – the maximum instance batch size for each bag. Default: 4.
mode (str, required) – the stochastic pooling mode for MIL. Default: 'mean'.
tau (float, optional) – the temperature parameter for stochastic softmax (smoothed-max) pooling, default: 0.1.
device (torch.device, optional) – device for running the code. Default: None (use GPU if available).
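A sketch of the sampling step under the same assumptions as above (function name and pooling formulas are illustrative): sample up to instance_batch_size instances uniformly without replacement, then pool only that mini-batch, giving a memory-friendly stochastic estimate of the full-bag aggregation.

```python
import torch

def mil_sampling(bag_X, model, instance_batch_size=4, mode="mean",
                 tau=0.1, device=None):
    # Illustrative sketch: uniformly sample a mini-batch of instances from
    # the bag, then pool the predictions of the sampled instances only.
    if device is None:
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    X = torch.as_tensor(bag_X, dtype=torch.float32, device=device)
    n = X.shape[0]
    k = min(instance_batch_size, n)
    idx = torch.randperm(n, device=device)[:k]  # uniform, without replacement
    preds = model(X[idx])
    if mode == "mean":
        return preds.mean(dim=0)
    if mode == "softmax":                       # smoothed-max over the sample
        weights = torch.softmax(preds / tau, dim=0)
        return (weights * preds).sum(dim=0)
    raise ValueError(f"unknown pooling mode: {mode}")
```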
A scheduler base class that can be used to schedule any optimizer parameter groups.
Unlike the built-in PyTorch schedulers, this is intended to be consistently called:
- at the END of each epoch, before incrementing the epoch count, to calculate the next epoch's value
- at the END of each optimizer update, after incrementing the update count, to calculate the next update's value
Schedulers built on this base should aim to remain as stateless as possible (for simplicity). This family of schedulers avoids the confusion around the meaning of 'last_epoch' and the use of -1 values for special behaviour. All epoch and update counts must be tracked in the training code and explicitly passed to the schedulers on the corresponding step or step_update call.
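The calling convention above can be sketched as a minimal stateless base class; the class name and the linear-decay schedule used here are illustrative assumptions, not the library's actual implementation:

```python
import torch

class StatelessScheduler:
    # Illustrative sketch: the training code tracks epoch/update counts and
    # passes them in explicitly; the scheduler keeps no 'last_epoch' state.
    def __init__(self, optimizer, base_lr, total_epochs):
        self.optimizer = optimizer
        self.base_lr = base_lr
        self.total_epochs = total_epochs

    def get_lr(self, epoch):
        # Example schedule: simple linear decay over epochs.
        return self.base_lr * max(0.0, 1.0 - epoch / self.total_epochs)

    def step(self, epoch):
        # Call at the END of each epoch, BEFORE incrementing the epoch
        # count, to set the value used for the next epoch.
        lr = self.get_lr(epoch + 1)
        for group in self.optimizer.param_groups:
            group["lr"] = lr

    def step_update(self, num_updates):
        # Call at the END of each optimizer update, AFTER incrementing the
        # update count; subclasses with per-update schedules override this.
        pass

# Usage: the trainer owns the counters and passes them in each time.
model = torch.nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = StatelessScheduler(opt, base_lr=0.1, total_epochs=10)
sched.step(epoch=0)  # sets the learning rate for epoch 1
```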
The returned prediction is a 2D array; each row corresponds to all the candidates for one instance, and the ground-truth item is placed first. Example: ground-truth items [1, 2] with 2 negative items for each instance, [[3, 4], [5, 6]], give candidate rows [1, 3, 4] and [2, 5, 6].
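Under that convention, the candidate rows can be assembled as follows (build_candidate_rows is a hypothetical helper for illustration, not part of the library's API):

```python
import numpy as np

def build_candidate_rows(gt_items, neg_items):
    # Each row lists all candidates for one instance, with the
    # ground-truth item placed first, followed by its negative items.
    return np.array([[gt] + list(negs) for gt, negs in zip(gt_items, neg_items)])

rows = build_candidate_rows([1, 2], [[3, 4], [5, 6]])
# rows -> [[1, 3, 4], [2, 5, 6]]
```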