libauc.models
This module provides several popular deep neural network implementations, collected and adapted from public codebases. We recommend that users cite the original papers when using these models. Here is an overview of this module:
[Overview table: Model | Reference. The table entries were lost in extraction; the only recoverable entry is yuan2023libauc (our implementation).]
Please refer to the source code for more details about each implementation.
libauc.models.densenet
- class DenseNet(growth_rate=32, block_config=(6, 12, 24, 16), num_init_features=64, bn_size=4, drop_rate=0.0, num_classes=1, memory_efficient=False, last_activation=None)[source]
Densenet-BC model class, based on “Densely Connected Convolutional Networks”
- Parameters:
growth_rate (int) – how many filters to add each layer (k in paper)
block_config (list of 4 ints) – how many layers in each pooling block
num_init_features (int) – the number of filters to learn in the first convolution layer
bn_size (int) – multiplicative factor for the number of bottleneck layers (i.e. bn_size * k features in the bottleneck layer)
drop_rate (float) – dropout rate after each dense layer
num_classes (int) – number of classification classes
memory_efficient (bool) – if True, uses checkpointing, which is much more memory-efficient but slower. Default: False.
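To make the parameters above concrete, here is a minimal sketch of how channel counts evolve through a DenseNet-BC, assuming the standard scheme: each dense layer adds growth_rate channels, and each transition layer halves the channel count (compression 0.5). The function name is illustrative, not part of the library API.

```python
# Sketch (assumption: standard DenseNet-BC scheme) of feature-map counts.
def feature_counts(growth_rate=32, block_config=(6, 12, 24, 16), num_init_features=64):
    counts = []
    features = num_init_features
    for i, num_layers in enumerate(block_config):
        features += num_layers * growth_rate  # each dense layer adds k channels
        counts.append(features)
        if i != len(block_config) - 1:        # no transition after the last block
            features //= 2                    # transition layer halves channels
    return counts

# DenseNet-121 defaults: channel counts after each dense block
print(feature_counts())  # [256, 512, 1024, 1024]
```

With the default DenseNet-121 configuration, the classifier therefore sees 1024 input features.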
- densenet121(pretrained=False, progress=True, activations='relu', **kwargs)
Densenet-121 model from “Densely Connected Convolutional Networks”
- densenet161(pretrained=False, progress=True, activations='relu', **kwargs)
Densenet-161 model from “Densely Connected Convolutional Networks”
- densenet169(pretrained=False, progress=True, activations='relu', **kwargs)
Densenet-169 model from “Densely Connected Convolutional Networks”
- densenet201(pretrained=False, progress=True, activations='relu', **kwargs)
Densenet-201 model from “Densely Connected Convolutional Networks”
libauc.models.neumf
libauc.models.perceptron
libauc.models.resnet
- class ResNet(block, layers, num_classes=1, zero_init_residual=False, groups=1, width_per_group=64, replace_stride_with_dilation=None, norm_layer=None, last_activation=None)[source]
ResNet model class, based on “Deep Residual Learning for Image Recognition”
- resnet101(pretrained=False, progress=True, activations='relu', **kwargs)[source]
ResNet-101 model from “Deep Residual Learning for Image Recognition”
- resnet152(pretrained=False, progress=True, activations='relu', **kwargs)[source]
ResNet-152 model from “Deep Residual Learning for Image Recognition”
- resnet18(pretrained=False, progress=True, activations='relu', **kwargs)[source]
ResNet-18 model from “Deep Residual Learning for Image Recognition”
- resnet34(pretrained=False, progress=True, activations='relu', **kwargs)[source]
ResNet-34 model from “Deep Residual Learning for Image Recognition”
- resnet50(pretrained=False, progress=True, activations='relu', **kwargs)[source]
ResNet-50 model from “Deep Residual Learning for Image Recognition”
- resnext101_32x8d(pretrained=False, progress=True, activations='relu', **kwargs)[source]
ResNeXt-101 32x8d model from “Aggregated Residual Transformations for Deep Neural Networks”
- resnext50_32x4d(pretrained=False, progress=True, activations='relu', **kwargs)[source]
ResNeXt-50 32x4d model from “Aggregated Residual Transformations for Deep Neural Networks”
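The “32x4d” and “32x8d” suffixes encode groups × width_per_group. Assuming these models follow the torchvision-style Bottleneck (which this implementation appears to be adapted from), the width of the grouped 3x3 convolution is derived as follows; the function name is illustrative only.

```python
# Sketch (assumption: torchvision-style Bottleneck width rule):
#   width = int(planes * width_per_group / 64) * groups
def bottleneck_width(planes, groups=1, width_per_group=64):
    return int(planes * width_per_group / 64.0) * groups

# Last stage (planes=512) of each model:
print(bottleneck_width(512))                                # plain ResNet-50:    512
print(bottleneck_width(512, groups=32, width_per_group=4))  # ResNeXt-50 32x4d:  1024
print(bottleneck_width(512, groups=32, width_per_group=8))  # ResNeXt-101 32x8d: 2048
```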
- wide_resnet101_2(pretrained=False, progress=True, activations='relu', **kwargs)[source]
Wide ResNet-101-2 model from “Wide Residual Networks”
The model is the same as ResNet except that the number of channels in the bottleneck is twice as large in every block. The number of channels in the outer 1x1 convolutions stays the same; e.g., the last block in ResNet-50 has 2048-512-2048 channels, while in Wide ResNet-50-2 it has 2048-1024-2048.
- wide_resnet50_2(pretrained=False, progress=True, activations='relu', **kwargs)[source]
Wide ResNet-50-2 model from “Wide Residual Networks”
The model is the same as ResNet except that the number of channels in the bottleneck is twice as large in every block. The number of channels in the outer 1x1 convolutions stays the same; e.g., the last block in ResNet-50 has 2048-512-2048 channels, while in Wide ResNet-50-2 it has 2048-1024-2048.
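The 2048-512-2048 vs. 2048-1024-2048 channel counts above can be reproduced with a small arithmetic sketch, assuming a torchvision-style ResNet where the default width_per_group is 64 and the wide variants set it to 128 (the function name is illustrative, not library API).

```python
# Sketch (assumption: torchvision-style ResNet) of the last bottleneck
# stage's channel layout: (outer 1x1, inner 3x3, outer 1x1).
def last_block_channels(width_per_group=64, planes=512, expansion=4):
    inner = int(planes * width_per_group / 64.0)  # 3x3 conv width; doubles for wide variants
    outer = planes * expansion                    # outer 1x1 conv channels, unchanged
    return (outer, inner, outer)

print(last_block_channels())                     # ResNet-50:        (2048, 512, 2048)
print(last_block_channels(width_per_group=128))  # Wide ResNet-50-2: (2048, 1024, 2048)
```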