DenseNet

densenet sub-module within the ketos.neural_networks module

This module provides classes to implement Dense Networks (DenseNets).

Contents:

ConvBlock class
DenseBlock class
TransitionBlock class
DenseNetArch class
DenseNetInterface class

class ketos.neural_networks.densenet.ConvBlock(growth_rate)[source]

Bases: tensorflow.python.keras.engine.training.Model

Convolutional Blocks used in the Dense Blocks.

Args:
growth_rate: int

The growth rate for the number of filters (i.e.: channels) between convolutional layers

call(inputs, training=False)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Arguments:

inputs: A tensor or list of tensors.

training: Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

mask: A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns:

A tensor if there is a single output, or a list of tensors if there is more than one output.

class ketos.neural_networks.densenet.DenseBlock(growth_rate, n_blocks)[source]

Bases: tensorflow.python.keras.engine.training.Model

Dense block for DenseNet architectures

Args:
growth_rate: int

The growth rate between blocks

n_blocks: int

The number of convolutional blocks within the dense block

call(inputs, training=False)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Arguments:

inputs: A tensor or list of tensors.

training: Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

mask: A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns:

A tensor if there is a single output, or a list of tensors if there is more than one output.
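The growth-rate arithmetic can be sketched as follows (an assumption based on the standard DenseNet design, in which each convolutional block emits growth_rate feature maps that are concatenated onto the running feature stack; verify against the ketos source for exact behaviour):

```python
# Sketch of dense-block channel growth. Assumes each ConvBlock adds
# growth_rate channels via concatenation, as in the original DenseNet
# design; illustrative only, not taken from the ketos implementation.
def dense_block_out_channels(in_channels, growth_rate, n_blocks):
    """Channel count leaving a dense block with n_blocks conv blocks."""
    return in_channels + n_blocks * growth_rate

print(dense_block_out_channels(64, 32, 6))  # 64 + 6 * 32 = 256
```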

class ketos.neural_networks.densenet.DenseNetArch(dense_blocks, growth_rate, compression_factor, n_classes, dropout_rate, pre_trained_base=None)[source]

Bases: tensorflow.python.keras.engine.training.Model

Implements a DenseNet architecture, building on top of Dense and Transition blocks

Args:
dense_blocks: list of ints

A list specifying the block sets and how many blocks each set contains. Example: [6, 12, 24, 16] will create a DenseNet with 4 block sets containing 6, 12, 24 and 16 dense blocks, with a total of 58 blocks.

growth_rate: int

The factor by which the number of filters (i.e.: channels) within each dense block grows.

compression_factor: float

The factor by which transition blocks reduce the number of filters (i.e.: channels) between dense blocks (between 0 and 1).

dropout_rate: float

The dropout rate (between 0 and 1) used in each transition block. Use 0 for no dropout.

n_classes: int

The number of classes. The output layer uses a Softmax activation and will contain this number of nodes, resulting in model outputs with this many values summing to 1.0.

pre_trained_base: instance of DenseNetArch

A pre-trained DenseNet model from which the dense blocks will be taken. Used by the clone_with_new_top method when creating a clone for transfer learning.
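As a rough sketch of how these parameters interact, the following traces channel counts through a network with the classic [6, 12, 24, 16] block layout. The initial channel count of 64 and the floor-rounding of the compression step are assumptions borrowed from the original DenseNet design, not read from the ketos source:

```python
import math

# Hypothetical trace of channel counts through a DenseNet-style network:
# each dense block set adds n_blocks * growth_rate channels, and a
# transition block between sets compresses channels by compression_factor.
# Illustrative arithmetic only; check the ketos source for exact values.
def channel_trace(dense_blocks, growth_rate, compression_factor, init_channels=64):
    channels = init_channels
    trace = []
    for i, n_blocks in enumerate(dense_blocks):
        channels += n_blocks * growth_rate        # growth inside the dense block set
        if i < len(dense_blocks) - 1:             # transition after all but the last set
            channels = math.floor(channels * compression_factor)
        trace.append(channels)
    return trace

print(channel_trace([6, 12, 24, 16], 32, 0.5))   # [128, 256, 512, 1024]
print(sum([6, 12, 24, 16]))                      # 58 dense blocks in total
```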

call(inputs, training=False)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Arguments:

inputs: A tensor or list of tensors.

training: Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

mask: A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns:

A tensor if there is a single output, or a list of tensors if there is more than one output.

clone_with_new_top(n_classes=None, freeze_base=True)[source]

Clone this instance but replace the original classification top with a new (untrained) one

Args:
n_classes: int

The number of classes the new classification top should output. If None (default), the original number of classes will be used.

freeze_base: bool

If True, the weights of the feature extraction base will be frozen (untrainable) in the new model.

Returns:
cloned_model: instance of DenseNetArch

The new model with the old feature extraction base and new classification top.
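The intent of freeze_base can be illustrated with a toy model of trainable flags. All names below (ToyLayer, the layer labels) are hypothetical stand-ins, not ketos internals:

```python
# Toy illustration of clone_with_new_top semantics: the feature-extraction
# base is reused (and optionally frozen), while the classification top is
# replaced by a fresh, trainable one. Purely illustrative; not ketos code.
class ToyLayer:
    def __init__(self, name):
        self.name = name
        self.trainable = True

def clone_with_new_top(base_layers, n_classes, freeze_base=True):
    if freeze_base:
        for layer in base_layers:
            layer.trainable = False           # base weights become untrainable
    top = ToyLayer(f"softmax_{n_classes}")    # new, untrained classification top
    return base_layers + [top]

model = clone_with_new_top([ToyLayer("init_conv"), ToyLayer("dense_block_0")], n_classes=5)
print([(layer.name, layer.trainable) for layer in model])
```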

freeze_block(block_ids)[source]

Freeze specific dense blocks

Args:
block_ids: list of ints

The block numbers to be frozen (starting from zero)

freeze_init_layer()[source]

Freeze the initial convolutional layer

freeze_top()[source]

Freeze the classification block

get_feature_extraction_base()[source]

Retrieve the feature extraction base (initial convolutional layer + dense blocks)

Returns:

list containing the feature extraction layers

unfreeze_block(block_ids)[source]

Unfreeze specific dense blocks

Args:
block_ids: list of ints

The block numbers to be unfrozen (starting from zero)

unfreeze_init_layer()[source]

Unfreeze the initial convolutional layer

unfreeze_top()[source]

Unfreeze the classification block

class ketos.neural_networks.densenet.DenseNetInterface(dense_blocks=[6, 12, 24, 16], growth_rate=32, compression_factor=0.5, n_classes=2, dropout_rate=0.2, optimizer=Adam ketos recipe, loss_function=CategoricalCrossentropy ketos recipe, metrics=[BinaryAccuracy ketos recipe, Precision ketos recipe, Recall ketos recipe])[source]

Bases: ketos.neural_networks.dev_utils.nn_interface.NNInterface

class ketos.neural_networks.densenet.TransitionBlock(n_channels, compression_factor, dropout_rate=0.2)[source]

Bases: tensorflow.python.keras.engine.training.Model

Transition Blocks for the DenseNet architecture

Args:
n_channels: int

The number of filters (i.e.: channels)

compression_factor: float

The compression factor used within the transition block (i.e.: the reduction of filters/channels from the previous dense block to the next)

dropout_rate: float

Dropout rate for the convolutional layer (between 0 and 1, use 0 for no dropout)
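The compression step can be sketched numerically. The floor-rounding below is an assumption carried over from the original DenseNet paper; check the ketos source for the exact rounding behaviour:

```python
import math

# Sketch of transition-block compression: the output channel count is the
# input count scaled by compression_factor, rounded down. Illustrative
# arithmetic only, not taken from the ketos implementation.
def transition_out_channels(n_channels, compression_factor):
    return math.floor(n_channels * compression_factor)

print(transition_out_channels(256, 0.5))  # 128
```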

call(inputs, training=False)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Arguments:

inputs: A tensor or list of tensors.

training: Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

mask: A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns:

A tensor if there is a single output, or a list of tensors if there is more than one output.