Architectures
- class multiviewae.architectures.Encoder(input_dim, z_dim, non_linear, bias, enc_dist, **kwargs)[source]
Configurable convolutional encoder.
- Parameters
input_dim (list) – Dimensionality of the input data.
z_dim (int) – Number of latent dimensions.
non_linear (bool) – Whether to include a ReLU() function between layers.
bias (bool) – Whether to include a bias term in hidden layers.
enc_dist (multiviewae.base.distributions.Normal, multiviewae.base.distributions.MultivariateNormal) – Encoder distribution.
num_filters (list[int]) – Number of filters for each convolutional layer.
input_shape (list[int]) – Input shape to the first convolutional layer.
kernel_size (list[int]) – Kernel size of each convolutional layer.
stride (list[int]) – Stride of each convolutional layer.
padding (list[int]) – Padding added to the input of each convolutional layer.
padding_mode (str) – Padding mode of the convolutional layers (e.g. 'zeros').
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
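The convolutional hyperparameters above interact through the standard output-size formula, floor((n + 2p - k) / s) + 1, applied per layer. A stdlib sketch (the helper name `conv_out_shapes` is illustrative, not part of the library):

```python
import math

def conv_out_shapes(input_size, kernel_size, stride, padding):
    """Illustrative helper: spatial size after each conv layer,
    using floor((n + 2p - k) / s) + 1 per layer."""
    n = input_size
    shapes = []
    for k, s, p in zip(kernel_size, stride, padding):
        n = math.floor((n + 2 * p - k) / s) + 1
        shapes.append(n)
    return shapes

# e.g. a 28-pixel-wide input through two 3x3 convs, stride 2, padding 1:
print(conv_out_shapes(28, kernel_size=[3, 3], stride=[2, 2], padding=[1, 1]))
# → [14, 7]
```

This is useful for checking that the `num_filters`, `kernel_size`, `stride`, and `padding` lists you pass produce a valid (positive) spatial size at every layer.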
- class multiviewae.architectures.VariationalEncoder(input_dim, z_dim, non_linear, bias, sparse, log_alpha, enc_dist, **kwargs)[source]
MLP Variational Encoder
- Parameters
input_dim (list) – Dimensionality of the input data.
z_dim (int) – Number of latent dimensions.
non_linear (bool) – Whether to include a ReLU() function between layers.
bias (bool) – Whether to include a bias term in hidden layers.
sparse (bool) – Whether to enforce sparsity of the encoding distribution.
log_alpha (float) – Log of the dropout parameter.
enc_dist (multiviewae.base.distributions.Normal, multiviewae.base.distributions.MultivariateNormal) – Encoder distribution.
- forward(x)[source]
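A variational encoder outputs the parameters of the encoding distribution rather than a point estimate, and samples from it with the reparameterization trick. A minimal numpy sketch of that sampling step (names here are illustrative, not the library's API):

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I),
    where sigma = exp(0.5 * logvar)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

rng = np.random.default_rng(0)
mu = np.zeros((4, 2))       # batch of 4, z_dim = 2
logvar = np.zeros((4, 2))   # unit variance
z = reparameterize(mu, logvar, rng)
print(z.shape)  # → (4, 2)
```

Writing the sample as a deterministic function of (mu, logvar) plus external noise is what keeps the operation differentiable with respect to the encoder's outputs.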
- class multiviewae.architectures.ConditionalVariationalEncoder(input_dim, z_dim, hidden_layer_dim, non_linear, bias, sparse, log_alpha, enc_dist, num_cat, one_hot, multiple_latents=False, u_dim=None, w_dim=None)[source]
MLP Conditional Variational Encoder
- Parameters
input_dim (list) – Dimensionality of the input data.
z_dim (int) – Number of latent dimensions.
hidden_layer_dim (list) – Number of nodes per hidden layer.
non_linear (bool) – Whether to include a ReLU() function between layers.
bias (bool) – Whether to include a bias term in hidden layers.
sparse (bool) – Whether to enforce sparsity of the encoding distribution.
log_alpha (float) – Log of the dropout parameter.
enc_dist (multiviewae.base.distributions.Normal, multiviewae.base.distributions.MultivariateNormal) – Encoder distribution.
num_cat (int) – Number of categories of the labels.
one_hot (bool) – Whether to one-hot encode the labels.
multiple_latents (bool, optional) – Whether the model uses separate linear layers for the shared and private latent spaces.
u_dim (int, optional) – Dimensionality of the shared latent space.
w_dim (int, optional) – Dimensionality of the private latent space.
- forward(x)[source]
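Conditioning on labels with `num_cat` and `one_hot` amounts to mapping integer labels to one-hot vectors and appending them to the encoder input. A stdlib/numpy sketch of that preprocessing (the helper name is illustrative, not part of the library):

```python
import numpy as np

def one_hot_encode(labels, num_cat):
    """Map integer class labels to one-hot vectors with num_cat columns."""
    labels = np.asarray(labels)
    out = np.zeros((labels.shape[0], num_cat))
    out[np.arange(labels.shape[0]), labels] = 1.0
    return out

y = one_hot_encode([0, 2, 1], num_cat=3)
x = np.random.default_rng(0).standard_normal((3, 5))
xy = np.concatenate([x, y], axis=1)   # conditioning: append labels to input
print(xy.shape)  # → (3, 8)
```

The concatenated input is then fed through the MLP as usual, so the label information is available to every subsequent layer.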
- class multiviewae.architectures.Decoder(input_dim, z_dim, non_linear, bias, dec_dist, init_logvar=None, **kwargs)[source]
MLP Decoder
- Parameters
input_dim (list) – Dimensionality of the input data.
z_dim (int) – Number of latent dimensions.
non_linear (bool) – Whether to include a ReLU() function between layers.
bias (bool) – Whether to include a bias term in hidden layers.
dec_dist (multiviewae.base.distributions.Normal, multiviewae.base.distributions.MultivariateNormal) – Decoder distribution.
init_logvar (int, float, optional) – Initial value for log variance of decoder.
- forward(z)[source]
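The `init_logvar` parameter sets the initial log variance of the decoder's output distribution; the corresponding standard deviation follows as sigma = exp(0.5 * logvar). A one-line numpy sketch of that relationship:

```python
import numpy as np

# init_logvar fixes the output distribution's initial log variance;
# the standard deviation is sigma = exp(0.5 * logvar).
init_logvar = -3.0
sigma = np.exp(0.5 * init_logvar)
print(round(float(sigma), 4))  # → 0.2231
```

A more negative `init_logvar` therefore starts training with a tighter (more confident) output distribution.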
- class multiviewae.architectures.VariationalDecoder(input_dim, z_dim, hidden_layer_dim, bias, non_linear, init_logvar, dec_dist)[source]
MLP Variational Decoder
- Parameters
input_dim (list) – Dimensionality of the input data.
z_dim (int) – Number of latent dimensions.
hidden_layer_dim (list) – Number of nodes per hidden layer. The layer order is reversed e.g. [100, 50, 5] becomes [5, 50, 100].
non_linear (bool) – Whether to include a ReLU() function between layers.
bias (bool) – Whether to include a bias term in hidden layers.
init_logvar (int, float) – Initial value for log variance of decoder.
dec_dist (multiviewae.base.distributions.Normal, multiviewae.base.distributions.MultivariateNormal) – Decoder distribution.
- forward(z)[source]
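The reversal of `hidden_layer_dim` means the decoder mirrors the encoder: the network widens from `z_dim` back out to `input_dim`. A stdlib sketch of the resulting (in, out) layer sizes (the helper name is illustrative, not the library's API):

```python
def decoder_layer_sizes(z_dim, hidden_layer_dim, input_dim):
    """(in, out) pairs for an MLP decoder: the hidden_layer_dim list
    is reversed, e.g. [100, 50, 5] becomes [5, 50, 100]."""
    dims = [z_dim] + list(reversed(hidden_layer_dim)) + [input_dim]
    return list(zip(dims[:-1], dims[1:]))

print(decoder_layer_sizes(2, [100, 50, 5], 784))
# → [(2, 5), (5, 50), (50, 100), (100, 784)]
```

Passing the same `hidden_layer_dim` to encoder and decoder thus yields a symmetric architecture without manually reversing the list.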
- class multiviewae.architectures.ConditionalVariationalDecoder(input_dim, z_dim, hidden_layer_dim, bias, non_linear, init_logvar, dec_dist, num_cat, one_hot)[source]
MLP Conditional Variational Decoder
- Parameters
input_dim (list) – Dimensionality of the input data.
z_dim (int) – Number of latent dimensions.
hidden_layer_dim (list) – Number of nodes per hidden layer. The layer order is reversed e.g. [100, 50, 5] becomes [5, 50, 100].
non_linear (bool) – Whether to include a ReLU() function between layers.
bias (bool) – Whether to include a bias term in hidden layers.
init_logvar (int, float) – Initial value for log variance of decoder.
dec_dist (multiviewae.base.distributions.Normal, multiviewae.base.distributions.MultivariateNormal) – Decoder distribution.
num_cat (int) – Number of categories of the labels.
one_hot (bool) – Whether to one-hot encode the labels.
- forward(z)[source]
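On the decoder side, conditioning means the label information is appended to the latent sample before it is decoded. A numpy sketch of that step (names are illustrative, not the library's API):

```python
import numpy as np

def condition_latent(z, labels, num_cat):
    """Append one-hot labels to the latent sample before decoding."""
    y = np.zeros((len(labels), num_cat))
    y[np.arange(len(labels)), labels] = 1.0
    return np.concatenate([z, y], axis=1)

z = np.zeros((2, 4))                       # batch of 2, z_dim = 4
zy = condition_latent(z, [1, 0], num_cat=3)
print(zy.shape)  # → (2, 7)
```

Because the decoder sees the label alongside z, the latent space is free to capture label-independent variation, and generation can be steered by choosing the label at decode time.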
- class multiviewae.architectures.Discriminator(input_dim, output_dim, hidden_layer_dim, non_linear, bias, dropout_threshold, is_wasserstein)[source]
MLP Discriminator
- Parameters
input_dim (list) – Dimensionality of the input data.
output_dim (int) – Number of output dimensions.
hidden_layer_dim (list) – Number of nodes per hidden layer.
non_linear (bool) – Whether to include a ReLU() function between layers.
bias (bool) – Whether to include a bias term in hidden layers.
dropout_threshold (float) – Dropout threshold of layers.
is_wasserstein (bool) – Whether the model employs a Wasserstein loss.
- forward(x)[source]
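The `is_wasserstein` flag changes what the final layer produces: a standard (GAN-style) discriminator squashes its logits to probabilities, whereas a Wasserstein critic returns unbounded scores. A numpy sketch of the two output heads (the function name is illustrative, not the library's API):

```python
import numpy as np

def discriminator_head(logits, is_wasserstein):
    """Standard discriminator: sigmoid probabilities in (0, 1).
    Wasserstein critic: raw, unbounded scores."""
    if is_wasserstein:
        return logits
    return 1.0 / (1.0 + np.exp(-logits))

scores = np.array([-2.0, 0.0, 2.0])
print(discriminator_head(scores, is_wasserstein=False))  # probabilities
print(discriminator_head(scores, is_wasserstein=True))   # raw scores
```

Leaving the output unbounded is what allows the critic's scores to approximate the Wasserstein distance under the usual Lipschitz constraint on the critic.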