
VIDR Encoder & Decoder Classes and Associated Methods in modules.py

This section provides an overview of the VIDR Encoder and Decoder classes and their associated methods.

VIDR Encoder

Bases: Module

Variational Encoder for scVIDR (Single-Cell Variational Inference for Dose Response).

This class implements the encoder portion of a variational autoencoder (VAE), which encodes input data into a latent representation by learning mean and variance for reparameterization.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| eps | float | Small constant added to variance for numerical stability. |
| fclayers | Sequential | Fully connected layers with batch normalization, dropout, and activation. |
| encoder | Sequential | The full encoder model consisting of input and hidden layers. |
| mean | Linear | Linear layer to compute the mean of the latent distribution. |
| log_var | Linear | Linear layer to compute the log-variance of the latent distribution. |

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| input_dim | int | Dimensionality of the input data. | required |
| latent_dim | int | Dimensionality of the latent representation. | required |
| hidden_dim | int | Number of hidden units in the fully connected layers. | required |
| n_hidden_layers | int | Number of hidden layers. | required |
| momentum | float | Momentum parameter for batch normalization. | 0.01 |
| eps | float | Epsilon for numerical stability in batch normalization. | 0.001 |
| dropout_rate | float | Dropout rate for regularization. | 0.2 |
| reparam_eps | float | Epsilon added to variance during reparameterization. | 1e-4 |
| nonlin | callable | Non-linear activation function. | nn.LeakyReLU |

Methods:

| Name | Description |
| --- | --- |
| forward | Encodes the input data, computes mean and variance, and performs reparameterization. |

Source code in vidr/modules.py
import torch
from torch import nn
from torch.distributions import Normal


class VIDREncoder(nn.Module):
    '''Variational Encoder for scVIDR (Single-Cell Variational Inference for Dose Response).

    This class implements the encoder portion of a variational autoencoder (VAE),
    which encodes input data into a latent representation by learning mean and
    variance for reparameterization.

    Attributes:
        eps (float): Small constant added to variance for numerical stability.
        fclayers (nn.Sequential): Fully connected layers with batch normalization,
            dropout, and activation.
        encoder (nn.Sequential): The full encoder model consisting of input and hidden layers.
        mean (nn.Linear): Linear layer to compute the mean of the latent distribution.
        log_var (nn.Linear): Linear layer to compute the log-variance of the latent distribution.

    Args:
        input_dim (int): Dimensionality of the input data.
        latent_dim (int): Dimensionality of the latent representation.
        hidden_dim (int): Number of hidden units in the fully connected layers.
        n_hidden_layers (int): Number of hidden layers.
        momentum (float, optional): Momentum parameter for batch normalization. Defaults to 0.01.
        eps (float, optional): Epsilon for numerical stability in batch normalization. Defaults to 0.001.
        dropout_rate (float, optional): Dropout rate for regularization. Defaults to 0.2.
        reparam_eps (float, optional): Epsilon added to variance during reparameterization. Defaults to 1e-4.
        nonlin (callable, optional): Non-linear activation function. Defaults to nn.LeakyReLU.

    Methods:
        forward(inputs):
            Encodes the input data, computes mean and variance, and performs reparameterization.
    '''
    def __init__(
        self,
        input_dim: int,
        latent_dim: int,
        hidden_dim: int,
        n_hidden_layers: int,
        momentum: float = 0.01,
        eps: float = 0.001,
        dropout_rate: float = 0.2,
        reparam_eps: float = 1e-4,
        nonlin=nn.LeakyReLU,
    ):
        super(VIDREncoder, self).__init__()
        self.eps = reparam_eps
        self.fclayers = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim, momentum=momentum, eps=eps),
            nonlin(),
            nn.Dropout(p=dropout_rate),
        )

        # Encoder: input module followed by hidden layers
        modules = [
            nn.Sequential(
                nn.Linear(input_dim, hidden_dim),
                nn.BatchNorm1d(hidden_dim, momentum=momentum, eps=eps),
                nonlin(),
                nn.Dropout(p=dropout_rate),
            )
        ]

        # Add hidden fully connected layers.
        # Note: this appends the same fclayers instance each time, so the
        # added hidden layers share one set of weights.
        for _ in range(n_hidden_layers - 1):
            modules.append(self.fclayers)

        self.encoder = nn.Sequential(*modules)

        self.mean = nn.Linear(hidden_dim, latent_dim)

        self.log_var = nn.Linear(hidden_dim, latent_dim)

    def forward(self, inputs):
        """Forward pass of the encoder.

        Encodes the input data into a latent representation by
        computing the mean and variance, then sampling using 
        the reparameterization trick.

        Args:
            inputs (torch.Tensor): The input data.

        Returns:
            tuple: A tuple containing:
                - mean (torch.Tensor): The mean of the latent distribution.
                - var (torch.Tensor): The variance of the latent distribution.
                - latent_rep (torch.Tensor): The reparameterized latent representation.
        """
        # encode
        results = self.encoder(inputs)
        mean = self.mean(results)
        log_var = self.log_var(results)

        var = torch.exp(log_var) + self.eps

        # reparameterize
        latent_rep = Normal(mean, var.sqrt()).rsample()

        return mean, var, latent_rep
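
Before looking at `forward` in detail, here is a minimal usage sketch of `VIDREncoder`. The dimensions and the `vidr.modules` import path are illustrative assumptions, not values prescribed by scVIDR:

```python
import torch
from vidr.modules import VIDREncoder  # assumed import path

# Hypothetical sizes: 2000 input features, 100 latent dimensions.
encoder = VIDREncoder(input_dim=2000, latent_dim=100, hidden_dim=800, n_hidden_layers=2)
encoder.eval()  # put BatchNorm into inference mode for this demo

x = torch.randn(16, 2000)              # a batch of 16 cells
mean, var, z = encoder(x)
print(mean.shape, var.shape, z.shape)  # each is torch.Size([16, 100])
```

Because the sample is drawn with `rsample`, `z` stays differentiable and can be fed directly into a training loss.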

forward(inputs)

Forward pass of the encoder.

Encodes the input data into a latent representation by computing the mean and variance, then sampling using the reparameterization trick.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| inputs | Tensor | The input data. | required |

Returns:

| Type | Description |
| --- | --- |
| tuple | A tuple containing: mean (torch.Tensor), the mean of the latent distribution; var (torch.Tensor), the variance of the latent distribution; latent_rep (torch.Tensor), the reparameterized latent representation. |
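
The reparameterization trick is what keeps sampling differentiable. The sketch below illustrates it in isolation; the tensors and shapes are made up for the example:

```python
import torch
from torch.distributions import Normal

mean = torch.zeros(4, 3, requires_grad=True)
log_var = torch.zeros(4, 3, requires_grad=True)
std = (torch.exp(log_var) + 1e-4).sqrt()  # matches var = exp(log_var) + eps above

# rsample() draws z = mean + std * eps with eps ~ N(0, 1), so gradients
# flow back through mean and log_var rather than through the random draw.
z = Normal(mean, std).rsample()
z_manual = mean + std * torch.randn_like(std)  # the same trick written out

z.sum().backward()
print(mean.grad.shape)  # torch.Size([4, 3]): the sample is differentiable
```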

Source code in vidr/modules.py
def forward(self, inputs):
    """Forward pass of the encoder.

    Encodes the input data into a latent representation by
    computing the mean and variance, then sampling using 
    the reparameterization trick.

    Args:
        inputs (torch.Tensor): The input data.

    Returns:
        tuple: A tuple containing:
            - mean (torch.Tensor): The mean of the latent distribution.
            - var (torch.Tensor): The variance of the latent distribution.
            - latent_rep (torch.Tensor): The reparameterized latent representation.
    """
    # encode
    results = self.encoder(inputs)
    mean = self.mean(results)
    log_var = self.log_var(results)

    var = torch.exp(log_var) + self.eps

    # reparameterize
    latent_rep = Normal(mean, var.sqrt()).rsample()

    return mean, var, latent_rep

VIDR Decoder

Bases: Module

Variational Decoder for scVIDR (Single-Cell Variational Inference for Dose Response).

This class implements the decoder portion of a variational autoencoder (VAE), which decodes latent representations back into input space.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| fclayers | Sequential | Fully connected layers with batch normalization, dropout, and activation. |
| decoder | Sequential | The full decoder model consisting of latent input, hidden layers, and output layer. |

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| input_dim | int | Dimensionality of the output data. | required |
| latent_dim | int | Dimensionality of the latent representation. | required |
| hidden_dim | int | Number of hidden units in the fully connected layers. | required |
| n_hidden_layers | int | Number of hidden layers. | required |
| momentum | float | Momentum parameter for batch normalization. | 0.01 |
| eps | float | Epsilon for numerical stability in batch normalization. | 0.001 |
| dropout_rate | float | Dropout rate for regularization. | 0.2 |
| nonlin | callable | Non-linear activation function. | nn.LeakyReLU |

Methods:

| Name | Description |
| --- | --- |
| forward | Decodes the latent representation back into the input space. |

Source code in vidr/modules.py
class VIDRDecoder(nn.Module):
    ''' Variational Decoder for scVIDR (Single-Cell Variational Inference for Dose Response).

    This class implements the decoder portion of a variational autoencoder (VAE),
    which decodes latent representations back into input space.

    Attributes:
        fclayers (nn.Sequential): Fully connected layers with batch normalization,
            dropout, and activation.
        decoder (nn.Sequential): The full decoder model consisting of latent input, hidden layers,
            and output layer.

    Args:
        input_dim (int): Dimensionality of the output data.
        latent_dim (int): Dimensionality of the latent representation.
        hidden_dim (int): Number of hidden units in the fully connected layers.
        n_hidden_layers (int): Number of hidden layers.
        momentum (float, optional): Momentum parameter for batch normalization. Defaults to 0.01.
        eps (float, optional): Epsilon for numerical stability in batch normalization. Defaults to 0.001.
        dropout_rate (float, optional): Dropout rate for regularization. Defaults to 0.2.
        nonlin (callable, optional): Non-linear activation function. Defaults to nn.LeakyReLU.

    Methods:
        forward(latent_rep):
            Decodes the latent representation back into the input space.
    '''
    def __init__(
        self,
        input_dim: int,
        latent_dim: int,
        hidden_dim: int,
        n_hidden_layers: int,
        momentum: float = 0.01,
        eps: float = 0.001,
        dropout_rate: float = 0.2,
        nonlin=nn.LeakyReLU,
    ):
        super(VIDRDecoder, self).__init__()
        self.fclayers = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim, momentum=momentum, eps=eps),
            nonlin(),
            nn.Dropout(p=dropout_rate),
        )

        # Decoder: latent input module followed by hidden layers
        modules = [
            nn.Sequential(
                nn.Linear(latent_dim, hidden_dim),
                nn.BatchNorm1d(hidden_dim, momentum=momentum, eps=eps),
                nonlin(),
                nn.Dropout(p=dropout_rate),
            )
        ]

        # Add hidden fully connected layers.
        # Note: this appends the same fclayers instance each time, so the
        # added hidden layers share one set of weights.
        for _ in range(n_hidden_layers - 1):
            modules.append(self.fclayers)

        modules.append(nn.Linear(hidden_dim, input_dim))
        self.decoder = nn.Sequential(*modules)

    def forward(self, latent_rep):
        """Forward pass of the decoder.

        Decodes the latent representation back into
        the original input space.

        Args:
            latent_rep (torch.Tensor): The latent representation to decode.

        Returns:
            torch.Tensor: The reconstructed data.
        """
        # decode
        x_hat = self.decoder(latent_rep)
        return x_hat
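
As with the encoder, a minimal usage sketch of `VIDRDecoder` follows; the dimensions and the import path are illustrative assumptions:

```python
import torch
from vidr.modules import VIDRDecoder  # assumed import path

# Hypothetical sizes, chosen to mirror the encoder example above.
decoder = VIDRDecoder(input_dim=2000, latent_dim=100, hidden_dim=800, n_hidden_layers=2)
decoder.eval()

z = torch.randn(16, 100)  # e.g. latent samples produced by VIDREncoder
x_hat = decoder(z)
print(x_hat.shape)        # torch.Size([16, 2000]): back in input space
```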

forward(latent_rep)

Forward pass of the decoder.

Decodes the latent representation back into the original input space.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| latent_rep | Tensor | The latent representation to decode. | required |

Returns:

| Type | Description |
| --- | --- |
| torch.Tensor | The reconstructed data. |

Source code in vidr/modules.py
def forward(self, latent_rep):
    """Forward pass of the decoder.

    Decodes the latent representation back into
    the original input space.

    Args:
        latent_rep (torch.Tensor): The latent representation to decode.

    Returns:
        torch.Tensor: The reconstructed data.
    """
    # decode
    x_hat = self.decoder(latent_rep)
    return x_hat
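
Together, the two modules compose into a full VAE forward pass. The sketch below wires them up with a generic reconstruction-plus-KL objective; it illustrates how the pieces fit together and is not necessarily the loss scVIDR trains with:

```python
import torch
import torch.nn.functional as F
from vidr.modules import VIDREncoder, VIDRDecoder  # assumed import path

encoder = VIDREncoder(input_dim=2000, latent_dim=100, hidden_dim=800, n_hidden_layers=2)
decoder = VIDRDecoder(input_dim=2000, latent_dim=100, hidden_dim=800, n_hidden_layers=2)

x = torch.randn(16, 2000)
mean, var, z = encoder(x)  # encode and reparameterize
x_hat = decoder(z)         # decode back to input space

# Generic VAE objective: reconstruction error plus KL(q(z|x) || N(0, I)).
recon = F.mse_loss(x_hat, x, reduction="sum")
kl = -0.5 * torch.sum(1 + torch.log(var) - mean.pow(2) - var)
(recon + kl).backward()
```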