VIDR Encoder & Decoder Classes and Associated Methods in modules.py
This section provides an overview of the VIDR Encoder and Decoder classes and their associated methods.
VIDR Encoder
Bases: Module
Variational Encoder for scVIDR (Single-Cell Variational Inference for Dose Response).
This class implements the encoder portion of a variational autoencoder (VAE): it maps input data to a latent representation by learning the mean and variance used for reparameterization. A usage sketch follows the tables below.
Attributes:

| Name | Type | Description |
|---|---|---|
| `eps` | `float` | Small constant added to the variance for numerical stability. |
| `fclayers` | `Sequential` | Fully connected layers with batch normalization, dropout, and activation. |
| `encoder` | `Sequential` | The full encoder model, consisting of the input and hidden layers. |
| `mean` | `Linear` | Linear layer that computes the mean of the latent distribution. |
| `log_var` | `Linear` | Linear layer that computes the log-variance of the latent distribution. |
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_dim` | `int` | Dimensionality of the input data. | *required* |
| `latent_dim` | `int` | Dimensionality of the latent representation. | *required* |
| `hidden_dim` | `int` | Number of hidden units in the fully connected layers. | *required* |
| `n_hidden_layers` | `int` | Number of hidden layers. | *required* |
| `momentum` | `float` | Momentum parameter for batch normalization. | `0.01` |
| `eps` | `float` | Epsilon for numerical stability in batch normalization. | `0.001` |
| `dropout_rate` | `float` | Dropout rate for regularization. | `0.2` |
| `reparam_eps` | `float` | Epsilon added to the variance during reparameterization. | `1e-4` |
| `nonlin` | `callable` | Non-linear activation function. | `nn.LeakyReLU` |
Methods:

| Name | Description |
|---|---|
| `forward` | Encodes the input data, computes the mean and variance, and performs reparameterization. |
Source code in `vidr/modules.py` (lines 6–104).
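As a usage sketch, assuming the class is exported from `vidr.modules` as `VIDREncoder` with the constructor described in the Parameters table above (the export name and all dimensions here are illustrative assumptions, not taken from this page):

```python
import torch
import torch.nn as nn

from vidr.modules import VIDREncoder  # assumed export name

encoder = VIDREncoder(
    input_dim=2000,       # e.g. number of genes per cell (illustrative)
    latent_dim=100,
    hidden_dim=800,
    n_hidden_layers=2,
    dropout_rate=0.2,
    nonlin=nn.LeakyReLU,
)

x = torch.randn(64, 2000)           # a batch of 64 cells
mean, var, latent_rep = encoder(x)  # the documented forward() return
```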
forward(inputs)
Forward pass of the encoder.
Encodes the input data into a latent representation by computing the mean and variance, then sampling via the reparameterization trick; see the sketch after the tables below.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `inputs` | `Tensor` | The input data. | *required* |
Returns:

| Type | Description |
|---|---|
| `tuple` | A tuple `(mean, var, latent_rep)`, where `mean` (`torch.Tensor`) is the mean of the latent distribution, `var` (`torch.Tensor`) is its variance, and `latent_rep` (`torch.Tensor`) is the reparameterized latent representation. |
Source code in `vidr/modules.py` (lines 78–104).
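The reparameterization trick itself is standard VAE machinery. The standalone sketch below shows the computation the documented return values imply, assuming the variance is obtained by exponentiating `log_var` and padding it with `reparam_eps` (the exact in-class arithmetic is not shown on this page):

```python
import torch

def reparameterize(mean: torch.Tensor, log_var: torch.Tensor,
                   reparam_eps: float = 1e-4) -> torch.Tensor:
    """z = mean + sqrt(var) * noise, with noise ~ N(0, I).

    Sampling this way keeps z differentiable with respect to mean and
    log_var, which is what lets the encoder train by backpropagation.
    """
    var = torch.exp(log_var) + reparam_eps  # assumed use of reparam_eps
    noise = torch.randn_like(mean)          # noise ~ N(0, I)
    return mean + torch.sqrt(var) * noise
```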
VIDR Decoder
Bases: Module
Variational Decoder for scVIDR (Single-Cell Variational Inference for Dose Response).
This class implements the decoder portion of a variational autoencoder (VAE), which maps latent representations back into the input space. A usage sketch follows the tables below.
Attributes:

| Name | Type | Description |
|---|---|---|
| `fclayers` | `Sequential` | Fully connected layers with batch normalization, dropout, and activation. |
| `decoder` | `Sequential` | The full decoder model, consisting of the latent input, hidden layers, and output layer. |
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_dim` | `int` | Dimensionality of the output data. | *required* |
| `latent_dim` | `int` | Dimensionality of the latent representation. | *required* |
| `hidden_dim` | `int` | Number of hidden units in the fully connected layers. | *required* |
| `n_hidden_layers` | `int` | Number of hidden layers. | *required* |
| `momentum` | `float` | Momentum parameter for batch normalization. | `0.01` |
| `eps` | `float` | Epsilon for numerical stability in batch normalization. | `0.001` |
| `dropout_rate` | `float` | Dropout rate for regularization. | `0.2` |
| `nonlin` | `callable` | Non-linear activation function. | `nn.LeakyReLU` |
Methods:

| Name | Description |
|---|---|
| `forward` | Decodes the latent representation back into the input space. |
Source code in `vidr/modules.py` (lines 107–184).
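Mirroring the encoder sketch above, and again assuming an export name (`VIDRDecoder`) and a constructor matching the Parameters table (both are assumptions, not confirmed by this page):

```python
import torch

from vidr.modules import VIDRDecoder  # assumed export name

decoder = VIDRDecoder(
    input_dim=2000,      # must match the encoder's input_dim
    latent_dim=100,      # must match the encoder's latent_dim
    hidden_dim=800,
    n_hidden_layers=2,
)

z = torch.randn(64, 100)  # a batch of latent representations
x_hat = decoder(z)        # reconstructed data, shape (64, 2000)
```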
forward(latent_rep)
Forward pass of the decoder.
Decodes the latent representation back into the original input space; a sketch of how the encoder and decoder outputs combine in a typical VAE objective appears at the end of this section.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `latent_rep` | `Tensor` | The latent representation to decode. | *required* |
Returns:

| Type | Description |
|---|---|
| `torch.Tensor` | The reconstructed data. |
Source code in `vidr/modules.py` (lines 170–184).
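For context on how the two modules' outputs combine during training, here is a generic VAE objective: a reconstruction term plus the closed-form KL divergence against a standard normal prior. This is not scVIDR's actual loss (which is not documented on this page); it consumes `var` directly because the encoder returns the variance rather than the log-variance.

```python
import torch
import torch.nn.functional as F

def vae_loss(x: torch.Tensor, x_hat: torch.Tensor,
             mean: torch.Tensor, var: torch.Tensor,
             kl_weight: float = 1.0) -> torch.Tensor:
    # Reconstruction term: mean squared error between input and reconstruction.
    recon = F.mse_loss(x_hat, x, reduction="mean")
    # KL(N(mean, var) || N(0, I)) in closed form:
    # summed over latent dimensions, averaged over the batch.
    kl = 0.5 * torch.mean(
        torch.sum(var + mean.pow(2) - 1.0 - torch.log(var), dim=1)
    )
    return recon + kl_weight * kl

# Tying it together with the sketches above:
#   mean, var, z = encoder(x)
#   x_hat = decoder(z)
#   loss = vae_loss(x, x_hat, mean, var)
```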