A transposed convolutional layer is an upsampling layer that generates an output feature map larger than the input feature map. It is similar to a deconvolutional layer, but not identical: a deconvolutional layer exactly reverses a standard convolutional layer, so if the output of a standard convolution is deconvolved, the original values are recovered. A transposed convolution, by contrast, only restores the original spatial dimensions, not the original values.
Transposed convolutional layers are used in a variety of tasks, including image generation, image super-resolution, and image segmentation. They are particularly useful for tasks that involve upsampling the input data, such as converting a low-resolution image to a high-resolution one or generating an image from a set of noise vectors.
The operation of a transposed convolutional layer is similar to that of a normal convolutional layer, except that it performs the convolution in the opposite direction. Instead of sliding the kernel over the input and performing element-wise multiplication and summation, a transposed convolutional layer slides the input over the kernel and performs element-wise multiplication and summation. This results in an output that is larger than the input, and the size of the output can be controlled by the stride and padding parameters of the layer.
In a transposed convolutional layer, the input is a feature map of size h × w, where h and w are the height and width of the input, and the kernel size is k_h × k_w, where k_h and k_w are the height and width of the kernel.

If the stride is s and the padding is p, the stride determines the step size used when placing the input values over the kernel, and the padding determines the number of pixels added to the edges of the input before performing the convolution. The output of the transposed convolutional layer will then have size

h' = (h − 1) × s − 2p + k_h
w' = (w − 1) × s − 2p + k_w

where h' and w' are the height and width of the output.
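As a quick sanity check, the output-size formula above can be coded directly (a minimal sketch; the function name is illustrative):

```python
def transposed_conv_output_size(h, w, k_h, k_w, s, p):
    """Output height and width of a transposed convolution."""
    # output = (input - 1) * stride - 2 * padding + kernel
    h_out = (h - 1) * s - 2 * p + k_h
    w_out = (w - 1) * s - 2 * p + k_w
    return h_out, w_out

# 2 x 2 input, 2 x 2 kernel, stride 1, no padding -> 3 x 3 output
print(transposed_conv_output_size(2, 2, 2, 2, 1, 0))  # (3, 3)
```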
Example 1:
Suppose we have a grayscale image of size 2 × 2, and we want to upsample it using a transposed convolutional layer with a kernel size of 2 × 2, a stride of 1, and zero padding (i.e., no padding). The input image and the kernel for the transposed convolutional layer (values inferred from the outputs printed below) would be as follows:

Input = [[0, 1],
         [2, 3]]

Kernel = [[4, 1],
          [2, 3]]
The output will be:

[[ 0,  4,  1],
 [ 8, 16,  6],
 [ 4, 12,  9]]
Method 1: Manually with TensorFlow
Code Explanation:
- Import the necessary libraries (TensorFlow and NumPy).
- Define the input tensor and a custom kernel.
- Write a custom function for the transposed convolution.
- Apply the transposed convolution with kernel size = 2 and stride = 1 to the input data.
Python3

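The original listing is missing here, so the following is a reconstruction consistent with the steps above and the output below (the input and kernel values are inferred from that output):

```python
import numpy as np
import tensorflow as tf

# input feature map and custom kernel (values inferred from the printed output)
X = np.array([[0.0, 1.0],
              [2.0, 3.0]])
K = np.array([[4.0, 1.0],
              [2.0, 3.0]])

def trans_conv(X, K):
    # scatter-add: each input element places a scaled copy of the kernel
    # into the output, shifted by the element's position (stride 1, no padding)
    h, w = K.shape
    Y = np.zeros((X.shape[0] + h - 1, X.shape[1] + w - 1))
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            Y[i:i + h, j:j + w] += X[i, j] * K
    return tf.constant(Y)

print(trans_conv(X, K))
```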
Output:
<tf.Tensor: shape=(3, 3), dtype=float64, numpy=
array([[ 0.,  4.,  1.],
       [ 8., 16.,  6.],
       [ 4., 12.,  9.]])>
The output shape can be calculated as (input − 1) × stride − 2 × padding + kernel = (2 − 1) × 1 − 2 × 0 + 2 = 3, giving a 3 × 3 output.
Method 2: With PyTorch
Code Explanation:
- Import the necessary libraries (torch and nn from torch).
- Define the input tensor and a custom kernel.
- Reshape both to four dimensions, because PyTorch expects 4D input of shape (batch, channels, height, width).
- Create the transposed convolution with input and output channels = 1, kernel size = 2, stride = 1, and padding = 0 (i.e., valid padding).
- Set the custom kernel weights through the layer's weight.data attribute.
- Apply the transposed convolution to the input data.
Python3

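The original listing is missing here as well; this reconstruction follows the steps above (input and kernel values inferred from the printed output):

```python
import torch
import torch.nn as nn

# input and custom kernel, reshaped to (batch, channels, height, width)
X = torch.tensor([[0.0, 1.0],
                  [2.0, 3.0]]).reshape(1, 1, 2, 2)
K = torch.tensor([[4.0, 1.0],
                  [2.0, 3.0]]).reshape(1, 1, 2, 2)

# transposed convolution: 1 input channel, 1 output channel,
# kernel size 2, stride 1, padding 0 (valid), no bias term
tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=1,
                           padding=0, bias=False)
tconv.weight.data = K  # install the custom kernel

print(tconv(X))
```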
Output:
tensor([[[[ 0.,  4.,  1.],
          [ 8., 16.,  6.],
          [ 4., 12.,  9.]]]], grad_fn=<ConvolutionBackward0>)
Transposed convolutional layers are often used in conjunction with other types of layers, such as pooling layers and fully connected layers, to build deep convolutional networks for various tasks.
Example 2: Valid Padding
In valid padding, no extra layer of zeros is added to the input.
Python3

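The listing is missing from the source; a minimal sketch using Keras' Conv2DTranspose, assuming a random 8 × 8 single-channel input, a 2 × 2 kernel, and stride 1 (values chosen to reproduce the shape shown below):

```python
import numpy as np
import tensorflow as tf

# batch of one random 8 x 8 grayscale image (assumed input size)
X = np.random.rand(1, 8, 8, 1).astype(np.float32)

# valid padding: no zeros added, so output size = (8 - 1) * 1 + 2 = 9
layer = tf.keras.layers.Conv2DTranspose(
    filters=1, kernel_size=2, strides=1, padding='valid')
Y = layer(X)
print(Y.shape)
```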
Output:
(1, 9, 9, 1)
Example 3: Same Padding
In same padding, an extra layer of zeros (known as the padding layer) is added, so that with a stride of 1 the output keeps the same spatial size as the input.
Python3

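The listing is again missing; the same sketch as in Example 2 with padding='same' (same assumed 8 × 8 single-channel input, 2 × 2 kernel, stride 1) reproduces the shape shown below:

```python
import numpy as np
import tensorflow as tf

# batch of one random 8 x 8 grayscale image (assumed input size)
X = np.random.rand(1, 8, 8, 1).astype(np.float32)

# same padding with stride 1: output keeps the 8 x 8 input size
layer = tf.keras.layers.Conv2DTranspose(
    filters=1, kernel_size=2, strides=1, padding='same')
Y = layer(X)
print(Y.shape)
```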
Output:
(1, 8, 8, 1)