CycleGAN is an unsupervised deep learning algorithm for image-to-image translation. Unlike traditional image translation methods, CycleGAN can learn to translate between two image domains without paired training data. This article walks through CycleGAN in detail, covering the algorithm's principle, the model architecture, implementation details, and code examples.
1. Algorithm Principle
CycleGAN builds on the idea of generative adversarial networks (GANs). A GAN consists of a generator, which produces fake images, and a discriminator, which judges whether an image is real or fake. CycleGAN extends this setup to two generators and two discriminators so that images can be translated between two domains.
Concretely, suppose there are two domains X and Y, where X is a photo domain and Y is a sketch domain. CycleGAN learns two mapping functions, G: X→Y and F: Y→X. G translates photos in X into sketches in Y, and F translates sketches in Y back into photos in X. CycleGAN also learns two discriminators, D_Y and D_X: D_Y judges whether an image in domain Y is a real sketch, and D_X judges whether an image in domain X is a real photo.
To make this cross-domain translation work, CycleGAN combines an adversarial loss with a cycle-consistency loss. The adversarial loss is the usual GAN objective between generator G and discriminator D_Y (and, symmetrically, between F and D_X): when producing Y-domain images, G must fool D_Y as far as possible, while D_Y must separate real Y-domain images from the fakes produced by G. The cycle-consistency loss requires that after an image is translated from X to Y and then translated back to X, the result should be as close as possible to the original X-domain image; likewise, translating from Y to X and back to Y should recover the original Y-domain image. This constraint keeps the two translations consistent and prevents the output from drifting away from the content of the input.
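To make the two loss terms concrete, here is a minimal sketch of the full generator objective. It assumes the generators G, F and discriminators D_X, D_Y are already defined as PyTorch modules; the helper function and variable names are illustrative, and the MSE/L1 criteria match the training code later in this article.

import torch
import torch.nn as nn

criterion_GAN = nn.MSELoss()    # least-squares adversarial loss
criterion_cycle = nn.L1Loss()   # L1 cycle-consistency loss

def generator_objective(G, F, D_X, D_Y, real_X, real_Y, lambda_cycle=10.0):
    """Adversarial plus cycle-consistency losses for both mapping directions."""
    fake_Y = G(real_X)   # X -> Y translation
    fake_X = F(real_Y)   # Y -> X translation

    # Adversarial terms: G tries to fool D_Y, F tries to fool D_X
    pred_fake_Y = D_Y(fake_Y)
    pred_fake_X = D_X(fake_X)
    loss_adv = (criterion_GAN(pred_fake_Y, torch.ones_like(pred_fake_Y)) +
                criterion_GAN(pred_fake_X, torch.ones_like(pred_fake_X)))

    # Cycle terms: F(G(X)) should reconstruct X, and G(F(Y)) should reconstruct Y
    loss_cyc = (criterion_cycle(F(fake_Y), real_X) +
                criterion_cycle(G(fake_X), real_Y))

    return loss_adv + lambda_cycle * loss_cyc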
2. Model Architecture
CycleGAN's model consists of two generators and two discriminators. Each generator is built from convolutional layers, residual blocks, and transposed-convolution (deconvolution) layers; each discriminator is a stack of convolutional layers.
Taking photo-to-sketch translation as an example (that is, the mapping from domain X to domain Y), the generator G follows an encoder-decoder design: a convolutional encoder that downsamples the input, a series of residual blocks at the bottleneck resolution (9 blocks for 256×256 inputs), and a decoder of transposed-convolution layers that upsamples back to the input resolution. The reverse mapping from Y back to X is handled by the separate generator F, which uses the same architecture.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, in_channels):
        super(ResidualBlock, self).__init__()
        conv_block = [
            nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1, bias=False),
            nn.InstanceNorm2d(in_channels, affine=True, track_running_stats=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1, bias=False),
            nn.InstanceNorm2d(in_channels, affine=True, track_running_stats=True)
        ]
        self.conv_block = nn.Sequential(*conv_block)

    def forward(self, x):
        # Skip connection: add the block's output back onto its input
        return x + self.conv_block(x)


class Generator(nn.Module):
    def __init__(self, input_nc, output_nc, n_residual_blocks=9):
        super(Generator, self).__init__()

        # Encoder: an initial 7x7 convolution, then two stride-2 convolutions
        # that downsample 256x256 inputs to 64x64 at 256 channels
        self.conv1 = nn.Conv2d(input_nc, 64, kernel_size=7, stride=1, padding=3, bias=False)
        self.norm1 = nn.InstanceNorm2d(64, affine=True, track_running_stats=True)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1, bias=False)
        self.norm2 = nn.InstanceNorm2d(128, affine=True, track_running_stats=True)
        self.relu2 = nn.ReLU(inplace=True)
        self.conv3 = nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1, bias=False)
        self.norm3 = nn.InstanceNorm2d(256, affine=True, track_running_stats=True)
        self.relu3 = nn.ReLU(inplace=True)

        # Residual blocks at the bottleneck resolution
        self.res_blocks = nn.Sequential(*[ResidualBlock(256) for _ in range(n_residual_blocks)])

        # Decoder: two stride-2 transposed convolutions that restore the input
        # resolution, then a 7x7 layer that maps back to output_nc channels
        self.deconv1 = nn.ConvTranspose2d(256, 128, kernel_size=3, stride=2, padding=1, output_padding=1, bias=False)
        self.norm4 = nn.InstanceNorm2d(128, affine=True, track_running_stats=True)
        self.relu4 = nn.ReLU(inplace=True)
        self.deconv2 = nn.ConvTranspose2d(128, 64, kernel_size=3, stride=2, padding=1, output_padding=1, bias=False)
        self.norm5 = nn.InstanceNorm2d(64, affine=True, track_running_stats=True)
        self.relu5 = nn.ReLU(inplace=True)
        self.deconv3 = nn.ConvTranspose2d(64, output_nc, kernel_size=7, stride=1, padding=3, bias=False)
        self.tanh = nn.Tanh()  # outputs in [-1, 1]

    def forward(self, x):
        # Encoder
        x = self.relu1(self.norm1(self.conv1(x)))
        x = self.relu2(self.norm2(self.conv2(x)))
        x = self.relu3(self.norm3(self.conv3(x)))
        # Residual blocks
        x = self.res_blocks(x)
        # Decoder
        x = self.relu4(self.norm4(self.deconv1(x)))
        x = self.relu5(self.norm5(self.deconv2(x)))
        x = self.tanh(self.deconv3(x))
        return x
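Continuing from the code above, a quick shape check (a sketch assuming 3-channel 256×256 inputs) confirms that the generator returns an image of the same resolution as its input, which the cycle-consistency loss requires.

G = Generator(input_nc=3, output_nc=3)   # X -> Y generator
x = torch.randn(1, 3, 256, 256)          # a dummy X-domain batch
with torch.no_grad():
    y = G(x)
print(y.shape)  # torch.Size([1, 3, 256, 256]), values in [-1, 1]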
The discriminator D consists of five convolutional layers; each of the first four is followed by a LeakyReLU activation (negative slope 0.2), which keeps a small gradient for negative inputs instead of zeroing them out. All convolution kernels are 4×4. The input is a 256×256 RGB image (3 channels), and the output is a patch-wise map of real/fake scores rather than a single value.
class Discriminator(nn.Module):
    def __init__(self, input_nc):
        super(Discriminator, self).__init__()

        # PatchGAN discriminator: stacked 4x4 convolutions that output a
        # patch-wise map of real/fake scores instead of a single scalar
        self.conv1 = nn.Conv2d(input_nc, 64, kernel_size=4, stride=2, padding=1)
        self.leaky_relu1 = nn.LeakyReLU(0.2, inplace=True)
        self.conv2 = nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1, bias=False)
        self.norm1 = nn.InstanceNorm2d(128, affine=True, track_running_stats=True)
        self.leaky_relu2 = nn.LeakyReLU(0.2, inplace=True)
        self.conv3 = nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1, bias=False)
        self.norm2 = nn.InstanceNorm2d(256, affine=True, track_running_stats=True)
        self.leaky_relu3 = nn.LeakyReLU(0.2, inplace=True)
        self.conv4 = nn.Conv2d(256, 512, kernel_size=4, stride=1, padding=1, bias=False)
        self.norm3 = nn.InstanceNorm2d(512, affine=True, track_running_stats=True)
        self.leaky_relu4 = nn.LeakyReLU(0.2, inplace=True)
        self.conv5 = nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1, bias=False)

    def forward(self, x):
        x = self.leaky_relu1(self.conv1(x))
        x = self.leaky_relu2(self.norm1(self.conv2(x)))
        x = self.leaky_relu3(self.norm2(self.conv3(x)))
        x = self.leaky_relu4(self.norm3(self.conv4(x)))
        return self.conv5(x)
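Continuing from the definitions above, a quick shape check (a sketch assuming 3-channel 256×256 inputs) shows what this discriminator actually returns: a 30×30 grid of patch-level scores rather than one number per image.

D_Y = Discriminator(input_nc=3)     # discriminator for the sketch domain Y
y = torch.randn(1, 3, 256, 256)     # a dummy Y-domain batch
with torch.no_grad():
    pred = D_Y(y)
print(pred.shape)  # torch.Size([1, 1, 30, 30]): one score per image patch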
3. Implementation Details
When implementing CycleGAN, the following details deserve attention:
1. During training, G, F, D_X, and D_Y are updated separately. In each training step, a batch of X-domain images and a batch of Y-domain images are sampled; G takes the X batch and F takes the Y batch, producing G(X) and F(Y), which are then used for the adversarial updates.
# Train the generator G (X -> Y direction); the F direction is symmetric
optimizer_G.zero_grad()

# Adversarial (GAN) loss: G must make D_Y classify G(real_X) as real
fake_Y = G(real_X)
pred_fake = D_Y(fake_Y)
loss_GAN = criterion_GAN(pred_fake, torch.ones_like(pred_fake))

# Cycle-consistency loss: F(G(X)) should reconstruct the original real_X
recovered_X = F(fake_Y)
loss_cycle_XY = criterion_cycle(recovered_X, real_X) * lambda_cycle

# Total generator loss
loss_G = loss_GAN + loss_cycle_XY
loss_G.backward()
optimizer_G.step()
2. The images fed to the networks must be normalized: for RGB images, pixel values are scaled from [0, 255] to [-1, 1] so that they match the tanh output range of the generators. Conversely, a generator's output lies in [-1, 1] and is mapped back to [0, 1] before it is saved or displayed.
# Generate a fake X-domain image from a real Y-domain image
fake_X = F(real_Y)

# Map the tanh output from [-1, 1] back to [0, 1] for saving or display
fake_X = (fake_X + 1) / 2.0
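On the input side, a typical preprocessing pipeline (a sketch assuming torchvision is used for data loading) performs this normalization before the images ever reach the networks:

import torchvision.transforms as transforms

# ToTensor() scales pixel values from [0, 255] to [0, 1];
# Normalize(mean=0.5, std=0.5) then maps [0, 1] to [-1, 1],
# matching the tanh output range of the generators.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])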
3. The cycle-consistency loss is weighted by a coefficient lambda_cycle to balance it against the adversarial loss. The CycleGAN paper uses lambda_cycle = 10 by default, but the value can be tuned, and different tasks may call for different settings.
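As in the snippets above, the weight is applied when the generator losses are combined; a minimal sketch, using the paper's default value:

lambda_cycle = 10.0  # weight of the cycle-consistency term (CycleGAN paper default)

# Combined generator objective: adversarial term plus weighted cycle term
loss_G = loss_GAN + lambda_cycle * loss_cycle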
4. Code Example
Below is a simplified CycleGAN training example for translating horse images (domain A) into zebra images (domain B). It assumes that the generators G (A to B) and F (B to A), the discriminators D_A and D_B, the unpaired dataloader, and the hyperparameters lambda_cycle, num_epochs, and device have already been defined.
import itertools

import torch
import torch.nn as nn

# Loss functions
criterion_GAN = nn.MSELoss()     # least-squares adversarial loss
criterion_cycle = nn.L1Loss()    # L1 cycle-consistency loss

# One optimizer for both generators, one for each discriminator
optimizer_G = torch.optim.Adam(itertools.chain(G.parameters(), F.parameters()),
                               lr=0.0002, betas=(0.5, 0.999))
optimizer_D_A = torch.optim.Adam(D_A.parameters(), lr=0.0002, betas=(0.5, 0.999))
optimizer_D_B = torch.optim.Adam(D_B.parameters(), lr=0.0002, betas=(0.5, 0.999))

# Training loop
for epoch in range(num_epochs):
    for i, (real_A, real_B) in enumerate(dataloader):
        real_A = real_A.to(device)   # horse images (domain A)
        real_B = real_B.to(device)   # zebra images (domain B)

        # ------------------
        #  Train Generators
        # ------------------
        optimizer_G.zero_grad()

        # GAN loss: G (A -> B) must fool D_B, F (B -> A) must fool D_A
        fake_B = G(real_A)
        pred_fake_B = D_B(fake_B)
        loss_GAN_AB = criterion_GAN(pred_fake_B, torch.ones_like(pred_fake_B))

        fake_A = F(real_B)
        pred_fake_A = D_A(fake_A)
        loss_GAN_BA = criterion_GAN(pred_fake_A, torch.ones_like(pred_fake_A))

        loss_GAN = (loss_GAN_AB + loss_GAN_BA) / 2

        # Cycle loss: F(G(A)) should reconstruct A, G(F(B)) should reconstruct B
        recovered_A = F(fake_B)
        loss_cycle_A = criterion_cycle(recovered_A, real_A)

        recovered_B = G(fake_A)
        loss_cycle_B = criterion_cycle(recovered_B, real_B)

        loss_cycle = (loss_cycle_A + loss_cycle_B) / 2

        # Total generator loss
        loss_G = loss_GAN + lambda_cycle * loss_cycle
        loss_G.backward()
        optimizer_G.step()

        # -----------------------
        #  Train Discriminator B
        # -----------------------
        optimizer_D_B.zero_grad()

        pred_real = D_B(real_B)
        pred_fake = D_B(fake_B.detach())   # detach so gradients do not reach G
        loss_D_B = (criterion_GAN(pred_real, torch.ones_like(pred_real)) +
                    criterion_GAN(pred_fake, torch.zeros_like(pred_fake))) / 2
        loss_D_B.backward()
        optimizer_D_B.step()

        # -----------------------
        #  Train Discriminator A
        # -----------------------
        optimizer_D_A.zero_grad()

        pred_real = D_A(real_A)
        pred_fake = D_A(fake_A.detach())   # detach so gradients do not reach F
        loss_D_A = (criterion_GAN(pred_real, torch.ones_like(pred_real)) +
                    criterion_GAN(pred_fake, torch.zeros_like(pred_fake))) / 2
        loss_D_A.backward()
        optimizer_D_A.step()
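For completeness, here is a minimal sketch of an unpaired dataset and dataloader that could feed the loop above; the horse2zebra directory layout (trainA/trainB) and file paths are illustrative assumptions, and transform is the preprocessing pipeline sketched in section three.

import os
import random

from PIL import Image
from torch.utils.data import Dataset, DataLoader

class UnpairedImageDataset(Dataset):
    """Yields (image_A, image_B) pairs; the pairing is random, not aligned."""

    def __init__(self, root, transform):
        self.files_A = sorted(os.listdir(os.path.join(root, "trainA")))
        self.files_B = sorted(os.listdir(os.path.join(root, "trainB")))
        self.root = root
        self.transform = transform

    def __len__(self):
        return max(len(self.files_A), len(self.files_B))

    def __getitem__(self, index):
        path_A = os.path.join(self.root, "trainA", self.files_A[index % len(self.files_A)])
        path_B = os.path.join(self.root, "trainB", random.choice(self.files_B))
        img_A = Image.open(path_A).convert("RGB")
        img_B = Image.open(path_B).convert("RGB")
        return self.transform(img_A), self.transform(img_B)

dataloader = DataLoader(UnpairedImageDataset("./horse2zebra", transform),
                        batch_size=1, shuffle=True)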