1. The Principle of Residual Structures
Residual structures are used ever more widely in deep learning. Their core idea is to add a block's input directly to the block's output through a shortcut (skip) connection, so the stacked layers only have to learn the difference between the two, which strengthens the model's learning capacity and generalization ability.
Concretely, a residual structure introduces cross-layer connections, so gradients can flow directly back to shallow layers and the model can more easily learn deep feature representations. If H(x) is the desired mapping, the stacked layers instead learn the residual F(x) = H(x) - x, and the block outputs y = F(x) + x. By computing and accumulating these residuals layer by layer, the model can approximate the target function faster and more accurately.
Below is a simple example of a residual block:
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Each block learns a residual F(x); the output is F(x) + shortcut(x).
    expansion = 1  # channel expansion factor, referenced by the ResNet builder below

    def __init__(self, in_channels, out_channels, stride=1):
        super(ResidualBlock, self).__init__()
        # First 3x3 conv may downsample via stride; bias is omitted because
        # the following BatchNorm makes it redundant
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu1 = nn.ReLU(inplace=True)
        # Second 3x3 conv keeps the spatial size unchanged
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu2 = nn.ReLU(inplace=True)
        # Identity shortcut by default; a 1x1 conv projection when the
        # stride or channel count changes the shape
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(out_channels)
            )

    def forward(self, x):
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu1(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out += self.shortcut(x)  # element-wise addition of the shortcut path
        out = self.relu2(out)
        return out
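A quick way to sanity-check the block is to push a random batch through it and verify the output shape; the tensor sizes below are arbitrary, chosen only for illustration:

import torch

block = ResidualBlock(in_channels=64, out_channels=128, stride=2)
x = torch.randn(8, 64, 32, 32)  # batch of 8 feature maps: 64 channels, 32x32
y = block(x)
print(y.shape)                  # torch.Size([8, 128, 16, 16]): stride 2 halves H and W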
2. Improvements to the Residual Structure
Although residual structures perform very well in many models, they are not without problems; for example, when a network grows very deep or structurally complex, optimization issues such as vanishing gradients can still arise.
Several improvements have therefore been proposed, such as Res2Net, PVT, and HRNet. Res2Net resembles ResNeXt, but it lets the features of its parallel branches interact more richly within each block, strengthening the network's representational power (a simplified sketch appears below). PVT (Pyramid Vision Transformer) is a Transformer-based model with a multi-level pyramid feature hierarchy, which handles objects of different sizes better. HRNet (High-Resolution Network) is an architecture built from multi-branch feature-extraction modules that preserves high-resolution information throughout the network.
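To make the Res2Net idea concrete, here is a simplified sketch of its hierarchical split-and-interact scheme. It is not the official implementation: the group count `scales` and the omission of the surrounding 1x1 bottleneck convolutions are simplifications.

import torch
import torch.nn as nn

class Res2NetSplitConv(nn.Module):
    """Simplified Res2Net-style unit: split the channels into `scales` groups
    and let each 3x3 conv also see the previous group's output, so the
    parallel branches interact hierarchically within a single block."""
    def __init__(self, channels, scales=4):
        super().__init__()
        assert channels % scales == 0
        self.scales = scales
        width = channels // scales
        # One 3x3 conv per group, except the first group which passes through
        self.convs = nn.ModuleList([
            nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False)
            for _ in range(scales - 1)
        ])

    def forward(self, x):
        xs = torch.chunk(x, self.scales, dim=1)  # split along the channel axis
        ys = [xs[0]]                             # first group: identity
        out = None
        for i, conv in enumerate(self.convs):
            # Each branch receives its own split plus the previous branch's output
            inp = xs[i + 1] if out is None else xs[i + 1] + out
            out = conv(inp)
            ys.append(out)
        return torch.cat(ys, dim=1)              # re-assemble the groups

In the full Res2Net, this split-conv sits between 1x1 convolutions inside a residual bottleneck; the sketch isolates only the hierarchical interaction among branches.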
3. Applications of the Residual Structure
Residual structures have been widely applied in image classification, object detection, speech recognition, and many other areas. Below is an example for image classification:
import torch
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
# Augmentation pipeline; note that CIFAR-10 images are 32x32 and are upscaled
# to 224x224 here, and the normalization statistics are ImageNet's
train_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])
test_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])
train_dataset = datasets.CIFAR10('data/', train=True, transform=train_transforms, download=True)
test_dataset = datasets.CIFAR10('data/', train=False, transform=test_transforms, download=True)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True, num_workers=4)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False, num_workers=4)
class ResNet(nn.Module):
    def __init__(self, block, num_blocks, num_classes=10):
        super(ResNet, self).__init__()
        self.in_channels = 64
        # CIFAR-style stem: a single 3x3 conv instead of the 7x7 conv
        # plus max-pool used for ImageNet
        self.conv = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        # Four stages; every stage after the first halves the spatial resolution
        self.layer1 = self.make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self.make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self.make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self.make_layer(block, 512, num_blocks[3], stride=2)
        self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def make_layer(self, block, out_channels, num_blocks, stride):
        # Only the first block of a stage downsamples; the rest use stride 1
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_channels, out_channels, stride))
            self.in_channels = out_channels * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv(x)
        out = self.bn(out)
        out = self.relu(out)
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = self.avg_pool(out)
        out = out.view(out.size(0), -1)  # flatten to (batch, channels)
        out = self.fc(out)
        return out

def ResNet18():
    # Two ResidualBlocks per stage gives the 18-layer configuration
    return ResNet(ResidualBlock, [2, 2, 2, 2])
model = ResNet18()
criterion = nn.CrossEntropyLoss()
# SGD with momentum and weight decay; the learning rate drops by 10x
# at epochs 150 and 225
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
lr_scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150, 225], gamma=0.1)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
criterion.to(device)
num_epochs = 300
for epoch in range(num_epochs):
    # Training phase
    model.train()
    for i, (images, labels) in enumerate(train_loader):
        images = images.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
    lr_scheduler.step()  # advance the LR schedule once per epoch

    # Evaluation phase on the test set
    model.eval()
    with torch.no_grad():
        correct = 0
        total = 0
        for images, labels in test_loader:
            images = images.to(device)
            labels = labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    accuracy = 100 * correct / total
    print('Epoch [{}/{}], Loss: {:.4f}, Accuracy: {:.2f}%'.format(
        epoch + 1, num_epochs, loss.item(), accuracy))
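Once training finishes, the model can be used for inference directly; the following is a minimal illustrative sketch that classifies one batch from the test loader:

# Inference sketch: predict class indices for a single test batch
model.eval()
with torch.no_grad():
    images, labels = next(iter(test_loader))
    preds = model(images.to(device)).argmax(dim=1)
    print(preds[:8].tolist(), labels[:8].tolist())  # predicted vs. true class indices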