1. Overview
GoogLeNet is a deep neural network released by Google in 2014 that performed outstandingly on the ImageNet image-recognition task. It is the first generation of the Inception family of models, and its efficiency in both parameter count and computation made it the state of the art at the time.
Rather than the plain stacked layers of a typical CNN, GoogLeNet is built from parallel Inception modules. An Inception module applies 1×1, 3×3, and 5×5 convolution kernels, sampling features at different receptive-field sizes, and joins the resulting feature maps with concatenate along the channel axis, yielding a richer feature representation.
2. The Inception Module
The core idea of the Inception module is parallel computation: feature maps are produced by convolution kernels of different sizes (and thus different receptive fields), then concatenated into one complete feature map, capturing higher-dimensional feature information.
Example code:
from keras.layers import Conv2D, MaxPooling2D, concatenate

def InceptionModule(x, nb_filter):
    # 1x1 convolution branch
    branch1x1 = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(x)

    # 1x1 reduction followed by a 3x3 convolution
    branch3x3 = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(x)
    branch3x3 = Conv2D(nb_filter, (3, 3), padding='same', activation='relu')(branch3x3)

    # 1x1 reduction followed by a 5x5 convolution
    branch5x5 = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(x)
    branch5x5 = Conv2D(nb_filter, (5, 5), padding='same', activation='relu')(branch5x5)

    # 3x3 max pooling followed by a 1x1 projection
    branch_MaxPooling = MaxPooling2D(pool_size=(3, 3), strides=(1, 1), padding='same')(x)
    branch_MaxPooling = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(branch_MaxPooling)

    # Concatenate the four branches along the channel axis
    branches = [branch1x1, branch3x3, branch5x5, branch_MaxPooling]
    out = concatenate(branches, axis=-1)
    return out
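Since each of the four branches outputs nb_filter channels and 'same' padding preserves the spatial size, the concatenated output has 4 × nb_filter channels at the same height and width. A minimal NumPy sketch of that channel-axis concatenation (the branch tensors here are random stand-ins, not real convolution outputs):

```python
import numpy as np

# Stand-ins for the four branch outputs: shape (batch, height, width, nb_filter)
nb_filter = 64
branches = [np.random.rand(1, 28, 28, nb_filter) for _ in range(4)]

# concatenate(..., axis=-1) joins the feature maps along the channel axis
out = np.concatenate(branches, axis=-1)
print(out.shape)  # (1, 28, 28, 256): spatial size kept, channels = 4 * nb_filter
```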
3. Full Model Architecture
GoogLeNet is 22 layers deep. The network opens with conventional convolution, pooling, and normalization layers, followed by a stack of nine Inception modules. The classification head uses global average pooling, dropout, and softmax, which makes the extracted features more robust. The full model code (a simplified version whose filter counts differ from the paper's) is as follows:
from keras.layers import (Input, Dense, Dropout, Flatten, concatenate,
                          Conv2D, MaxPooling2D, AveragePooling2D)
from keras.models import Model

def InceptionModule(x, nb_filter):
    branch1x1 = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(x)
    branch3x3 = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(x)
    branch3x3 = Conv2D(nb_filter, (3, 3), padding='same', activation='relu')(branch3x3)
    branch5x5 = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(x)
    branch5x5 = Conv2D(nb_filter, (5, 5), padding='same', activation='relu')(branch5x5)
    branch_MaxPooling = MaxPooling2D(pool_size=(3, 3), strides=(1, 1), padding='same')(x)
    branch_MaxPooling = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(branch_MaxPooling)
    branches = [branch1x1, branch3x3, branch5x5, branch_MaxPooling]
    out = concatenate(branches, axis=-1)
    return out

# Stem: conventional convolution and pooling layers
inputs = Input(shape=(224, 224, 3))
x = Conv2D(64, (7, 7), strides=(2, 2), padding='same', activation='relu')(inputs)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same')(x)
x = Conv2D(64, (1, 1), strides=(1, 1), padding='same', activation='relu')(x)
x = Conv2D(192, (3, 3), strides=(1, 1), padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same')(x)

# First Inception stack (two modules)
x = InceptionModule(x, 64)
x = InceptionModule(x, 120)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same')(x)

# Second Inception stack (five modules)
x = InceptionModule(x, 128)
x = InceptionModule(x, 128)
x = InceptionModule(x, 128)
x = InceptionModule(x, 132)
x = InceptionModule(x, 208)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same')(x)

# Third Inception stack (two modules)
x = InceptionModule(x, 208)
x = InceptionModule(x, 256)

# Classification head: average pooling, dropout, softmax
x = AveragePooling2D(pool_size=(7, 7), strides=(7, 7), padding='same')(x)
x = Dropout(0.4)(x)
x = Flatten()(x)
outputs = Dense(1000, activation='softmax')(x)

model = Model(inputs=inputs, outputs=outputs)
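With 'same' padding, every stride-2 layer halves the spatial size (rounding up). The model above contains five stride-2 layers (the stem convolution and four max pools), so the 224×224 input shrinks to 7×7 before the final average pool. A quick sanity check of that arithmetic:

```python
import math

def same_padding_out(size, stride):
    # Output size under 'same' padding: ceil(input / stride)
    return math.ceil(size / stride)

size = 224
# The five stride-2 layers of the model above
for stride in [2, 2, 2, 2, 2]:
    size = same_padding_out(size, stride)
print(size)  # 7
```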
4. Transfer Learning
In practice, given limited data and compute, an existing pre-trained model can be fine-tuned instead, i.e. transfer learning. The first N layers of the pre-trained model are frozen and only the later layers are fine-tuned, which speeds up training and improves accuracy. The code below uses InceptionV3, a later model in the Inception family available in keras.applications:
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras import backend as K

K.clear_session()

# Load InceptionV3 pre-trained on ImageNet, without its classification head
base_model = InceptionV3(weights='imagenet', include_top=False,
                         input_shape=(299, 299, 3))

# New classification head on top of the pre-trained features
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(1000, activation='softmax')(x)

# Freeze the first 249 layers; only the later layers are trained
for layer in base_model.layers[:249]:
    layer.trainable = False

model = Model(inputs=base_model.input, outputs=predictions)
# The model must be (re)compiled after changing layer.trainable
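The freezing logic itself is framework-independent: walk the layer list and mark the first N as non-trainable. A toy sketch in plain Python, with a hypothetical Layer class standing in for Keras layers (the 311-layer count for headless InceptionV3 is an assumption that can vary across Keras versions):

```python
class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True  # layers default to trainable, as in Keras

# Stand-in for base_model.layers
layers = [Layer("layer_%d" % i) for i in range(311)]

# Freeze the first 249 layers, as in the fine-tuning snippet above
for layer in layers[:249]:
    layer.trainable = False

frozen = sum(1 for layer in layers if not layer.trainable)
print(frozen, len(layers) - frozen)  # 249 frozen, 62 left trainable
```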
5. Optimization and Preprocessing
When training GoogLeNet, the commonly used optimizer is SGD (stochastic gradient descent), which lets the model parameters converge well. For preprocessing, the ImageNet images are split into the ILSVRC2014-train and ILSVRC2014-val datasets. During training, images from ILSVRC2014-train are augmented with horizontal flips, random crops, and similar transforms to improve the model's robustness and generalization. At prediction time, images are centered (mean-subtracted) and normalized to a fixed size.
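The two augmentations mentioned above can be sketched in a few lines of NumPy; this is a minimal illustration operating on a random stand-in image, not the pipeline used in the original training:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, crop_size=224):
    """Random crop followed by a random horizontal flip."""
    h, w, _ = img.shape
    top = rng.integers(0, h - crop_size + 1)
    left = rng.integers(0, w - crop_size + 1)
    img = img[top:top + crop_size, left:left + crop_size]
    if rng.random() < 0.5:           # flip left-right half of the time
        img = img[:, ::-1]
    return img

img = rng.random((256, 256, 3))      # stand-in for a training image
out = augment(img)
print(out.shape)  # (224, 224, 3)
```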
6. Summary
With its efficient architecture and its parallel-computation Inception modules, GoogLeNet is widely used in image recognition. In practice, fine-tuning an existing pre-trained model can effectively improve accuracy and speed up training.
Original article by WDVPD. If reproducing, please cite the source: https://www.506064.com/zh-tw/n/361573.html