
Adding an attention mechanism in Keras / TensorFlow

51自學網 · 2022-02-01 17:37:57 · Deep Learning


Original article: https://blog.csdn.net/qq_38410428/article/details/103695032

Step 1: locate the source file you want to modify,

then add the channel-attention and spatial-attention mechanisms inside it.

Required imports
      from keras.layers import GlobalAveragePooling2D, GlobalMaxPooling2D, Reshape, Dense, multiply, Permute, Concatenate, Conv2D, Add, Activation, Lambda
      from keras import backend as K
      from keras.activations import sigmoid
      

Channel attention mechanism

      def channel_attention(input_feature, ratio=8):
          # Infer the channel axis from the backend's data format.
          channel_axis = 1 if K.image_data_format() == "channels_first" else -1
          channel = K.int_shape(input_feature)[channel_axis]

          # The two Dense layers are shared between the avg- and max-pooled branches.
          shared_layer_one = Dense(channel // ratio,
                                   activation='relu',
                                   kernel_initializer='he_normal',
                                   use_bias=True,
                                   bias_initializer='zeros')
          shared_layer_two = Dense(channel,
                                   kernel_initializer='he_normal',
                                   use_bias=True,
                                   bias_initializer='zeros')

          # Average-pooled channel descriptor -> shared MLP.
          avg_pool = GlobalAveragePooling2D()(input_feature)
          avg_pool = Reshape((1, 1, channel))(avg_pool)
          avg_pool = shared_layer_one(avg_pool)
          assert K.int_shape(avg_pool)[1:] == (1, 1, channel // ratio)
          avg_pool = shared_layer_two(avg_pool)
          assert K.int_shape(avg_pool)[1:] == (1, 1, channel)

          # Max-pooled channel descriptor -> the same shared MLP.
          max_pool = GlobalMaxPooling2D()(input_feature)
          max_pool = Reshape((1, 1, channel))(max_pool)
          max_pool = shared_layer_one(max_pool)
          assert K.int_shape(max_pool)[1:] == (1, 1, channel // ratio)
          max_pool = shared_layer_two(max_pool)
          assert K.int_shape(max_pool)[1:] == (1, 1, channel)

          # Combine the two branches and squash to [0, 1] channel weights.
          cbam_feature = Add()([avg_pool, max_pool])
          cbam_feature = Activation('hard_sigmoid')(cbam_feature)

          if K.image_data_format() == "channels_first":
              cbam_feature = Permute((3, 1, 2))(cbam_feature)

          # Rescale the input feature map channel by channel.
          return multiply([input_feature, cbam_feature])
      
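To make the arithmetic above concrete, here is a minimal NumPy sketch of the channel-attention computation, assuming channels_last layout. The function name and the random weight matrices (standing in for the two learned shared Dense layers) are hypothetical, not part of the original code.

```python
import numpy as np

def channel_attention_np(x, w1, w2):
    """NumPy sketch of the channel-attention arithmetic above.

    x  : (H, W, C) feature map (channels_last)
    w1 : (C, C // ratio) weights standing in for the first shared Dense layer
    w2 : (C // ratio, C) weights standing in for the second shared Dense layer
    """
    avg = x.mean(axis=(0, 1))          # GlobalAveragePooling2D -> (C,)
    mx = x.max(axis=(0, 1))            # GlobalMaxPooling2D     -> (C,)
    relu = lambda v: np.maximum(v, 0)
    # The SAME two layers are applied to both pooled descriptors.
    a = relu(avg @ w1) @ w2
    m = relu(mx @ w1) @ w2
    # Keras hard_sigmoid: clip(0.2 * x + 0.5, 0, 1)
    scale = np.clip(0.2 * (a + m) + 0.5, 0.0, 1.0)
    return x * scale                   # broadcast the (C,) weights over H and W
```

Because the gate lies in [0, 1], each channel of the output is at most as large in magnitude as the corresponding input channel.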

Spatial attention mechanism

      def spatial_attention(input_feature):
          kernel_size = 7

          # Work in channels_last internally; permute back at the end if needed.
          if K.image_data_format() == "channels_first":
              cbam_feature = Permute((2, 3, 1))(input_feature)
          else:
              cbam_feature = input_feature

          # Pool across the channel axis to get two (H, W, 1) descriptors.
          avg_pool = Lambda(lambda x: K.mean(x, axis=3, keepdims=True))(cbam_feature)
          assert K.int_shape(avg_pool)[-1] == 1
          max_pool = Lambda(lambda x: K.max(x, axis=3, keepdims=True))(cbam_feature)
          assert K.int_shape(max_pool)[-1] == 1
          concat = Concatenate(axis=3)([avg_pool, max_pool])
          assert K.int_shape(concat)[-1] == 2

          # A single 7x7 convolution turns the two maps into one attention map.
          cbam_feature = Conv2D(filters=1,
                                kernel_size=kernel_size,
                                strides=1,
                                padding='same',
                                activation='hard_sigmoid',
                                kernel_initializer='he_normal',
                                use_bias=False)(concat)
          assert K.int_shape(cbam_feature)[-1] == 1

          if K.image_data_format() == "channels_first":
              cbam_feature = Permute((3, 1, 2))(cbam_feature)

          # Rescale the input feature map position by position.
          return multiply([input_feature, cbam_feature])
      
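Likewise, a minimal NumPy sketch of the spatial-attention computation, again assuming channels_last. The naive loop and the random kernel stand in for the learned 7x7 Conv2D with 'same' padding; they are illustrative assumptions, not the original code.

```python
import numpy as np

def spatial_attention_np(x, kernel):
    """NumPy sketch of the spatial-attention arithmetic above.

    x      : (H, W, C) feature map (channels_last)
    kernel : (7, 7, 2) weights standing in for the learned Conv2D filter
    """
    avg = x.mean(axis=-1, keepdims=True)         # pool across channels -> (H, W, 1)
    mx = x.max(axis=-1, keepdims=True)           # (H, W, 1)
    concat = np.concatenate([avg, mx], axis=-1)  # (H, W, 2)

    # Naive 'same'-padded convolution with a single output filter.
    H, W = concat.shape[:2]
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(concat, ((pad, pad), (pad, pad), (0, 0)))
    conv = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            conv[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)

    # Keras hard_sigmoid: clip(0.2 * x + 0.5, 0, 1)
    scale = np.clip(0.2 * conv + 0.5, 0.0, 1.0)[..., None]  # (H, W, 1)
    return x * scale                             # broadcast over channels
```

The gate again lies in [0, 1], so every spatial position of the output is no larger in magnitude than the input at that position.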

Building the CBAM block

      def cbam_block(cbam_feature, ratio=8):
          """Convolutional Block Attention Module (CBAM) block.
          As described in https://arxiv.org/abs/1807.06521.
          """
          # Channel attention first, then spatial attention, as in the paper.
          cbam_feature = channel_attention(cbam_feature, ratio)
          cbam_feature = spatial_attention(cbam_feature)
          return cbam_feature
      
Add the CBAM block at the appropriate location
        # Fragment from a residual block; `filter`, `strides`, `bn_axis` and
        # `layers` (keras.layers) are defined by the surrounding network code.
        inputs = x
        residual = layers.Conv2D(filter, kernel_size=(1, 1), strides=strides, padding='same')(inputs)
        residual = layers.BatchNormalization(axis=bn_axis)(residual)
        cbam = cbam_block(residual)
        x = layers.add([x, residual, cbam])
      
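A quick NumPy check of why this three-way add is valid: attention only rescales a feature map, so the CBAM output keeps the residual branch's shape. The shapes and the 0.5 gate value below are hypothetical stand-ins chosen for illustration.

```python
import numpy as np

# Hypothetical shapes for the residual branch above.
x = np.ones((8, 8, 16))
residual = np.ones((8, 8, 16))
scale = np.full((1, 1, 16), 0.5)      # per-channel attention weights in [0, 1]
cbam = residual * scale               # attention only rescales; shape is kept
out = x + residual + cbam             # mirrors layers.add([x, residual, cbam])
```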

With that, an attention mechanism can be inserted at any point in the network.

