
Initializing a Keras network by layer name

51自學網 2021-12-02 12:11:38

The idea: give every layer an explicit name, then call model.load_weights(weights, by_name=True) so that weights from a saved file are copied only into the layers whose names match.

from keras.models import Model
from keras.layers import (Input, Conv2D, MaxPooling2D, GlobalMaxPooling2D,
                          GlobalAveragePooling2D, BatchNormalization,
                          Concatenate, Dense, Dropout)
from keras.optimizers import Adam


def get_model(input_shape1=[75, 75, 3], input_shape2=[1], weights=None):
    bn_model = 0  # momentum for the BatchNormalization on the angle input
    trainable = True
    # kernel_regularizer = regularizers.l2(1e-4)
    kernel_regularizer = None
    activation = 'relu'

    img_input = Input(shape=input_shape1)
    angle_input = Input(shape=input_shape2)

    # Block 1
    x = Conv2D(64, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block1_conv1')(img_input)
    x = Conv2D(64, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)

    # Block 2
    x = Conv2D(128, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)

    # Block 3
    x = Conv2D(256, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)

    # Block 4
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)

    # Block 5
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)

    # Two pooled summaries of the conv features, plus the normalized angle input
    branch_1 = GlobalMaxPooling2D()(x)
    # branch_1 = BatchNormalization(momentum=bn_model)(branch_1)

    branch_2 = GlobalAveragePooling2D()(x)
    # branch_2 = BatchNormalization(momentum=bn_model)(branch_2)

    branch_3 = BatchNormalization(momentum=bn_model)(angle_input)

    x = Concatenate()([branch_1, branch_2, branch_3])
    x = Dense(1024, activation=activation, kernel_regularizer=kernel_regularizer)(x)
    # x = Dropout(0.5)(x)
    x = Dense(1024, activation=activation, kernel_regularizer=kernel_regularizer)(x)
    x = Dropout(0.6)(x)
    output = Dense(1, activation='sigmoid')(x)

    model = Model([img_input, angle_input], output)
    optimizer = Adam(lr=1e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-8, decay=0.0)
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])

    if weights is not None:
        # Set by_name=True so weights are matched to layers by name
        model.load_weights(weights, by_name=True)
        # layer_weights = h5py.File(weights, 'r')
        # for idx in range(len(model.layers)):
        #     model.set_weights()
    print('have prepared the model.')

    return model
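
The layer names above (block1_conv1 through block5_conv3) mirror those of the built-in VGG16 in keras.applications, so loading a VGG16 weight file with by_name=True initializes only the convolutional blocks; the unmatched Dense head and BatchNormalization keep their fresh initialization. A minimal sketch of such a call; the file name vgg16_notop.h5 is a placeholder, not from the original article:

from keras.applications.vgg16 import VGG16

# Fetch ImageNet-trained VGG16 convolutional weights and save them to disk
# (placeholder file name).
vgg = VGG16(include_top=False, weights='imagenet')
vgg.save_weights('vgg16_notop.h5')

# Layers whose names match ('block1_conv1', ..., 'block5_conv3') receive
# the VGG16 weights; all other layers are silently skipped.
model = get_model(weights='vgg16_notop.h5')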

Supplementary note: the keras.layers.Dense() method

keras.layers.Dense() is the basic way to define a fully connected layer. The operation it performs is: output = activation(dot(input, kernel) + bias).

Here activation is the element-wise activation function, kernel is the weight matrix, and bias is the bias vector. If the rank of the layer's input is greater than 2, the input is flattened before the initial dot product with kernel.
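
As a quick sanity check of that formula, a small NumPy sketch with illustrative values (ReLU stands in for the activation):

import numpy as np

x = np.array([[1.0, 2.0]])             # input, shape (1, 2)
kernel = np.array([[0.5, -1.0, 0.0],
                   [0.25, 1.0, 2.0]])  # weight matrix, shape (2, 3)
bias = np.array([0.1, 0.0, -0.5])      # bias vector, shape (3,)

z = np.dot(x, kernel) + bias           # [[1.1, 1.0, 3.5]]
output = np.maximum(z, 0.0)            # ReLU -> [[1.1, 1.0, 3.5]]

This is exactly what a Dense(3, activation='relu') layer computes for a single input of dimension 2.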

The source code of Dense is as follows:

# Excerpt from keras/layers/core.py (Keras 2); imports shown so the
# excerpt is self-contained.
from keras import activations, constraints, initializers, regularizers
from keras import backend as K
from keras.engine import InputSpec, Layer
from keras.legacy import interfaces


class Dense(Layer):
    """Just your regular densely-connected NN layer.

    `Dense` implements the operation:
    `output = activation(dot(input, kernel) + bias)`
    where `activation` is the element-wise activation function
    passed as the `activation` argument, `kernel` is a weights matrix
    created by the layer, and `bias` is a bias vector created by the layer
    (only applicable if `use_bias` is `True`).

    Note: if the input to the layer has a rank greater than 2, then
    it is flattened prior to the initial dot product with `kernel`.

    # Example

    ```python
        # as first layer in a sequential model:
        model = Sequential()
        model.add(Dense(32, input_shape=(16,)))
        # now the model will take as input arrays of shape (*, 16)
        # and output arrays of shape (*, 32)

        # after the first layer, you don't need to specify
        # the size of the input anymore:
        model.add(Dense(32))
    ```

    # Arguments
        units: Positive integer, dimensionality of the output space.
        activation: Activation function to use
            (see [activations](../activations.md)).
            If you don't specify anything, no activation is applied
            (ie. "linear" activation: `a(x) = x`).
        use_bias: Boolean, whether the layer uses a bias vector.
        kernel_initializer: Initializer for the `kernel` weights matrix
            (see [initializers](../initializers.md)).
        bias_initializer: Initializer for the bias vector
            (see [initializers](../initializers.md)).
        kernel_regularizer: Regularizer function applied to
            the `kernel` weights matrix
            (see [regularizer](../regularizers.md)).
        bias_regularizer: Regularizer function applied to the bias vector
            (see [regularizer](../regularizers.md)).
        activity_regularizer: Regularizer function applied to
            the output of the layer (its "activation").
            (see [regularizer](../regularizers.md)).
        kernel_constraint: Constraint function applied to
            the `kernel` weights matrix
            (see [constraints](../constraints.md)).
        bias_constraint: Constraint function applied to the bias vector
            (see [constraints](../constraints.md)).

    # Input shape
        nD tensor with shape: `(batch_size, ..., input_dim)`.
        The most common situation would be
        a 2D input with shape `(batch_size, input_dim)`.

    # Output shape
        nD tensor with shape: `(batch_size, ..., units)`.
        For instance, for a 2D input with shape `(batch_size, input_dim)`,
        the output would have shape `(batch_size, units)`.
    """

    @interfaces.legacy_dense_support
    def __init__(self, units,
                 activation=None,
                 use_bias=True,
                 kernel_initializer='glorot_uniform',
                 bias_initializer='zeros',
                 kernel_regularizer=None,
                 bias_regularizer=None,
                 activity_regularizer=None,
                 kernel_constraint=None,
                 bias_constraint=None,
                 **kwargs):
        if 'input_shape' not in kwargs and 'input_dim' in kwargs:
            kwargs['input_shape'] = (kwargs.pop('input_dim'),)
        super(Dense, self).__init__(**kwargs)
        self.units = units
        self.activation = activations.get(activation)
        self.use_bias = use_bias
        self.kernel_initializer = initializers.get(kernel_initializer)
        self.bias_initializer = initializers.get(bias_initializer)
        self.kernel_regularizer = regularizers.get(kernel_regularizer)
        self.bias_regularizer = regularizers.get(bias_regularizer)
        self.activity_regularizer = regularizers.get(activity_regularizer)
        self.kernel_constraint = constraints.get(kernel_constraint)
        self.bias_constraint = constraints.get(bias_constraint)
        self.input_spec = InputSpec(min_ndim=2)
        self.supports_masking = True

    def build(self, input_shape):
        assert len(input_shape) >= 2
        input_dim = input_shape[-1]

        self.kernel = self.add_weight(shape=(input_dim, self.units),
                                      initializer=self.kernel_initializer,
                                      name='kernel',
                                      regularizer=self.kernel_regularizer,
                                      constraint=self.kernel_constraint)
        if self.use_bias:
            self.bias = self.add_weight(shape=(self.units,),
                                        initializer=self.bias_initializer,
                                        name='bias',
                                        regularizer=self.bias_regularizer,
                                        constraint=self.bias_constraint)
        else:
            self.bias = None
        self.input_spec = InputSpec(min_ndim=2, axes={-1: input_dim})
        self.built = True

    def call(self, inputs):
        output = K.dot(inputs, self.kernel)
        if self.use_bias:
            output = K.bias_add(output, self.bias)
        if self.activation is not None:
            output = self.activation(output)
        return output

    def compute_output_shape(self, input_shape):
        assert input_shape and len(input_shape) >= 2
        assert input_shape[-1]
        output_shape = list(input_shape)
        output_shape[-1] = self.units
        return tuple(output_shape)

    def get_config(self):
        config = {
            'units': self.units,
            'activation': activations.serialize(self.activation),
            'use_bias': self.use_bias,
            'kernel_initializer': initializers.serialize(self.kernel_initializer),
            'bias_initializer': initializers.serialize(self.bias_initializer),
            'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
            'bias_regularizer': regularizers.serialize(self.bias_regularizer),
            'activity_regularizer': regularizers.serialize(self.activity_regularizer),
            'kernel_constraint': constraints.serialize(self.kernel_constraint),
            'bias_constraint': constraints.serialize(self.bias_constraint)
        }
        base_config = super(Dense, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
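
Since get_config() serializes every constructor argument, a Dense layer can be rebuilt from its config dict, which is the mechanism model saving and cloning rely on. A small sketch (the layer name fc1 is illustrative):

from keras.layers import Dense

layer = Dense(32, activation='relu', name='fc1')
config = layer.get_config()
clone = Dense.from_config(config)  # same hyperparameters, fresh unbuilt weights
print(config['units'], config['activation'])  # 32 relu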

The parameters are as follows:

units: positive integer, the dimensionality of the output space.

activation: the activation function. If none is specified, no activation is applied (i.e. "linear" activation: a(x) = x).

use_bias: Boolean, whether the layer uses a bias vector.

kernel_initializer: initializer for the kernel weight matrix.

bias_initializer: initializer for the bias vector.

kernel_regularizer: regularizer applied to the kernel weight matrix.

bias_regularizer: regularizer applied to the bias vector.

activity_regularizer: regularizer applied to the output of the layer.

kernel_constraint: constraint function applied to the kernel weight matrix.

bias_constraint: constraint function applied to the bias vector.
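
To see these arguments in use, a hedged sketch that exercises most of them (the layer sizes and regularizer strengths are illustrative only):

from keras import constraints, regularizers
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
# 64 output units, ReLU activation, L2 weight penalty, max-norm constraint
model.add(Dense(64,
                activation='relu',
                use_bias=True,
                kernel_initializer='glorot_uniform',
                bias_initializer='zeros',
                kernel_regularizer=regularizers.l2(1e-4),
                kernel_constraint=constraints.max_norm(3.0),
                input_shape=(16,)))
model.add(Dense(1, activation='sigmoid'))  # defaults: no regularizer/constraint
model.summary()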

That concludes this article on using Keras to initialize a network by layer name.

