1. To use a pretrained model, modify the training prototxt so that the layer names match those in the model whose weights you want to reuse.

Borrowing Weights from a Pretrained Network

To borrow the weights of an already trained model, we need to do two things:

  • Rename our layer to match the name of the original model's layer. Weights are assigned by layer name; by reusing the original network's layer name, our layer gets its weights.

For example, let's say the original model had a layer named ip1; then we should also name our layer ip1:

layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}

 

  • Train our new hybrid model, specifying the location of the pretrained weights:
caffe train --solver ourSolver.prototxt --weights theirModel.caffemodel
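
If you use the Python interface instead, a minimal pycaffe sketch of the same step (the solver and weights file names are the placeholders from the command above):

import caffe

# Load our solver, then copy parameters from the pretrained model.
# Parameters are copied only for layers whose names match.
solver = caffe.SGDSolver('ourSolver.prototxt')
solver.net.copy_from('theirModel.caffemodel')
solver.solve()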

 

What About the Other Layers of Our Network?

The other layers of our network will be initialized just like any other brand-new layer, i.e., from their weight and bias fillers (usually ~zero).
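
To check which layers actually borrowed weights and which were freshly initialized, here is a small pycaffe sketch; the weights file is the placeholder from above, and ourTrain.prototxt is a hypothetical name for our network definition:

import numpy as np
import caffe

# Build our network and snapshot its fresh (filler-based) initialization.
net = caffe.Net('ourTrain.prototxt', caffe.TEST)
fresh = {name: blobs[0].data.copy() for name, blobs in net.params.items()}

# Copy weights from the pretrained model (matched by layer name).
net.copy_from('theirModel.caffemodel')

for name, blobs in net.params.items():
    borrowed = not np.array_equal(fresh[name], blobs[0].data)
    print(name, 'borrowed' if borrowed else 'freshly initialized')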

2. Fine-Tuning: set a layer's lr_mult to 0 in the prototxt, and that layer stops learning.

 

Fine-Tuning is the process of training specific sections of a network to improve results.

Making Layers Not Learn

To stop a layer from learning further, you can set its param attributes in your prototxt.

For example:

layer {
  name: "example"
  type: "example" 
  ...
  param {
    lr_mult: 0    # learning rate of the weights
    decay_mult: 1
  }
  param {
    lr_mult: 0    # learning rate of the bias
    decay_mult: 0
  }
}
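
One way to confirm that a frozen layer really stays fixed is to take a solver step and compare its weights before and after; a sketch assuming the placeholder names used above:

import numpy as np
import caffe

solver = caffe.SGDSolver('ourSolver.prototxt')
before = solver.net.params['example'][0].data.copy()
solver.step(1)                     # one forward/backward/update pass
after = solver.net.params['example'][0].data
# With lr_mult: 0 on both params, the weights should be unchanged.
print('unchanged:', np.array_equal(before, after))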

 

References:

https://github.com/BVLC/caffe/wiki/Fine-Tuning-or-Training-Certain-Layers-Exclusively

https://github.com/BVLC/caffe/wiki/Borrowing-Weights-from-a-Pretrained-Network

 
