
Python Deep Neural Networks: TensorFlow Convolution Layer Tutorial with Examples


This article walks through TensorFlow's convolution-layer APIs for deep neural networks in Python, with examples. Interested readers can use it as a reference; I hope it helps.


1. The older (pre-1.0) convolution function: tf.nn.conv2d


TensorFlow 1.0 re-wrapped the convolution layers, which greatly simplifies them compared with the earlier API. First, the low-level function that both versions build on:


conv2d(
    input,
    filter,
    strides,
    padding,
    use_cudnn_on_gpu=None,
    data_format=None,
    name=None
)


This function is defined in tensorflow/python/ops/gen_nn_ops.py.


Parameters:


input: a 4-D Tensor with shape (N, H, W, C). Its dtype must be one of: half, float32, float64.


filter: the convolution kernel. It must have the same dtype as input and is a 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels], e.g. [5, 5, 3, 32].


strides: the sliding-window stride along each dimension when sampling the input. It must have the same rank as the layout given by data_format, e.g. [1, 2, 2, 1].


padding: the edge-padding scheme, either "SAME" or "VALID". With "SAME" the output keeps the input's spatial size (at stride 1), while "VALID" shrinks it.


use_cudnn_on_gpu: optional bool; whether to use cuDNN acceleration on the GPU. Defaults to True.


data_format: optional; the input data layout, either "NHWC" or "NCHW". Defaults to "NHWC".


NHWC means [batch, in_height, in_width, in_channels]; NCHW means [batch, in_channels, in_height, in_width].


name: an optional name for the operation.


Example


conv1 = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
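
As a fuller illustration, here is a minimal, self-contained sketch of calling tf.nn.conv2d in the TF 1.x graph API. The input placeholder x, the weight variable W, and the 28x28x1 image shape are assumptions made up for this example; it also shows how 'SAME' keeps the spatial size at stride 1 while 'VALID' shrinks it.

import tensorflow as tf

# Hypothetical input: a batch of 28x28 grayscale images (NHWC layout).
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
# Kernel: 5x5 window, 1 input channel, 32 output channels.
W = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))

conv_same = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
conv_valid = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='VALID')

print(conv_same.shape)   # (?, 28, 28, 32) -- 'SAME' keeps the spatial size at stride 1
print(conv_valid.shape)  # (?, 24, 24, 32) -- 'VALID' shrinks it to 28 - 5 + 1 = 24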

2. The convolution function in version 1.0: tf.layers.conv2d


conv2d(
    inputs,
    filters,
    kernel_size,
    strides=(1, 1),
    padding='valid',
    data_format='channels_last',
    dilation_rate=(1, 1),
    activation=None,
    use_bias=True,
    kernel_initializer=None,
    bias_initializer=tf.zeros_initializer(),
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    trainable=True,
    name=None,
    reuse=None
)
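
Before looking at the source below, here is a minimal sketch of how tf.layers.conv2d is typically called in TF 1.x graph mode; the input placeholder and the hyper-parameters (32 filters, 5x5 kernel, ReLU) are illustrative assumptions, not values from this article.

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])  # hypothetical NHWC input

# A single call creates the kernel and bias variables and applies the convolution.
conv1 = tf.layers.conv2d(
    inputs=x,
    filters=32,              # number of output channels
    kernel_size=[5, 5],      # 5x5 window
    strides=(1, 1),
    padding='same',          # note: lowercase 'same'/'valid' in this API
    activation=tf.nn.relu,
    name='conv1')

print(conv1.shape)  # (?, 28, 28, 32)

Unlike tf.nn.conv2d, you do not create the kernel variable yourself; the layer manages it for you.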
Definition (TensorFlow source):
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================

# pylint: disable=unused-import,g-bad-import-order
"""Contains the convolutional layer classes and their functional aliases.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import six
from six.moves import xrange  # pylint: disable=redefined-builtin
import numpy as np

from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import nn
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import init_ops
from tensorflow.python.ops import standard_ops
from tensorflow.python.ops import variable_scope as vs

from tensorflow.python.layers import base
from tensorflow.python.layers import utils
class _Conv(base._Layer):  # pylint: disable=protected-access
  """Abstract nD convolution layer (private, used as implementation base).

  This layer creates a convolution kernel that is convolved
  (actually cross-correlated) with the layer input to produce a tensor of
  outputs. If `use_bias` is True (and a `bias_initializer` is provided),
  a bias vector is created and added to the outputs. Finally, if
  `activation` is not `None`, it is applied to the outputs as well.

  Arguments:
    rank: An integer, the rank of the convolution, e.g. "2" for 2D convolution.
    filters: Integer, the dimensionality of the output space (i.e. the number
      of filters in the convolution).
    kernel_size: An integer or tuple/list of n integers, specifying the
      length of the convolution window.
    strides: An integer or tuple/list of n integers,
      specifying the stride length of the convolution.
      Specifying any stride value != 1 is incompatible with specifying
      any `dilation_rate` value != 1.
    padding: One of `"valid"` or `"same"` (case-insensitive).
    data_format: A string, one of `channels_last` (default) or `channels_first`.
      The ordering of the dimensions in the inputs.
      `channels_last` corresponds to inputs with shape
      `(batch, ..., channels)` while `channels_first` corresponds to
      inputs with shape `(batch, channels, ...)`.
    dilation_rate: An integer or tuple/list of n integers, specifying
      the dilation rate to use for dilated convolution.
      Currently, specifying any `dilation_rate` value != 1 is
      incompatible with specifying any `strides` value != 1.
    activation: Activation function. Set it to None to maintain a
      linear activation.
    use_bias: Boolean, whether the layer uses a bias.
    kernel_initializer: An initializer for the convolution kernel.
    bias_initializer: An initializer for the bias vector. If None, no bias will
      be applied.
    kernel_regularizer: Optional regularizer for the convolution kernel.
    bias_regularizer: Optional regularizer for the bias vector.
    activity_regularizer: Regularizer function for the output.
    trainable: Boolean, if `True` also add variables to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
    name: A string, the name of the layer.
  """

  def __init__(self, rank,
               filters,
               kernel_size,
               strides=1,
               padding='valid',
               data_format='channels_last',
               dilation_rate=1,
               activation=None,
               use_bias=True,
               kernel_initializer=None,
               bias_initializer=init_ops.zeros_initializer(),
               kernel_regularizer=None,
               bias_regularizer=None,
               activity_regularizer=None,
               trainable=True,
               name=None,
               **kwargs):
    super(_Conv, self).__init__(trainable=trainable,
                                name=name, **kwargs)
    self.rank = rank
    self.filters = filters
    self.kernel_size = utils.normalize_tuple(kernel_size, rank, 'kernel_size')
    self.strides = utils.normalize_tuple(strides, rank, 'strides')
    self.padding = utils.normalize_padding(padding)
    self.data_format = utils.normalize_data_format(data_format)
    self.dilation_rate = utils.normalize_tuple(
        dilation_rate, rank, 'dilation_rate')
    self.activation = activation
    self.use_bias = use_bias
    self.kernel_initializer = kernel_initializer
    self.bias_initializer = bias_initializer
    self.kernel_regularizer = kernel_regularizer
    self.bias_regularizer = bias_regularizer
    self.activity_regularizer = activity_regularizer

  def build(self, input_shape):
    if len(input_shape) != self.rank + 2:
      raise ValueError('Inputs should have rank ' +
                       str(self.rank + 2) +
                       '. Received input shape: ', str(input_shape))
    if self.data_format == 'channels_first':
      channel_axis = 1
    else:
      channel_axis = -1
    if input_shape[channel_axis] is None:
      raise ValueError('The channel dimension of the inputs '
                       'should be defined. Found `None`.')
    input_dim = input_shape[channel_axis]
    kernel_shape = self.kernel_size + (input_dim, self.filters)
    self.kernel = vs.get_variable('kernel',
                                  shape=kernel_shape,
                                  initializer=self.kernel_initializer,
                                  regularizer=self.kernel_regularizer,
                                  trainable=True,
                                  dtype=self.dtype)
    if self.use_bias:
      self.bias = vs.get_variable('bias',
                                  shape=(self.filters,),
                                  initializer=self.bias_initializer,
                                  regularizer=self.bias_regularizer,
                                  trainable=True,
                                  dtype=self.dtype)
    else:
      self.bias = None

  def call(self, inputs):
    outputs = nn.convolution(
        input=inputs,
        filter=self.kernel,
        dilation_rate=self.dilation_rate,
        strides=self.strides,
        padding=self.padding.upper(),
        data_format=utils.convert_data_format(self.data_format, self.rank + 2))
    if self.bias is not None:
      if self.rank != 2 and self.data_format == 'channels_first':
        # bias_add does not support channels_first for non-4D inputs.
        if self.rank == 1:
          bias = array_ops.reshape(self.bias, (1, self.filters, 1))
        if self.rank == 3:
          bias = array_ops.reshape(self.bias, (1, self.filters, 1, 1))
        outputs += bias
      else:
        outputs = nn.bias_add(
            outputs,
            self.bias,
            data_format=utils.convert_data_format(self.data_format, 4))
        # Note that we passed rank=4 because bias_add will only accept
        # NHWC and NCHW even if the rank of the inputs is 3 or 5.
    if self.activation is not None:
      return self.activation(outputs)
    return outputs
class Conv1D(_Conv):
  """1D convolution layer (e.g. temporal convolution).

  This layer creates a convolution kernel that is convolved
  (actually cross-correlated) with the layer input to produce a tensor of
  outputs. If `use_bias` is True (and a `bias_initializer` is provided),
  a bias vector is created and added to the outputs. Finally, if
  `activation` is not `None`, it is applied to the outputs as well.

  Arguments:
    filters: Integer, the dimensionality of the output space (i.e. the number
      of filters in the convolution).
    kernel_size: An integer or tuple/list of a single integer, specifying the
      length of the 1D convolution window.
    strides: An integer or tuple/list of a single integer,
      specifying the stride length of the convolution.
      Specifying any stride value != 1 is incompatible with specifying
      any `dilation_rate` value != 1.
    padding: One of `"valid"` or `"same"` (case-insensitive).
    data_format: A string, one of `channels_last` (default) or `channels_first`.
      The ordering of the dimensions in the inputs.
      `channels_last` corresponds to inputs with shape
      `(batch, length, channels)` while `channels_first` corresponds to
      inputs with shape `(batch, channels, length)`.
    dilation_rate: An integer or tuple/list of a single integer, specifying
      the dilation rate to use for dilated convolution.
      Currently, specifying any `dilation_rate` value != 1 is
      incompatible with specifying any `strides` value != 1.
    activation: Activation function. Set it to None to maintain a
      linear activation.
    use_bias: Boolean, whether the layer uses a bias.
    kernel_initializer: An initializer for the convolution kernel.
    bias_initializer: An initializer for the bias vector. If None, no bias will
      be applied.
    kernel_regularizer: Optional regularizer for the convolution kernel.
    bias_regularizer: Optional regularizer for the bias vector.
    activity_regularizer: Regularizer function for the output.
    trainable: Boolean, if `True` also add variables to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
    name: A string, the name of the layer.
  """

  def __init__(self, filters,
               kernel_size,
               strides=1,
               padding='valid',
               data_format='channels_last',
               dilation_rate=1,
               activation=None,
               use_bias=True,
               kernel_initializer=None,
               bias_initializer=init_ops.zeros_initializer(),
               kernel_regularizer=None,
               bias_regularizer=None,
               activity_regularizer=None,
               trainable=True,
               name=None,
               **kwargs):
    super(Conv1D, self).__init__(
        rank=1,
        filters=filters,
        kernel_size=kernel_size,
        strides=strides,
        padding=padding,
        data_format=data_format,
        dilation_rate=dilation_rate,
        activation=activation,
        use_bias=use_bias,
        kernel_initializer=kernel_initializer,
        bias_initializer=bias_initializer,
        kernel_regularizer=kernel_regularizer,
        bias_regularizer=bias_regularizer,
        activity_regularizer=activity_regularizer,
        trainable=trainable,
        name=name, **kwargs)


def conv1d(inputs,
           filters,
           kernel_size,
           strides=1,
           padding='valid',
           data_format='channels_last',
           dilation_rate=1,
           activation=None,
           use_bias=True,
           kernel_initializer=None,
           bias_initializer=init_ops.zeros_initializer(),
           kernel_regularizer=None,
           bias_regularizer=None,
           activity_regularizer=None,
           trainable=True,
           name=None,
           reuse=None):
  """Functional interface for 1D convolution layer (e.g. temporal convolution).

  This layer creates a convolution kernel that is convolved
  (actually cross-correlated) with the layer input to produce a tensor of
  outputs. If `use_bias` is True (and a `bias_initializer` is provided),
  a bias vector is created and added to the outputs. Finally, if
  `activation` is not `None`, it is applied to the outputs as well.

  Arguments:
    inputs: Tensor input.
    filters: Integer, the dimensionality of the output space (i.e. the number
      of filters in the convolution).
    kernel_size: An integer or tuple/list of a single integer, specifying the
      length of the 1D convolution window.
    strides: An integer or tuple/list of a single integer,
      specifying the stride length of the convolution.
      Specifying any stride value != 1 is incompatible with specifying
      any `dilation_rate` value != 1.
    padding: One of `"valid"` or `"same"` (case-insensitive).
    data_format: A string, one of `channels_last` (default) or `channels_first`.
      The ordering of the dimensions in the inputs.
      `channels_last` corresponds to inputs with shape
      `(batch, length, channels)` while `channels_first` corresponds to
      inputs with shape `(batch, channels, length)`.
    dilation_rate: An integer or tuple/list of a single integer, specifying
      the dilation rate to use for dilated convolution.
      Currently, specifying any `dilation_rate` value != 1 is
      incompatible with specifying any `strides` value != 1.
    activation: Activation function. Set it to None to maintain a
      linear activation.
    use_bias: Boolean, whether the layer uses a bias.
    kernel_initializer: An initializer for the convolution kernel.
    bias_initializer: An initializer for the bias vector. If None, no bias will
      be applied.
    kernel_regularizer: Optional regularizer for the convolution kernel.
    bias_regularizer: Optional regularizer for the bias vector.
    activity_regularizer: Regularizer function for the output.
    trainable: Boolean, if `True` also add variables to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
    name: A string, the name of the layer.
    reuse: Boolean, whether to reuse the weights of a previous layer
      by the same name.

  Returns:
    Output tensor.
  """
  layer = Conv1D(
      filters=filters,
      kernel_size=kernel_size,
      strides=strides,
      padding=padding,
      data_format=data_format,
      dilation_rate=dilation_rate,
      activation=activation,
      use_bias=use_bias,
      kernel_initializer=kernel_initializer,
      bias_initializer=bias_initializer,
      kernel_regularizer=kernel_regularizer,
      bias_regularizer=bias_regularizer,
      activity_regularizer=activity_regularizer,
      trainable=trainable,
      name=name,
      _reuse=reuse,
      _scope=name)
  return layer.apply(inputs)
class Conv2D(_Conv):
  """2D convolution layer (e.g. spatial convolution over images).

  This layer creates a convolution kernel that is convolved
  (actually cross-correlated) with the layer input to produce a tensor of
  outputs. If `use_bias` is True (and a `bias_initializer` is provided),
  a bias vector is created and added to the outputs. Finally, if
  `activation` is not `None`, it is applied to the outputs as well.

  Arguments:
    filters: Integer, the dimensionality of the output space (i.e. the number
      of filters in the convolution).
    kernel_size: An integer or tuple/list of 2 integers, specifying the
      width and height of the 2D convolution window.
      Can be a single integer to specify the same value for
      all spatial dimensions.
    strides: An integer or tuple/list of 2 integers,
      specifying the strides of the convolution along the height and width.
      Can be a single integer to specify the same value for
      all spatial dimensions.
      Specifying any stride value != 1 is incompatible with specifying
      any `dilation_rate` value != 1.
    padding: One of `"valid"` or `"same"` (case-insensitive).
    data_format: A string, one of `channels_last` (default) or `channels_first`.
      The ordering of the dimensions in the inputs.
      `channels_last` corresponds to inputs with shape
      `(batch, height, width, channels)` while `channels_first` corresponds to
      inputs with shape `(batch, channels, height, width)`.
    dilation_rate: An integer or tuple/list of 2 integers, specifying
      the dilation rate to use for dilated convolution.
      Can be a single integer to specify the same value for
      all spatial dimensions.
      Currently, specifying any `dilation_rate` value != 1 is
      incompatible with specifying any stride value != 1.
    activation: Activation function. Set it to None to maintain a
      linear activation.
    use_bias: Boolean, whether the layer uses a bias.
    kernel_initializer: An initializer for the convolution kernel.
    bias_initializer: An initializer for the bias vector. If None, no bias will
      be applied.
    kernel_regularizer: Optional regularizer for the convolution kernel.
    bias_regularizer: Optional regularizer for the bias vector.
    activity_regularizer: Regularizer function for the output.
    trainable: Boolean, if `True` also add variables to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
    name: A string, the name of the layer.
  """

  def __init__(self, filters,
               kernel_size,
               strides=(1, 1),
               padding='valid',
               data_format='channels_last',
               dilation_rate=(1, 1),
               activation=None,
               use_bias=True,
               kernel_initializer=None,
               bias_initializer=init_ops.zeros_initializer(),
               kernel_regularizer=None,
               bias_regularizer=None,
               activity_regularizer=None,
               trainable=True,
               name=None,
               **kwargs):
    super(Conv2D, self).__init__(
        rank=2,
        filters=filters,
        kernel_size=kernel_size,
        strides=strides,
        padding=padding,
        data_format=data_format,
        dilation_rate=dilation_rate,
        activation=activation,
        use_bias=use_bias,
        kernel_initializer=kernel_initializer,
        bias_initializer=bias_initializer,
        kernel_regularizer=kernel_regularizer,
        bias_regularizer=bias_regularizer,
        activity_regularizer=activity_regularizer,
        trainable=trainable,
        name=name, **kwargs)


def conv2d(inputs,
           filters,
           kernel_size,
           strides=(1, 1),
           padding='valid',
           data_format='channels_last',
           dilation_rate=(1, 1),
           activation=None,
           use_bias=True,
           kernel_initializer=None,
           bias_initializer=init_ops.zeros_initializer(),
           kernel_regularizer=None,
           bias_regularizer=None,
           activity_regularizer=None,
           trainable=True,
           name=None,
           reuse=None):
  """Functional interface for the 2D convolution layer.

  This layer creates a convolution kernel that is convolved
  (actually cross-correlated) with the layer input to produce a tensor of
  outputs. If `use_bias` is True (and a `bias_initializer` is provided),
  a bias vector is created and added to the outputs. Finally, if
  `activation` is not `None`, it is applied to the outputs as well.

  Arguments:
    inputs: Tensor input.
    filters: Integer, the dimensionality of the output space (i.e. the number
      of filters in the convolution).
    kernel_size: An integer or tuple/list of 2 integers, specifying the
      width and height of the 2D convolution window.
      Can be a single integer to specify the same value for
      all spatial dimensions.
    strides: An integer or tuple/list of 2 integers,
      specifying the strides of the convolution along the height and width.
      Can be a single integer to specify the same value for
      all spatial dimensions.
      Specifying any stride value != 1 is incompatible with specifying
      any `dilation_rate` value != 1.
    padding: One of `"valid"` or `"same"` (case-insensitive).
    data_format: A string, one of `channels_last` (default) or `channels_first`.
      The ordering of the dimensions in the inputs.
      `channels_last` corresponds to inputs with shape
      `(batch, height, width, channels)` while `channels_first` corresponds to
      inputs with shape `(batch, channels, height, width)`.
    dilation_rate: An integer or tuple/list of 2 integers, specifying
      the dilation rate to use for dilated convolution.
      Can be a single integer to specify the same value for
      all spatial dimensions.
      Currently, specifying any `dilation_rate` value != 1 is
      incompatible with specifying any stride value != 1.
    activation: Activation function. Set it to None to maintain a
      linear activation.
    use_bias: Boolean, whether the layer uses a bias.
    kernel_initializer: An initializer for the convolution kernel.
    bias_initializer: An initializer for the bias vector. If None, no bias will
      be applied.
    kernel_regularizer: Optional regularizer for the convolution kernel.
    bias_regularizer: Optional regularizer for the bias vector.
    activity_regularizer: Regularizer function for the output.
    trainable: Boolean, if `True` also add variables to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
    name: A string, the name of the layer.
    reuse: Boolean, whether to reuse the weights of a previous layer
      by the same name.

  Returns:
    Output tensor.
  """
  layer = Conv2D(
      filters=filters,
      kernel_size=kernel_size,
      strides=strides,
      padding=padding,
      data_format=data_format,
      dilation_rate=dilation_rate,
      activation=activation,
      use_bias=use_bias,
      kernel_initializer=kernel_initializer,
      bias_initializer=bias_initializer,
      kernel_regularizer=kernel_regularizer,
      bias_regularizer=bias_regularizer,
      activity_regularizer=activity_regularizer,
      trainable=trainable,
      name=name,
      _reuse=reuse,
      _scope=name)
  return layer.apply(inputs)
class Conv3D(_Conv):
  """3D convolution layer (e.g. spatial convolution over volumes).

  This layer creates a convolution kernel that is convolved
  (actually cross-correlated) with the layer input to produce a tensor of
  outputs. If `use_bias` is True (and a `bias_initializer` is provided),
  a bias vector is created and added to the outputs. Finally, if
  `activation` is not `None`, it is applied to the outputs as well.

  Arguments:
    filters: Integer, the dimensionality of the output space (i.e. the number
      of filters in the convolution).
    kernel_size: An integer or tuple/list of 3 integers, specifying the
      depth, height and width of the 3D convolution window.
      Can be a single integer to specify the same value for
      all spatial dimensions.
    strides: An integer or tuple/list of 3 integers,
      specifying the strides of the convolution along the depth,
      height and width.
      Can be a single integer to specify the same value for
      all spatial dimensions.
      Specifying any stride value != 1 is incompatible with specifying
      any `dilation_rate` value != 1.
    padding: One of `"valid"` or `"same"` (case-insensitive).
    data_format: A string, one of `channels_last` (default) or `channels_first`.
      The ordering of the dimensions in the inputs.
      `channels_last` corresponds to inputs with shape
      `(batch, depth, height, width, channels)` while `channels_first`
      corresponds to inputs with shape
      `(batch, channels, depth, height, width)`.
    dilation_rate: An integer or tuple/list of 3 integers, specifying
      the dilation rate to use for dilated convolution.
      Can be a single integer to specify the same value for
      all spatial dimensions.
      Currently, specifying any `dilation_rate` value != 1 is
      incompatible with specifying any stride value != 1.
    activation: Activation function. Set it to None to maintain a
      linear activation.
    use_bias: Boolean, whether the layer uses a bias.
    kernel_initializer: An initializer for the convolution kernel.
    bias_initializer: An initializer for the bias vector. If None, no bias will
      be applied.
    kernel_regularizer: Optional regularizer for the convolution kernel.
    bias_regularizer: Optional regularizer for the bias vector.
    activity_regularizer: Regularizer function for the output.
    trainable: Boolean, if `True` also add variables to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
    name: A string, the name of the layer.
  """

  def __init__(self, filters,
               kernel_size,
               strides=(1, 1, 1),
               padding='valid',
               data_format='channels_last',
               dilation_rate=(1, 1, 1),
               activation=None,
               use_bias=True,
               kernel_initializer=None,
               bias_initializer=init_ops.zeros_initializer(),
               kernel_regularizer=None,
               bias_regularizer=None,
               activity_regularizer=None,
               trainable=True,
               name=None,
               **kwargs):
    super(Conv3D, self).__init__(
        rank=3,
        filters=filters,
        kernel_size=kernel_size,
        strides=strides,
        padding=padding,
        data_format=data_format,
        dilation_rate=dilation_rate,
        activation=activation,
        use_bias=use_bias,
        kernel_initializer=kernel_initializer,
        bias_initializer=bias_initializer,
        kernel_regularizer=kernel_regularizer,
        bias_regularizer=bias_regularizer,
        activity_regularizer=activity_regularizer,
        trainable=trainable,
        name=name, **kwargs)


def conv3d(inputs,
           filters,
           kernel_size,
           strides=(1, 1, 1),
           padding='valid',
           data_format='channels_last',
           dilation_rate=(1, 1, 1),
           activation=None,
           use_bias=True,
           kernel_initializer=None,
           bias_initializer=init_ops.zeros_initializer(),
           kernel_regularizer=None,
           bias_regularizer=None,
           activity_regularizer=None,
           trainable=True,
           name=None,
           reuse=None):
  """Functional interface for the 3D convolution layer.

  This layer creates a convolution kernel that is convolved
  (actually cross-correlated) with the layer input to produce a tensor of
  outputs. If `use_bias` is True (and a `bias_initializer` is provided),
  a bias vector is created and added to the outputs. Finally, if
  `activation` is not `None`, it is applied to the outputs as well.

  Arguments:
    inputs: Tensor input.
    filters: Integer, the dimensionality of the output space (i.e. the number
      of filters in the convolution).
    kernel_size: An integer or tuple/list of 3 integers, specifying the
      depth, height and width of the 3D convolution window.
      Can be a single integer to specify the same value for
      all spatial dimensions.
    strides: An integer or tuple/list of 3 integers,
      specifying the strides of the convolution along the depth,
      height and width.
      Can be a single integer to specify the same value for
      all spatial dimensions.
      Specifying any stride value != 1 is incompatible with specifying
      any `dilation_rate` value != 1.
    padding: One of `"valid"` or `"same"` (case-insensitive).
    data_format: A string, one of `channels_last` (default) or `channels_first`.
      The ordering of the dimensions in the inputs.
      `channels_last` corresponds to inputs with shape
      `(batch, depth, height, width, channels)` while `channels_first`
      corresponds to inputs with shape
      `(batch, channels, depth, height, width)`.
    dilation_rate: An integer or tuple/list of 3 integers, specifying
      the dilation rate to use for dilated convolution.
      Can be a single integer to specify the same value for
      all spatial dimensions.
      Currently, specifying any `dilation_rate` value != 1 is
      incompatible with specifying any stride value != 1.
    activation: Activation function. Set it to None to maintain a
      linear activation.
    use_bias: Boolean, whether the layer uses a bias.
    kernel_initializer: An initializer for the convolution kernel.
    bias_initializer: An initializer for the bias vector. If None, no bias will
      be applied.
    kernel_regularizer: Optional regularizer for the convolution kernel.
    bias_regularizer: Optional regularizer for the bias vector.
    activity_regularizer: Regularizer function for the output.
    trainable: Boolean, if `True` also add variables to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
    name: A string, the name of the layer.
    reuse: Boolean, whether to reuse the weights of a previous layer
      by the same name.

  Returns:
    Output tensor.
  """
  layer = Conv3D(
      filters=filters,
      kernel_size=kernel_size,
      strides=strides,
      padding=padding,
      data_format=data_format,
      dilation_rate=dilation_rate,
      activation=activation,
      use_bias=use_bias,
      kernel_initializer=kernel_initializer,
      bias_initializer=bias_initializer,
      kernel_regularizer=kernel_regularizer,
      bias_regularizer=bias_regularizer,
      activity_regularizer=activity_regularizer,
      trainable=trainable,
      name=name,
      _reuse=reuse,
      _scope=name)
  return layer.apply(inputs)
class SeparableConv2D(Conv2D):
  """Depthwise separable 2D convolution.

  This layer performs a depthwise convolution that acts separately on
  channels, followed by a pointwise convolution that mixes channels.
  If `use_bias` is True and a bias initializer is provided,
  it adds a bias vector to the output.
  It then optionally applies an activation function to produce the final output.

  Arguments:
    filters: Integer, the dimensionality of the output space (i.e. the number
      of filters in the convolution).
    kernel_size: A tuple or list of 2 integers specifying the spatial
      dimensions of the filters. Can be a single integer to specify the same
      value for all spatial dimensions.
    strides: A tuple or list of 2 positive integers specifying the strides
      of the convolution. Can be a single integer to specify the same value for
      all spatial dimensions.
      Specifying any `stride` value != 1 is incompatible with specifying
      any `dilation_rate` value != 1.
    padding: One of `"valid"` or `"same"` (case-insensitive).
    data_format: A string, one of `channels_last` (default) or `channels_first`.
      The ordering of the dimensions in the inputs.
      `channels_last` corresponds to inputs with shape
      `(batch, height, width, channels)` while `channels_first` corresponds to
      inputs with shape `(batch, channels, height, width)`.
    dilation_rate: An integer or tuple/list of 2 integers, specifying
      the dilation rate to use for dilated convolution.
      Can be a single integer to specify the same value for
      all spatial dimensions.
      Currently, specifying any `dilation_rate` value != 1 is
      incompatible with specifying any stride value != 1.
    depth_multiplier: The number of depthwise convolution output channels for
      each input channel. The total number of depthwise convolution output
      channels will be equal to `num_filters_in * depth_multiplier`.
    activation: Activation function. Set it to None to maintain a
      linear activation.
    use_bias: Boolean, whether the layer uses a bias.
    depthwise_initializer: An initializer for the depthwise convolution kernel.
    pointwise_initializer: An initializer for the pointwise convolution kernel.
    bias_initializer: An initializer for the bias vector. If None, no bias will
      be applied.
    depthwise_regularizer: Optional regularizer for the depthwise
      convolution kernel.
    pointwise_regularizer: Optional regularizer for the pointwise
      convolution kernel.
    bias_regularizer: Optional regularizer for the bias vector.
    activity_regularizer: Regularizer function for the output.
    trainable: Boolean, if `True` also add variables to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
    name: A string, the name of the layer.
  """

  def __init__(self, filters,
               kernel_size,
               strides=(1, 1),
               padding='valid',
               data_format='channels_last',
               dilation_rate=(1, 1),
               depth_multiplier=1,
               activation=None,
               use_bias=True,
               depthwise_initializer=None,
               pointwise_initializer=None,
               bias_initializer=init_ops.zeros_initializer(),
               depthwise_regularizer=None,
               pointwise_regularizer=None,
               bias_regularizer=None,
               activity_regularizer=None,
               trainable=True,
               name=None,
               **kwargs):
    super(SeparableConv2D, self).__init__(
        filters=filters,
        kernel_size=kernel_size,
        strides=strides,
        padding=padding,
        data_format=data_format,
        dilation_rate=dilation_rate,
        activation=activation,
        use_bias=use_bias,
        bias_regularizer=bias_regularizer,
        activity_regularizer=activity_regularizer,
        trainable=trainable,
        name=name,
        **kwargs)
    self.depth_multiplier = depth_multiplier
    self.depthwise_initializer = depthwise_initializer
    self.pointwise_initializer = pointwise_initializer
    self.depthwise_regularizer = depthwise_regularizer
    self.pointwise_regularizer = pointwise_regularizer

  def build(self, input_shape):
    if len(input_shape) < 4:
      raise ValueError('Inputs to `SeparableConv2D` should have rank 4. '
                       'Received input shape: ', str(input_shape))
    if self.data_format == 'channels_first':
      channel_axis = 1
    else:
      channel_axis = 3
    if input_shape[channel_axis] is None:
      raise ValueError('The channel dimension of the inputs to '
                       '`SeparableConv2D` '
                       'should be defined. Found `None`.')
    input_dim = int(input_shape[channel_axis])
    depthwise_kernel_shape = (self.kernel_size[0],
                              self.kernel_size[1],
                              input_dim,
                              self.depth_multiplier)
    pointwise_kernel_shape = (1, 1,
                              self.depth_multiplier * input_dim,
                              self.filters)
    self.depthwise_kernel = vs.get_variable(
        'depthwise_kernel',
        shape=depthwise_kernel_shape,
        initializer=self.depthwise_initializer,
        regularizer=self.depthwise_regularizer,
        trainable=True,
        dtype=self.dtype)
    self.pointwise_kernel = vs.get_variable(
        'pointwise_kernel',
        shape=pointwise_kernel_shape,
        initializer=self.pointwise_initializer,
        regularizer=self.pointwise_regularizer,
        trainable=True,
        dtype=self.dtype)
    if self.use_bias:
      self.bias = vs.get_variable('bias',
                                  shape=(self.filters,),
                                  initializer=self.bias_initializer,
                                  regularizer=self.bias_regularizer,
                                  trainable=True,
                                  dtype=self.dtype)
    else:
      self.bias = None

  def call(self, inputs):
    if self.data_format == 'channels_first':
      # Reshape to channels last.
      inputs = array_ops.transpose(inputs, (0, 2, 3, 1))
    # Apply the actual ops.
    outputs = nn.separable_conv2d(
        inputs,
        self.depthwise_kernel,
        self.pointwise_kernel,
        strides=(1,) + self.strides + (1,),
        padding=self.padding.upper(),
        rate=self.dilation_rate)
    if self.data_format == 'channels_first':
      # Reshape to channels first.
      outputs = array_ops.transpose(outputs, (0, 3, 1, 2))
    if self.bias is not None:
      outputs = nn.bias_add(
          outputs,
          self.bias,
          data_format=utils.convert_data_format(self.data_format, ndim=4))
    if self.activation is not None:
      return self.activation(outputs)
    return outputs


def separable_conv2d(inputs,
                     filters,
                     kernel_size,
                     strides=(1, 1),
                     padding='valid',
                     data_format='channels_last',
                     dilation_rate=(1, 1),
                     depth_multiplier=1,
                     activation=None,
                     use_bias=True,
                     depthwise_initializer=None,
                     pointwise_initializer=None,
                     bias_initializer=init_ops.zeros_initializer(),
                     depthwise_regularizer=None,
                     pointwise_regularizer=None,
                     bias_regularizer=None,
                     activity_regularizer=None,
                     trainable=True,
                     name=None,
                     reuse=None):
  """Functional interface for the depthwise separable 2D convolution layer.

  This layer performs a depthwise convolution that acts separately on
  channels, followed by a pointwise convolution that mixes channels.
  If `use_bias` is True and a bias initializer is provided,
  it adds a bias vector to the output.
  It then optionally applies an activation function to produce the final output.

  Arguments:
    inputs: Input tensor.
    filters: Integer, the dimensionality of the output space (i.e. the number
      of filters in the convolution).
    kernel_size: A tuple or list of 2 integers specifying the spatial
      dimensions of the filters. Can be a single integer to specify the same
      value for all spatial dimensions.
    strides: A tuple or list of 2 positive integers specifying the strides
      of the convolution. Can be a single integer to specify the same value for
      all spatial dimensions.
      Specifying any `stride` value != 1 is incompatible with specifying
      any `dilation_rate` value != 1.
    padding: One of `"valid"` or `"same"` (case-insensitive).
    data_format: A string, one of `channels_last` (default) or `channels_first`.
      The ordering of the dimensions in the inputs.
      `channels_last` corresponds to inputs with shape
      `(batch, height, width, channels)` while `channels_first` corresponds to
      inputs with shape `(batch, channels, height, width)`.
    dilation_rate: An integer or tuple/list of 2 integers, specifying
      the dilation rate to use for dilated convolution.
      Can be a single integer to specify the same value for
      all spatial dimensions.
      Currently, specifying any `dilation_rate` value != 1 is
      incompatible with specifying any stride value != 1.
    depth_multiplier: The number of depthwise convolution output channels for
      each input channel. The total number of depthwise convolution output
      channels will be equal to `num_filters_in * depth_multiplier`.
    activation: Activation function. Set it to None to maintain a
      linear activation.
    use_bias: Boolean, whether the layer uses a bias.
    depthwise_initializer: An initializer for the depthwise convolution kernel.
    pointwise_initializer: An initializer for the pointwise convolution kernel.
    bias_initializer: An initializer for the bias vector. If None, no bias will
      be applied.
    depthwise_regularizer: Optional regularizer for the depthwise
      convolution kernel.
    pointwise_regularizer: Optional regularizer for the pointwise
      convolution kernel.
    bias_regularizer: Optional regularizer for the bias vector.
    activity_regularizer: Regularizer function for the output.
    trainable: Boolean, if `True` also add variables to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
    name: A string, the name of the layer.
    reuse: Boolean, whether to reuse the weights of a previous layer
      by the same name.

  Returns:
    Output tensor.
  """
  layer = SeparableConv2D(
      filters=filters,
      kernel_size=kernel_size,
      strides=strides,
      padding=padding,
      data_format=data_format,
      dilation_rate=dilation_rate,
      depth_multiplier=depth_multiplier,
      activation=activation,
      use_bias=use_bias,
      depthwise_initializer=depthwise_initializer,
      pointwise_initializer=pointwise_initializer,
      bias_initializer=bias_initializer,
      depthwise_regularizer=depthwise_regularizer,
      pointwise_regularizer=pointwise_regularizer,
      bias_regularizer=bias_regularizer,
      activity_regularizer=activity_regularizer,
      trainable=trainable,
      name=name,
      _reuse=reuse,
      _scope=name)
  return layer.apply(inputs)
class Conv2DTranspose(Conv2D):
  """Transposed convolution layer (sometimes called Deconvolution).

  The need for transposed convolutions generally arises
  from the desire to use a transformation going in the opposite direction
  of a normal convolution, i.e., from something that has the shape of the
  output of some convolution to something that has the shape of its input
  while maintaining a connectivity pattern that is compatible with
  said convolution.

  Arguments:
    filters: Integer, the dimensionality of the output space (i.e. the number
      of filters in the convolution).
    kernel_size: A tuple or list of 2 positive integers specifying the spatial
      dimensions of the filters. Can be a single integer to specify the same
      value for all spatial dimensions.
    strides: A tuple or list of 2 positive integers specifying the strides
      of the convolution. Can be a single integer to specify the same value for
      all spatial dimensions.
    padding: one of `"valid"` or `"same"` (case-insensitive).
    data_format: A string, one of `channels_last` (default) or `channels_first`.
      The ordering of the dimensions in the inputs.
      `channels_last` corresponds to inputs with shape
      `(batch, height, width, channels)` while `channels_first` corresponds to
      inputs with shape `(batch, channels, height, width)`.
    ...

(The source listing is truncated here.)
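
To connect the listing back to practice, the sketch below calls the functional wrappers defined in this file (tf.layers.conv1d, conv3d, separable_conv2d and conv2d_transpose) on made-up dummy inputs in TF 1.x graph mode; all shapes and hyper-parameters are illustrative assumptions, not values from this article.

import tensorflow as tf

# Hypothetical channels_last inputs: a sequence, a volume, and an image batch.
seq = tf.placeholder(tf.float32, [None, 100, 8])          # (batch, length, channels)
vol = tf.placeholder(tf.float32, [None, 16, 16, 16, 1])   # (batch, depth, height, width, channels)
img = tf.placeholder(tf.float32, [None, 32, 32, 3])       # (batch, height, width, channels)

c1 = tf.layers.conv1d(seq, filters=16, kernel_size=3, padding='same')
c3 = tf.layers.conv3d(vol, filters=8, kernel_size=(3, 3, 3), padding='valid')
sep = tf.layers.separable_conv2d(img, filters=64, kernel_size=(3, 3),
                                 depth_multiplier=2, padding='same')
up = tf.layers.conv2d_transpose(img, filters=16, kernel_size=(3, 3),
                                strides=(2, 2), padding='same')

print(c1.shape)   # (?, 100, 16)      -- 'same' keeps the length
print(c3.shape)   # (?, 14, 14, 14, 8) -- 'valid': 16 - 3 + 1 = 14
print(sep.shape)  # (?, 32, 32, 64)
print(up.shape)   # (?, 64, 64, 16)   -- transposed conv with stride 2 upsamples 32 -> 64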
               
                                           
                       
                 

