These specific neurons with shared weights and bias can also be thought of as a single neuron sliding over the whole input matrix with spatially limited connectivity. At each step, this neuron is only spatially connected to the local region of the input volume (H × W × D) it is currently sliding over. Given this limited input of dimensions kH × kW × D for a neuron with filter size (kH, kW), the neuron still works like the ones modeled in our first chapter—it linearly combines the input values (kH × kW × D values) and then applies an activation function to the sum (a linear or non-linear function). Mathematically, the response, z_{i,j}, of the neuron when presented with the input patch starting at position (i, j) can be expressed as follows:
z_{i,j} = σ(w · x_{i,j} + b)

where w is the neuron's weights (that is, a three-dimensional matrix of shape kH × kW × D), b is the neuron's bias, σ is the activation function, and x_{i,j} is the input patch of shape kH × kW × D starting at position (i, j).
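This sliding computation can be illustrated with a short NumPy sketch (not the book's own code): a single neuron with a kH × kW × D filter is applied at every valid position of an H × W × D input, with stride 1 and no padding. The function name `neuron_response` and the choice of ReLU as the activation are illustrative assumptions.

```python
import numpy as np

def neuron_response(x, w, b, activation=lambda s: np.maximum(s, 0.0)):
    """Slide a single neuron over the input volume x (H x W x D).

    At each position (i, j), the neuron sees only the local patch of
    shape kH x kW x D, linearly combines its values with the shared
    weights w, adds the bias b, and applies the activation function.
    """
    H, W, D = x.shape
    kH, kW, _ = w.shape
    out_h, out_w = H - kH + 1, W - kW + 1  # valid positions, stride 1
    z = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i:i + kH, j:j + kW, :]             # local region the neuron is connected to
            z[i, j] = activation(np.sum(w * patch) + b)  # linear combination + activation
    return z

# Example: a 5 x 5 x 3 input and a 3 x 3 x 3 filter give a 3 x 3 response map.
x = np.random.rand(5, 5, 3)
w = np.random.rand(3, 3, 3)
b = 0.1
print(neuron_response(x, w, b).shape)  # (3, 3)
```

Each output value z[i, j] is exactly the formula above evaluated at one position; in practice, frameworks compute all positions at once as a convolution rather than with explicit Python loops.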