The fundamental understanding that neural networks are simply mathematical expressions leads to very simple implementations. Recall from the previous chapter that a neural network can be written like this:
func affine(weights [][]float64, inputs []float64) []float64 {
	return activation(matVecMul(weights, inputs))
}
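To make the snippet above runnable, here is a minimal sketch that fills in the two helpers it calls. The book does not show them at this point, so the `matVecMul` and `activation` bodies below are assumptions: a plain matrix-vector product and a sigmoid activation.

```go
package main

import (
	"fmt"
	"math"
)

// matVecMul computes the matrix-vector product Wx.
func matVecMul(weights [][]float64, inputs []float64) []float64 {
	out := make([]float64, len(weights))
	for i, row := range weights {
		for j, w := range row {
			out[i] += w * inputs[j]
		}
	}
	return out
}

// activation applies the sigmoid function elementwise
// (assumed here; any nonlinearity would do).
func activation(xs []float64) []float64 {
	out := make([]float64, len(xs))
	for i, x := range xs {
		out[i] = 1 / (1 + math.Exp(-x))
	}
	return out
}

func affine(weights [][]float64, inputs []float64) []float64 {
	return activation(matVecMul(weights, inputs))
}

func main() {
	W := [][]float64{
		{1, 0},
		{0, 1},
	}
	x := []float64{0, 0}
	fmt.Println(affine(W, x)) // sigmoid(0) = 0.5 for each element: [0.5 0.5]
}
```

Note that `affine` as written omits a bias term; the Gorgonia version later in this section adds one.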
If we rewrite the code as a mathematical equation, we can write a neural network like this:
$$f(x) = \sigma(Wx)$$
A side note: $\sigma(Wx)$ is the same as `activation(matVecMul(weights, inputs))` in the preceding code.
We can simply write it out using Gorgonia, adding a bias term `b`, like this:
import (
	G "gorgonia.org/gorgonia"
	"gorgonia.org/tensor"
)

var Float tensor.Dtype = tensor.Float64

func main() {
	g := G.NewGraph()

	// N is the batch size, defined elsewhere.
	x := G.NewMatrix(g, Float, G.WithName("x"), G.WithShape(N, 728))
	w := G.NewMatrix(g, Float, G.WithName("w"), G.WithShape(728, 800),
		G.WithInit(G.Uniform(1.0)))
b := G.NewMatrix(g, Float, G.WithName...