Some neural network HW
Let \(f(x) = \sin(x^3 + 2(x^3+1))\).
- Draw out an expression graph for \(f\) as it is written. Be sure to reuse any subexpressions that you can.
- Trace the evaluation of \(f(2)\) through the expression graph (a short sketch of this trace follows the list).
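The sketch below is a minimal Python illustration, not part of the assignment: it evaluates the shared node \(x^3\) once and reuses it, printing each node's value in the order the graph would compute them at \(x = 2\). The function name `f_traced` and the node labels are only illustrative.

```python
import math

def f_traced(x):
    # Evaluate each node of the expression graph once, in topological order.
    cube = x ** 3               # shared node: x^3 (feeds two parents)
    inner = cube + 1            # x^3 + 1
    scaled = 2 * inner          # 2*(x^3 + 1)
    total = cube + scaled       # x^3 + 2*(x^3 + 1)
    result = math.sin(total)    # sin(...)
    for label, value in [("x^3", cube), ("x^3 + 1", inner),
                         ("2*(x^3 + 1)", scaled), ("sum", total),
                         ("sin", result)]:
        print(f"{label:14s}= {value}")
    return result

f_traced(2)  # x^3 = 8, sum = 8 + 2*9 = 26, f(2) = sin(26) ≈ 0.7626
```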
Let \(f(x,y) = \sin(x^3 + 2(x^3+y^2))\).
- Draw out an expression graph for \(f\) as it is written. Be sure to reuse any subexpressions that you can.
- Trace the evaluation of \(f(2,-1)\) through the expression graph (a worked check follows the list).
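As a check on that trace (not part of the original problem statement), the node values at \((x,y) = (2,-1)\) are
\[
x^3 = 8,\qquad y^2 = 1,\qquad x^3 + y^2 = 9,\qquad 2(x^3 + y^2) = 18,\qquad x^3 + 2(x^3 + y^2) = 26,\qquad f(2,-1) = \sin(26) \approx 0.763.
\]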
Compute the convolution of the data \(D\) with the kernel \(K\) given by
\[
D = \begin{bmatrix}1 & 2 & 3 & 4 & 5 & 6\end{bmatrix}
\quad \text{ and } \quad
K = \begin{bmatrix}1 & -2 & 1\end{bmatrix}.
\]
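The following is a minimal sketch, not part of the exercise, that checks the computation under the assumption of a "valid" convolution with no padding; since \(K\) is symmetric, flipping the kernel (true convolution) and leaving it unflipped (cross-correlation) give the same result.

```python
D = [1, 2, 3, 4, 5, 6]
K = [1, -2, 1]

# Slide the (symmetric) kernel across D; "valid" positions only, no padding.
valid = [sum(K[j] * D[i + j] for j in range(len(K)))
         for i in range(len(D) - len(K) + 1)]
print(valid)  # [0, 0, 0, 0]: K is a second-difference kernel and D is linear
```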
- Consider the two two-dimensional kernels \[
K_1 = \begin{bmatrix}1&1&1\\1&-8&1\\1&1&1\end{bmatrix}
\quad \text{ and } \quad
K_2 = \begin{bmatrix}1&1&1&1&1\\1&1&1&1&1\\1&1&-24&1&1\\1&1&1&1&1\\1&1&1&1&1\end{bmatrix}.
\]
- Could these be appropriate for edge detection in image processing? Why or why not?
- What differences might you expect between their behaviors? (A small numerical sketch follows this list.)
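As a rough way to explore those questions (not part of the exercise), the sketch below checks that both kernels sum to zero and applies them to a tiny step-edge image. The "same"-size, zero-padded correlation is an assumption; because both kernels are symmetric, it coincides with convolution.

```python
import numpy as np

K1 = np.array([[1, 1, 1],
               [1, -8, 1],
               [1, 1, 1]], dtype=float)
K2 = np.ones((5, 5))
K2[2, 2] = -24

print(K1.sum(), K2.sum())  # both 0.0: flat (constant) regions map to zero

# Tiny test image with a vertical step edge.
img = np.zeros((7, 7))
img[:, 4:] = 1.0

def correlate_same(image, kernel):
    # Zero-padded, same-size sliding-window correlation.
    r = kernel.shape[0] // 2
    padded = np.pad(image, r)
    out = np.zeros_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + kernel.shape[0], j:j + kernel.shape[1]]
            out[i, j] = np.sum(window * kernel)
    return out

print(correlate_same(img, K1))  # response along the step edge (and the zero-padded border)
print(correlate_same(img, K2))  # the 5x5 kernel spreads its response over a wider band
```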
- The neural network shown in Figure 1 below consists of three layers:
- the input layer,
- one hidden layer, and
- the output layer.
Let’s also suppose that the input layer has a ReLU activation and the output layer has a sigmoid activation.
Note that the inputs are given. Use those inputs together with forward propagation to compute the value produced by this neural network.
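Because the weights and inputs from Figure 1 are not reproduced in this text, the following is only a generic sketch of forward propagation with placeholder values; it reads the problem literally (ReLU applied at the input layer, sigmoid at the output layer, hidden layer left linear), and every name and number below is an illustrative assumption, not the network from the figure.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder values -- NOT the weights or inputs from Figure 1.
x  = np.array([1.0, -2.0])                 # inputs
W1 = np.array([[0.5, -1.0],
               [0.25, 0.75]])              # input -> hidden weights
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0, -0.5]])               # hidden -> output weights
b2 = np.array([0.2])

a0 = relu(x)                               # ReLU activation at the input layer
h  = W1 @ a0 + b1                          # hidden layer values
y  = sigmoid(W2 @ h + b2)                  # sigmoid activation at the output layer
print(h, y)
```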
