Some Basic Neural Networks and Applications

  • Single layer perceptrons
  • Multilayer perceptrons
    • The deep part of deep learning.
  • Convolutional Neural Networks
    • Image/Video classification
  • Recurrent Neural Networks
    • Machine translation
    • Language models
  • Reinforcement Learning

Beginning tools:

  • PyTorch
  • GPU
  • Caffe2
    • Can run on smartphones?

Some other ML stuff.

  • Deep Q-Network (reinforcement learning, i.e., ML for training computers to play games)
  • Sequence to sequence with attention (translation, summarization)
  • Residual Networks (image recognition)

Neural Networks

Neural networks are awesome at finding patterns and mapping high-dimensional data. Before NNs, the dominant methods were support vector machines, boosting, and random forests. Note that perception does not actually imply intelligence.

Hebbian learning – when something positive happens, you increase the weights that contributed to it.

Generative Adversarial Networks – given random samples (noise), the generator will produce an image.

Genetic Algorithm.

Single Layer Network

Single Layer Perceptron

Activation Functions

Which activation function you choose affects how (and whether) the NN converges. Two common choices, sketched in code after the list:

  • The sigmoid is always positive and bounded (never too large), which makes it somewhat robust to outliers.
  • The Rectified Linear Unit (ReLU) is essentially the identity function with negative inputs rectified to zero.
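
A minimal sketch of the two activation functions (NumPy is used here purely for illustration):

    import numpy as np

    def sigmoid(x):
        # Always positive, bounded in (0, 1); large inputs saturate rather than blow up.
        return 1.0 / (1.0 + np.exp(-x))

    def relu(x):
        # Identity for positive inputs; negative inputs are rectified to zero.
        return np.maximum(0.0, x)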

Creating Logic Gates from Single Layer Perceptrons

Logical OR Function
Logical AND Function
Decision Boundary

The decision boundaries here are arbitrary and simply represent our choice of weights. There are infinitely many weights that will satisfy the decision boundary conditions; one such choice is sketched below.
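
As a sketch, here is a single layer perceptron with a Heaviside activation implementing OR and AND. The weights and biases below are just one arbitrary choice among the infinitely many that work:

    import numpy as np

    def perceptron(x, w, b):
        # Heaviside activation applied to the weighted sum of the inputs.
        return 1 if np.dot(w, x) + b > 0 else 0

    # One arbitrary choice of weights and bias for each gate.
    logical_or  = lambda x: perceptron(x, w=np.array([1.0, 1.0]), b=-0.5)
    logical_and = lambda x: perceptron(x, w=np.array([1.0, 1.0]), b=-1.5)

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, logical_or(np.array(x)), logical_and(np.array(x)))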

Classification vs. Regression (as it applies to ML)

Generically, this is determined by asking whether the estimated output is continuous or discrete. Regression is the continuous case, e.g., fitting a line to data; classification is the discrete case, e.g., deciding between two colors, red and blue.

Example of the difference between classification and regression.

How to Determine Goodness of Model

Training data is known good data. The objective function measures the difference between the target and the model (NN) output, and it is what we try to minimize over the weights w. Here the loss function is, in other mathematical language, the residual sum of squares (RSS).
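
Written out, using t_j for the target and y_j for the NN output on example j (this is the quantity differentiated in the gradient descent section below):

    \[E = \frac{1}{2}\sum_{j}\left(t_j - y_j\right)^2\]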

Weight Updates

Gradient Descent


Where the new weight w_i is given by w_i = w_i + \Delta w_i and

    \[\Delta w_i = \epsilon \left( t_j - y_j \right) f'(N_j)\,x_{ij}\]

where
N_j = \sum_{i=0}^{n}w_ix_{ij} (the weighted sum of the inputs),
\epsilon = step size (learning rate),
t_j = target data,
f'(N_j) = first derivative of the activation function,
x_{ij} = input data.
Getting more explicit here, start from the loss and differentiate:

    \[\frac{\partial E}{\partial w_i} = \frac{1}{2}\frac{\partial}{\partial w_i}\sum_{j}\left(t_j - y_j \right)^2\]

then use the chain rule:

    \[\frac{\partial E}{\partial w_i} = -\sum_{j}\left(t_j - y_j \right)\frac{\partial y_j}{\partial w_i}\]

and recall y_j = f(N_j), so

    \[\frac{\partial E}{\partial w_i} = -\sum_{j}\left(t_j - y_j \right)f'(N_j)\frac{\partial N_j}{\partial w_i}\]

and since \frac{\partial N_j}{\partial w_i} = x_{ij},

    \[\frac{\partial E}{\partial w_i} = -\sum_{j}\left(t_j - y_j \right)f'(N_j)\,x_{ij}\]

Stepping in the direction of -\partial E / \partial w_i gives the \Delta w_i above.

The Heaviside step function's derivative is zero (almost everywhere), so you can't train with it via gradient descent. It's useful to train with the sigmoid instead and then apply a threshold at inference time (e.g., output 1 when y > 0.5).
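
A minimal sketch of the delta rule above, training a single sigmoid unit on the OR data and thresholding at the end. The learning rate, epoch count, and random seed are arbitrary choices:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # OR training data; the leading 1 in each row is the bias input x_0.
    X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
    t = np.array([0.0, 1.0, 1.0, 1.0])

    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=3)
    eps = 0.5  # learning rate (step size)

    for epoch in range(2000):
        N = X @ w                # N_j = sum_i w_i x_ij
        y = sigmoid(N)           # y_j = f(N_j)
        fprime = y * (1.0 - y)   # f'(N_j) for the sigmoid
        w += eps * X.T @ ((t - y) * fprime)  # Delta w_i, summed over examples j

    print((sigmoid(X @ w) > 0.5).astype(int))  # threshold at inference: [0 1 1 1]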

Learning Rate

There are obvious problems at either extreme: if the learning rate is too small, training may take impractically long to converge; if it is too big, the updates overshoot and miss the minima.

Batch update is based on gradient descent.

  • Compute the average delta weight over the whole dataset.
  • Update the weights.
  • Repeat.

Incremental update is based on stochastic gradient descent.

  • Compute the delta weight for a single training example.
  • Update the weights.
  • Repeat until convergence.

Mini-batches are randomly sampled subsets of the dataset: compute the average delta weight over each sampled batch, update, and repeat, as in the sketch below.
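
A sketch contrasting the three update schemes; the helper function, batch size of 2, and step count are arbitrary illustration choices:

    import numpy as np

    def delta_w(w, X, t, eps=0.5):
        # Average delta-rule update over whichever examples are passed in.
        y = 1.0 / (1.0 + np.exp(-(X @ w)))
        return eps * X.T @ ((t - y) * y * (1.0 - y)) / len(X)

    X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
    t = np.array([0.0, 1.0, 1.0, 1.0])
    rng = np.random.default_rng(0)
    w = np.zeros(3)

    for step in range(5000):
        # Batch:       w += delta_w(w, X, t)                    # whole dataset
        # Incremental: j = step % len(X)
        #              w += delta_w(w, X[j:j+1], t[j:j+1])      # one example
        idx = rng.choice(len(X), size=2, replace=False)         # mini-batch of 2
        w += delta_w(w, X[idx], t[idx])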


Support Vector Machine.

Maximize the margin, i.e., the distance between the decision boundary and the nearest data points.
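
A quick sketch with scikit-learn (an assumed tool, not one named in these notes; the linear kernel and toy data are arbitrary choices):

    from sklearn.svm import SVC

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    y = [0, 1, 1, 1]

    # A linear SVM places the boundary so the margin to the nearest points is maximal.
    clf = SVC(kernel="linear").fit(X, y)
    print(clf.support_vectors_)  # the points that sit on the margin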

Decision Tree

Decision trees break the space up into orthogonal (axis-aligned) sections. There are also oblique decision trees, which use non-axis-aligned splits, but they are expensive to compute.
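
A sketch of the standard axis-aligned variety, again with scikit-learn as an assumed tool:

    from sklearn.tree import DecisionTreeClassifier, export_text

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    y = [0, 1, 1, 1]

    # Each split thresholds a single feature, carving the space into orthogonal boxes.
    tree = DecisionTreeClassifier().fit(X, y)
    print(export_text(tree))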
