Coriant's photonics applied to ML arxiv.org/pdf/1610.02365.pdf
Learn about ML
Gaussian Processes for Machine Learning www.gaussianprocess.org/gpml/chapters/
Beginning tools and other general ML notes:
Neural Networks
Awesome at finding patterns and mapping high-dimensional data. Before NNs: support vector machines, boosting, random forests. Perception does not actually imply intelligence.
Hebbian learning – when something positive happens, you increase the weights that contributed to it ("neurons that fire together wire together").
Generative Adversarial Networks – given a random sample (noise vector), will generate an image.
Genetic Algorithm.
Single Layer Network
Activation Functions
The activation function you choose affects how (and whether) the NN converges.
Creating Logic Gates from Single Layer Perceptrons
The decision boundaries here are arbitrary and simply reflect our choice of weights; there are infinitely many weights that satisfy our decision-boundary conditions here.
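A minimal sketch of the idea: a single-layer perceptron (Heaviside step on a weighted sum) realizing AND and OR gates. The specific weights and biases below are just one illustrative choice out of the infinitely many that work; they are not taken from the notes.

```python
import numpy as np

def perceptron(x, w, b):
    """Single-layer perceptron: Heaviside step applied to a weighted sum."""
    return 1 if np.dot(w, x) + b > 0 else 0

# One of infinitely many weight/bias choices whose decision boundary
# realizes each gate (these particular values are illustrative).
AND = lambda x: perceptron(x, np.array([1.0, 1.0]), -1.5)
OR  = lambda x: perceptron(x, np.array([1.0, 1.0]), -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND((a, b)), "OR:", OR((a, b)))
```

XOR, famously, has no such single-layer solution because it is not linearly separable, which is one motivation for multi-layer networks.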
Classification vs. Regression (as it applies to ML)
Generically, this is determined by asking whether the estimated output is continuous or discrete. Regression is the continuous case (e.g., fitting a line to data); classification is the discrete case (e.g., deciding between two colors, red and blue).
How to Determine Goodness of Model
Training data is known good data. The objective function measures the difference between the target and the model (NN) output; it is what we try to minimize over the weights (w). In other mathematical language, this loss function is the residual sum of squares (RSS).
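The RSS objective described above can be sketched directly (the toy targets and outputs here are made up for illustration):

```python
import numpy as np

def rss(targets, outputs):
    """Residual sum of squares: sum of (target - model output)^2.
    This is the objective minimized over the weights during training."""
    residuals = np.asarray(targets) - np.asarray(outputs)
    return float(np.sum(residuals ** 2))

# Toy example: residuals are 0.1, -0.2, 0.2
print(rss([1.0, 0.0, 1.0], [0.9, 0.2, 0.8]))  # ≈ 0.09
```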
Weight Updates
The new weight is given by the gradient-descent update w_new = w_old − η ∂L/∂w, where η is the learning rate and L is the loss (RSS) from above.
The Heaviside function's derivative is zero (almost everywhere), so you can't train with it. Instead, train with the sigmoid, whose derivative is smooth, and then apply a threshold (e.g., output > 0.5) at inference time to recover a hard decision.
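A sketch of that idea: train a single sigmoid unit with the gradient-descent weight update on a squared-error loss, then threshold its output at 0.5 to get Heaviside-like hard decisions. The AND-gate data, learning rate, and iteration count are illustrative assumptions, not from the notes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# AND-gate training data (hard 0/1 targets).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w, b, eta = rng.normal(size=2), 0.0, 0.5   # illustrative learning rate

for _ in range(5000):
    out = sigmoid(X @ w + b)
    err = out - y                    # dL/d(out) for L = 0.5 * RSS
    grad = err * out * (1 - out)     # chain rule through the sigmoid
    w -= eta * X.T @ grad            # w_new = w_old - eta * dL/dw
    b -= eta * grad.sum()

# Inference: threshold the trained sigmoid at 0.5 for hard outputs.
predictions = (sigmoid(X @ w + b) > 0.5).astype(int)
print(predictions)  # should match the AND targets [0 0 0 1]
```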
Learning Rate
Obvious problems: if the learning rate is too small, training never converges (or takes forever); if too big, you overshoot the minima.
Batch update is based on gradient descent over the full dataset.
Incremental update is based on stochastic gradient descent (one sample at a time).
Mini-batch updates use subsets of the dataset, sampled randomly.
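The three update schemes above differ only in batch size, which the sketch below makes explicit: batch size 1 gives stochastic (incremental) updating, batch size len(X) gives full-batch gradient descent, and anything in between is mini-batch. The linear-regression task and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Fit y = 2x + 1 by mini-batch gradient descent on mean squared error.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=100)
y = 2.0 * X + 1.0

w, b, eta, batch_size = 0.0, 0.0, 0.1, 10   # batch_size=1 -> SGD; =100 -> full batch

for epoch in range(200):
    idx = rng.permutation(len(X))           # sample the dataset randomly
    for start in range(0, len(X), batch_size):
        bi = idx[start:start + batch_size]
        err = (w * X[bi] + b) - y[bi]
        w -= eta * 2 * np.mean(err * X[bi]) # gradient of MSE w.r.t. w
        b -= eta * 2 * np.mean(err)         # gradient of MSE w.r.t. b

print(round(w, 2), round(b, 2))  # ≈ 2.0 and 1.0
```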
Support vector machines: maximize the distance (margin) between the decision boundary and the data.
Decision Tree
Breaking the feature space up into orthogonal (axis-aligned) sections. Oblique decision trees, which use slanted splits, also exist but are expensive to compute.
Best tutorial on ML I’ve seen yet. docs.google.com/presentation/d/1kSuQyW5DTnkVaZEjGYCkfOxvzCqGEFzWBy4e9Uedd9k/preview?imm_mid=0f9b7e&cmp=em-data-na-na-newsltr_20171213&slide=id.g2923c61c4e_0_33
Lie groups for AI. Basically trying to understand how useful unitary matrices are for these problems. tacocohen.files.wordpress.com/2014/05/tsa_icml.pdf
Google AI education link ai.google/education/
Great tutorial on ML. medium.com/machine-learning-for-humans/neural-networks-deep-learning-cdad8aeae49b
Some simple Python example code to play around with gradient descent. github.com/mattnedrich/GradientDescentExample
Great intro to gradient descent. Nice little examples for beginners. spin.atomicobject.com/2014/06/24/gradient-descent-linear-regression/