Machine Learning and Neural Networks for Neuroscience

Course number: 27-504-01/02

 Instructor: Dr. Ossnat Bar-Shira

TA: TBD

Academic year: 2023/24

Semester: A

Scope of hours: 4h lecture + 2h practice

Online course website: TBD

Course Goals (general and specific):

Machine learning (ML) is a branch of artificial intelligence that enables computers to learn a model from large amounts of data in a way that resembles human learning. Neural networks (NNs) are a type of machine learning model inspired by the structure and function of the neural networks in the human brain. In recent years, ML, and NNs in particular, have become very popular in many domains and have proven extremely effective in a range of applications, from machine vision and speech recognition to decision making and robotics.

The course covers four main topics: (1) machine learning basics, such as regression, classification, regularization, and model evaluation; (2) supervised learning methods and algorithms; (3) unsupervised learning; and (4) neural networks and deep learning.

The course is accompanied by a set of hands-on exercises, both theoretical and applied (programming).
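To give a flavor of the applied exercises, here is a minimal illustrative sketch (a hypothetical example, not actual course material) of topic (1): polynomial regression with ridge regularization, evaluated on held-out data.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sinusoid on [-1, 1], split into train and test sets.
x = rng.uniform(-1.0, 1.0, size=60)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(60)
x_train, y_train, x_test, y_test = x[:40], y[:40], x[40:], y[40:]

def poly_features(x, degree):
    # Map scalar inputs to polynomial features [1, x, x^2, ..., x^degree].
    return np.vander(x, degree + 1, increasing=True)

def fit_ridge(X, y, lam):
    # Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for degree in (1, 5, 15):
    w = fit_ridge(poly_features(x_train, degree), y_train, lam=1e-3)
    test_mse = np.mean((poly_features(x_test, degree) @ w - y_test) ** 2)
    print(f"degree={degree:2d}  test MSE={test_mse:.3f}")

Varying the polynomial degree and the regularization strength lam makes the bias-variance tradeoff of topic (1) visible on the held-out error.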

 

Course content:

1. Generalization and the bias-variance tradeoff in supervised learning

2. Linear classification and large-margin classifiers (a minimal perceptron sketch follows this list)

3. Logistic regression, optimization, and regularization methods

4. Nonlinear classification: SVM, multi-layer perceptrons, convolutional networks

5. Neural networks and backpropagation

6. Recurrent neural networks: RNN, LSTM

7. Unsupervised learning: autoencoders, PCA, InfoMax

8. Hopfield networks as associative memory networks
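As a small illustration of item 2 above, the following is a minimal sketch of the classical perceptron algorithm on linearly separable toy data (names and data are hypothetical, not course code):

import numpy as np

rng = np.random.default_rng(1)

# Two well-separated Gaussian blobs with labels in {-1, +1}.
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)),
               rng.normal(2.0, 0.5, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

w, b = np.zeros(2), 0.0
for epoch in range(100):
    mistakes = 0
    for xi, yi in zip(X, y):
        # Update only on a misclassified point: w <- w + y*x, b <- b + y.
        if yi * (w @ xi + b) <= 0:
            w += yi * xi
            b += yi
            mistakes += 1
    if mistakes == 0:  # converged: every training point is classified correctly
        break

print(f"converged after {epoch + 1} epochs: w={w}, b={b}")

For separable data the perceptron convergence theorem guarantees the loop terminates after a finite number of mistakes.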

Conduct of the lessons (teaching methods, use of technology, guest lecturers):

Mostly chalkboard lectures, with occasional pre-recorded video lectures.

A detailed teaching plan for all classes:

Introduction to models: from neuron to brain; simplified neurons

Introduction to learning: regression and generalization with polynomial regression

Classification, logistic regression, tree-based algorithms

Regularization and model evaluation

Projection onto half-spaces; gradient descent (GD), SGD, Newton's method

Bias and variance; the perceptron algorithm

The perceptron as gradient descent; online vs. batch learning; logistic regression (cont.)

Nonlinear classification; multi-layer perceptrons

MLP architectures: ConvNets, ResNets

Backpropagation; multi-class classification and softmax

Max-margin classification, SVM

Duality, dual SVM, kernels (advanced class)

Kernel-SVM as a network, kernel examples

Generalization theorems, word embedding

Recurrent neural networks, RNNs and LSTMs

Unsupervised learning: clustering, k-means, multivariate Gaussians, PCA

PCA as reconstruction; Oja's rule

Attention; entropy and the fundamentals of information theory

Measures of uncertainty; entropy and source coding

KL divergence (D_KL); mutual information; chain rules; Markov chains; MaxEnt

Information theory (cont.); compression

Continuous (differential) entropy; entropy of a Gaussian

InfoMax as a learning principle

Denoising autoencoders

Review of information theory and learning

Advanced classes: generative models, VAE (two sessions)

Advanced classes: generative models, GAN

Linear dynamical systems: fixed points, stability, high-dimensional systems

Stochastic linear systems, Markov chains

Markov chains (cont.), nonlinear systems, bifurcations

Nonlinear systems (cont.)

Integrate-and-fire neurons; associative memory; the Hopfield model

Nonlinear neuron models: FitzHugh-Nagumo (FN)

The Hopfield model (cont.)

Learning in Hopfield networks (see the code sketch after this plan)

Summary
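To illustrate the closing block of classes, here is a minimal sketch of a Hopfield associative memory with Hebbian weights, recalling a stored pattern from a corrupted probe (an illustrative toy, not course code):

import numpy as np

rng = np.random.default_rng(2)

# Store two random bipolar (+1/-1) patterns over N neurons.
N = 100
patterns = rng.choice([-1, 1], size=(2, N))

# Hebbian rule: W = (1/N) * sum_p x_p x_p^T, with zero self-connections.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)

def recall(state, sweeps=5):
    # Asynchronous dynamics: each neuron aligns with its local field.
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt the first stored pattern by flipping 15 of its bits, then recall.
probe = patterns[0].copy()
probe[rng.choice(N, size=15, replace=False)] *= -1
recovered = recall(probe)

print("overlap with the stored pattern:", recovered @ patterns[0] / N)

An overlap near 1.0 indicates the corrupted probe has converged back to the stored memory, which is the associative-memory behavior the Hopfield classes develop.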

 

Prerequisites:

Mathematics: probability and statistics, linear algebra, and calculus.

Fluency in a programming language, preferably Python.

 

Duties/requirements/assignments:

Students must submit at least 90% of the home assignments and a final project.

 

Grading distribution:

Final project: 45%

Home assignments: 30%

Midterm exam: 25%

              

Bibliography (required and recommended reading):

Pattern Recognition and Machine Learning, C. Bishop (2006)

Deep Learning, I. Goodfellow, Y. Bengio, and A. Courville (2016)

The Elements of Statistical Learning, T. Hastie, R. Tibshirani, and J. Friedman (2001)

 

See full list online here:

https://sites.google.com/site/gondaneuralnetworks/home/notes-and-links