Description

This course looks at machine learning from the viewpoint of modeling data as coming from an underlying (unknown) probability distribution. Machine learning problems then boil down to inferring the parameters and other latent variables that define the probability model, and using these to make predictions/decisions from the data. The probabilistic view is particularly useful for (1) realistically modeling the diverse types, characteristics, and peculiarities of data via appropriately chosen probability distributions, (2) encoding prior assumptions about the model via prior distributions over the parameters/latent variables, and (3) handling and inferring missing data, among other things. The course will introduce the basic (and some advanced) topics in probabilistic machine learning, covering (1) common parameter estimation methods for probabilistic models; (2) probabilistic formulations of popular machine learning problems such as regression, classification, clustering, dimensionality reduction, matrix factorization, and learning from sequential data (e.g., time series); (3) Bayesian modeling and approximate Bayesian inference; (4) Deep Learning; and (5) assorted other topics. At various points during the course, we will also look at how the probabilistic modeling paradigm naturally connects to the other dominant paradigm, which turns machine learning problems into optimization problems, and examine the strengths and weaknesses of both paradigms and the many ways in which they complement each other.
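To make the core idea concrete, here is a minimal sketch (not course material, and all choices here are illustrative assumptions) of parameter estimation under a probability model: coin flips modeled as draws from a Bernoulli(theta) distribution, estimated both by maximum likelihood and by a Bayesian treatment with a Beta prior over the parameter.

```python
import numpy as np

# Illustrative sketch: assume the data are coin flips drawn from an
# (unknown) Bernoulli(theta) distribution.
rng = np.random.default_rng(0)
data = rng.binomial(1, 0.7, size=100)  # 100 flips; true theta = 0.7

# (1) Maximum likelihood estimate of theta: the fraction of heads.
theta_mle = data.mean()

# (2) Bayesian treatment: a Beta(a, b) prior over theta encodes prior
#     belief as pseudo-counts; by conjugacy the posterior is
#     Beta(a + heads, b + tails), whose mean blends prior and data.
a, b = 2.0, 2.0  # prior pseudo-counts (an assumption for this sketch)
heads = data.sum()
tails = len(data) - heads
theta_post_mean = (a + heads) / (a + b + heads + tails)

print(theta_mle, theta_post_mean)
```

With a symmetric prior, the posterior mean is the MLE shrunk toward 0.5; as more data arrive, the two estimates coincide, which previews the likelihood/prior interplay studied in the course.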

General Information


Announcements

Announcements are not public for this course.
Staff Office Hours
Name                  Office Hours
Piyush Rai            When? / Where?
Milan Someswar        When? / Where?
Rahul Kumar Patidar   When? / Where?
Vinit Tiwari          When? / Where?
Priya Saraf           When? / Where?