Today’s lecture is split in two parts.
In the first part, you will be introduced to a simple but fundamental kind of machine learning model: linear models. We will understand how they are formulated and how they can be solved numerically. We will also see how they can be used for both regression and classification.
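As a small preview of what "solving a linear model numerically" looks like, here is a minimal sketch of fitting a one-variable linear regression with gradient descent. The data, learning rate, and variable names are illustrative choices of mine, not taken from the lecture material:

```python
import numpy as np

# Synthetic data: y = 3*x + 2 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, size=50)

# Fit y ≈ w*x + b by batch gradient descent on the mean squared error.
w, b = 0.0, 0.0
lr = 0.01  # learning rate (an arbitrary but stable choice here)
for _ in range(5000):
    err = (w * x + b) - y
    w -= lr * 2 * np.mean(err * x)  # gradient of MSE w.r.t. w
    b -= lr * 2 * np.mean(err)      # gradient of MSE w.r.t. b

print(w, b)  # should end up close to the true slope 3 and intercept 2
```

We will derive the update rules used above step by step in the gradient-descent part of the lecture.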
The second part is an introduction to non-linear models, specifically to neural networks. We will continue studying neural networks in the next lecture, but this week you will already understand what they are and how they can be used for classification tasks. The material related to this second part is located here.
This is an overview of what we will be learning in this first hour:
Motivation and Goals. Linear Models for Supervised Classification and Regression
Linear Regression Models
- Linear Regression for one variable
- Gradient descent for Linear Regression
- Linear Regression - A real-world example
- Linear Regression for more than one variable
Logistic Regression Models (for classification!)
- First part
- Second part
Sources and References
As in the practical lectures, I have built a Jupyter notebook that contains all the material you will find below. In addition, it will allow you to experiment with short pieces of code. You can access it here.