Actuaries beginning to get acquainted with AI/ML often find it **a bit daunting**. There are a couple of reasons for this. In day-to-day work, actuaries use the results of statistical analyses, like standard rate tables, and spend less time deriving the regression models themselves. Over time, that rigour gets rusty. The other reason is that the AI/ML field sharply focuses on a specific, usually large, dataset. In contrast, the actuarial field deals with a diverse set of factors: demographic events such as death or illness, man-made or natural risk events, operational expenses, forward interest rates, etc. The actuarial focus is more on complex interactions within well-defined datasets and financial outcomes.

Actuaries learn linear regression early on; the CS1 subject covers correlation analysis to assess the strength of the relationship between variables. In contrast to the ML/AI structure, CS1 starts by defining uncorrelated error terms *e_{i}* at the outset. Then, without explicitly mentioning the loss function, the least squares method is used to numerically derive the regression parameters in terms of variances and covariances. This approach works for a smaller set of known parameters but becomes tedious for datasets with very many features. For example, image classification can easily involve tens of thousands of pixels as inputs.

This blog details the alternative approach adopted by the AI/ML field. More specifically, the general formulation of linear regression includes **basis functions** that help to model non-linear relations (linear regression refers to linearity only with respect to the regression coefficients).

**It is crucial to understand the matrix formulation of the problem**; the matrix operations can involve a massively packed sample set, with each observation containing a large number of features and parameters to train.

This is part of the learning log from the 3rd module, Machine Learning Basics for Real-World, from the certification course on Digital Health and Imaging. Links to the module learnings are below:

- Class 1 – Machine Learning – An actuary’s learning log
- Class 2 – ML Log – Learning Methods & Bayesian Decision insights
- Class 3 – ML Log – Rigorous explanation of Supervised Learning Approach
- Class 4 – ML Log – Pseudo-inverse estimator and gradient descent approaches to Linear Regression
- Class 5 & 6 – ML Log – Likelihoods and Bayesian Philosophy in Machine Learning

**Learning Summary – Class 4; 2 July 2022**

- Linear Regression – One dimensional
- Getting the ‘Best’ – Enter the loss function
- Linear Regression as Optimization Problem
- Scaling up: General formulation of Linear Regression
- Linear Regression training model representation in matrices
- Learning through cost optimization – squared error
- Pseudo-inverse estimator – the need for regularization
- (Alternative) Gradient Descent Solution for Least Squares

**Linear Regression**

Linear regression has been a statistical model for quite a long time, and many variations have been tried under entirely different names. Linear regression assumes a linear relationship between the input variables *(x)* and the single output variable *(y)*. The method is referred to as simple linear regression when there is a single input variable *(x)*.

As we saw before, a loss function is used as a guide to train a machine learning model from sample data. The most common method for linear regression has been ordinary least squares.

**Linear Regression – One dimensional**

We have **N** data samples of input and response pairs; the data given to us is *(x_{1}, y_{1}), (x_{2}, y_{2}), . . . , (x_{N}, y_{N})*. The input data dimension is just 1. To find a linear regression model, we assume that the input and response relationship is linear (hopefully!).

Our problem statement is:

Given data *{(x_{1}, y_{1}), (x_{2}, y_{2}), . . . , (x_{N}, y_{N})}*, find a straight line that best fits this set of points.

We can rephrase this as: given the set of all straight lines *F*, choose a straight line that best fits this set of points.

We can restrict **F** to the *set of all straight lines* that pass through the origin. This is reasonable because, with some pre-processing, we can transform the data.

Hence, define **F** as *F = {f_{w}(x) = wx : w ∈ ℝ}*. So **F** is parameterized by *w*; our aim, hence, is to learn *w* from the given data.

*w***Getting the ‘Best’ – Enter the loss function**

As seen in the last class, the general machine learning approach uses loss function to determine empirical loss. A loss function takes two inputs: the response given by the model and the corresponding ground truth (to estimate the cost of the decisions made from the model).

The least squared loss function is defined as:

*(y_{n} − f_{w}(x_{n}))^{2}* is the per-sample loss; *L(f_{w}) = Σ_{n=1}^{N} (y_{n} − f_{w}(x_{n}))^{2}* is the total loss.

**Linear Regression as Optimization Problem**

With the loss function, our problem statement becomes:

Find *f ∈ F* that minimizes *L(f)*; or equivalently, find *w ∈ ℝ* that minimizes the empirical risk *L(f_{w})*.

A solution to this problem is given by *dL/dw* **= 0**; the least squared function is a neat convex differentiable function, providing a closed form of solution.

First, we will calculate the derivative of **L** w.r.t. *w* and then set it to zero (the gradient is *0* at the minimum of a convex function):

Hence, we get the solution in terms of observed data that provides a minimum value for the least squared error. Note that this is on the observed sample only; we don’t know how this will perform for the test data or any new observations.
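For reference, setting the derivative to zero yields the standard one-dimensional result:

```latex
\frac{dL}{dw} = -2\sum_{n=1}^{N} x_n \bigl(y_n - w x_n\bigr) = 0
\quad\Longrightarrow\quad
w^{*} = \frac{\sum_{n=1}^{N} x_n y_n}{\sum_{n=1}^{N} x_n^{2}}
```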

**Scaling up: General formulation of Linear Regression**

In real life, **x** will have many more than a single feature. In these cases, we can extend *the line* concept to a *hyperplane*. This is where thinking in terms of matrices helps: matrices present equations with very many variables in an elegant form.

We can frame the problem as:

Given training data **D = {(x_{n}, y_{n})}^{N}_{n=1}**, where *x_{n} ∈ ℝ^{D}* is the input in **D** dimensions (or features) and *y_{n} ∈ ℝ* is the response in one dimension, or a real value.

A straightforward adaptation of the single-feature case to a **D**-dimensional input *x* is *F = {f_{w}(x) = w^{T}x : w ∈ ℝ^{D}}*, where **w**^{T} = [w_{1}, . . . , w_{D}].

Another common representation is to denote the set of functions *F* as a set of hypotheses, or *H*.

This simpler model is linear *in both the parameters and the input variables*, which **limits** it from adapting to nonlinear relationships. Linearity is often a reasonable assumption when many inputs influence the output, but in general it is unlikely that the true function is linear. Similarly, assuming that classification boundaries are linear hyperplanes is often unreasonable.

The above model can be augmented by replacing the input variables with nonlinear basis functions of the input variables. For example, we can expand H to be the set of functions that are linear combinations of a set of **basis functions**,

*φ _{1} , . . . , φ_{M} : ℝ^{D} → ℝ*

Note that each of the *φ_{i}* is a mapping from the *D* dimensions into a real number, or *1* dimension.

I initially struggled with this concept, especially comprehending the change in the input dimensions. Some examples from here and here will help to clarify.

**Example 1** – Consider the input vector **1, x_{1}, x_{2}**; we want to bring the nonlinear interaction term *x_{1}x_{2}* into the model.

Our solution could be: *f(x) = 1 − 2x_{1} − 2x_{2} + 4x_{1}x_{2} = φ_{1}(x) − 2φ_{2}(x) − 2φ_{3}(x) + 4φ_{4}(x)*

with *φ_{1}(x) = 1, φ_{2}(x) = x_{1}, φ_{3}(x) = x_{2}, and φ_{4}(x) = x_{1}x_{2}*

**Example 2:** A polynomial basis for a three-feature input *(x_{1}, x_{2}, x_{3})* could be: *{1, x_{1}, x_{2}, x_{3}, x_{1}x_{2}, x_{1}x_{3}, x_{2}x_{3}, x_{1}^{2}, x_{2}^{2}, x_{3}^{2}}*

**Example 3**: Consider a real-valued input *x* **∈ ℝ** (input in *1-D*).

Define the basis functions: **φ_{1}(x) = x, φ_{2}(x) = x^{2}, …, φ_{M}(x) = x^{M}**.

Then **h ∈ H** is of the form *h(x) = w_{0} + w_{1}x + w_{2}x^{2} + … + w_{M}x^{M}*; which means that *H* is the set of all polynomials in *x* with a degree *no greater than* **M**.

The basic idea is simple: in addition to the original inputs, we add inputs that are calculated as deterministic functions of the existing inputs and treat them as additional inputs.

Independent of the choice of basis functions, the regression parameters are calculated using the well-known equations for linear regression.
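To make the basis-function idea concrete, here is a short sketch of Example 3 in NumPy (the helper name `poly_features` and the sample data are my own illustration, not from the course): the 1-D input is expanded into powers of *x*, and the weights are then found by ordinary least squares.

```python
import numpy as np

def poly_features(x, M):
    """Map a 1-D input array onto the basis [1, x, x^2, ..., x^M]."""
    x = np.asarray(x, dtype=float)
    return np.vander(x, M + 1, increasing=True)  # column j holds x**j

x = np.linspace(-1, 1, 20)
y = 1 + 2 * x + 3 * x**2                      # noise-free quadratic ground truth

Phi = poly_features(x, M=2)                   # design matrix of basis values, shape (20, 3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares weights
print(np.round(w, 6))                         # ≈ [1. 2. 3.]
```

With noise-free quadratic data and M = 2, least squares recovers the generating coefficients exactly (up to floating point).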

Mathematically, we define the feature mapping **φ: ℝ^{D} → ℝ^{M}** by *φ(x) = [φ_{1}(x), . . . , φ_{M}(x)]^{T}*.

So **H** is the set of linear functions of *φ(x)*. To summarize, *φ* is a fixed nonlinear vector-valued function and *w* is a learned linear function; then *h = w ◦ φ*.

Our linear model is (please note, reverting to *f* here from *H*, to be consistent with my course notes):

*f(x) = b + Σ_{j} w_{j}φ_{j}(x)*

where **w_{j}** are the model parameters and *φ_{j}* is the basis function (it changes the representation of *x*); or

*y = b + w^{T}φ(x)*, where *w^{T} = [w_{1}, . . . , w_{m}]* and *φ^{T} = [φ_{1}, . . . , φ_{m}]*

Heads-up: this is the part I struggled with most; it always left a feeling of not knowing enough to be confident. I had been putting off revisiting my rusty linear algebra, especially matrix operations. Most machine learning implementations rely on the matrix representation for convenience, and in neural networks especially, one needs to be comfortable with it.

**Linear Regression training model representation in matrices:**

Let us simplify the model by setting **φ_{i}(x) = x_{i}** and *M = D*. You might recognize this as the straightforward adaptation of linear regression mentioned above, from 1-dimensional input to D-dimensional input.

Model: **y = b + w^{T}x** for an observation *x_{n} ∈ ℝ^{D}*.

Suppose we have **n** observations in our training data (or samples), each with *D* dimensions (or features). Then, we can represent the above model applied to *n* samples compactly through matrices.

Note the dimensions of the **X** matrix: *n* observations (rows), with each sample having *D* features (columns). The ground truth is packed in an *n*-row column matrix; each row is an outcome observation. So, we get the set of predicted values in a compact form:

*Y = XW + b*
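As a quick shape-check of this matrix form (the array sizes here are made-up illustrations), the model, and the equivalent form with *b* absorbed into the weights, can be sketched as:

```python
import numpy as np

n, D = 4, 3
X = np.arange(n * D, dtype=float).reshape(n, D)  # n observations x D features
W = np.ones((D, 1))                              # parameter column vector
b = 0.5                                          # scalar bias, broadcast over rows

Y = X @ W + b                                    # predictions, shape (n, 1)

# Equivalent form: absorb b into the weights via a leading column of ones.
X_aug = np.hstack([np.ones((n, 1)), X])          # shape (n, D + 1)
W_aug = np.vstack([[b], W])                      # shape (D + 1, 1)
assert np.allclose(Y, X_aug @ W_aug)
print(Y.shape, X_aug.shape)                      # (4, 1) (4, 4)
```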

Note **b**: it is the column vector of *b*s for the *n* predictions; moving *b* into the first term is convenient. Effectively, it can be added as an extra dimension to the samples, leading to *D+1* columns.

Now our problem is solving the linear equation system: given **Y** and *X*, find *W* that satisfies *Y = XW*.

A solution is not guaranteed; a *W* that satisfies *y_{n} = w^{T}x_{n}, n = 1, 2, . . . , N* may not exist.

**Learning through cost optimization – squared error**

As we saw earlier, the regression objective is to learn by optimizing the cost function. Since we do not know the underlying distribution **P**, we try to minimize the *average* loss over the observations, called the *Empirical Loss*. Using the least squares loss function, our objective is:

Given data *D = {(x_{n}, y_{n})}^{N}_{n=1}*, find *w* such that *L_{emp}(w) = (1/N) Σ_{n=1}^{N} (y_{n} − w^{T}x_{n})^{2}* is minimum.

The squared error loss function provides a convex, differentiable function. We can find **w** by differentiating *L_{emp}* with respect to *w* and setting it to zero.

This leads to our final solution:
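For reference, the standard normal-equation form of this solution is:

```latex
w^{*} = \bigl(X^{\top}X\bigr)^{-1} X^{\top} Y
```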

Honestly, this is the juncture where I start thinking about my priorities in life, taking the dog out or checking on my younger son to see if he is picking up maths alright; come on, look at that equation! How inspiring! This time too, I got stuck for about two days, trying to figure out how this half-line formula could contain so much data.

Two days of trawling through resources did pay off. I think I have finally come to terms with this formula! If you struggle like me, here are quick references.

- Need to keep all steps together that lead to packing so much into this result; this video has a great explanation and simpler visual connections. Has a good refresher on matrix operations too.
- The following part explains the derivation of the formula above; I learned that this approach is called the *Pseudoinverse Estimator*.
- Additional reference – this note takes a set of values and works out the matrices.

The above solution is a one-step learning solution, or the **pseudo-inverse estimator**. The pseudo-inverse refers to *(X^{T}X)^{-1}X^{T}*, which needs only the input matrix *X*; with it, we can recover the optimal parameter vector. Note this is a closed-form analytical solution.

**X^{T}X** is a *d × d* matrix (where *d* is the dimension of the data); as the number of features grows large, it can be computationally very expensive to invert.
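A minimal sketch of the pseudo-inverse estimator in NumPy (the data is made up for illustration); `np.linalg.pinv` computes the same Moore-Penrose pseudo-inverse via SVD, which is numerically more stable than forming the inverse explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D = 50, 3
X = rng.normal(size=(n, D))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                            # noise-free targets

# Pseudo-inverse estimator: w = (X^T X)^{-1} X^T y
w_hat = np.linalg.inv(X.T @ X) @ X.T @ y

# Same result through the SVD-based pseudo-inverse
w_pinv = np.linalg.pinv(X) @ y
print(np.allclose(w_hat, w_true), np.allclose(w_pinv, w_true))  # True True
```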

**The need for regularization**

Another drawback of this numerical closed-form solution is that it uses every piece of information in the data; hence, the **w_{i}**s in *W = [b, w_{1}, . . . , w_{d}]* can become very large. Large *w_{i}* values mean the model turns out to be very complicated and hence overfits the data. Hence, we will need to penalize large *w_{i}* values by regularization.

With **L2** (or least squares) regularization, our objective becomes:
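For reference, the standard form of the L2-regularized objective, in the notation above, is:

```latex
L_{\mathrm{reg}}(w) = \frac{1}{N}\sum_{n=1}^{N}\bigl(y_n - w^{\top}x_n\bigr)^{2} \;+\; \lambda \lVert w \rVert_{2}^{2}
```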

And the closed-form solution is:
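For reference, the standard ridge-regression form of this solution is:

```latex
w^{*} = \bigl(X^{\top}X + \lambda I_{d}\bigr)^{-1} X^{\top} Y
```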

This regularization helps as the individual components of *w* remain smaller; in other words, the model does not learn a single feature with too much importance. Regularization is very important when **N** is small and *D* is very large.

One more advantage of regularization: even if **X^{T}X** is not invertible (hence the numerical solution isn’t possible), the regularized matrix *(X^{T}X + λI_{d})* is invertible.

**Gradient Descent Solution for Least Squares**

In the case of high-dimensional data, it is prohibitively expensive to compute the inverse of the feature matrix. Instead, we turn to a gradient descent method. In this method, we start with a random initial set of parameters, then find the slope of the empirical loss function with respect to each of the parameters. The direction of the slope tells us (hopefully) what adjustment needs to be made to minimize the loss.

The risk, however, is that we may be stuck in a local optimum in a multidimensional space.

**1** Start with an initial value *w = w ^{(0)}*

**2** Update w by moving in the direction opposite to the gradient of the loss function *L* (*L_{emp}* or *L_{reg}*)

for the squared error loss or the L2-regularized loss, this becomes:
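For reference, the standard update rule (with *η* the learning rate) is:

```latex
w^{(t+1)} = w^{(t)} - \eta\,\nabla_{w} L
          = w^{(t)} - \eta\,\frac{2}{N}\, X^{\top}\bigl(X w^{(t)} - Y\bigr)
```

With the L2 regularizer, an additional *−2ηλw^{(t)}* term appears in the update.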

**3** Repeat until convergence.

The squared loss function in linear regression is convex; if we add an **L_{2}** regularizer, then the resulting function is strictly convex.
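A minimal gradient-descent sketch for the squared error loss (the learning rate and iteration count are my own choices for this toy data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, D = 100, 2
X = rng.normal(size=(n, D))
w_true = np.array([3.0, -1.0])
y = X @ w_true

w = np.zeros(D)                          # step 1: initial value w^(0)
eta = 0.1                                # learning rate
for _ in range(500):                     # steps 2-3: update until (near) convergence
    grad = (2 / n) * X.T @ (X @ w - y)   # gradient of the mean squared error
    w = w - eta * grad

print(np.round(w, 4))                    # ≈ [ 3. -1.]
```

On this well-conditioned toy problem the iterates converge to the same parameters the closed-form estimator would give.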

**Disclaimer**:

*I currently work full-time at Swiss Re, Bengaluru. The blogs and articles on this website **www.balajos.com** are the personal posts of myself, Balachandra Joshi, and only contain my personal views, thoughts, and opinions. It is not endorsed by Swiss Re (or any of my formal employers), nor does it constitute any official communication of Swiss Re.*

*Also, please note that the opinions, views, comprehensions, impressions, deductions, etc., are my takes on the vast resources I am lucky to have encountered. No individuals or entities, including the Indian Institute of Science and NSE Talent Sprint who have shown me where to research, or the actuarial professional bodies that provide me continuous professional growth support, are responsible for any of these views. These musings do not by any stretch of imagination represent their official stands; they may not subscribe to/support/confirm any of these views and hence cannot be held liable in any vicarious way. All the information is shared in the public space to share knowledge, without any commercial advantage.*