In the world of science and engineering, you often encounter data that appears chaotic or noisy. However, behind this apparent confusion may lie fundamental patterns and relationships you can reveal using the techniques you’ve developed. In this topic, you will explore how the pseudoinverse becomes the driving engine behind the least squares method, an essential technique for fitting models to data.
You’ll take advantage of your mathematical skills to interpret the least squares problem in matrix form. Although the problem arises naturally in 2 dimensions, your knowledge of linear algebra will allow you to reformulate it in its most general context. You’ll develop a criterion for the best possible solution and see that the pseudoinverse is exactly what produces it, so now you’ll reap the fruits of your effort. It’s time to transition from theory to practice!
A cloud of points
Suppose you have $n$ data points $(x_1, y_1), \dots, (x_n, y_n)$ in the plane $\mathbb{R}^2$. Suppose these data represent the money invested in advertising and the corresponding profits earned by various companies in some industry. A reasonable assumption would be that the more you invest in advertising, the more profits are generated, so you could start your research assuming that the relationship between the two has a linear trend:

$$y \approx \beta_0 + \beta_1 x$$
In an ideal case, all the points are on the same line. As any line is determined by its slope $\beta_1$ and intercept $\beta_0$, the problem translates into finding the numbers $\beta_0$ and $\beta_1$ satisfying the linear system of equations:

$$\begin{cases} \beta_0 + \beta_1 x_1 = y_1 \\ \;\;\vdots \\ \beta_0 + \beta_1 x_n = y_n \end{cases}$$
But there are too many equations for only two variables. So, it's unlikely that the system has a solution. Geometrically, this means there's no line that fits the raw data perfectly. The best line to describe this dataset isn't one going through all the points but the one that best captures how the set is oriented, its increasing or decreasing pattern. As $y$ is expressed in terms of $x$, we say that $x$ is a predictor and that $y$ is the target. Before thinking about a way to attack this problem, let's look at it in larger dimensions.
The general problem
Now, suppose you're given $n$ data points $(x_1, y_1, z_1), \dots, (x_n, y_n, z_n)$ in the space $\mathbb{R}^3$. Your goal is to find a plane that best approximates all these points:

$$z \approx \beta_0 + \beta_1 x + \beta_2 y$$
Analogously to the two-dimensional problem, this poses a system of $n$ equations:

$$\begin{cases} \beta_0 + \beta_1 x_1 + \beta_2 y_1 = z_1 \\ \;\;\vdots \\ \beta_0 + \beta_1 x_n + \beta_2 y_n = z_n \end{cases}$$
So, the problem reduces to finding the values of $\beta_0$, $\beta_1$, and $\beta_2$. When $n$ is a large number, it's nearly impossible for the system to have a solution. Again, since $z$ is expressed in terms of $x$ and $y$, $z$ is called the target while $x$ and $y$ are predictors.
You already know linear algebra, so you can work with the most general scenario involving many variables and linear equations. In general, you have $n$ data points with $m$ predictors $x^{(1)}, \dots, x^{(m)}$ and a target $y$.
Let $x_i^{(j)}$ be the $i$-th observation of the $j$-th predictor and $y_i$ the $i$-th observation of the target for every $i = 1, \dots, n$ and $j = 1, \dots, m$. This implies that the data points have the form $(x_i^{(1)}, \dots, x_i^{(m)}, y_i)$. Under these conditions, the initial problem can be written as a linear system of $n$ equations:

$$\begin{cases} \beta_0 + \beta_1 x_1^{(1)} + \dots + \beta_m x_1^{(m)} = y_1 \\ \;\;\vdots \\ \beta_0 + \beta_1 x_n^{(1)} + \dots + \beta_m x_n^{(m)} = y_n \end{cases}$$
But as in lower dimensions, it is very unlikely that such a system (where $n$ is usually much larger than $m$) has any solutions. But not everything is lost. Let's see how you can work around this.
An optimality criterion
You're working with a lot of variables and equations at the same time, and it's easy to get confused. Time to introduce some matrices! Thanks to them, the linear system becomes a simple matrix equation:

$$X\beta = y, \qquad \text{where} \quad X = \begin{pmatrix} 1 & x_1^{(1)} & \cdots & x_1^{(m)} \\ \vdots & \vdots & & \vdots \\ 1 & x_n^{(1)} & \cdots & x_n^{(m)} \end{pmatrix}, \quad \beta = \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_m \end{pmatrix}, \quad y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}$$
So, for any proposed vector of solutions $\beta$, you get an approximation $X\beta$ to $y$. Since the system has no solution, it's clear that $X\beta \neq y$, so the distance between both vectors is non-zero. You can think of this distance as an error associated with $\beta$. In summary, for any $\beta$, its associated estimation error is:

$$E(\beta) = \lVert X\beta - y \rVert$$
The smaller the error, the better the approximation. This suggests that the best parameter vector is the one that minimizes $E(\beta)$. But then you might be wondering whether a unique vector with this property even exists and, worse, how you could find it. Don't worry: this is precisely the problem that the pseudoinverse solves!
Recall that you are given $X$ and $y$; think of them as fixed. Your goal is to compute $\beta$. With it, you can estimate the target values as $\hat{y} = X\beta$.
The best approximation
Well, as you already know, the vector you're looking for is simply:

$$\beta^* = X^+ y$$
where $X^+$ is the pseudoinverse of $X$. This is because $\lVert X X^+ y - y \rVert \leq \lVert X\beta - y \rVert$ for any other $\beta$. In our current problem, this means that $E(\beta^*) \leq E(\beta)$. Once you've computed the best parameter vector, you can estimate the target:

$$\hat{y} = X\beta^* = X X^+ y$$

Before putting the theory to work, let's discuss one more point. The vector $X X^+ y$ is the closest to $y$ among all the vectors in the column space of $X$. Actually, it is the orthogonal projection of $y$ onto the column space of $X$. Thus, by multiplying the vector $y$ by the matrix $X X^+$ you obtain its projection. For this reason, $X X^+$ is known as the projection matrix.
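If you'd like to see these formulas in action, here is a minimal NumPy sketch. The data matrix and target below are invented purely for illustration, and `np.linalg.pinv` computes the pseudoinverse:

```python
import numpy as np

# Toy data: X has a column of ones for the intercept; the numbers are made up.
X = np.array([[1.0, 0.5],
              [1.0, 1.5],
              [1.0, 2.0],
              [1.0, 3.5]])
y = np.array([1.0, 2.0, 2.5, 4.0])

X_pinv = np.linalg.pinv(X)         # the pseudoinverse X^+
beta_star = X_pinv @ y             # best parameter vector: beta* = X^+ y
P = X @ X_pinv                     # projection matrix X X^+
y_hat = X @ beta_star              # estimated target, i.e. the projection of y
error = np.linalg.norm(y_hat - y)  # minimal error E(beta*)
```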
In the standard approach, the optimization is carried out through derivatives. Specifically, you find the critical points of the error function by setting its partial derivatives to zero and then check second-derivative conditions to confirm a minimum, which is tedious.
Furthermore, in the standard approach, it is not the error that gets minimized but rather the squared error, because the squared error is easier to differentiate. Since squaring is increasing on non-negative numbers, both are minimized by the same vector, so nothing is lost.
For the standard approach to work, it is usually assumed that the columns of $X$ are linearly independent. In these circumstances, the projection matrix reduces to $X (X^T X)^{-1} X^T$. When you have many predictors, this assumption can easily fail, and other problems arise. Our strategy doesn't suffer from this problem, and the projection matrix $X X^+$ is much easier to remember.
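If you want to convince yourself that the two formulas agree when the columns of $X$ are linearly independent, a quick numerical check along these lines works (the matrix is again invented):

```python
import numpy as np

# A small X whose columns are linearly independent, chosen arbitrarily.
X = np.array([[1.0, 2.0],
              [1.0, 0.0],
              [1.0, 1.0]])

P_pseudo = X @ np.linalg.pinv(X)              # X X^+
P_classic = X @ np.linalg.inv(X.T @ X) @ X.T  # X (X^T X)^{-1} X^T
print(np.allclose(P_pseudo, P_classic))       # True when the columns are independent
```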
The least squares method is usually derived using calculus, but we're introducing an alternative, more straightforward way of solving the problem. In statistics, this method lets you go deeper: you can estimate other parameters, perform hypothesis tests, and evaluate the quality of fit, all in a unified way.
The best line
Let's start off with a simple example in two dimensions. Suppose the data points are and . In order to find the best estimation, the first step is to identify the data matrix, the target values, and the parameters:
So, you're trying to find the closest thing to a solution for the system $X\beta = y$. Here, the key step is to compute the pseudoinverse $X^+$ of $X$. Although it isn't necessary, you can also calculate the projection matrix $X X^+$:
Now you have everything to calculate the parameter vector $\beta^*$ and the corresponding estimate of $y$:

In this case, the error is . Then, the error associated with any other vector $\beta$ is greater than or equal to this value. Geometrically, the slope of the best line is and its intercept is .
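Since the original numbers aren't reproduced above, here is the same procedure on a made-up set of points, just to show the mechanics of fitting the best line:

```python
import numpy as np

# Placeholder points (x, y); substitute your own data here.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.5, 3.0, 5.0])

X = np.column_stack([np.ones_like(x), x])  # data matrix with columns [1, x]
beta = np.linalg.pinv(X) @ y               # beta = (intercept, slope)
y_hat = X @ beta                           # best estimate of the target
error = np.linalg.norm(y_hat - y)          # minimal error
print(f"intercept = {beta[0]:.3f}, slope = {beta[1]:.3f}, error = {error:.3f}")
```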
The best plane
Now, let's move on to a slightly more realistic example, where there's more than one predictor and the pseudoinverse doesn't look as pretty. The data points are and . As before, the required pieces are:
The corresponding pseudoinverse is:

This implies that the projection matrix is then:

Finally, the parameter vector and the corresponding best approximation are:
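With invented numbers once more, the plane-fitting version only changes the shape of the data matrix:

```python
import numpy as np

# Placeholder points: predictors x and y, target z.
x = np.array([0.0, 1.0, 1.0, 2.0])
y = np.array([1.0, 0.0, 1.0, 1.0])
z = np.array([1.0, 2.0, 2.5, 4.0])

X = np.column_stack([np.ones_like(x), x, y])  # data matrix with columns [1, x, y]
beta = np.linalg.pinv(X) @ z                  # (intercept, coefficient of x, coefficient of y)
z_hat = X @ beta                              # best approximation of the target
error = np.linalg.norm(z_hat - z)             # minimal error
```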
Conclusion
In the least squares problem, you have $n$ data points of $m$ predictors $x^{(1)}, \dots, x^{(m)}$ and a target variable $y$.
Your goal is to find a linear function describing your data. The closest thing to a solution for the linear system $X\beta = y$ is the instrument you use to reach this goal.
The best solution is the vector $\beta^*$ whose error $E(\beta^*) = \lVert X\beta^* - y \rVert$ is as small as possible.
The best solution is given by $\beta^* = X^+ y$, and the estimation for the target is $\hat{y} = X\beta^* = X X^+ y$.