In this topic, you'll get familiar with one of the most popular and interpretable metrics in regression evaluation — mean absolute error (MAE), look at its pros and cons, and perform a sample MAE calculation on a synthetic dataset.
MAE
Let's say $y = \{y_1, y_2, \dots, y_n\}$ is the set of true target values, and $\hat{y} = \{\hat{y}_1, \hat{y}_2, \dots, \hat{y}_n\}$ is the set of predicted target values. Then MAE, the average of the absolute errors, can be introduced as follows:

$$\text{MAE}(y, \hat{y}) = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{y}_i|$$
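To make the formula concrete, here's a minimal NumPy sketch of it (scikit-learn also ships a ready-made `mean_absolute_error` in `sklearn.metrics`; the toy values below are made up for illustration):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: the average of |y_i - y_hat_i|."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs(y_true - y_pred))

print(mae([1.0, 2.0, 3.0], [1.5, 1.5, 2.0]))  # (0.5 + 0.5 + 1.0) / 3 ≈ 0.667
```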
Because the absolute error is taken, MAE is always non-negative. MAE preserves the units of the data under analysis (the score uses the same scale as the underlying data), a property known as scale dependency. Scale dependency makes the scores easy to interpret, but scores computed on differently scaled datasets can't be compared directly. Since MAE doesn't take the sign of the errors into account (we only use their absolute values), it can't tell us whether our model underestimates or overestimates the data, nor reveal any skew in the errors (a systematic tendency toward one or the other).
This metric is not very sensitive to outliers, which is great if we don't want large errors to be made more prominent. Errors are measured linearly, so both small and large errors contribute proportionally to the final score.
A minor elaboration on the outlier insensitivity
Let's illustrate the outlier insensitivity point by considering a small example. We have a set of ground truth values, $y$, and two sets of predictions made by two models, $\hat{y}^{(1)}$ and $\hat{y}^{(2)}$:
| $y$ | $\hat{y}^{(1)}$ | $\hat{y}^{(2)}$ |
| --- | --- | --- |
| 1 | 2 | 4 |
| 2 | 1 | 5 |
| 3 | 2 | 6 |
| 4 | 5 | 7 |
| 100 | 6 | 13 |
| 6 | 7 | 10 |
Let's calculate the mean squared error for $\hat{y}^{(1)}$ and $\hat{y}^{(2)}$:

$$\text{MSE}(y, \hat{y}^{(1)}) = \frac{1^2 + 1^2 + 1^2 + 1^2 + 94^2 + 1^2}{6} = \frac{8841}{6} = 1473.5$$

$$\text{MSE}(y, \hat{y}^{(2)}) = \frac{3^2 + 3^2 + 3^2 + 3^2 + 87^2 + 4^2}{6} = \frac{7621}{6} \approx 1270.17$$
MSE will report better performance for the second model. The fifth sample in the ground truths is an outlier, and MSE favors the model that predicts the outlier more closely, overshadowing the performance on the other samples. We can observe that $\hat{y}^{(1)}$ predicts the regular samples more closely, while $\hat{y}^{(2)}$ is further off on the regular samples but models the outlier better than $\hat{y}^{(1)}$.
Calculating MAE for $\hat{y}^{(1)}$ and $\hat{y}^{(2)}$:

$$\text{MAE}(y, \hat{y}^{(1)}) = \frac{1 + 1 + 1 + 1 + 94 + 1}{6} = \frac{99}{6} = 16.5$$

$$\text{MAE}(y, \hat{y}^{(2)}) = \frac{3 + 3 + 3 + 3 + 87 + 4}{6} = \frac{103}{6} \approx 17.17$$
In this case, the first model shows better results, since MAE doesn't distinguish whether the prediction got worse on the outlier or on the normal samples.
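To double-check these numbers, here's a short NumPy sketch that reproduces both scores for the table above (the array names are ours):

```python
import numpy as np

# Ground truth and the two models' predictions from the table above
y = np.array([1, 2, 3, 4, 100, 6])
y_hat_1 = np.array([2, 1, 2, 5, 6, 7])
y_hat_2 = np.array([4, 5, 6, 7, 13, 10])

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

print(mse(y, y_hat_1), mse(y, y_hat_2))  # 1473.5, ~1270.17 -> MSE prefers model 2
print(mae(y, y_hat_1), mae(y, y_hat_2))  # 16.5,   ~17.17   -> MAE prefers model 1
```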
MAE depends on the values predicted by the model and on the dataset. The dataset is a constant, and the predictions depend on the parameters of the model. Suppose our model is described as $\hat{y}(x) = kx$, where $k$ is the only parameter. That means we can make MAE a function of the model parameter:

$$\text{MAE}(k) = \frac{1}{n}\sum_{i=1}^{n} |y_i - kx_i|$$
During the derivative computation, the derivative of the absolute error $|e_i|$ (where $e_i = y_i - kx_i$) can be either $-1$ or $1$, and MAE can't be differentiated at $e_i = 0$, which might pose a challenge in a case where the derivatives are required for training. It's possible to work around this, but it's just more complicated when compared with other metrics (for example, subgradients or smooth approximations might be used, but we're not going to consider them in this topic).
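To make this concrete, here's a small sketch of subgradient descent on $\text{MAE}(k)$; the toy data and learning rate below are made up for illustration:

```python
import numpy as np

def mae_subgradient(k, x, y):
    """A subgradient of MAE(k) = mean(|y - k*x|) with respect to k.

    Away from the kinks, d|y_i - k*x_i| / dk = -x_i * sign(y_i - k*x_i);
    np.sign returns 0 exactly at a kink, which is a valid subgradient there.
    """
    return np.mean(-x * np.sign(y - k * x))

# Made-up data roughly following y = 2x plus noise
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

k = 0.0
for _ in range(200):
    k -= 0.05 * mae_subgradient(k, x, y)  # plain subgradient step
print(k)  # ends up near 2, the slope that minimizes MAE(k)
```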
Lower MAE values indicate better model performance. You might ask which loss should be considered too big; the answer boils down to the particular problem scenario. For example, let's say we are predicting human weight from some dataset and use MAE as a loss function. We can tell that a loss of 0.5 (kg, since the units are preserved) would be much better than a loss of 10. So, the acceptable inaccuracies are determined on a case-by-case basis. As a baseline, you can calculate the MAE for the case when all the predictions are equal to the median of the target values, and then make adjustments based on that and the scenario in question.
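Here's a minimal sketch of that median baseline (the weight values below are made up for illustration):

```python
import numpy as np

def median_baseline_mae(y_true):
    """MAE of a 'model' that always predicts the median of the targets.

    Among all constant predictions, the median minimizes the mean
    absolute error, so this is a natural baseline to beat.
    """
    y_true = np.asarray(y_true)
    return np.mean(np.abs(y_true - np.median(y_true)))

weights_kg = np.array([58.0, 64.3, 71.5, 82.8, 90.2])  # made-up weights
print(median_baseline_mae(weights_kg))  # 10.14 -> a real model should beat this
```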
Calculating the MAE
Let's consider the following dataset with the model being $\hat{y}(x) = 0.5x$:
| $x$ | $y$ | $\hat{y} = 0.5x$ |
| --- | --- | --- |
| 0 | 0.06 | 0.0 |
| 1 | 0.21 | 0.5 |
| 2 | 0.80 | 1.0 |
| 3 | 0.16 | 1.5 |
| 4 | 0.24 | 2.0 |
| 5 | 0.18 | 2.5 |
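Substituting the values from the table into the MAE formula:

$$\text{MAE} = \frac{|0.06 - 0.0| + |0.21 - 0.5| + |0.80 - 1.0| + |0.16 - 1.5| + |0.24 - 2.0| + |0.18 - 2.5|}{6} = \frac{5.97}{6} \approx 0.995$$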
Let's review MAE in a more general setting and plot it as a function of the model parameter $k$ (we still consider our model to be $\hat{y}(x) = kx$):

$$\text{MAE}(k) = \frac{1}{6}\sum_{i=1}^{6} |y_i - kx_i|$$
Our plot will look like this: [plot of $\text{MAE}(k)$ against $k$: a piecewise-linear curve with a single global minimum]
From the plot above, we can tell that MAE is a piecewise-linear function (a function composed of multiple linear segments) of the parameter $k$ with a unique global minimum, and we can see that the errors contribute proportionally towards the score.
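If you'd like to reproduce a plot like this yourself, here's a minimal matplotlib sketch (the grid of $k$ values is our choice):

```python
import numpy as np
import matplotlib.pyplot as plt

# Dataset from the table above
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.06, 0.21, 0.80, 0.16, 0.24, 0.18])

# Evaluate MAE(k) = mean(|y - k*x|) on a grid of k values
ks = np.linspace(-0.5, 1.0, 500)
maes = [np.mean(np.abs(y - k * x)) for k in ks]

plt.plot(ks, maes)
plt.xlabel("k")
plt.ylabel("MAE(k)")
plt.title("MAE as a piecewise-linear function of the model parameter")
plt.show()
```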
Conclusion
Here's what you need to know about MAE:
- MAE is relatively insensitive to outliers in the data.
- Lower MAE scores indicate better model performance, but a score of 0 is often unattainable.
- MAE is not differentiable at zero error, so computing derivatives is less straightforward than with some other metrics.
- MAE preserves the units of the underlying data and is easy to interpret, but its scores can't be compared across different datasets (or models evaluated on different data).
- MAE grows linearly, with both large and small errors having a proportional weight in the final score.