Measuring Error
There are several common ways to measure error mathematically, especially in machine learning and statistics. Most of them boil down to comparing predicted values against actual values and summarizing the differences into a single number. Here are a few of the most widely used methods:
- Mean Absolute Error (MAE): This measures the average of the absolute differences between predicted and actual values. It’s simple to understand and gives equal weight to all errors.
- Mean Squared Error (MSE): This measures the average of the squared differences between predicted and actual values. Squaring the errors means larger errors have a bigger impact, which can be useful in some cases.
- Root Mean Squared Error (RMSE): This is the square root of the MSE. Taking the square root puts the error back in the same units as the original values, which makes it easier to interpret, while still penalizing large errors more heavily.
- Mean Absolute Percentage Error (MAPE): This expresses the error as a percentage, which can be useful for understanding the error relative to the size of the values being predicted.
- Huber Loss: This combines the benefits of MAE and MSE. It behaves like MSE when errors are small and like MAE when they are large, making it robust to outliers.
- Cross-Entropy Loss: Commonly used in classification problems, this measures the difference between the predicted probability distribution and the true distribution. It’s particularly useful for multi-class classification tasks.
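The regression metrics above can be sketched in a few lines of plain Python. This is a minimal illustration, not a library implementation; the function names, the example data, and the default `delta=1.0` for Huber loss are choices made here for demonstration.

```python
import math

def mae(y_true, y_pred):
    # Mean Absolute Error: average of |actual - predicted|
    return sum(abs(a - p) for a, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    # Mean Squared Error: average of (actual - predicted)^2
    return sum((a - p) ** 2 for a, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root Mean Squared Error: square root of MSE, in the data's units
    return math.sqrt(mse(y_true, y_pred))

def mape(y_true, y_pred):
    # Mean Absolute Percentage Error: assumes no actual value is zero
    return 100 * sum(abs((a - p) / a) for a, p in zip(y_true, y_pred)) / len(y_true)

def huber(y_true, y_pred, delta=1.0):
    # Huber loss: quadratic for residuals within delta, linear beyond it,
    # so large outliers contribute less than they would under MSE
    total = 0.0
    for a, p in zip(y_true, y_pred):
        r = abs(a - p)
        if r <= delta:
            total += 0.5 * r ** 2
        else:
            total += delta * (r - 0.5 * delta)
    return total / len(y_true)

actual = [3.0, 5.0, 2.0, 7.0]
predicted = [2.5, 5.0, 4.0, 8.0]
print(mae(actual, predicted))   # (0.5 + 0 + 2 + 1) / 4 = 0.875
print(mse(actual, predicted))   # (0.25 + 0 + 4 + 1) / 4 = 1.3125
print(rmse(actual, predicted))  # sqrt(1.3125)
```

Note how the single residual of 2 dominates MSE (4 out of 5.25 total) but contributes proportionally to MAE, which is the squaring effect described above.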
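Cross-entropy for a single classified example can be sketched the same way. Here the true distribution is a one-hot vector and `eps` is a small constant added to guard against `log(0)`; both are illustrative choices, not a fixed convention.

```python
import math

def cross_entropy(true_dist, pred_probs, eps=1e-12):
    # Cross-entropy between the true distribution (often one-hot) and
    # the predicted probabilities: -sum(t * log(p))
    return -sum(t * math.log(p + eps) for t, p in zip(true_dist, pred_probs))

# One-hot true label (class 1) vs. a fairly confident prediction:
# only the probability assigned to the true class matters here
print(cross_entropy([0, 1, 0], [0.1, 0.8, 0.1]))  # -ln(0.8) ≈ 0.223
```

The loss shrinks toward zero as the predicted probability of the true class approaches 1, and grows without bound as it approaches 0, which is what makes it a natural training objective for classifiers.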