ML metric: mean average precision @ N

Srijan Bhushan
Sep 5, 2022


What is mAP@N, aka mean average precision?

mAP@N is a very useful metric for ML models that rank things: content, users, products, and so on. It is commonly used to evaluate recommendation-engine results, since those involve ranking. The output of such a model is usually a list of items (content, users, products, etc.) ordered by relevance, where each item's true label is either 1 (relevant) or 0 (not relevant).

Put simply, mAP@N is the running precision calculated at different depths of the list. Let's assume we have a list of content ranked by predicted engagement score. Below, the true labels are shown for the contents, ordered by decreasing score.

Example: a ranked list of five items with true labels 1, 0, 1, 1, 0 (in decreasing order of predicted score).

Now, we can calculate the overall precision of the model output above. We know that precision = TP / (TP + FP). Here, every item in the list counts as a positive prediction, so overall precision is simply the fraction of items that are actually relevant.

Overall precision = 0.60
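As a quick check, here is a minimal Python sketch of that calculation, assuming the example labels 1, 0, 1, 1, 0 from above (every returned item counts as a positive prediction):

```python
# Relevance labels of the ranked items (assumed example: 3 relevant out of 5)
labels = [1, 0, 1, 1, 0]

# Every returned item is a positive prediction, so
# overall precision = true positives / all predictions
overall_precision = sum(labels) / len(labels)
print(overall_precision)  # 0.6
```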

However, for ranking problems that is not enough, because rank order matters: what we care about is how well the predictions were ordered.

mAP computes the running (cumulative) precision over the top k observations, for k = 1 up to k = N. N can be any number of observations you wish to evaluate: the entire list or a subset of it. At each depth k, precision is calculated over positions 1 through k. Taking N = 5, five precision values are calculated over five different prefixes of the list.
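The running precision at each depth k can be sketched in Python like this (same assumed labels, 1, 0, 1, 1, 0, as in the example above; note that precision at k = 3 comes out to 2/3 ≈ 0.67):

```python
# Relevance labels in ranked order (assumed example)
labels = [1, 0, 1, 1, 0]

# precision@k = (# relevant items in the top k) / k, for k = 1..5
running_precision = [sum(labels[:k]) / k for k in range(1, len(labels) + 1)]
print([round(p, 2) for p in running_precision])  # [1.0, 0.5, 0.67, 0.75, 0.6]
```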

After all 5 precisions are calculated, the average of all of them is taken:

1/N (Σ precision@k) = 1/5 (precision@1 + precision@2 + … + precision@5)

This average is known as mAP@N, here with N = 5. (Strictly speaking, the average for a single ranked list is the Average Precision, AP@N; mAP@N is the mean of AP@N across many lists, e.g., one per user or query.)

Here, mAP@N = 1/5 (1 + 0.5 + 0.67 + 0.75 + 0.6) ≈ 0.70 (the precision at k = 3 is 2/3 ≈ 0.67).

Previously we calculated overall precision as 0.60, and now we have mAP@N as 0.70, so the two are different metrics.
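Putting the whole calculation together, here is a minimal sketch (the function name `map_at_n` is my own, and it follows this post's definition of averaging precision at every rank):

```python
def map_at_n(labels, n):
    """Average the running precision@k for k = 1..n.

    `labels` is the list of 0/1 relevance labels in ranked order.
    Follows the simplified definition in this post: precision is
    averaged at every rank, not only at the relevant ones.
    """
    precisions = [sum(labels[:k]) / k for k in range(1, n + 1)]
    return sum(precisions) / n


labels = [1, 0, 1, 1, 0]  # assumed example labels
print(round(map_at_n(labels, 5), 2))  # 0.7
```

Note that the textbook Average Precision averages precision only at the ranks where a relevant item appears; the sketch above mirrors the simpler all-ranks average used in this post.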



Written by Srijan Bhushan

Data/ML Engineering, Economics, History
