In the earlier article (see Setting the Performance Levels (Part 1)) we defined the four performance levels used in Performance Based Contracts (PBCs). They were the Required Performance Level, Minimum Performance Level, Inflection / Elbow Performance Level and the Incentive Performance Level. In this article we will look at the general principles that should be followed in setting these performance levels.
General Principles in Setting PBC Performance Levels
In setting performance levels the following general considerations should be taken into account:
- Do the performance levels change during the year? For example, some performance levels have seasonal variation (e.g. to align with the weather, such as summer or monsoon seasons). That is, during specific months of the year (say peak periods) the buyer’s performance requirements may increase, or tolerate less variation in the performance delivered. At other times, the buyer’s performance requirements may allow more flexibility.
- Do the performance requirements change during the term of the contract? Is it different at the start of the contract (e.g. do the performance levels change from when establishing a new product / service as opposed to maintaining the product / service)? Can a transition program allow for a scaled increase in Performance Levels over time?
- Is the performance level achievable or is it a ‘stretch goal’?
- Do the performance measures need an extensive list of business rules (e.g. automatic exclusions) to make the performance level achievable? In setting the performance levels does it use ‘adjusted’ performance (i.e. with the business rules applied) or ‘raw / actual’ performance (i.e. without any business rules)?
- Does the buyer need a very high performance level? At very high performance levels, each additional increment of performance typically leads to escalating prices. As such, the buyer needs to understand the cost drivers associated with very high performance (i.e. the S-curve).
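The escalating cost of very high performance levels can be sketched with a simple model. Note this is illustration only: the inverse-of-downtime cost model and all the figures below are assumptions, not data from any real contract, and real cost curves must come from the seller's actual cost drivers.

```python
# Illustrative sketch only: the 1/(1 - a) cost model is a hypothetical
# assumption used to show the shape of the upper tail of the S-curve.

def relative_cost(availability: float, base_cost: float = 1.0) -> float:
    """Model cost as inversely proportional to the permitted downtime.

    Halving the permitted downtime roughly doubles the cost, which
    produces the steep upper tail seen in cost-versus-performance curves.
    """
    return base_cost / (1.0 - availability)

for target in (0.90, 0.95, 0.99, 0.999):
    print(f"availability {target:.3f} -> relative cost {relative_cost(target):.0f}x")
```

The point of the sketch is that moving from 99% to 99.9% is not a 0.9% cost increase; under this assumed model it is a tenfold one, which is why buyers should question whether they truly need the very top of the curve.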
In addition to these general questions we need to understand the mathematical basis of the performance level. Many practitioners may ignore this part because they feel uncomfortable with maths, and specifically statistics. However, it is essential that practitioners address this when setting the performance levels, based on the following three elements.
The first element is averaging. When a performance measure uses averaging (e.g. the number of ‘on-time’ deliveries measured daily but then averaged over the month) it results in a smaller spread between the highest and lowest possible performance results. Importantly, as the period over which the averaging occurs gets longer (e.g. from weekly to monthly to quarterly) the spread between the highest and lowest possible performance results continues to get smaller and smaller. Therefore, practitioners need to check that the spread between their Required Performance Level and Minimum Performance Level takes into account the amount of averaging applied (e.g. daily performance levels are not used in setting the performance levels when the performance is averaged monthly).
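This shrinking of the spread is easy to demonstrate. The sketch below simulates a year of daily on-time delivery rates (the 90% mean and 5-point standard deviation are invented figures) and compares the best-to-worst spread at each averaging period:

```python
# Hedged sketch: the delivery data below is randomly simulated, not real
# contract data; the mean (90%) and spread (5%) are assumed figures.
import random
import statistics

random.seed(42)
# Simulate 360 days of daily on-time delivery rates centred on 90%.
daily = [min(1.0, max(0.0, random.gauss(0.90, 0.05))) for _ in range(360)]

def spread(values):
    """Spread between the best and worst observed result."""
    return max(values) - min(values)

def window_averages(data, window):
    """Average the daily results over consecutive windows of `window` days."""
    return [statistics.mean(data[i:i + window]) for i in range(0, len(data), window)]

for label, window in [("daily", 1), ("weekly", 7), ("monthly", 30), ("quarterly", 90)]:
    averaged = window_averages(daily, window)
    print(f"{label:9s} spread: {spread(averaged):.3f}")
```

Running this shows the spread contracting sharply as the averaging window lengthens, which is exactly why a Minimum Performance Level calibrated from daily figures will be far too loose for a monthly-averaged measure.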
The second element is the difference between the mean (the average of the performance) and the variance (the spread of the performance). Mathematically the average will typically (based on a normal or Gaussian distribution) represent the point where 50% of the performance is below this point and 50% is above this point. However, from a PBC perspective, do you want to set your Required Performance Level at the average, with only a 50/50 chance of success? Most buyers want more confidence that their requirements will be met. Most sellers want more confidence of 100% payment. Therefore, when setting the performance levels, especially when comparing with historical performance, we don’t usually set them at the mean. However, this only applies when setting the performance levels using previous historical performance (see Setting the Performance Levels (Part 3) for further details on the Bottom Up approach).
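The effect of moving a performance level away from the mean can be quantified. In the sketch below, the 90% historical mean and 3-point standard deviation are assumed figures purely for illustration; the calculation assumes normally distributed performance:

```python
# Hedged sketch: the 90% mean and 3-point standard deviation are invented
# figures, and a normal (Gaussian) distribution is assumed.
from statistics import NormalDist

history = NormalDist(mu=90.0, sigma=3.0)  # assumed historical monthly on-time %

# Probability that the seller meets or beats a candidate performance level.
for level in (90.0, 88.0, 86.2):
    p_meet = 1.0 - history.cdf(level)
    print(f"level {level:5.1f}% -> chance of meeting it: {p_meet:.0%}")
```

Setting the level at the mean (90%) gives the seller only a 50% chance of meeting it in any given period, whereas dropping the level a little over one standard deviation below the mean (86.2%) lifts the chance of success to roughly 90%. This is why both parties usually prefer a level set below the historical mean, with the gap sized from the variance.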
The third element is statistical significance. This is the concept that ensures enough data exists to draw meaningful conclusions about performance. For example, if a performance measure uses a 12 month average then it requires at least 12 months of data before the first assessment can be made. If this same measure is then updated quarterly, it will need to continue to use the previous 12 months of data, adding the new quarter and discarding the oldest quarter. These are typically referred to as “rolling averages”. Reliability performance measures, such as Mean Time Between Failure (MTBF), are highly dependent on statistical significance.
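The rolling-average mechanics described above can be sketched as follows. The monthly MTBF readings are invented illustration data; the logic simply waits for a full 12 months of history and then re-assesses every third month over the most recent 12:

```python
# Hedged sketch: the monthly MTBF readings below are invented figures.
from collections import deque

# 15 months of hypothetical monthly MTBF readings (hours).
monthly_mtbf = [510, 495, 520, 480, 505, 515, 490, 500, 525, 510, 495, 505,
                530, 520, 515]

window = deque(maxlen=12)  # automatically keeps only the most recent 12 months
assessments = []
for month, value in enumerate(monthly_mtbf, start=1):
    window.append(value)
    # First assessment needs a full 12 months of data; thereafter, quarterly.
    if len(window) == 12 and (month == 12 or (month - 12) % 3 == 0):
        assessments.append(sum(window) / len(window))

print(assessments)
```

Note that no assessment is possible before month 12; each later assessment drops the oldest quarter as the newest one arrives, so every figure is always backed by a statistically meaningful 12 months of data.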
In the next article we will look at how to bring these general principles together to set our PBC performance levels.