If planning is what we seek to achieve, then budgeting is the allocation of resources to get there. Given constraints of both time and money, optimizing this allocation of resources is mission critical. Making poor capital investment decisions is a real waste of time and money. Not only are resources drained, but alternative, beneficial initiatives are foregone. This is in effect a double whammy: the true cost of poor capital budgeting decisions is the sum of the direct wastage plus the opportunity cost.
Project scoring and ranking is important in project portfolio management to mitigate project risk and to optimize both existing projects and the selection of future projects. Ultimately, in accordance with the principles of zero-base budgeting, initiatives will need to be effectively ranked and selected in accordance with a project portfolio optimization strategy.
Project Scoring and Ranking
In the 1970s, when zero-base budgeting was in its infancy, the most common method of project ranking was for each member of a committee to assign a direct ranking score. These ranks were normalised through review and a ranking score assigned via consensus. This approach relied heavily on the quality of the decision package preparation and presentation. These methods were time consuming, inefficient, highly subjective, potentially unfair and frequently sub-optimal. This spawned the need for more objective measures, such as the key financial metrics described below.
Financial Metrics for Project Scoring and Ranking
Many large organizations have settled on project ranking criteria such as key financial measures to ‘score’ a project. The most common metric in practice is the payback period, which identifies the time taken to recoup the initial investment. Often this is calculated on an undiscounted, pre-tax accounting basis, and it provides a rough measure of financial risk. It is also, however, a deeply flawed ranking metric. Two projects that propose an equal ‘break-even’ date, and would thus be ranked equally, may in fact produce dramatically different quantitative outcomes. Clearly the one that produces the greater benefit overall should be ranked higher.
To compensate for this, another common scoring metric is Net Present Value (NPV). This naturally leads to a more logical outcome: the project with the higher NPV should rank higher. But two caveats exist here – what if the return is far in the future, beyond the current management’s planning (and perhaps employment) timeline? To counter this, NPV is often coupled with a maximum payback period. Of course, this inappropriately excludes transformational projects that deliver radical value just beyond the planning horizon. The second concern with NPV relates to the investment required to achieve it. A project that returns an NPV of $1m for a $3m investment is obviously a better value proposition than a comparison project that requires an investment of $100m to produce an equivalent NPV.
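To make the NPV mechanics concrete, here is a minimal sketch of the calculation. The cashflows and the 10% discount rate are illustrative assumptions, not figures from the text:

```python
# Hypothetical NPV sketch; cashflows and discount rate are invented.

def npv(rate, cashflows):
    """Net present value, where cashflows[0] is the initial
    (usually negative) investment at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# A $3m outlay returning $1.5m a year for three years:
project = [-3_000_000, 1_500_000, 1_500_000, 1_500_000]
print(round(npv(0.10, project)))  # positive NPV: clears the hurdle rate
```

Note that the function says nothing about the size of the outlay relative to the return, which is precisely the weakness described above.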
To address the deficiencies of NPV as the scoring metric, other organizations rely on Internal Rate of Return (IRR) as their primary project scoring and ranking metric. IRR is useful as it is easy to compare to an organization’s cost of capital. Any project that fails to return what stakeholders require should be excluded, and the remaining projects ranked in order of their IRR. This unfortunately has the converse effect to NPV: a project that returns $1m for an investment of $3m may be prioritized over a project that returns $10m from an investment of $40m. It would take 10 such low-value projects to return the equivalent value of the larger project. Are all these projects available, and does the organization have the human resources to conduct 10 projects simultaneously? Will implementing 10 low-value projects be higher or lower risk than a single large project?
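The small- vs large-project tension can be sketched numerically. IRR is the discount rate at which NPV is zero, found here by bisection; the two cashflow profiles below are invented purely to echo the example above:

```python
# Illustrative IRR comparison; cashflow profiles are assumptions.

def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Find the rate where NPV crosses zero, by bisection.
    Assumes a conventional profile: outflow first, inflows after."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid          # NPV still positive: rate is too low
        else:
            hi = mid
    return (lo + hi) / 2

small = [-3, 2, 2, 2]               # $m: modest outlay, quick returns
large = [-40, 12, 12, 12, 12, 12]   # $m: far more total value created

print(f"small IRR {irr(small):.1%}, large IRR {irr(large):.1%}")
```

Ranked by IRR alone, the small project wins, even though the large project creates several times the absolute value.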
Practically, when it comes to the replacement of existing plant and equipment, financial analyses are cumbersome to produce. A core component may stop an entire production line if it fails. The exact same benefit of replacement could thus be attributed to two critical components, even though they have significantly different costs and likelihoods of failure. For that reason, attempting to apply common financial metrics to all project types is impractical.
Clearly, no single financial metric is satisfactory on its own. However, if you employ more than one, how do you rank your projects when they give differing outcomes? For example, project A has a higher NPV, but a lower IRR and a longer payback period, than project B: which do you choose? You can start by excluding projects outside certain threshold values (e.g. IRR < cost of capital, payback > 5 years, or NPV < 0), but thereafter you will still need an appropriate scoring methodology to effectively weight each measure for ranking purposes.
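A sketch of that threshold screen, using the cut-offs from the example above; the project data itself is made up:

```python
# Threshold screening sketch; project figures are invented.

COST_OF_CAPITAL = 0.10  # hurdle rate, an assumption

projects = [
    {"name": "A", "npv": 1.2,  "irr": 0.18, "payback_years": 4},
    {"name": "B", "npv": 0.8,  "irr": 0.25, "payback_years": 2},
    {"name": "C", "npv": -0.3, "irr": 0.08, "payback_years": 7},
]

def passes_thresholds(p):
    # Exclude on any failing criterion: IRR below the cost of capital,
    # payback beyond 5 years, or a negative NPV.
    return (p["irr"] >= COST_OF_CAPITAL
            and p["payback_years"] <= 5
            and p["npv"] > 0)

survivors = [p["name"] for p in projects if passes_thresholds(p)]
print(survivors)  # ['A', 'B'] — C is screened out
```

The screen only removes clear failures; the survivors still need a weighted scoring methodology to be ranked against each other.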
Non-financial Metrics to Score Projects
Organizations have financial goals and objectives which can be evaluated by financial metrics. But what about non-financial goals, like safety? Or environmental, social and governance objectives? It’s hard to put a direct financial metric to meeting net-zero targets or enhancing social inclusion, but they are worthy strategic objectives.
What about risk, both in terms of the urgency to perform the project and the risk of implementing it? Whilst risk can theoretically be factored into the project cashflow discount rate, it rarely is. In many cases, it is more practical to simply assess the risk directly.
Or what about non-financial benefits? Those qualitative assessments related to quality, goodwill, or competitiveness that are hard to measure directly. How can these be incorporated in a project score and used to rank projects for selection accordingly?
The practical solution is to attempt to score all dimensions: benefit (quantitatively where possible and qualitatively always), strategic alignment (degree of alignment to an area’s goals and objectives), urgency (risk of not doing), and confidence (execution risk). A variety of scoring methods can be applied, as described below.
Many scoring systems are used in sports. Some count in 1s, like soccer, some in 3s and 5s, like rugby, and some in 15s, like tennis. These scoring systems have evolved over time to help separate participants and declare a ‘winner’.
Scoring projects to determine a ranking and ultimate winner requires a similar scoring system. You may ask, what is a scoring model in project management, or even, what is a weighted scoring model in project management? It must achieve the goal of determining an indicative ranking, whilst being simple to define and easy to use. When evaluating a project and determining dimension scores, the following six score-capturing mechanisms are commonly employed:
- Scoring Metrics – A scoring metric is a value entered directly. This may be a quantity (such as headcount reduction, or tonnes of carbon dioxide removed), an amount (such as net present value), a period (such as payback period), a ratio (such as profitability index) or a percentage (such as rate of return).
- Sliding Scales – Sliding scales are used to gather qualitative assessments. For example, delivery confidence may be expressed simply by selecting a position on a 5-point sliding scale, with 1 being very low confidence and 5 being very high confidence (we’ve done this many times before).
- Heat Maps – Heat maps can be used for two-dimensional assessments – risk matrices are a good example. The scorer is required to assess both the likelihood and the impact of the risk being assessed. Heat maps are often colour coded, with the high-impact, high-likelihood corner shaded red to convey an extreme risk situation. Note that risk matrices can be equally applied to both operating risk (urgency of replacement, for example) and opportunity cost risk (likelihood of foregoing beneficial outcomes).
- Multiple Choice – Several qualitative questions may be required to accurately assess a scoring dimension. For example, a more sophisticated confidence assessment may be performed by considering the major categories of project risk, e.g. scope, schedule, resourcing, commercial and technical feasibility. Each of these categories could be assessed on the same 5-point scale, or the scale could vary by criterion. For example, scope definition may be assessed with reference to previous experience on a 1 to 5 scale, whilst the supplier may simply be assessed as existing or new. Each question’s response generates its own score; these are then collated to determine a dimensional score.
- Alignment Matrices – Where a number of criteria are being evaluated on the same scale, an alignment matrix can be used. The evaluations are consistently described (for example, as Low, Medium and High) and the user simply has to score each criterion on this same basis. This matrix type is usefully applied to strategic alignment: each of an area’s goals for the period can be listed, and the project’s alignment thereto assessed.
- Hybrid Scoring – In many cases, it may be necessary to combine scoring sheets and/or vary them by stage. For example, at the ideas stage, a simple sliding scale may be most appropriate. At the proposal budgeting stage, based on rough estimations, a combination of financial metrics and qualitative multiple-choice assessment may be required. At the business case stage, once a detailed financial analysis has been performed, the benefit score could rely entirely on the financial metrics.
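As a minimal sketch of the multiple-choice mechanism above, the answers to each risk-category question can be collated into a single confidence dimension score. The categories, the equal weighting, and the rescaling to a 1–10 range are all assumptions, not a prescribed scheme:

```python
# Collating multiple-choice answers into one dimension score.
# Categories, equal weights, and the 1-10 target range are assumptions.

answers = {          # each answered on a 1 (low) to 5 (high) scale
    "scope": 4,
    "schedule": 3,
    "resourcing": 5,
    "commercial": 4,
    "technical": 4,
}

raw = sum(answers.values()) / len(answers)             # average, 1..5
dimension_score = 1 + (raw - 1) * (10 - 1) / (5 - 1)   # rescaled, 1..10
print(round(dimension_score, 2))  # 7.75
```

A real scoring sheet might weight the categories differently, or use different scales per question, as the bullet notes; the collation step stays the same in spirit.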
Standardized Scoring vs Threshold Scoring
Whatever scoring template is used to capture the assessment, the result must be converted into a reportable value.
In threshold scoring systems, the score is only a relative metric. Given a set of projects to score, the results may range from 0 to 127 (or any other arbitrary number). Every positive result simply improves the project score, with highly positive assessments incrementing the score more than slightly positive assessments. Projects are simply ranked in accordance with this score, the quantum of which is meaningless from one year to the next as the required ‘threshold score’ will be implied by the budget capacity.
Under standardized scoring systems, each dimension is scored within a common range (say 1–10 or A–E). This permits a relative assessment of the various dimensions – an urgency score of 10 or A, for example, would imply that the project scored the highest possible mark for criticality, while a benefit score of 5 would mean that the project’s benefits are only mediocre. Standardized scores are convenient for both scorers and ultimate decision makers as they give a relative feel for the project. Numeric scores are, however, preferable to alphabetical scores, as they make the application of weighting possible, as discussed in the following section.
Overall Scores by Investment Reason
Having scored each dimension (benefit, strategic alignment, urgency, and confidence) with a derived score between 1 and 10, an overall score is necessary for ranking purposes. With a standardized scoring approach, the individual dimension scores can simply be weighted and aggregated to determine an overall score.
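The weighted aggregation is a simple weighted sum. The dimension weights below are illustrative assumptions (chosen to sum to 1); an organization would set its own:

```python
# Weighted overall score from standardized 1-10 dimension scores.
# The weights are assumptions and should sum to 1.

weights = {"benefit": 0.40, "strategic": 0.20,
           "urgency": 0.25, "confidence": 0.15}

project_scores = {"benefit": 7, "strategic": 9,
                  "urgency": 4, "confidence": 8}

overall = sum(weights[d] * project_scores[d] for d in weights)
print(round(overall, 2))  # 6.8
```

Because every dimension sits on the same 1–10 range, changing the weights is the only lever needed to reflect different investment priorities.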
Calculation of this score consistently for all project types at all stages is often not practical, as different scoring systems may be applicable to different project types.
When selecting investments for a financial portfolio, international equities are never compared to domestic bonds because they belong to different investment classes. Normally, a financial adviser will first determine the allocation of the portfolio amongst the primary asset classes, and only compare specific investments within an asset class.
The same logic applies to capital project portfolio planning. It makes little sense to directly score and compare sustenance and growth initiatives for example. Whilst the same general evaluation scoring and ranking approach may be adopted, the specific scoring system and overall score assessment is likely to be different. Whilst replacements are heavily weighted towards the risk assessment, growth initiatives will be more weighted towards the benefit assessment. Regulatory compliance projects may automatically be assigned a top score, as implementation is mandatory.
Once an overall score has been assigned to each initiative through individual dimension scoring and weighting, the project portfolio optimization can commence.
Project Ranking and Project Portfolio Selection
Manual Project Portfolio Selection
With an overall score assigned to each project based on benefit, strategic, and risk considerations, it may be possible to eyeball and intuitively select an optimal project portfolio within resource (time and money) constraints. One simple method is to sort the candidate projects by overall score and include projects top-down until the budget capacity is consumed. This may, however, not produce the optimal selection because of the lumpiness of the items. Intuitively, project portfolio managers may identify opportunities to include a number of smaller projects in the place of a large project to optimise the overall score or net present value. Or vice versa – a single large project may be prioritized over a number of smaller projects as it will be easier to control, and potentially be lower-risk.
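The simple top-down method described above can be sketched as a greedy pass; the project figures are invented, and the example is deliberately constructed so that the greedy pick is not the best one, illustrating the lumpiness problem:

```python
# Greedy top-down selection sketch; figures are invented.
# Sort by overall score and fund projects until the budget runs out.

projects = [
    {"name": "A", "score": 9.1, "cost": 40},
    {"name": "B", "score": 8.4, "cost": 15},
    {"name": "C", "score": 8.0, "cost": 20},
    {"name": "D", "score": 7.5, "cost": 10},
]
budget = 50

selected, remaining = [], budget
for p in sorted(projects, key=lambda p: p["score"], reverse=True):
    if p["cost"] <= remaining:
        selected.append(p["name"])
        remaining -= p["cost"]

print(selected)  # ['A', 'D'] — yet B+C+D fit the budget and score higher
```

Here the greedy pass funds A and D for a combined score of 16.6, while the unfunded combination B, C and D would have cost 45 and scored 23.9.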
Automated Project Portfolio Selection
A key feature of modern capital expenditure management systems is the ability to automatically optimize project selections based on your preferred optimization strategy. Simply select the optimization dimension (such as overall score, benefit score or NPV), set the constraints, and the system can efficiently propose the collection of projects that maximises that outcome.
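Selecting the best-scoring set of projects within a budget is, in essence, a 0/1 knapsack problem. A hedged sketch, brute-forcing all subsets of a small invented portfolio (real systems use integer-programming or dynamic-programming solvers rather than enumeration):

```python
# Portfolio optimization as 0/1 knapsack; brute force over subsets
# is viable only for a small illustrative portfolio. Data is invented.

from itertools import combinations

projects = [
    ("A", 9.1, 40), ("B", 8.4, 15), ("C", 8.0, 20), ("D", 7.5, 10),
]  # (name, overall score, cost)
budget = 50

best_names, best_score = [], 0.0
for r in range(len(projects) + 1):
    for combo in combinations(projects, r):
        cost = sum(c for _, _, c in combo)
        score = sum(s for _, s, _ in combo)
        if cost <= budget and score > best_score:
            best_names, best_score = [n for n, _, _ in combo], score

print(best_names, round(best_score, 1))  # ['B', 'C', 'D'] 23.9
```

On the same data, the optimizer finds the B+C+D combination that a naive score-ordered selection would miss.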
Furthermore, the system can generate an efficient frontier of project portfolios across a range of budget increments. This helps determine an appropriate capital budget level by clearly illustrating the trade-off between value and cost, and can greatly assist organizations adopting zero-based budgeting principles to establish the most justified capital expenditure level in pursuit of organizational objectives, moving away from the traditional practice of fiscal-year budget escalation.
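An efficient frontier of this kind can be produced by re-running the optimizer at increasing budget levels. A self-contained sketch with invented data, again using brute force for illustration only:

```python
# Efficient frontier sketch: best achievable total score at each
# budget increment. Figures are invented; enumeration is for
# illustration, not production use.

from itertools import combinations

projects = [("A", 9.1, 40), ("B", 8.4, 15), ("C", 8.0, 20), ("D", 7.5, 10)]

def best_score(budget):
    best = 0.0
    for r in range(len(projects) + 1):
        for combo in combinations(projects, r):
            if sum(c for _, _, c in combo) <= budget:
                best = max(best, sum(s for _, s, _ in combo))
    return best

for budget in range(10, 90, 10):
    print(budget, round(best_score(budget), 1))
# the score curve flattens once the strongest projects are funded,
# which is exactly the value/cost trade-off the frontier exposes
```

Where the curve flattens is a natural candidate for the budget ceiling, which is the zero-based alternative to simply escalating last year's figure.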
Ultimately, all project portfolio decisions require manual approval, as no scoring system is infallible. The stakes are too high to rely exclusively on generated scores. However, manual selections should become the exception, not the rule. Management overrides are most likely to occur at the boundary: very weak projects are seldom promoted, and high-scoring projects are seldom rejected. Management attention is therefore best invested at that boundary, validating any scoring inconsistencies and ensuring that the maximum return is achieved.
Traditional project portfolio selection was based on gut instinct, convincing presentations, and simple financial metrics.
Modern capital management systems provide scoring systems for all key project dimensions including benefit, strategic alignment and risk. These scores can then be weighted into an overall score for project prioritization and automated portfolio optimization.
Ultimately management experience is invaluable, and no scoring method is perfect. Exceptions are inevitable. But by starting with a fair and transparent scoring and ranking system, management effort is focussed on the boundary between good and bad. And the benefit of selecting one less bad project and one more good project is doubly good. So, if you’re not effectively scoring and ranking your candidate projects, upgrade your capital budgeting system today!