In most organizations, the demand for project work far exceeds the funding and resources available. Approaches to resolving this challenge range from pitched power struggles, rigorous portfolio management practices and demand management arrangements to siloing strategies and executive dictates.
In this post, we’ll look at a company that came to grips with the excess demand challenge using a simple, nine-point ranking process. Project proponents used the tool to quantify the potential enterprise value of each proposal. All submissions were then ranked to select the top-value proposals. The cut-off was established based on the estimated level of affordability for the upcoming year, neatly matching demand with capacity. Simple, consistent, fast and good for the organization.
Thanks to I.B. for his contributions on this case.
This manufacturing organization faced an ongoing challenge to balance demand for project resources with organizational value, affordability and the profit expectations of its shareowners. Past attempts to deal with the issue had resulted in the use of a number of new practices: in-depth business cases, funding allocations by division based on a high level assessment of divisional contribution to strategic objectives, quarterly funding allocations and a formal gating process.
The CFO was still not satisfied with the prioritization and funding approaches. He believed that many short and long term profit opportunities were being left on the table. He discussed his reservations with the CEO and other senior executives but found support for addressing the issue lukewarm at best. So he set out to see what other organizations were doing to address the challenge, as a prelude to revising his company’s internal practices. He assigned one of his Finance managers to take on the challenge.
The manager’s mandate: investigate the best practices used by market leaders in their industry and in other industries, assess how those practices were contributing to organizational success and recommend improvements to the company’s own internal operations. He was challenged to have a formal, justified proposal in place within six months, to coincide with the start of the corporate budgeting cycle. Any funding required would come from the manager’s operating budget, with assistance from the CFO’s budget as needed.
The Finance Manager enlisted a couple of his senior staff and started a literature search for project prioritization and funding practices to get the ball rolling. He also engaged the Systems Development Manager on key project success factors and project management best practices. The four of them were soon overwhelmed with information. They decided to constrain the scope of the undertaking by developing an assessment matrix with four components:
- No more than ten selection criteria
- A rating scale for each criterion that would produce a confirmable, quantitative result
- A weighting for each criterion that could be adjusted to reflect current corporate priorities
- A total score that could be used to rank projects
The Finance Manager also suggested that they stage their efforts: test concepts rapidly, garner feedback and iterate until they had a solid proposal, testing each model on their current project inventory and backlog. His teammates agreed, as did the CFO.
The team of four developed three iterations over fourteen weeks. Each iteration was tested on the current project inventory and the results reviewed with the CFO. Based on qualitative and quantitative assessments, the model was revised and a new iteration was processed for subsequent review. In three months, the Finance Manager and CFO were ready to present their findings and recommendations to the company’s executives for approval.
After some stubborn resistance and localized grumbling, the proposed project prioritization and ranking process was approved unanimously by the company’s executives for use in the upcoming budget cycle. The prioritization mechanism included three components: a stakeholder model, a prioritization matrix and a process guide.
- Process Owner – the CFO was identified as the owner of the process and the final arbiter on any issues related to the operation of the prioritization and ranking practice.
- Reviewing Partners – the members of the Executive Committee were identified as Reviewing Partners, responsible for vetting every submission to ensure completeness and consistency across the portfolio.
- Submission Owners – the Sponsors of the submitted proposals were designated as the Submission Owners. They were responsible for the accuracy and integrity of the submissions. They were also responsible for socializing the proposal and ensuring all stakeholders were on side.
- Collaborators – the change Targets were identified as Collaborators. They were responsible for signing off on the accuracy and integrity of the submissions before they were presented for consideration.
Nine assessment criteria were selected as the primary indicators of worth and risk, with an assessment scale for each. The team also defined Value and Weight ratings for each criterion. They saw the Value rating as an important indicator of project success that would stay constant from year to year. The Weight rating, on the other hand, could be varied year by year to reflect the needs of the corporation, from favouring shorter term, lower risk projects to emphasizing strategic alignment.
The assessment criteria included:
- Sponsors – Sponsorship was seen as a key success factor. However, the more sponsors there were on a project, the greater the risk. The scale was Number of Sponsors, the Value was set at 20%, the Weight at 20% and the total calculated as Value/# Sponsors*Weight.
- Targets – Targets are the managers/decision makers of organizations affected by a planned change. As with Sponsors, the more Targets involved, the greater the risk. The scale was Number of Targets, the Value was set at 20%, the Weight at 10% and the total calculated as Value/# Targets*Weight.
- Associated Projects – Associated Projects was included to identify inter-dependencies that needed to be considered during project selection. Of course, the more Associated Projects, the greater the risk. The scale was Number of Associated Projects, the Value was set at 10%, the Weight at 10% and the total calculated as Value/# Projects*Weight.
- Supported Strategies – Supported Strategies was included to reflect strategic alignment. As well, it allowed for an analysis of potential inter-dependencies and identification of gaps in strategic support. The scale was Number of Supported Strategies, the Value was set at 15%, the Weight at 10% and the total calculated as Value*# Strategies*Weight.
- Strategic Fit – Strategic Fit focused on the significance of a project’s support for stated corporate and line of business strategic goals. The scale was defined as:
- Critical to the success of line of business and corporate goals – 5
- Necessary to deliver line of business and corporate goals – 4
- Supports line of business and corporate goals – 3
- Tactical with strategic component – 2
- Tactical solutions only – 1
- Has no direct or indirect relationship to the corporate vision – 0
The Value was set at 10%, the Weight at 10% and the total calculated as Value*Strategic Fit*Weight.
- Economic Impact – Economic Impact considered the value expected to be delivered to the organization in terms of annual benefit and payback. The scale was defined as:
- under 1 year – 5
- 1 – 2 years – 4
- 2 – 3 years – 3
- 3 – 4 years – 2
- 4 – 5 years – 1
- 5+ years – 0
The Value was set at 10%, the Weight at 15% and the total calculated as Value*Economic Impact*Weight.
- Competitive Advantage – Competitive Advantage focused on the value derived from implementation of a project supporting a new business strategy, product or service. The scale was defined as:
- Greatly improves competitive position (24 month window) – 5
- Substantially improves competitive position (12 month window) – 3
- Moderately improves the competitive position (6 month window) – 1
- Brings the level of service/ products in line with the competition – 0
The Value was set at 5%, the Weight at 10% and the total calculated as Value*Competitive Advantage*Weight.
- Competitive Risk – Competitive Risk assessed the degree to which failure to do the project would cause competitive damage. The scale was defined as:
- Postponement will result in irrevocable damage – 5
- Postponement will result in further competitive disadvantage – 4
- Postponement may result in further competitive disadvantage – 3
- Can be postponed for up to 12 months without affecting competitive position – 1
- Can be postponed indefinitely without affecting competitive position – 0
The Value was set at 5%, the Weight at 10% and the total calculated as Value*Competitive Risk*Weight.
- Project Risk – Project Risk focused on the degree to which the organization is capable of carrying out the changes required by the project. This is a negative factor. Higher measurements represent greater risk. The scale was defined as:
- Magnitude of change
- Major change for targets – 5
- Moderate change for targets – 3
- Minor change for targets – 1
- Primary target of change
- Customer – 10
- Field or remote locations – 5
- Internal – 1
- Management commitment
- No active sponsorship – 10
- Actively managed at mid-management levels – 3
- Actively sponsored at highest level – 1
- Project management/skill sets
- Inexperienced – 10
- Somewhat experienced – 3
- Experienced and capable – 1
- Cross-organizational implications
- Affects all services or lines of business – 10
- Affects multiple services or lines of business – 3
- Confined to limited services or lines of business – 1
- Magnitude of business & technology changes
- Major change to existing environment – 5
- Completely new business or technology – 3
- Minor changes to existing environment – 1
- Clarity of business need
- Ill-defined – 10
- Generally understood – 3
- Clearly defined – 1
- Knowledge of target solution
- Brand new product or service – 5
- Somewhat new/limited applications – 3
- Very well-known/industry wide – 1
- For total scores of 8 – 26, the risk rating is Low
- For total scores of 27 – 46, the risk rating is Medium
- For total scores of 47 – 65, the risk rating is High
The Value was set at 5%, the Weight at 5% and the total calculated as -Value*Project Risk*Weight, a negative number that reduced the overall project rating. The higher the risk rating, the larger the reduction.
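The scoring mechanics described above can be sketched in a few lines of code. This is an illustrative model only, using the Value and Weight percentages quoted in the text; the field names are invented for the example, and the original team built theirs in Excel:

```python
# Illustrative sketch of the nine-criterion scoring model described above.
# The percentages are the Value/Weight figures quoted in the text; the
# dictionary keys are assumptions for this example, not the company's
# actual worksheet labels.

def total_score(p):
    """Return the prioritization total for a proposal dict `p`."""
    score = 0.0
    # Count-based criteria: more parties or dependencies mean more risk,
    # so the Value is divided by the count (Value / count * Weight).
    # (The post doesn't say how a zero count is handled; assume at least 1.)
    score += 0.20 / max(p["sponsors"], 1) * 0.20
    score += 0.20 / max(p["targets"], 1) * 0.10
    score += 0.10 / max(p["associated_projects"], 1) * 0.10
    # Supported Strategies multiplies: more strategies supported, more value.
    score += 0.15 * p["supported_strategies"] * 0.10
    # Rated criteria on the 0-5 scales: Value * rating * Weight.
    score += 0.10 * p["strategic_fit"] * 0.10
    score += 0.10 * p["economic_impact"] * 0.15
    score += 0.05 * p["competitive_advantage"] * 0.10
    score += 0.05 * p["competitive_risk"] * 0.10
    # Project Risk (8-65 scale) subtracts: higher risk, larger reduction.
    score -= 0.05 * p["project_risk"] * 0.05
    return score
```

Note how the divided criteria reward single-sponsor, low-dependency proposals, while the Project Risk term can pull a high-risk proposal well down the ranking.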
In addition, two further pieces of information were required: Annual Benefit and Payback Period. At this early stage, it didn’t make much sense to try to develop cost estimates. Instead, submission proponents were asked to project an annual benefit and the payback period required, based on the nature of the change. For example, operational changes would be expected to provide a quick return (usually less than a year), tactical initiatives should pay back within one to three years and strategic undertakings could have a payback period of three years or more. That information was used to arrive at a target investment amount (annual benefit x payback period). The target investment was used as a proxy for estimated cost, on the proviso that the solution would be managed within that number.
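The target-investment proxy is a one-line calculation; as a minimal sketch (the function name is assumed for illustration):

```python
def target_investment(annual_benefit, payback_years):
    """Proxy for estimated cost: annual benefit x payback period.
    The solution is expected to be managed within this amount."""
    return annual_benefit * payback_years
```

For example, a tactical initiative forecasting a $400,000 annual benefit with a two-year payback would carry an $800,000 cost ceiling.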
The Prioritization Matrix was presented in a worksheet form that submitters were required to complete, socialize, present and defend in seeking funding for their initiative.
Completing the worksheet was an informative process for the participants. They were often forced to go back and reconsider aspects of their proposal to increase the value and reduce the risk in an effort to improve the total score. Often, what started out as a multiple sponsor initiative ended up with one, more committed sponsor. Proposals that had a significant risk were often reconfigured to reduce the risk and increase the chance of funding. On occasion, proposals were dropped from consideration when the sponsor and collaborators realized what looked like a great idea didn’t pan out with further due diligence.
The Process Guide included eight steps, from conceptualization to the final decision regarding the projects to be funded. The sponsoring executive was required to ensure proposed projects were socialized with managers affected by the change and that affected decision makers were in agreement.
Each initiative was presented by the sponsoring executive in an executive scrum. The presenting executive was challenged by his or her peers to justify each position, especially the specific strategies supported and the ratings for strategic fit, economic impact, competitive advantage, competitive risk, project risk, forecast annual benefit and the target payback period.
There were heated debates about most of the submissions and the values assigned. In many cases, the debate was taken offline, where further discussions and investigations resulted in modified and resubmitted proposals. The process had two significant benefits: the proposals, when finally approved for funding, were solid, well researched, fully supported ventures, and all senior executives were fully informed about the initiatives going forward. The CFO, in his role as process owner, was responsible for making the final call on a proposal if agreement couldn’t be reached in the scrum sessions.
The CFO’s staff accumulated the approved proposals in a ranking table, ordered by the prioritization total.
Individual project proposal totals and rankings could be tweaked by the CFO’s staff to reflect project inter-dependencies, mandatory initiatives like government legislation, critical operational fixes and other planning opportunities. Finally, the CFO’s staff would determine the appropriate level of funding to commit for the upcoming period and draw a line across the ranking chart. Projects lying above that line would get funded. Projects below the line would not.
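The ranking-and-cut-off step can be sketched as follows. The function name and tuple layout are assumptions for this illustration, with target investment standing in for cost as described earlier:

```python
def fund_projects(proposals, budget):
    """Rank proposals by prioritization total (highest first) and draw the
    funding line where the cumulative target investment would exceed the
    budget. `proposals` is a list of (name, total, target_investment)
    tuples -- an assumed layout, not the company's actual table format."""
    ranked = sorted(proposals, key=lambda p: p[1], reverse=True)
    funded, remaining = [], budget
    for name, total, investment in ranked:
        if investment > remaining:
            break  # the funding line: everything below this rank is unfunded
        funded.append(name)
        remaining -= investment
    return funded
```

Because the line is drawn across the ranked chart, the sketch stops at the first proposal that no longer fits rather than skipping it to fund cheaper, lower-ranked items.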
After the first use of the prioritization practice and completion of the budget cycle, the CFO’s staff surveyed those involved to see how the practice met its goals. The almost unanimous feedback: simple, consistent, fast and great for the organization.
How Great Leaders Delivered
Judging by the millions of hits an internet search for “project prioritization” returns, this is obviously a challenge for many organizations, and there are undoubtedly a plethora of solutions. The CFO, his Finance Manager and the team did a number of things right to solve the prioritization challenge and deliver a solution that was right for their organization.
- Collaborate – The CFO sounded out his colleagues at the outset and received a lukewarm reception. But through that initial contact, the other executives knew something was probably afoot so they weren’t surprised when the proposal was presented to them. Also, collaboration was at the core of the prioritization practice and contributed significantly to its success.
- Reuse – The Finance Manager and his teammates started with a broad search for applicable practices. They certainly weren’t disappointed. Reusing what others had developed and discovered before, they were able to develop a practical, powerful solution quickly and cost-effectively.
- Test – The team’s decision to iterate, review and test the output on their existing project inventory allowed for a rapid discovery cycle while minimizing the investment of time and accumulation of vested interests. Was their solution perfect? Certainly not. What they did get though was good enough to meet the challenge.
- KISS – Keep It Simple Silly. They did keep it simple. Nine factors. A simple scale for each. A Value rating. A Weight rating. An annual benefit. A target payback period. Add them all up to get a total. Review with others. Decide. Oh, and they used Excel.
- Measure – The CFO’s team worked with the sponsors and collaborators to complete the worksheets and go through the review and decision processes. They adjusted the materials based on the feedback received. They communicated widely about usage: the number of proposals under development, received, reviewed and approved. They interviewed the executives midway through the budgeting process and made appropriate revisions. They surveyed the users after the budget cycle and communicated the results. They knew how the process was performing throughout the budgeting cycle.
In my mind, this is a great example of a solution fitting the need. It wasn’t the perfect solution. But it was more than good enough. So, if you find yourself in a similar situation, put these points on your checklist of things to do in future endeavours so you too can be a Great Leader. And remember, use Project Pre-Check’s three building blocks covering the key stakeholder group, the decision management process and Decision Framework best practices right up front so you don’t overlook these key success factors.
Finally, if you have a project experience, either good or bad, past or present, that you’d like to have examined through the Project Pre-Check lens and published in this blog, don’t be shy! Send me the details and we’ll chat. I’ll write it up and, when you’re happy with the results, we’ll post it so others can learn from your experiences. Thanks
Drew Davison is the owner and principal consultant at Davison Consulting and a former system development executive. He is the developer of Project Pre-Check, an innovative framework for launching projects and guiding successful project delivery, the author of Project Pre-Check – The Stakeholder Practice for Successful Business and Technology Change and Project Pre-Check FastPath – The Project Manager’s Guide to Stakeholder Management. He works with organizations that are undergoing major business and technology change to implement the empowered stakeholder groups critical to project success. Drew can be reached at firstname.lastname@example.org.