Project Management Case Study: 

Lessons Learned From The “Forecasting” Implementation at Beta Corporation

A Pre‐Course Reading Exercise for the Voice of the Team (VOT) Workshop Using the PRIMMS® Project Risk Management Tools

Version 5.0
11/25/2017
Beta Corporation Forecasting Project Case Study Abstract

The Forecasting Project at Beta Corporation is an adaptation of an actual project situation that illustrates how limitations of cognitive processing, human biases, and organizational dynamics can undermine good project management processes. The case examines the internal dynamics of an application development/business transformation project that begins with a very strong business case and solid backing from senior management. Yet as the project unfolds, the warnings of knowledgeable team members are ignored, and management (i.e., the project manager, the CIO, and the project sponsor) becomes increasingly insulated from serious emerging risks. The result is a disastrous business outcome that could have been avoided if the risks had been made visible and dealt with effectively early in the project life cycle.

The case study reinforces several key points to be made in the workshop:

  • Project environments are both stressful and information intensive. It is well documented that our ability to process information is limited to about 7 ± 2 chunks of information at once (Miller 1956), and stress causes additional cognitive constriction (Janis and Mann 1977). Yet vigilant project management decision making requires the full use of all available information.
  • Once a project manager and project team make a “public” commitment to a challenging objective such as a scheduled go-live date, psychological biases kick in that can cause them to ignore or reject new information pointing to the contrary. This phenomenon, known as a decision trap, can lead to misrepresentation in project reporting that delays the recognition of serious emerging risks until it is too late to recover.
  • Mental conflict (Cognitive Dissonance) occurs when beliefs or assumptions are contradicted by new information. When confronted with challenging new information, people often seek to preserve their current understanding of the world by rejecting, explaining away, or avoiding new information or by convincing themselves that no conflict really exists.
  • To offset these information challenges in a project environment, we use recurring project team feedback to uncover important insights into not only emergent technical issues but also emergent project management process problems. Raising both technical issues and project management process concerns creates cognitive dissonance, which is channeled to drive action toward prevention and correction.
  • Weekly Voice of the Team (VOT) surveys probe the candid insights of the entire project team and stakeholders throughout the project life cycle. The continual collection of diverse insights from stakeholders focuses on the identification and follow-up of both project-specific technical risks and benchmarked project management process risks. Because the survey is confidential, team members can safely vocalize concerns about emergent issues and project management process problems.
  • Although the Forecasting Project case study addresses only a single, specific project, the events described are generalizable to almost any complex technology project, especially those requiring a targeted return on investment.

Background

Beta Corporation is a large, global manufacturing company with headquarters in the Midwestern U.S. The company’s material master contains approximately 50,000 different finished good materials that it produces and sells globally. 

Each month, the sales, supply-chain, and financial planners provided a rolling 12-month forecast in units and revenues and submitted it to management for sign-off. The teams produced these forecasts in Excel spreadsheets over four weekly cycles:

  • Week 1—Sales planners gather data on past sales, analyze trends, and provide sales forecasts. Demand planners revise inventory policies and customer service policies.
  • Week 2—Supply Chain planners assess ability to meet demand, make capacity adjustments and schedule operations.
  • Week 3—Finance matches supply and demand plans with financial considerations.
  • Week 4—Finance finalizes the plan and releases for implementation.

The monthly forecasting process is shown graphically in Figure 1 below.

Figure 1

Historically, the forecasting process was always very difficult to execute and was rife with problems. The financial people complained that there was a lack of uniformity: some product lines were aggregated into families and forecasted at a consolidated level by sales planners, whereas other product lines were forecasted granularly at the SKU level (the finished good part number level). The financial people also complained that the sales forecasts were usually too low, reflecting an underachieving sales force, and that accountability was lacking for specific salespeople to achieve the forecasts.

The sales planners were also dissatisfied with the process. They complained that the Financial planners would arbitrarily boost the forecasts to unachievable sales levels in order to make the financials look good for Wall Street. They argued that Finance would find “Pennies from Heaven” and then artificially bump up the forecasts without consulting the salespeople. 

Both the sales planners and the supply chain planners complained that Finance wanted too much detail in the forecasts. They argued that the forecasting process took too much time and that the excessive detail created inaccuracies. Additionally, they argued that the forecast was inaccurate because actual monthly demand at the SKU level was generally sporadic, idiosyncratic, and often nearly random, with many months' actuals at zero volume. They maintained that accurate forecasting was only possible if the forecasts were made less granular by aggregating the materials into product families as the basis of the forecast.

In addition to process problems, technology problems existed. All participants in the forecasting process complained that Excel was cumbersome to work with: files were frequently corrupted without proper backups, causing rework, and formulas across worksheets frequently contained errors, leading to lost time.

 

The Emergence of the “Forecasting” Project 

In early 2012 Mike Jacobs, CFO of Beta Corporation, decided that the forecasting process needed to be overhauled and re-engineered. Mike believed that a better technological solution had to exist, so he called a meeting with Beta Corporation’s CIO, Betty Blair, to discuss alternative solutions that might be available. 

During the meeting, Betty indicated to Mike that she was quite familiar with alternative approaches, as she had successfully implemented a similar solution using the “Horizon” forecasting application for a previous employer the year before. Betty explained that Horizon was a web-based, multi-user application that gave users Excel-like features and functions yet could handle the big-data demands of Beta Corporation. After listening to Mike's commentary, Betty stated that Horizon was most likely the application that could best accommodate his stated requirements. Betty clarified that Horizon would not be a COTS solution; rather, it would require customization using VB (.NET) programming. Betty estimated a six-month implementation costing about $600,000 in outside consulting, plus a licensing fee of $70,000 per year given the required number of users. She added that they might also have to upgrade SQL Server to the latest version.

Mike was pleased to hear this. He felt that a business case could easily be constructed to justify this kind of investment and implementation. Mike asked Betty to organize a team to objectively investigate all available options and to confirm the required scope and cost within a month. Mike explained that this was an urgent priority and needed immediate attention. Betty agreed to accomplish this right away as she knew that Mike had the authority and power to push this project through the executive committee. Betty saw this as a major potential win for herself. So she immediately contacted her best IT project manager, Jack Gilmore, to put him on the assignment.

Throughout the first quarter of 2012, the IT group evaluated various automated forecasting options and decided that “Horizon” was indeed the appropriate forecasting solution to meet Beta Corporation’s requirements. In early April 2012, Mike Jacobs assigned two of his financial planners to join the core IT team in order to help them write a detailed requirements document. During April the IT group upgraded the SQL database to the latest version.

The core team completed the requirements document by the end of April and sent it to several qualified implementation firms for bids. By mid-May, the core team had received four qualified bids along with follow-up visits and presentations from each of the prospective vendors. After a two-week vetting and negotiation period, the team chose “Axiomatic Inc.” as the vendor. Contracts were signed, and the vendor team was set to begin work in the first week of June 2012.

During each of the vendor visits, the vendor raised the question of granularity (i.e., level of detail) in the design of the process. None of the vendors had ever implemented a process containing 50,000 finished goods, and all four suggested that Beta Corporation consider aggregating (i.e., rolling up) the forecasts at least somewhat to keep system performance and process complexity acceptable to users. Axiomatic Inc. was the only one of the four that expressed confidence and enthusiasm that the system could be configured to meet all of Beta's requirements while operating at an acceptable level of performance.

During the vendor presentations, Mike Jacobs and his two finance planners were absolutely insistent on the required level of granularity in the forecasting system. In fact, the requirements document specified not only that forecasts be entered at the lowest SKU level, but also that they be entered for each product, customer, and salesperson combination. This requirement multiplied the complexity of the forecasting process, but Mike was adamant that greater accountability on the part of salespeople was a critical success factor for the project. During one of the meetings, Axiomatic Inc. acknowledged that the goal was challenging but expressed confidence that they could achieve it. Several of the IT team members privately expressed doubts. The sales and supply chain planners who attended these meetings suggested (very diplomatically) that the system would be so complex that their teams might be unable to enter all of the necessary detail and consistently meet a monthly forecasting schedule. After Mike's explosive reaction to their comments, they sensed that this was a very sensitive subject and remained mute on it thereafter.
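To see why the per-product, per-customer, per-salesperson requirement multiplies the data-entry burden, a rough back-of-envelope count is useful. Only the 50,000-SKU figure comes from the case; the per-SKU customer and salesperson counts below are purely illustrative assumptions.

```python
# Back-of-envelope estimate of forecast cells under the granular design.
# Only the 50,000-SKU figure comes from the case; the other numbers are
# illustrative assumptions, not data from Beta Corporation.
skus = 50_000
avg_customers_per_sku = 5          # assumed
avg_salespeople_per_customer = 1   # assumed
months_in_rolling_horizon = 12     # rolling 12-month forecast (from the case)

combinations = skus * avg_customers_per_sku * avg_salespeople_per_customer
cells_per_cycle = combinations * months_in_rolling_horizon

print(f"Forecast line items per month: {combinations:,}")          # 250,000
print(f"Forecast cells per monthly cycle: {cells_per_cycle:,}")    # 3,000,000
```

Even with conservative assumptions, the granular design implies millions of forecast cells per monthly cycle, versus tens of thousands if the 50,000 SKUs were aggregated into product families, which is why the vendors and planners kept pressing the aggregation question.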

Managing the Implementation Startup

The kick-off for the project officially occurred on June 1, 2012. The project team was structured in a highly matrixed fashion, as shown in Figure 2 on the following page. The key players were Jack Gilmore (project manager for Beta Corp), Suresh Khan (lead consultant for Axiomatic Inc.), Betty Blair (CIO), and Mike Jacobs (Finance VP and sponsor). Technically the two Finance subject matter experts reported to Jack, but they took functional direction from Mike Jacobs. The Sales and Supply Chain subject matter experts (SMEs) attended design walkthroughs and were responsible for data readiness for loading the material master and historical sales data into the Horizon database.

Throughout June the project team developed a detailed project plan, set up the development environment, and worked on the system prototype to be demonstrated to the SMEs for validation. The process went smoothly, as reflected in the first status report to the steering committee on July 8th, 2012 (shown in Figure 3).

The project was planned to follow a hybrid project methodology. The project work breakdown structure followed a traditional (waterfall) structure, but the phases were iterative where prototypes were developed and progressively elaborated. In addition, the business and IT team members were co-located and conducted daily meetings. 

On July 17th, the first iteration of forecasting functionality was demonstrated to all of the Finance, Sales, and Supply Chain SMEs. The discussion following the demo made clear that striking an agreement between product managers and supply planners across different product lines would be very difficult. Some sales and product managers liked the high level of granularity; others did not. Some planners felt that Unit of Measure and freight amounts would need to be added; others did not. Some sales planners felt that the canned reports were sufficient; others indicated a need for custom reports. Nearly all of the sales and supply chain planners voiced concern over the level of complexity required to make the new business process work: it was far more detailed than the forecasting approach everyone was accustomed to.

The demonstration also revealed rather poor response times (slowness) in the system, even with only test data. Questions were raised: if the system was this slow for a simple demo without a full data load, how bad would it be with all 50,000 materials loaded?

The core team members were more positive and viewed the demonstration as a moderate success. From their point of view, there were no compelling comments made by reviewers suggesting that the new process could not work the way Mike (the sponsor) envisioned, but in their minds, there were clearly some organizational change management issues that needed to be handled. The team did acknowledge that perhaps different lines of business needed different configurations of the forecasting product based on their unique requirements.

The next week, the core team huddled and discussed the possible need for different design variations for different product lines. They agreed that Mike needed to hear about this, so during the third week of July the two Finance SMEs met with Mike to suggest that perhaps each product line should have its own tailored version of the product, and that a big-bang go-live might not be the best decision. It might make sense, they argued, to go live first with the product lines where the existing design fit was highest. Mike listened to the arguments, but he would have none of it. He indicated that this was an enterprise project in which the same level of detailed granularity was needed for all product lines, and that there was no need for multiple versions of the application. Mike did acknowledge that some custom reports might have to be developed, and he said he would work with the business owners to settle the issues regarding Unit of Measure and Freight.

Figure 2 Project Structure

Figure 3 July 8th Steering Committee Status Report and Format

Managing the Build and Test

Jack Gilmore managed the project using MS Project as a scheduler containing about 1,400 activities, and the core team reviewed the action item/issue log weekly. Jack also held daily tag-up meetings with the team each morning. In addition, Jack and Betty agreed to meet weekly to discuss project status using a traffic light (red/yellow/green) report along with a summary of key issues.

While working with Betty, Jack soon discovered that Betty became uncomfortable when he discussed task details or emergent project issues in his reporting. In fact, the culture of the entire executive team was to keep project discussions at a very high level and avoid in-depth discussion of issues. Therefore, Jack kept his status reports to Betty very brief and to the point. Throughout July and August, Jack reported that overall the project was on track (green), but that the interfaces to Horizon, the data extracts, and the custom reports probably would not be ready for the two integration test cycles as originally planned and would have to slip into the User Acceptance Testing (UAT) cycle. Jack and Betty agreed to move these three items on the schedule to finish in time for the start of UAT in late October.

During July and August, Suresh Khan, lead consultant for Axiomatic Inc., became increasingly concerned about system performance risks. As the build progressed, Suresh and his team statistically modeled the number of calculations required and grew alarmed that a typical user transaction would incur a 20-minute response-time delay. In other words, users would have to wait 20 minutes every time they entered a system command. Suresh raised this risk with the core team and continued to ask whether Finance would back off the requirement to make the forecasting process so granular. The two Finance SMEs strongly rejected Suresh's suggestion and accused him and his company of being incompetent and unwilling to fulfill their commitment.

Suresh was perplexed. He could not yet prove that there would be a problem once the system was fully loaded (full loading would occur only in October, in time for UAT). He also believed that his technical team was sharp and might find a way around this risk. So he stopped bringing the matter up at the daily meetings. Jack (the PM), meanwhile, observed that a high level of acrimony was developing between Suresh (and his consultants) and the Finance SMEs. As a result, Jack avoided escalating the problem further and decided not to bring the matter up with Betty (the CIO); after all, he was not really sure himself whether this was a real or imaginary issue. Jack did briefly explore the possibility of using automated load testing to simulate a full load prior to UAT, but he soon dropped the idea after learning about the amount of script writing required and the expense of this kind of testing for a first release.

The build progressed according to schedule through August and the first two weeks of September. The core team went through several iterations of internal testing (cycle 1 testing) and debugging while providing walkthroughs to the Sales, Supply Chain, and Finance SMEs. Test scripts and user training materials were under development. Management seemed satisfied that the project was performing to plan, and the steering committee was told that the project was “green”. The project was making a special effort to ensure that users would get their customized reports and that the “Organizational Change Management” team was working with users to ensure high adoption of the new system at go-live.

Jack Gilmore continued to feel uneasy, however. Underneath the green veneer, he saw some disturbing signs. First, there was no evidence that the response-time risk was actually being resolved or mitigated. All of the user walkthroughs exhibited a very slow transactional response, even under no-load conditions; the walkthrough facilitators always attributed the slowness to a poor internet connection to the development environment. Second, the sales and supply chain SMEs who attended the walkthroughs were dead quiet during the sessions, yet outside the formal meetings Jack was hearing through the grapevine that they believed the system was being shoved down their throats and that objecting meant political suicide. As a result, they remained silent.

Then on September 15th, a big announcement came through by email: Mike Jacobs, the CFO, was leaving the company to become president of another firm. The core team immediately realized that this could mean real trouble for the project. Sure enough, the Sales and Supply Chain SMEs became vocally resistant to the system during walkthroughs over the final two weeks of September. As a result, the core team reopened the discussion of whether to forget about a “big bang” cutover and instead go live only where the product managers and supply chain managers wanted the existing design of the Horizon system. The core team was preparing to discuss this option at the September 27th steering committee meeting when, on the 25th, the team learned that Betty Blair was leaving the company to become CIO of another firm. Sponsorship and support for the Horizon project were crumbling.

On October 2nd Jessica White, an IT director, was named interim CIO. Jessica was also named interim sponsor for the Horizon project. Jessica advised Jack to proceed with the second cycle of integration testing which was scheduled to start on October 5th. Jessica indicated that she needed time to sort out the various and highly emotional viewpoints about the Horizon project.

Between October 5 and October 23, the core team executed the full battery of cycle 2 integration tests and uncovered 140 defects and resolved 135 of those defects. See Figure 4 below. Four of the five interfaces were also tested during this cycle.

By October 23, the ETL data extract and load programs had been unit tested, and the functional SME testers were trained and ready to perform UAT. The system was fully loaded with data on October 24-25, and user acceptance testing started on schedule on October 26. From day one of UAT, the number of discovered defects surged. See Figure 4 below. Most of the problems were related to system performance and response time; many others involved errors in reports and process complexity. Testers complained that a five-minute task now took over an hour of waiting for the system to respond.

Figure 4 

Defects Discovered and Resolved During Cycle 2 Integration Testing and User Acceptance Testing

The testers and core team struggled for over a week. By November 2, it was clear that the core team could not close the UAT defects fast enough to keep the cycle going. As a result, the UAT test cycle was postponed until the core team could find a workable solution. Suresh Khan expressed pessimism about finding a response-time solution with the current granular design.
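The team's predicament can be expressed as a simple burn-rate comparison: whenever defects are discovered faster than they are closed, the open-defect backlog grows every day and the test cycle stalls. The daily rates below are illustrative assumptions, not figures from the case; only the overall trend (closure keeping pace in cycle 2, discovery surging in UAT) comes from the narrative.

```python
# Simple open-defect backlog projection. All daily rates are illustrative
# assumptions; the case reports only totals and the overall trend.
def backlog_after(days, found_per_day, closed_per_day, start=0):
    """Project the open-defect count day by day (it can never go below zero)."""
    open_defects = start
    for _ in range(days):
        open_defects = max(0, open_defects + found_per_day - closed_per_day)
    return open_defects

# Cycle 2 integration testing: closure keeps pace with discovery.
print(backlog_after(days=14, found_per_day=10, closed_per_day=10))  # 0

# UAT: discovery outpaces closure, so the backlog grows without bound.
print(backlog_after(days=7, found_per_day=25, closed_per_day=10))   # 105
```

Under these assumed rates, a week of UAT leaves a backlog of over a hundred open defects, which is the arithmetic behind the November 2 decision to postpone the cycle.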

Over the next several weeks, sponsorship and functional support for the project continued to evaporate. A new VP of Finance was hired; he told Jessica White that forecasting was not in Finance's purview and that Sales and Supply Chain should own the process. Jessica organized a special meeting with the Sales and Supply Chain SMEs and management to decide upon a course of action. During the meeting, only one group expressed any interest in using the Horizon application, and only if the response-time issue could be solved and the highly granular, time-consuming process design remained.

It was clear to Jessica that the project needed to be shut down. The project was officially canceled on November 17th. The loss of investment was over $800,000. 
