Leveraging a Y2K Evaluation
To Improve Information Systems Architecture

Please cite: Brown, G., Fisher, M., Stoll, N., Beeksma, D., Black, M., Taylor, R., Choe, S., Williams, A., Bryant, W., and Jansen, B. J. 2000. Leveraging a Y2K Evaluation To Improve Information Systems Architecture. Communications of the ACM, 43(10), 90-97.


The cost of ensuring that information systems were prepared for the Year 2000 (Y2K) was enormous. The U.S. government spent an estimated $8.34 billion [1]. Once the costs borne by state and local governments, public utilities, and the commercial sector are included, the cost in the U.S. alone is staggering. When the Y2K costs from other countries are added, the estimated total exceeds $3 trillion [2]. With the new millennium here, it appears that most of the potential problems were identified, although some fixes are still ongoing. One would hope, however, that organizations are able to recover more from these costs than just the positive statement "We can successfully operate our information systems in the new millennium."

We propose that additional benefits can be gained from these Y2K costs by leveraging the experiences of Y2K testing into a method for evaluating and improving an organization’s information technology systems. In addressing the Y2K issue in the Republic of Korea (ROK) / US Combined Forces Command (CFC), we identified the organizations with which CFC must communicate, its mission critical tasks, and the underlying information technology systems that enable CFC to accomplish these tasks. Through this process, CFC gained valuable knowledge concerning its operational information systems architecture. The organization now has both a baseline and a systematic methodology available to improve the employment of its information management systems (IMS). With a vision of the organization’s desired information technology end state, this baseline and methodology permit us to prepare a road map of how to get there.

This article provides an overview of the CFC Y2K operational evaluation (OPEVAL) including the detailed planning and the variety of organizations involved. We cover the implementation of the OPEVAL in detail, the method of data collection, and the results in terms of Y2K issues and information concerning our IMS. We conclude with recommendations for employing this Y2K evaluation methodology in other activities aimed at improving an organization’s information systems architecture.

Y2K Issue

The Y2K issue has been well reported in the popular media [2] and academic press [3]. Almost exclusively, the focus has been negative, and reports generally address the Y2K issue as an operating cost that organizations had to bear. There has been little discussion concerning what long-term organizational advantages might be gained from this experience [4] or the opportunity Y2K preparation presents for an organization. There appear to be no published reports of how to translate the experiences and processes of preparing for the Y2K into future benefits and opportunities for an organization. This article addresses this shortcoming by detailing the gains achieved in the CFC.

Y2K Operational Evaluation

The organization of the CFC is complex, with the U.S. having maintained a significant military presence in support of its partnership with the ROK for over 50 years. Present U.S. military forces in Korea include an Army division, two Air Force wings, Navy and Marine elements, as well as a Theater Army Area Command and several supporting units. The ROK fields the largest contingent of forces in the CFC with over 650,000 men and women in uniform [5]. Each sub-command in CFC has its own organization for daily operations but operates under the CFC commander during combined exercises and times of crisis.

While the CFC military organization may seem formidable, the North Korean military is significantly larger. Estimates place the number of North Korean forces at over one million [5], including significant numbers of tanks, special operations units, and a staggering number of artillery pieces. Faced with this numerically superior force, CFC depends on reliable information systems to ensure the efficient and effective concentration of firepower and communications to facilitate command and control within the organization. The mere possibility of a Y2K-related issue rendering any of these systems inoperative is unacceptable. Therefore, CFC set out to ensure its mission critical information technology systems would continue to operate successfully in a Y2K environment.

CFC is similar to a large, multinational corporation in that it has several IMSs, and it operates in an environment that relies on support systems from multiple countries for infrastructure and basic processes. CFC also faces a very real threat from North Korea. During the OPEVAL time frame, a dispute between North Korea and the ROK regarding non-OPEVAL related territorial issues resulted in armed confrontation in the Yellow Sea. Approximately 30 personnel died during the altercation [6]. Tensions between North and South Korea have remained high, given the North Korean policy of brinksmanship and its long-range missile program [7]. Because of these factors, CFC operational capabilities had to remain fully functional as the Y2K evaluation proceeded. This need for continual operation is similar to that of certain commercial corporations in the financial and public utility sectors, and of mass transportation hubs (e.g., international airports, shipping centers), where activities cannot be halted to conduct Y2K or other evaluations.

OPEVAL Design

CFC was required to conduct a Y2K OPEVAL with the goal of ensuring CFC could accomplish its critical missions in a Y2K environment. To meet this goal, we first identified (1) the critical tasks and (2) the information systems that supported these critical tasks. From a systems view, our Y2K issue was concerned with continuity of operations and interoperability. The individual systems involved were Y2K certified prior to the OPEVAL.

In identifying critical tasks, two documents were examined: the Universal Joint Task List (UJTL) and the Command’s Joint Mission Essential Task List (JMETL). The UJTL and the JMETL are standard documents that outline multi-service missions that CFC must be able to accomplish. We also examined guidance on systems cited as critical in the Joint Chiefs of Staff (JCS) OPEVAL guidance [8], including Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) and weapon control systems. Looking at the actual IMSs involved in our major training exercises, we identified the same general categories of systems. CFC contingency plans were also vital in this process.

We immediately realized there were more missions, corresponding tasks, and underlying systems than we could possibly evaluate given the temporal and financial constraints. Taking some comfort from JCS guidance that an exhaustive testing policy is not possible [8], we set out to reduce our pool of tasks and systems by first examining missions, tasks, and systems that other organizations had evaluated. Where the mission, task, and systems were identical, we determined that we could reasonably forgo duplicate testing and concentrate on theater-unique critical tasks and systems evaluations.

Even after these considerations, our task list was large and needed further refinement. We invited the CFC staff and component representatives to help us cull the JMETL tasks to those the components considered the most significant. The staff was also responsible for mapping each task into the framework and numbering schema of the UJTL. This facilitated our tracking of tasks and assessing the impact of any potential degradation in task performance back to the mission affected. Our aim was to select the most critical tasks and systems. Fine-tuning and elimination of redundancy eventually narrowed our tasks to a manageable number that all components agreed represented tasks that supported CFC’s critical missions.

We identified fifteen tasks as critical to the accomplishment of CFC’s mission. These fifteen tasks were categorized into one of seven general mission areas: (1) primary theater systems for command and control, (2) air space coordination, (3) intelligence, (4) artillery counter fire, (5) deliberate targeting, (6) tasking order dissemination, and (7) theater missile defense. We related each of these fifteen tasks to a "thin line," the minimum number of integrated systems needed to accomplish a given task [8][9]. Each thin line generically represents a single path on which critical information flows from one element to another in order to accomplish the given task.

To fully test the thin lines, all IMSs that supported these tasks needed to be in the OPEVAL. The OPEVAL would concentrate on systems from end-user to end-user, or in military terms, from the foxhole to the headquarters and the headquarters back to the foxhole. We eventually identified 33 information systems (i.e., critical IMSs) that supported our 15 thin lines (i.e., critical tasks). Collectively, these systems represent the critical C4ISR architecture for CFC.

We then set out to determine how to evaluate our thin lines of systems. Naturally, in order to simulate a Y2K environment, the clocks would have to be advanced on each component in all the thin lines of systems. The critical midnight crossings to be evaluated were 31 December 1999 to 01 January 2000, 28 February 2000 to 29 February 2000, and 29 February 2000 to 01 March 2000. The effect of the clock roll would then be gauged relative to a baseline assessment conducted prior to the simulated Y2K environment.

We eventually decided to utilize an evaluation methodology that would consist of sending through the thin lines of systems actual message traffic (products) occurring during critical tasks execution. At each processing component along the thin line, the products would be captured and examined for completeness, accuracy, and timeliness. These were our measures of performance. Using these measures, we could assess the thin line for possible Y2K degradation during clock rolls.
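As a rough illustration, the per-product check implied by these three measures of performance might look like the following sketch. The `Product` structure, its field names, and the timeliness tolerance are our assumptions; the article does not specify how captured products were represented.

```python
from dataclasses import dataclass

@dataclass
class Product:
    content: str      # captured message payload at a collection point
    latency_s: float  # seconds from event injection to product capture

def assess(baseline: Product, observed: Product, slack_s: float = 60.0) -> dict:
    """Score an observed product against its baseline-run capture on the
    three measures of performance: completeness, accuracy, timeliness."""
    return {
        "complete": observed.content != "",                # something arrived
        "accurate": observed.content == baseline.content,  # matches baseline
        "timely": observed.latency_s <= baseline.latency_s + slack_s,
    }
```

Any False value observed during a clock-roll run, measured against an all-True baseline run, would flag the thin line for possible Y2K degradation.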

The end-to-end evaluation of each thin line required detailed planning by the CFC staff and the subordinate commands. Both ROK and U.S. IMSs were incorporated to accurately reflect the critical tasks, functions and methods by which missions in the CFC are accomplished. We employed product inputs and associated outputs using realistic tasks and message traffic to evaluate the systems within the theater in the OPEVAL simulated Y2K environment.

The next step was to determine the configuration of the thin lines of systems down to the individual components. This was no easy chore given the complexity of the systems. With so many systems interfacing with so many other systems within many different organizations, there was no single point of contact who knew the complete configuration of all systems supporting each task. Naturally, an accurate system configuration was necessary in order to determine where we needed to collect the products along each thin line. In some cases, due to operational considerations, we resorted to shadow, or parallel, systems to minimize any potential negative impacts on systems that supported critical real-world command and control.

To provide this level of detail, we developed what we call the bit path for each of the 15 thin lines. The bit paths identified the exact flow of products from end user, through every component, to end user. Once the bit paths were developed, we could identify exactly what products were needed and where they needed to be captured for evaluation. This bit path configuration was gathered in many cases by actually "walking" the thin line.

We then diagrammed these bit paths. A sample bit path, with specific locations, units, and software applications removed, is illustrated below in Figure 1.

Figure 1: Fabricated bit path for a targeting mission thin line.

The bit path diagrams depicted how the systems’ product flow occurred while a task was being executed or evaluated. The bit path diagrams also identified system name, location of components, data collection points, products to be captured, and the process to follow in capturing products during the course of evaluating the thin line. Bit paths were critical to the successful planning and execution of the evaluation because of multiple one-to-many relationships within the thin lines of systems. Each mission could involve more than one thin line, each thin line could involve more than one task, and each task could transit more than one system.
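These one-to-many relationships can be pictured as a small traceability model. The following sketch uses invented mission, thin-line, task, and system names purely to show the structure and how a mission can be traced down to every system that supports it:

```python
# Illustrative traceability model: a mission can involve several thin
# lines, a thin line several tasks, and a task can transit several
# systems. All names below are invented examples, not CFC data.
missions = {
    "deliberate targeting": ["TL-05"],
    "artillery counter fire": ["TL-04", "TL-09"],
}
thin_line_tasks = {
    "TL-04": ["locate firing battery", "clear airspace"],
    "TL-05": ["nominate target", "approve target"],
    "TL-09": ["issue fire mission"],
}
task_systems = {
    "locate firing battery": ["counter-battery radar", "fire control system"],
    "clear airspace": ["airspace C2 system"],
    "nominate target": ["intel workstation", "targeting system"],
    "approve target": ["targeting system"],
    "issue fire mission": ["fire control system"],
}

def systems_affected(mission: str) -> set:
    """Trace a mission down to every system whose degradation could affect it."""
    systems = set()
    for tl in missions[mission]:
        for task in thin_line_tasks[tl]:
            systems.update(task_systems[task])
    return systems
```

The same tables, read in reverse, support the impact assessment described above: given a failing component, one can walk back up to every task and mission it touches.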

We developed a Master Scenario Events List (MSEL) to serve as the primary OPEVAL control mechanism. The MSEL integrated the activities and corresponding product flow of thin lines into a single test string, allowing for near-simultaneous testing of all systems. The MSEL events and product flows were representative of real-world operational conditions and stimulated the flow of products and messages across the 15 thin lines and through the systems being evaluated in the simulated Y2K environment. A portion of a MSEL is shown in Figure 2.

Figure 2: Portion of a fabricated MSEL showing time, events, and product flow.

The complete MSEL was a chronological set of 253 steps detailing all actions and data captures required to validate the thin lines. A single execution of the complete MSEL cycle took three hours; ten iterations of the cycle were required during the OPEVAL to evaluate the thin lines across all critical date-time crossings and baseline runs. As a result of this process, a total of 2,820 products were captured for analysis.

With this number of products, we needed to design a mechanism for cataloging and storing the products to facilitate analysis. We developed an 11-digit filename convention that would uniquely identify each of the 2,820 products. Two digits represented the location, two digits represented the thin line, four digits represented the scheduled time of the event, one digit represented whether the item was a transmitted or received product, and the final two digits represented the scenario run designation. The majority of the planned product captures consisted of soft-copy screen captures. When this was not possible, a hardcopy was printed or a digital camera was used to take a picture, and these were then scanned or converted into the format of the central repository.
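A codec for this naming convention is straightforward to sketch. The field widths below follow the description above; the ordering of the fields within the 11 digits is our assumption, since the article gives the widths but not the sequence:

```python
# 11-digit product filename: 2 digits location + 2 digits thin line +
# 4 digits scheduled event time (HHMM) + 1 digit transmit/receive flag +
# 2 digits scenario run. Field order is an assumption.
def encode_name(location: int, thin_line: int, hhmm: str,
                transmitted: bool, run: int) -> str:
    assert len(hhmm) == 4 and hhmm.isdigit()
    return (f"{location:02d}{thin_line:02d}{hhmm}"
            f"{1 if transmitted else 0}{run:02d}")

def decode_name(name: str) -> dict:
    assert len(name) == 11 and name.isdigit()
    return {
        "location": int(name[0:2]),
        "thin_line": int(name[2:4]),
        "hhmm": name[4:8],
        "transmitted": name[8] == "1",
        "run": int(name[9:11]),
    }
```

Because every field is fixed-width, the name both sorts naturally in a directory listing and can be decoded unambiguously during analysis.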

We developed an OPEVAL product directory structure with sub-directories based on the day, scenario run, site, and functional cells. We created the file names and generated default files and placed them in the appropriate directory folders. At each data collection point in the MSEL, the product code was also provided as a reference (see Figure 2, column Filename Code). The operator only had to go to the right day, run, site, and functional cell in the directory structure, click on the file name associated with the specific event, and then paste the contents of the screen into the file and save it. This process minimized the likelihood of an error in the product file name. It also served to better capture and organize the products for subsequent OPEVAL analysis.
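Pre-generating the directory tree and default files described above can be sketched as follows. The directory labels and the `.txt` extension are illustrative; the point is that every file an operator will ever need already exists before the run starts:

```python
from pathlib import Path

def build_tree(root: str, days, runs, sites, cells, filenames) -> None:
    """Create the day/run/site/cell directory tree and an empty default
    file for every expected product, so operators only paste captures
    into pre-named files rather than typing 11-digit names by hand."""
    for day in days:
        for run in runs:
            for site in sites:
                for cell in cells:
                    d = Path(root, day, run, site, cell)
                    d.mkdir(parents=True, exist_ok=True)
                    for name in filenames:
                        (d / f"{name}.txt").touch()
```

This design moves the error-prone step (naming) from OPEVAL execution time to planning time, which is exactly the property the paragraph above credits with minimizing filename errors.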

OPEVAL Implementation

The OPEVAL execution phase lasted nine days. Day one was used to verify the operational configuration, ensure the systems were functioning properly, and conduct a rehearsal of the data collection and analysis process.

Day two was devoted to creating a baseline to establish a performance reference point for each thin line. This performance baseline was used to compare follow-on evaluations of performance observed during the Y2K operations and assessment phase.

Days three through five of the operations phase involved the primary Y2K evaluation of systems and tasks in the simulated Y2K environment. Thin line evaluations and the MSEL cycle were accomplished twice during this period for each of the three critical midnight crossings. These tasks were executed in a scenario that simulated the real-world environment as closely as possible.

Day six was used to assess the need for regression testing and to prepare the systems needed for a special evaluation of satellite communication systems.

Days seven and eight of the OPEVAL represented a condensed version of the previous runs but only involved assessing alternate communication paths to U.S. Navy components and one portion of the missile defense thin line.

Day nine was the recovery phase when clocks on all systems were re-set to current day and time, and the operators were required to demonstrate normal log-on and operational procedures. These checks were necessary to ensure the systems were properly operating after the series of Y2K assessments and clock rolls. Data organization and analysis was also started during this phase with the collection, cataloging, and reviewing of the products.


The overall OPEVAL schedule is illustrated in Figure 3.

Figure 3: OPEVAL schedule by day and time.


The OPEVAL execution was under the control of a test director whose role was to orchestrate the timing of the evaluation with regard to the planned scenario, rollovers, phase changes, go/no-go, and other control decisions. The Combined Exercise Control Staff (CECS) exercised command and control over the OPEVAL and comprised the test director, assistant test director, technical director, trouble desk, and other testing and information system subject matter experts.

The CECS cell was responsible for all decisions related to test execution, scheduling, and management during the OPEVAL. CECS personnel were also located at each of the test sites and assisted in maintaining positive control and coordination between their location and the CECS cell. An Analysis Cell was co-located with the CECS Cell to conduct real-time analysis of captured products for quality assurance and quality control, obtain feedback from the site on anomalies, and assist the CECS in researching and resolving problems.

We used functional system operators during the OPEVAL who were trained in the functional responsibilities concerning their specific C4ISR system, as well as any actions required to capture and save the OPEVAL products. Because of their familiarity with the operational process, trained operators were often the first to identify task or product anomalies.

Data collectors trained and assigned especially for the OPEVAL worked with operators to capture products associated with all the MSEL runs and to conduct initial analysis. These data collectors were positioned at appropriate points along each thin line as the MSEL tasks were initiated and completed. These appropriate points were illustrated in the bit path diagrams for each thin line (i.e., see Figure 1). Data collectors were responsible for quality control in the data collection process, adherence to the MSEL, and the centralized collection of site products. If any system experienced a failure or anomaly, the data collector notified appropriate technicians, informed the CECS, and documented the failure. The data collectors also performed quality assurance checks on all products to ensure that the products were processed in a standardized manner. This real time quality assurance greatly facilitated product analysis.

There were two general categories of data collection required during the OPEVAL. The first was the task-oriented information that facilitated an assessment of CFC’s ability to accomplish the critical task. This information addressed the question, "Was the delivery of the product timely, accurate, and complete?" The primary data collection associated with task observations included capturing the designated product and recording any degradation or failures noted. The detailed collection strategy and specific products associated with the task-oriented data was depicted and captured in the bit path diagrams (i.e., see Figure 1). Non-intrusive data collection efforts and direct observations by trained observers were used to capture this task-oriented data. Hard copy printouts and soft copy files were used to capture the information exchanged where possible and were manually reviewed and compared at the various information flow points along the thin line.

The second data collection requirement involved system evaluations associated with the Y2K clock rolls. There were nine Y2K metrics associated with evaluating a given date sensitive device in each thin line. Each system component was evaluated against all nine metrics for the thin line in which it participated. The metrics were identified by JCS guidance [8]. An evaluation form was used to record the results of each metric applied to each system component for each thin line. Legend codes were developed and placed on the evaluation sheets to depict which clock rolls or product runs were applicable to the nine metrics.

A specific form was developed for each evaluation location and the specific system component being evaluated. The serial numbers of the devices and software versions installed were also annotated on the form. These checklists were used to develop the overall system evaluation and were entered into a database to provide a complete picture of all components in the thin lines of systems. If a component was functionally unique (i.e., workstation, server, or workstation/server), or was unique in terms of the software version used, its information was specifically entered into the database. Redundant systems supporting the same task were not entered into the database. Stated another way, if a task was supported by seven workstations and they were all configured with the same hardware and software version, only one of these workstations was entered into the database for that specific task. However, all seven workstations and the results of the Y2K metrics were considered in preparing the database. Any Y2K metric failure would be entered in the database for the component and system on which it occurred, and recorded against all tasks supported. All systems were evaluated during each MSEL run.
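The de-duplication rule described above might be implemented along these lines. The record fields are our invention; the point is that identically configured components supporting the same task collapse to one representative database entry, while every serial number is still accounted for and every observed metric failure is retained:

```python
# Sketch of the component database de-duplication rule: one entry per
# (task, functional role, software version), with all redundant units'
# serials and any Y2K metric failures folded into that entry.
def build_component_db(observations: list) -> dict:
    """observations: dicts with keys task, role, sw_version, serial,
    failures (a list of failed Y2K metric names, possibly empty)."""
    db = {}
    for obs in observations:
        key = (obs["task"], obs["role"], obs["sw_version"])
        entry = db.setdefault(key, {"serials": [], "failures": set()})
        entry["serials"].append(obs["serial"])   # all units still considered
        entry["failures"].update(obs["failures"])
    return db
```

For example, seven identically configured workstations supporting one task would yield a single entry with seven serials, and a failure seen on any one of them would appear in that entry's failure set.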

OPEVAL Results

In order to gain an appreciation of the complexity of the CFC OPEVAL, it is worthwhile to recap some OPEVAL numbers. The exercise took nine days. Thirty-three major warfighting systems were evaluated at 11 separate geographical locations, including a ship at sea. In addition to these warfighting systems, the OPEVAL was conducted over the real-world communications infrastructure. The date sensitive routers and communications hubs linking these workstations were also evaluated. There were ten MSEL executions of 253 steps, each taking three hours, resulting in the capture of 2,820 products. Each product was typically composed of a number of sub-products, such as email messages, screen captures, and data files, needed to verify that an action was timely, accurate, and complete. There were 4,797 sub-products captured.

There were over 200 personnel involved in the OPEVAL, with about 50 dedicated to data collection, OPEVAL control, and product analysis. More than 14 major CFC components, government agencies, and commercial corporations were involved in the OPEVAL, including: ROK Army, ROK Air Force, U.S. Army, U.S. Air Force, U.S. Navy, Office of the Secretary of Defense / Director of Operational Test and Evaluation, Joint Interoperability Test Command, MITRE Corporation, SAIC, BETAC Corporation, Sterling Software, Titan Corporation, and Hughes Corporation. The planning and execution required over 199 person-months and cost over $6.61 million. In a nutshell, the CFC OPEVAL was a complex operation in every respect.

From after action reviews, we determined the keys to successful OPEVAL planning and execution were: (1) configuration management verification and control, (2) training the operators and data collectors and reinforcing the training with a rehearsal prior to the baseline run, (3) systems installation and testing prior to rehearsal, (4) a complete and accurate baseline run, (5) a detailed data collection and analysis plan, (6) real time analysis for quality assurance and control, (7) verification of system performance and processes prior to OPEVAL execution, and (8) a detailed MSEL for execution and control of the process and all product captures. During the analysis process, we also identified some non-Y2K operational issues concerning the information systems that may have been overlooked or would not have been isolated without the OPEVAL infrastructure we had in place.

Conclusion

The benefits of the OPEVAL to the organization were extraordinary. First, we were confident we had identified Y2K anomalies associated with the thin lines of systems and other operational issues. Second, we developed workarounds or initiated fixes for these anomalies and issues ensuring the organization could function better now and successfully in a Y2K environment. Third, and a point we will discuss in further detail, we had established a baseline for the current architecture and a configuration of the organization’s critical IMSs while developing a methodology that could be used to evaluate these systems in the future using performance criteria other than Y2K.

This third point is the long-term benefit of the Y2K costs for this organization. If applied correctly, this methodology could lead to substantial information technology savings and better organizational performance. Because of the complexity of the organization’s IMSs, the large number of other organizations it communicates with, and the rapid turnover of personnel, few people had a current picture of the complete information systems architecture CFC required to accomplish its critical missions. Prior to the OPEVAL, this system architecture had never been documented at a comparable level of detail. We believe this situation is common to many complex organizations in both the government and commercial sector. With the baseline developed during the OPEVAL, CFC has now documented the current status and configuration of its most critical IMSs. With a vision of where the organization needs to go, it can now develop the road map to get there.

The organization can utilize the current baseline with accompanying system architecture in three major ways. First, it can review current IMS configuration in an effort to reduce the complexity of current system integration. A review of the bit paths of the systems supporting some of our critical missions shows that the requirements placed on both the systems and the people involved are very complex. The organization can analyze the critical tasks and underlying systems in order to discover ways to reduce their complexity, thereby increasing their performance. Second, with the current baseline as a starting point, the organization can make reasonable determinations concerning future IMSs to support its critical tasks. This type of review can channel resources and provide purpose to the seemingly endless installation of upgrades and new versions of hardware and software that a typical organization experiences. Third, organizations can utilize the evaluation methodology to conduct integration and performance tests of new information systems, software/hardware upgrades, or existing information systems.

The OPEVAL approach is now being used within the CFC to conduct C4ISR assessments during theater level exercises. These assessments are being done without interfering with the accomplishment of other training objectives and with little increase in information system resources. Additionally, the observers that are already programmed to support the after action review process are serving as data collectors. Benefits we are seeing from the continued use of the OPEVAL approach include (1) maintaining an up to date baseline of the organization’s information systems architecture, including both hardware and software, (2) identifying needed hardware and software enhancements, (3) validating the interoperability of C4ISR systems in the multi-service and multi-national environment, (4) capturing, documenting, and assessing doctrinal processes and procedures, and (5) stimulating the use of standard operating procedures for reporting of information throughout the organization.

The configuration management aspect of an organization’s IMSs is continually changing. A vigilant and systematic approach is needed to ensure the organization is knowledgeable of its IMS status. With this knowledge, it can determine the IMS changes that would enhance its operations. Making these determinations is a challenge. The information gained and methodology developed during an OPEVAL can be, and within the CFC is being, leveraged to help meet this challenge.

REFERENCES

1. Ohlson, K. 97% of Critical Federal Systems Reported Y2K-Compliant. Online News. (5 Oct 99); see http://www.computerworld.com/home/news.nsf/all/9909153ombrep.

2. Christensen, J. Gearing up for the gold rush. CNN.com. (6 Oct. 99); see http://www.cnn.com/TECH/specials/y2k/stories/y2k.goldrush/.

3. Berghel, H. How the Xday figures in the Y2K Countdown. Communications of the ACM 42, 5 (May 1999), 11-14.

4. Stewart, R. and Powell, R. Exploiting the benefits of Y2K Preparation. Communications of the ACM 42, 9 (Sept. 1999), 42-48.

5. Sullivan, K. Billions spent in Korea standoff. Washington Post. (6 Oct. 99); see http://www.seattletimes.com/extra/browse/html/arms_041096.html.

6. Reuters. N. Korea, U.S. to resume Berlin talks Friday. CNN.com. (6 Oct. 99); see http://cnn.com/ASIANOW/east/9909/08/bc.korea.north.usa.reut/.

7. Reuters. North Korea reasserts right to launch missiles. CNN.com. (6 Oct. 99); see http://cnn.com/ASIANOW/east/9909/29/nkorea.missiles.reut/index.html.

8. JCS Year 2000 Operational Evaluation Guide, Version 2.0, 1 October 1998. Contact information available from the World Wide Web at http://www.dtic.mil/jcs/j6/j6v/.

9. DOD Year 2000 Management Plan, Version 2.0, December 1998. Contact information available from the World Wide Web at http://www.disa.mil/cio/y2k/cioosd.html.


Garland Brown (BrownGB@usfk.korea.army.mil) and Marshall Fisher (FisherM@usfk.korea.army.mil) are consultants for The MITRE Corporation, MITRE – Pacific Operations.

Dr. Ned Stoll (StollN@usfk.korea.army.mil) is a consultant for the BETAC Corporation.

Dave Beeksma (BeeksmaD@usfk.korea.army.mil) and Mark Black (blacksv@hotmail.com) are consultants for the Titan Corporation.

Ron Taylor (TaylorR@usfk.korea.army.mil) is a consultant for Science Applications International Corporation.

LCdr Choe Seok Yon (y2ksailor@yahoo.com) is an officer for the Republic of Korea Navy.

LTC Aaron J. Williams (Aaron.Williams@forscom.army.mil), CPT William Bryant (BryantW@usfk.korea.army.mil), and MAJ Bernard J. Jansen (jjansen@acm.org) are officers in the U.S. Army.