Diagnosing the Top Level Coefficient of Variation: An Alternative Approach
Risk Track
RSK10_Paper_DiagnosingTopLevelCoeffofVariation-AnAlternativeApproach_Andelin
RSK10_Presentation_DiagnosingtheTopLevelCV-AnAlternativeApproach_Andelin
Abstract:
The coefficient of variation (CV), defined as the standard deviation divided by the mean, is a useful metric to quantify the uncertainty inherent in a probability distribution, as it provides a relative (and thereby more intuitive) measure of spread than the variance or standard deviation. In the field of cost analysis, the CV can be used as a diagnostic to determine the level of risk and uncertainty built into an estimate and whether those levels fall within the expected range for a program of a given scope and complexity and at a given stage in its life-cycle.
The CV is easy to calculate, even for complicated distributions, and is a standard output of risk-based estimating software, such as @Risk or ACEIT’s RI$K. However, it is not always intuitive to understand which factors contribute to the overall CV, or why a particular estimate may have a CV that is lower (or higher) than expected. When conducting ad hoc diagnostics, it is tempting to treat the CV of a parent WBS element as approximately the weighted average of its children’s respective CVs. This approach is fundamentally flawed, as it neglects both the widening effect of correlation and the narrowing effect of adding distributions (because standard deviations add in quadrature). In some fortuitous instances the two errors may cancel each other, leading to a reasonable approximation, but such a result is coincidental.
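The flaw is easy to demonstrate numerically. The sketch below uses a hypothetical two-child WBS (the means, standard deviations, and correlation are illustrative values, not from the paper) and compares the naive "weighted average of child CVs" against the exact parent CV, which follows from the fact that variances, not standard deviations, add (plus a covariance cross term):

```python
import numpy as np

# Hypothetical two-element WBS: illustrative values only.
mu = np.array([100.0, 50.0])      # child means
sigma = np.array([30.0, 20.0])    # child standard deviations
rho = 0.3                         # assumed pairwise correlation

cv_child = sigma / mu             # child CVs: 0.30 and 0.40
w = mu / mu.sum()                 # weights by relative size (share of parent mean)

# Naive diagnostic: weighted average of the child CVs.
# Note this algebraically reduces to sigma.sum() / mu.sum().
cv_naive = (w * cv_child).sum()

# Exact parent CV: variances add, plus the correlation cross term.
var_parent = (sigma**2).sum() + 2 * rho * sigma[0] * sigma[1]
cv_exact = np.sqrt(var_parent) / mu.sum()

print(f"naive: {cv_naive:.4f}, exact: {cv_exact:.4f}")
```

With these inputs the naive figure is 0.3333 while the exact parent CV is about 0.2716: quadrature narrowing outweighs the widening from the modest positive correlation. Only at full correlation (rho = 1), where standard deviations do add linearly, do the two agree.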
An alternative approach to diagnosing the CV is based on representing the parent CV as a function of the relative size and CV of the child elements and the correlation between those elements. The functional form of this representation is elegant and leads to a natural treatment of the parent CV in three parts: (1) the weighted average of the child CVs, (2) an adjustment to account for summing in quadrature, and (3) an adjustment to account for correlation. Rules of thumb are given to facilitate “back of the envelope” calculations, and a graphical display of more precise results is suggested for briefing decision makers and other stakeholders.
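One plausible way to realize the three parts named above (a sketch consistent with the abstract, not necessarily the paper's exact functional form; all input values are hypothetical) is to write the parent CV as the weighted average of child CVs, minus the narrowing from root-sum-square addition, plus the widening from correlation:

```python
import numpy as np

# Hypothetical children; assumed values for illustration.
mu = np.array([100.0, 50.0])          # child means
sigma = np.array([30.0, 20.0])        # child standard deviations
rho = np.array([[1.0, 0.3],
                [0.3, 1.0]])          # assumed correlation matrix

mu_total = mu.sum()
sigma_rss = np.sqrt((sigma**2).sum())       # uncorrelated (root-sum-square) sigma
sigma_total = np.sqrt(sigma @ rho @ sigma)  # full correlated parent sigma

# (1) Weighted average of child CVs (weights = share of parent mean).
term1 = (mu / mu_total * (sigma / mu)).sum()
# (2) Quadrature adjustment: RSS vs. linear addition of sigmas (<= 0, narrowing).
term2 = (sigma_rss - sigma.sum()) / mu_total
# (3) Correlation adjustment: full sigma vs. RSS sigma (>= 0 here, widening).
term3 = (sigma_total - sigma_rss) / mu_total

cv_parent = term1 + term2 + term3           # telescopes to sigma_total / mu_total

print(f"{term1:+.4f} {term2:+.4f} {term3:+.4f} -> CV = {cv_parent:.4f}")
```

The three terms telescope, so their sum reproduces the exact parent CV; decomposing it this way shows at a glance how much of the gap from the naive weighted average comes from quadrature narrowing versus correlation widening, which lends itself to the kind of graphical display the abstract suggests.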
Author:
Daniel J. Andelin
U.S. Department of the Navy
Dan Andelin is a journey-level Operations Research Analyst for the U.S. Department of the Navy in Washington, D.C., where he provides business support including cost estimating, acquisition support, and earned value management (EVM) analysis. He has nearly three years’ experience in the field of cost analysis and holds PCEA(R) certification from the Society of Cost Estimating and Analysis (SCEA). He is particularly interested in cost risk and uncertainty analysis, especially the mathematical laws that govern it.
Prior to his employment with the Navy, Dan received a B.S. in Physics from Brigham Young University and an M.S. in Physics from the University of Virginia. His graduate work was in high energy physics, and he has spent time at both CERN in Geneva, Switzerland, and Fermi National Accelerator Laboratory near Chicago, working on the Compact Muon Solenoid (CMS) experiment. He also spent a summer as an undergraduate at MIT’s Plasma Science and Fusion Center (PSFC), working on the Alcator C-Mod experiment.
Dan is a devoted family man and spends most of his free time with his wife Jill and their one-year-old daughter Piper Jean.