User:DCLawyer/Hubbard COI

From How to Measure Anything: Finding the Value of Intangibles in Business, by Douglas W. Hubbard, John Wiley & Sons, 2007, pages 224-225.

Another popular version of arbitrary weighted scores is called the Analytic Hierarchy Process (AHP). AHP differs from other weighted scores in two ways. First, it is based on a series of pair-wise comparisons instead of directly scored attributes. That is, the experts are asked whether one attribute is "strongly preferred," "slightly preferred," and so on over another attribute, and different choices are compared within the same attribute in the same manner. For example, subjects would be asked whether they preferred the "strategic benefits" of new product A over those of new product B. They would then be asked whether they preferred the "development risk" of A over that of B. They would also be asked whether "strategic benefit" was more important than "development risk." They would continue comparing every possible pair of choices within each attribute, then every pair of attributes. Pair-wise comparisons avoid the issue of developing arbitrary scoring scales, which could be an advantage of this method. Strangely enough, however, AHP then converts the comparison data back into an arbitrary score.
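To make the mechanics concrete, here is a minimal sketch of how such pair-wise judgments are typically turned into weights. The scale values, attribute names, and judgments are illustrative assumptions, not taken from the book; the weights are derived with the common geometric-mean approximation of the principal eigenvector.

```python
# Sketch: verbal pair-wise judgments -> reciprocal matrix -> weights.
# Uses Saaty's usual 1-9 scale; all judgments below are invented examples.
import math

SCALE = {"equal": 1, "slightly preferred": 3, "strongly preferred": 5,
         "very strongly preferred": 7, "extremely preferred": 9}

attrs = ["strategic benefit", "development risk", "distribution channels"]

# (row attribute, column attribute, verbal judgment of row over column)
judgments = [
    ("strategic benefit", "development risk", "slightly preferred"),
    ("strategic benefit", "distribution channels", "strongly preferred"),
    ("development risk", "distribution channels", "slightly preferred"),
]

n = len(attrs)
idx = {a: i for i, a in enumerate(attrs)}
m = [[1.0] * n for _ in range(n)]
for row, col, verdict in judgments:
    m[idx[row]][idx[col]] = SCALE[verdict]
    m[idx[col]][idx[row]] = 1 / SCALE[verdict]  # reciprocal entry

# Geometric mean of each row, normalized to sum to 1, gives the weights.
gm = [math.prod(row) ** (1 / n) for row in m]
weights = [g / sum(gm) for g in gm]
print([round(w, 3) for w in weights])
```

Note that the output is exactly the kind of derived numeric score the passage objects to: the verbal comparisons go in, and an arbitrary-looking weight vector comes out.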

The second difference between AHP and other arbitrary weighted scoring methods is that a "consistency coefficient" is computed, a measure of how internally consistent the answers are. For example, if you prefer strategic benefit to low development risk and prefer low development risk to exploiting existing distribution channels, then you should not prefer exploiting existing distribution channels to strategic benefit. If this sort of circularly inconsistent answer happens a lot, the consistency calculation produces a low value; a perfectly consistent set of answers earns a consistency value of 1. The calculation is based on eigenvalues, a concept from matrix algebra used to solve a variety of mathematical problems.
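The eigenvalue calculation can be sketched as follows. This uses Saaty's standard formulation, in which the consistency *ratio* of a perfectly consistent matrix comes out to 0 (a tool that reports perfect consistency as 1 is presumably rescaling, e.g. reporting one minus the ratio); the matrices are illustrative.

```python
# Sketch of AHP's consistency check: CI = (lambda_max - n) / (n - 1),
# CR = CI / RI, for an n x n reciprocal pairwise-comparison matrix.
import numpy as np

def consistency_ratio(matrix):
    a = np.asarray(matrix, dtype=float)
    n = a.shape[0]
    lam_max = np.linalg.eigvals(a).real.max()  # principal eigenvalue
    ci = (lam_max - n) / (n - 1)               # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random indices
    return ci / ri

# Perfectly consistent: every a[i][k] equals a[i][j] * a[j][k].
consistent = [[1, 2, 4],
              [1/2, 1, 2],
              [1/4, 1/2, 1]]

# Circular: benefit beats risk beats channels beats benefit.
circular = [[1, 3, 1/3],
            [1/3, 1, 3],
            [3, 1/3, 1]]

print(consistency_ratio(consistent))  # near 0: answers are consistent
print(consistency_ratio(circular))    # far above Saaty's 0.1 threshold
```

For a perfectly consistent matrix the principal eigenvalue equals n, so the ratio is 0; the more circular the answers, the further the eigenvalue drifts above n.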

Because AHP uses this method, it is often called "theoretically sound" or "mathematically proven." If the criterion for theoretical soundness were simply that a procedure, at some point, uses a mathematical tool (even one as powerful as eigenvalues), then proving a new theory or procedure would be much easier than it actually is. Someone could find a way to use eigenvalues in astrology or differential equations in palm reading. In neither case would the method become more valid merely because a mathematical tool proven in another context had been applied.

The fact is that AHP is simply another weighted scoring method with one noise-reducing addition: a way of recognizing inconsistent answers. That hardly makes the outputs "proven," as is often claimed. The deeper problem is that comparing attributes like strategic alignment and development risk in the abstract is usually meaningless. If I asked you whether you prefer a new car or money, you should first ask me what kind of car and how much money I'm talking about. If the car were a 15-year-old subcompact and the money $1 million, you would obviously give a different answer than if the car were a new Rolls-Royce and the money $100. Yet I've witnessed that when groups of people engage in this process with an AHP tool, no one stops to ask, "How much development risk versus how much manufacturing cost are we talking about?" Amazingly, they simply answer as if the comparison were clearly defined. This introduces the danger that each participant is imagining a completely different trade-off from everyone else, which merely adds another unnecessary level of noise.

A final, particularly bizarre flaw in AHP is rank reversal. Suppose you used AHP to rank alternatives A, B, and C in that order, A being the most preferred. Now suppose you delete C. Should that change the relative rank of A and B, so that B is best and A is second best? As nonsensical as that is, AHP can produce exactly that result.
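To see how deleting an irrelevant alternative can flip the winner, here is a small numeric sketch. It assumes the common "distributive" synthesis, in which each criterion's local scores are renormalized over whichever alternatives remain; the scores and criteria are invented for illustration, not from the book.

```python
# Rank reversal under AHP's "distributive" synthesis: local scores are
# renormalized over the remaining alternatives, so removing one
# alternative shifts everyone else's totals. Numbers are illustrative.

def ahp_scores(local, weights, alternatives):
    """Weighted sum of per-criterion scores, renormalized over
    whichever alternatives are still in the running."""
    totals = {a: 0.0 for a in alternatives}
    for crit, w in weights.items():
        col_sum = sum(local[crit][a] for a in alternatives)
        for a in alternatives:
            totals[a] += w * local[crit][a] / col_sum
    return totals

# Local priority scores, as AHP would derive them from the pair-wise
# comparisons under each criterion.
local = {
    "criterion 1": {"A": 6.0, "B": 2.0, "C": 0.1},
    "criterion 2": {"A": 1.0, "B": 4.0, "C": 6.0},
}
weights = {"criterion 1": 0.5, "criterion 2": 0.5}

with_c = ahp_scores(local, weights, ["A", "B", "C"])
without_c = ahp_scores(local, weights, ["A", "B"])

print(max(with_c, key=with_c.get))        # -> A  (ranking is A, B, C)
print(max(without_c, key=without_c.get))  # -> B  (delete C: B now beats A)
```

Nothing about A or B changed; only the normalizing denominators did. That is the whole mechanism behind the reversal.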