The first industrial revolution showed us how to do most of the world's heavy work with the energy of machines instead of human muscle. The new industrial revolution is showing us how much of the work of human thinking can be done by and in cooperation with intelligent machines.
References
1.
Simon, H.A., Report of the Research Briefing Panel on Decision Making and Problem Solving (Washington, D.C.: National Academy of Sciences, 1986), p. 35.
2.
Useful guides to the literature in behavioral decision making include Arkes, H.R. and Hammond, K.R., eds., Judgment and Decision Making: An Interdisciplinary Reader (New York, NY: Cambridge University Press, 1986); Bell, D.E., Raiffa, H., and Tversky, A., Decision Making: Descriptive, Normative, and Prescriptive Interactions (New York, NY: Cambridge University Press, 1988); Hogarth, R.M., Judgement and Choice: The Psychology of Decision (New York, NY: Wiley, 1987); Kahneman, D., Slovic, P., and Tversky, A., Judgment Under Uncertainty: Heuristics and Biases (New York, NY: Cambridge University Press, 1982).
3.
For other expositions that are similar in spirit to ours, see Einhorn, H.J. and Hogarth, R.M., “Decision Making: Going Forward in Reverse,” Harvard Business Review, 65 (1987): 66–70; Schoemaker, P.J.H. and Russo, J.E., “A Pyramid of Decision Approaches,” California Management Review, 36/1 (Fall 1993): 9–31; and Simonson, I., “Get Closer to Your Customers by Understanding How They Make Choices,” California Management Review, 35/4 (1993): 68–84.
4.
Techniques also exist for aiding and improving nonrepetitive decisions, but we do not consider them here.
5.
A relatively long time period was employed so we would have enough observations to develop statistically reliable forecasting models.
6.
Additional details about our procedures can be found in Ashton, A.H., “An Empirical Study of Budget-Related Predictions of Corporate Executives,” Journal of Accounting Research, 20 (1982): 440–449; and Ashton, A.H. and Ashton, R.H., “Aggregating Subjective Forecasts: Some Empirical Results,” Management Science, 31 (1985): 1499–1508.
7.
Goldberg, L.R., “Man Versus Model of Man: A Rationale, Plus Some Evidence, for a Method of Improving on Clinical Inferences,” Psychological Bulletin, 73 (1970): 423. The superior forecasting accuracy of environmental or “actuarial” models was demonstrated more than 50 years ago: see Sarbin, T.R., “A Contribution to the Study of Actuarial and Individual Methods of Prediction,” The American Journal of Sociology, 48 (1943): 593–602. An excellent review is provided by Dawes, R.M., Faust, D., and Meehl, P., “Clinical Versus Actuarial Judgment,” Science, 243 (1989): 1668–1674.
8.
Schoemaker and Russo, op. cit., provide additional commentary on the managerial usefulness of bootstrapping models. In addition, these authors discuss a broad array of other decision-making techniques, grouped into a four-level “pyramid of decision approaches”: (1) intuition; (2) rules; (3) importance weighting; (4) value analysis. The techniques that we describe in the present article are closely related to the third level of their pyramid.
9.
Wallace, H.A., “What Is in the Corn Judge's Mind?,” Journal of the American Society of Agronomy, 15 (1923): 300. According to Wallace, this study was actually conducted by Professor H.D. Hughes of the Iowa Experiment Station in 1916 and 1917 and first published in the Iowa Agriculturist in 1917. It is remarkable for the similarity of both its design and its results to the typical bootstrapping study of today. Hughes had several experienced corn judges forecast the yields of some 500 ears of corn. The ears were planted, and yields were subsequently measured. Both the judges' forecasts and the actual yields were used as dependent variables in analyses that established their statistical relationships with six predictor variables (length and circumference of ear, weight of kernel, and three others). Thus, both environmental models and bootstrapping models, as we use the terms in this article, were constructed. While direct comparisons between the two types of models were not reported, Wallace does discuss three aspects of the results that are still found today in virtually all studies of repetitive forecasts. First, considerable agreement existed among the yield forecasts of the various judges (average correlation coefficient of .7). Second, despite this agreement, the judges were not very good at forecasting the actual yields (average correlation coefficient of only .2). Finally, the judges' forecasting errors could be traced largely to placing too much emphasis on one predictor (length of ear) and too little emphasis on another (kernel weight), relative to their statistical validities. It almost certainly follows from this pattern of results that, had they been computed, the average forecast errors of the environmental model would have been smaller than the average forecast errors of the bootstrapping models, which, in turn, would have been smaller than the judges' own errors.
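The distinction this note draws can be sketched in present-day terms. The following Python snippet is purely illustrative and uses synthetic data (all coefficients, noise levels, and the judge's weighting pattern are invented, not taken from the 1916 study): an “environmental” model regresses actual outcomes on the predictors, while a “bootstrapping” model regresses the judge's own forecasts on the same predictors, filtering out the judge's random inconsistency while preserving (here, deliberately miscalibrated) weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Six predictor variables, standing in for ear length, circumference,
# kernel weight, etc. (synthetic, standardized).
X = rng.normal(size=(n, 6))

# Assumed "true" yield: driven mostly by column 2 (kernel weight).
actual_yield = 0.2 * X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=1.0, size=n)

# Assumed judge: over-weights column 0 (ear length), under-weights
# column 2 (kernel weight), plus random inconsistency.
judge_forecast = 0.9 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

def fitted_values(X, y):
    """Ordinary least squares with an intercept; returns in-sample fits."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# Environmental model: regress ACTUAL OUTCOMES on the predictors.
env_pred = fitted_values(X, actual_yield)

# Bootstrapping model: regress the JUDGE'S FORECASTS on the predictors.
boot_pred = fitted_values(X, judge_forecast)

def rmse(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# The ordering the note infers from Wallace's data: environmental model
# beats bootstrapping model, which beats the judge's raw forecasts.
print("environmental:", rmse(env_pred, actual_yield))
print("bootstrapping:", rmse(boot_pred, actual_yield))
print("judge:        ", rmse(judge_forecast, actual_yield))
```

Under these assumptions the bootstrapping model outperforms the judge only because it strips out the judge's random error; it cannot repair the misweighting itself, which is why the environmental model does better still.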
10.
Much of this research is reviewed by Armstrong, J.S., Long-Range Forecasting: From Crystal Ball to Computer (New York, NY: Wiley, 1985); Camerer, C., “General Conditions for the Success of Bootstrapping Models,” Organizational Behavior and Human Performance, 27 (1981): 411–422.
11.
An excellent discussion of these and other reasons for considering manager/model combinations is provided by Blattberg, R.C. and Hoch, S.J., “Database Models and Managerial Intuition: 50% Model + 50% Manager,” Management Science, 36 (1990): 887–899.
12.
Another option, which we do not pursue here, is to use the manager's forecast as an additional variable in the environmental forecasting model. A classic paper on combining people's and models' forecasts, which discusses the various possibilities, is Sawyer, J., “Measurement and Prediction, Clinical and Statistical,” Psychological Bulletin, 66 (1966): 178–200.
13.
See Ashton and Ashton, op. cit.; Ashton, R.H., “Combining the Judgments of Experts: How Many and Which Ones?” Organizational Behavior and Human Decision Processes, 38 (1986): 405–414; Hogarth, R.M., “A Note on Aggregating Opinions,” Organizational Behavior and Human Performance, 21 (1978): 40–46; Libby, R. and Blashfield, R.K., “Performance of a Composite as a Function of the Number of Judges,” Organizational Behavior and Human Performance, 21 (1978): 121–129; Makridakis, S. and Winkler, R.L., “Averages of Forecasts: Some Empirical Results,” Management Science, 29 (1983): 987–996; Winkler, R.L. and Makridakis, S., “The Combination of Forecasts,” Journal of the Royal Statistical Society, Series A, 146, Pt. 2 (1983): 150–157.
14.
Excellent discussions of objections to decision modeling are provided by Dawes, R.M., “The Robust Beauty of Improper Linear Models in Decision Making,” American Psychologist, 34 (1979): 571–582; Kleinmuntz, B., “Why We Still Use Our Heads Instead of Formulas: Toward an Integrative Approach,” Psychological Bulletin, 107 (1990): 296–310.
15.
This example comes from Einhorn, H.J., “Accepting Error to Make Less Error,” Journal of Personality Assessment, 50 (1986): 387–395.
16.
Also see Chapanis, A., “Men, Machines, and Models,” American Psychologist, 16 (1961): 113–131; Little, J.D.C., “Models and Managers: The Concept of a Decision Calculus,” Management Science, 16 (1970): 466–485; and Morris, W.T., “On the Art of Modeling,” Management Science, 13 (1967): 707–717.
17.
Whitney, D.E., “Real Robots Do Need Jigs,” Harvard Business Review, 64 (1986): 111.
18.
Whitney, op. cit.
19.
This example is taken from Einhorn, op. cit.
20.
This example comes from Meehl, P.E., Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence (Minneapolis, MN: University of Minnesota Press, 1954), pp. 24–25.
21.
Samson, D. and Thomas, H., “Linear Models as Decision Aids in Insurance Decision Making: The Case of Estimation of Automobile Insurance Claims,” in Wright, G. and Ayton, P., eds., Judgmental Forecasting (New York, NY: Wiley, 1987).
22.
Bunn, D.W. and Seigal, J.P., “Forecasting the Effects of Television Programming Upon Electricity Loads,” Journal of the Operational Research Society, 34 (1983): 17–25.
23.
Wilks, S.S., “Weighting Systems for Linear Functions of Correlated Variables When There Is No Dependent Variable,” Psychometrika, 3 (1938): 23–40.
24.
This example is provided by Dawes, op. cit.
25.
Ashton, A.H., “A Field Test of Implications of Laboratory Studies of Decision Making,” The Accounting Review, 59 (1984): 361–375.
26.
Passell, P., “Wine Equation Puts Some Noses Out of Joint,” New York Times, March 4, 1990, pp. 1, 27.
27.
Passell, op. cit., p. 1.
28.
Martorelli, W.P., “Cowboy DP Scouting Avoids Fumbles,” in Zmud, R.W., Information Systems in Organizations (Glenview, IL: Scott, Foresman, 1983).
29.
Martorelli, op. cit., p. 228.
30.
“Sentencing by the Numbers” (Editorial), Wall Street Journal, August 2, 1984, p. 24.
31.
Ashton, R.H., Elliott, R.K., and Willingham, J.J., “The Pricing of Audit Services: Evidence from KPMG Peat Marwick,” Working Paper, Fuqua School of Business, Duke University, August 1993.
32.
Ashton, R.H., “Effects of Justification and a Mechanical Aid on Judgment Performance,” Organizational Behavior and Human Decision Processes, 52 (1992): 292–306; also see Peterson, D.K. and Pitz, G.F., “Effect of Input from a Mechanical Model on Clinical Judgment,” Journal of Applied Psychology, 71 (1986): 163–167.
33.
Hammond, K.R., “Direct Comparison of the Efficacy of Intuitive and Analytical Cognition in Expert Judgment,” IEEE Transactions on Systems, Man, and Cybernetics, 17 (1987): 753–770. Also see Schoemaker and Russo, op. cit.
34.
Isenberg, D.J., “How Senior Managers Think,” Harvard Business Review, 62 (1984): 89.