Dipankar Ghosh; Manash R. Ray
07/01/2000, Journal of Managerial Issues, Pages 247-260
Copyright (c) 2000 Bell & Howell Information and Learning Company. All rights reserved. Copyright Pittsburg State University, Department of Economics, Summer 2000
Evaluating
performance is an important function in organizations, and few
organizational decisions escape some sort of performance
evaluation. Although objective performance information can be
obtained in some jobs, more typically it cannot. Instead,
organizations frequently rely upon some type of subjective evaluation
of performance which, by its very nature, is
criticized for having errors, biases, and inaccuracies (Borman,
1991). One such bias commonly encountered in evaluations is the
"outcome effect," which is a systematic overweighting
of outcome knowledge by the evaluator in assessing a manager's performance
(Hawkins and Hastie, 1990). Thus, when the outcome is positive (negative),
evaluators tend to evaluate the manager positively (negatively),
regardless of the actual appropriateness of the decision resulting
in the outcome. Hence, organizations often end up evaluating managers
based upon outcomes over which they may not have control.
This
research first demonstrates the outcome effect and then examines whether
it can be mitigated during performance evaluation. Organizational
members responsible for evaluation should be aware of the outcome
effect and how to mitigate it since they influence how managers
experience organizational phenomena and learn from that experience.
Improper evaluation undermines its influencing and learning roles.
The
remainder of this article is organized as follows. The next part
discusses prior research and develops the hypotheses. The subsequent
parts, in order, describe the research method, present the results,
and summarize the implications of this research.
Theory
and Background
Evaluation,
Information Asymmetry and Cognitive Process. An evaluator cannot
observe all aspects of performance of a manager/decision maker (hereafter
referred to as DM) because of conflicting demands on the evaluator's
attention, or simply because of physical constraints (DeNisi, 1996).
Hence, there is always information asymmetry between the evaluator
and DM, and when the evaluator has less information than the DM,
his or her ability to evaluate the DM's performance accurately is
limited (Hershey and Baron, 1992). If, however, this information
asymmetry can be reduced, an evaluator can evaluate the decision
in light of the DM's information about potential outcomes that
existed at the time of the decision (Hershey and Baron, 1992). Three
general approaches were adopted by prior research to reduce the
information asymmetry between the evaluator and the DM in order
to eliminate the outcome effect. However, for reasons discussed
below, the applicability of these approaches in the organization
remains an open question.
The
first approach attempts to increase the involvement of the evaluator
in the DM's decision-making process. Brown and Solomon (1987) reduced
the outcome effect on performance evaluation by prior advisory involvement
on the part of the evaluator. Brown and Solomon (1993), however,
found that only prior involvement with an ex ante agreement between
the evaluator and DM on the course of action to be adopted by the
DM reduced the outcome effect. Finally, Fisher and Selling (1993)
found that an ex ante correct outcome prediction made by the evaluator
reduced the outcome effect. But in each of these cases, the evaluator
is contracting on foresight, that is, on an occurrence predicted to
happen. Thus, the outcome effect is reduced because the evaluator
is inclined to remain committed to the initially agreed-upon or
predicted outcome, an example of the escalation of commitment bias
(Bazerman, 1994). Research shows that subjects who choose a particular
course of action subsequently filter information selectively to
justify remaining committed to that course of action (Caldwell and
O'Reilly, 1982).
The
second approach to mitigate the outcome effect is to make the decision
process of the evaluatee observable to the evaluator. Fisher and
Selling's (1993) work is an illustration of this approach. However,
their result may have been influenced by the strong operationalization
of observability since evaluators were given a full description
of the DM's decision process along with the cues used by the DM
in making the final decision.
The
third approach examines whether framing can mitigate the outcome
effect. Lipe's (1993) experiment on outcomes and framing on the
evaluations of managers responsible for the variance investigation
decision showed that the evaluations were more favorable when investigations
revealed problems in the production system. Further, managers received
higher ratings when the initial expenditure (IE) made to investigate
the variance was framed as a cost than when the IE was framed as
a loss. The rationale is that investigation expenditures matched
with perceived benefits are framed as costs while those without
perceived benefits are framed as losses. But, a critical issue is
whether framing is an appropriate mechanism for mitigating the outcome
effect. Although a decision problem can be presented in a frame-consistent
format in an experiment, discerning a frame latent in
the presentation of a real-world problem is difficult. Tversky and
Kahneman assert that "individuals who face a decision problem
. . . are normally unaware of alternative frames and of the potential
effects on the relative attractiveness of options" (1981: 457).
Thus, it is debatable whether evaluators would perceive an IE as
a "cost" or as a "loss" and not just another
item of expense unless it is labeled as such and explicitly brought
to the attention of the evaluator.
In
light of the limitations of the above approaches, an alternative
approach to mitigate the outcome effect (examined in this research in
the context of cost variance investigation decisions) is to make
the evaluator more aware of the nature and extent of uncertainty
faced by the DM when making such decisions. Lord (1985) suggests
that evaluation accuracy would always depend on the information
available to the evaluator, and this first step is seen as critical
to the entire appraisal process. This awareness should be relevant
since the characteristics of the available information have an impact
on the cognitive process of the evaluator during performance evaluation
(DeNisi, 1996). This point is elaborated further below, when
discussing hypothesis H2.
Since
the outcome effect and how to mitigate it is examined in this research
in the context of a cost variance investigation decision, a brief
discussion of the formal cost/benefit rule which facilitates making
such a decision is necessary at this stage. Information needs of
this rule include cost of investigating the variance, cost of correcting
the problem, cost of allowing an out of control production process
to continue, and the state of the production process (i.e., the
probability the production process is in control or out of control).
In this rule, illustrated in Figure I, the expected costs of the
two managerial alternatives are computed, based on the probability
that the production process is in control or out of control. The
alternative with the smaller expected cost is then selected (Horngren
et al., 1997).
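The rule above can be sketched as a small computation. The following is a minimal illustration of the textbook cost/benefit formulation (the function and variable names are ours, not the article's): the expected cost of investigating is the investigation cost plus the probability-weighted correction cost, while the expected cost of doing nothing is the probability-weighted loss from letting an out-of-control process continue.

```python
def expected_costs(invest_cost, correct_cost, loss_if_ignored, p_out):
    """Expected cost of each alternative in a variance investigation decision.

    invest_cost     -- cost of investigating the variance
    correct_cost    -- cost of correcting the problem if one is found
    loss_if_ignored -- cost of allowing an out-of-control process to continue
    p_out           -- probability the production process is out of control
    """
    ec_investigate = invest_cost + p_out * correct_cost   # pay to look; fix if broken
    ec_do_nothing = p_out * loss_if_ignored               # risk the continuing loss
    return ec_investigate, ec_do_nothing

def should_investigate(invest_cost, correct_cost, loss_if_ignored, p_out):
    ec_inv, ec_not = expected_costs(invest_cost, correct_cost, loss_if_ignored, p_out)
    return ec_inv < ec_not  # choose the alternative with the smaller expected cost
```

With the data used later in the screening task ($500, $1,000, $4,000, p = 0.30), the expected costs are $800 and $1,200, so investigating is the cheaper alternative.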
In
a cost variance investigation decision, the critical part is the
DM's assessment of the probabilities that the production process
incurring the cost is in control or out of control. In reality, however,
such probabilities can rarely be judged precisely (Dyckman, 1969;
Kaplan, 1975) and the DM may at best be able to specify a range
of probabilities. Moreover, in some situations there can be some
reservation about the uncertainty of the production process and,
even when specifying the range of probabilities, the DM may be unsure
of the second-order probability distributions for each point within
that range. In contrast to precise uncertainty, defined as one that
can be justifiably expressed either as a point probability estimate
or a second-order distribution over probability values (Budescu
and Wallsten, 1987), the nature of uncertainty in the variance investigation
decision about the production process being out of control is best
described as "ambiguous" or "vague" (Einhorn
and Hogarth, 1985). If so, representing ambiguous uncertainty as
a point estimate gives an unwarranted appearance of precision. Alternatively,
evaluating the DM may involve communicating to the evaluator the
nature and extent of uncertainty faced by the DM by representing
it as a range of probability values, thus providing some insight
into its qualitative characteristics. As discussed next, the evaluator's
cognitive approach entails a process called the simulation heuristic,
which is affected by knowing the kind of probability data faced
by the DM (Kahneman and Tversky, 1982). Thus, communicating the
characteristics of the uncertainty faced by the DM may facilitate
more accurate evaluation (Goguen, 1974).
Outcome
Effect and Variance Investigation Decision. Since the variance investigation
decision is made ex ante (i.e., before the results or outcome of
the investigation is known), the DM should be evaluated based on
the ex ante information assuming that information is available to
the evaluator (Edwards, 1984; Hershey and Baron, 1992). However,
Lipshitz (1989) showed that those who take normatively correct actions
are evaluated more favorably than others. Thus, for example, DMs
who choose actions with lower expected costs are likely to be evaluated
higher than those who choose actions with greater expected costs.
However,
since it is difficult to ignore outcome knowledge, ex post information
also affects performance evaluation of DMs (Baron and Hershey, 1988).
For example, Lipshitz (1989) found that Israeli Defense Force officers
evaluated a regiment commander more favorably when his decision
to go to the aid of an attacked force was followed by a significant
battle rather than a minor one. Furthermore, a commander who did
not go to the attacked force's aid was viewed more favorably if
this decision preceded an attack on his own sector (see also Mitchell
and Kalb, 1981; Baron and Hershey, 1988). Thus, in the case of a
variance investigation, it is likely that evaluators will consider
the outcome of the investigation (the ex post information), as well
as the ex ante expected costs. Specifically, when the investigation
reveals a problem, the manager's decision will appear more appropriate
and will lead to higher performance ratings than when the investigation
indicates no problem with the system.
H1:
The outcome of a cost variance investigation will be related to
the performance rating of the DMs, such that their performance evaluation
will be higher if the system was found ex post to be out of control
than if it was found to be in control.1
Communicating
Uncertainty, Cognitive Process and Mitigating the Outcome Effect
As
stated earlier, the precision of the probability information used
in the variance investigation model is often low (Dyckman, 1969).
Thus, the forced dichotomy between in control and out of control
may be an unrealistic aggregation of reality (Kaplan, 1975). In
discussing variance investigation models, Kaplan suggests:
. . .
expanding the number of states to allow for varying degrees of out
of controlness. For example, we might allow S states (S=5 or 10,
say) with state 1 representing perfectly in control, state 2 representing
slight deterioration and state 5 being well out of control (1975:
323).
This
solution clarifies the imprecision of uncertainty inherent in variance
investigation decisions (Zebda, 1991). Most real-world decisions,
including the investigation decision, depend on uncertain events
whose probabilities cannot be precisely assessed (Ghosh and Ray,
1997). Thus, the imprecision of the uncertainty should be considered
in the analysis and explicitly communicated to the evaluator (Wallsten,
1990) because if DMs fail to communicate the uncertainty they face
when making decisions, evaluators tend to assume that the DMs' estimates
require no qualifications (Fischhoff, 1994).
If
many probability distributions are consistent with the evidence
of the process being out of control, then selecting any one by the
DM is arbitrary. Any subsequent decision, such as evaluating the
DM based on arbitrary information, is likewise arbitrary (Wallsten,
1990). Thus, in situations where there are many probability distributions,
the DM might communicate a more realistic assessment of uncertainty
by showing it as a range (Ghosh and Crain, 1993). This allows the
evaluator to consider the extent of imprecision in the investigation
decision and should have a marked effect on the evaluator's simulation
heuristic when evaluating the DM's decision.
The
simulation heuristic entails a cognitive construction, or imagining,
of how an event will turn out or how it might have turned out under
other plausible circumstances (Kahneman and Tversky, 1982), and
provides an explanation of the cognition process that results in
the outcome effect. Assume the evaluator is not made aware of the
uncertainty faced by the DM making the variance investigation decision.
The outcome knowledge then dominates the evaluator's simulation
heuristic in which otherwise plausible outcomes from the investigation
decision are discounted because they are less easy to imagine than
the actual outcome (Ashcraft, 1994). It is this implausibility of
alternate paths or outcomes that produces the outcome effect in
performance evaluations. Now assume that the evaluator is made aware
of the uncertainty faced by the DM making the variance investigation
decision. When the evaluator thinks about the DM's investigation
decision, the scenario that actually occurred is very easy to
imagine; after all, it is the one that happened. Making
the evaluator aware of the nature and extent of uncertainty faced
by the DM increases the range of plausible alternative scenarios
and should reduce the effect of outcome knowledge on the evaluator's
assessment of the DM's performance. This is consistent with the
cognitive approach to an appraisal wherein evaluators attribute
a performance less strongly to the ability of the ratee in the case
of a more unstable environment (DeNisi, 1996).
H2:
Stating the probability of the system being out of control imprecisely
(precisely) when the data warrant imprecision is associated with
a smaller (larger) outcome effect.
Method
Sixty
undergraduate students enrolled in an upper-level accounting course
participated in the experiment. They had just studied the normative
rule used to make the cost variance investigation decision. However,
to ensure that the students understood the rule, they were given
the relevant data and were asked to make a variance investigation
decision. The following data were provided for the decision:
Assume
that you are trying to decide whether to investigate the cause of
a labor efficiency variance. The following information is available
to you:
Cost
of investigating the cause of the variance: $500
Cost
of correcting the system when it has failed: $1,000
Potential
loss to the firm for not discovering an existing correctable problem
now: $4,000
Probability
of the system being out of control: 0.30
(a)
What is the critical probability that a problem exists that must
be surpassed before an investigation is justified? (b) Based on
your analysis in part (a) above, would you investigate the variance?
(Please state your answer as either "Yes" or "No.")
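For reference, the screening task has a determinate answer under the cost/benefit rule described earlier. The critical probability is the value of p at which the two alternatives' expected costs are equal (a sketch; the notation is ours, not the article's):

```python
invest_cost, correct_cost, loss_if_ignored = 500, 1000, 4000

# (a) At the critical probability p*, the expected costs are equal:
#         invest_cost + p * correct_cost = p * loss_if_ignored
#     =>  p* = invest_cost / (loss_if_ignored - correct_cost)
critical_p = invest_cost / (loss_if_ignored - correct_cost)
print(round(critical_p, 3))  # 0.167

# (b) The stated probability (0.30) exceeds the critical probability,
#     so the variance should be investigated.
print("Yes" if 0.30 > critical_p else "No")  # Yes
```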
The
above screening process reduced the subject pool to fifty-seven.
The students were then told that the manager chose to investigate
the variance and were provided with the information regarding the
outcome of the investigation. Next, they were asked to evaluate
the manager's performance using a rating scale. Finally, the students
completed a debriefing questionnaire, which collected information
regarding their age, graduating major, gender and work experience.
In addition, there were questions about the experiment. The experiment
took about 10 minutes to administer.
The
experiment employed a 2 X 2 mixed design. The within-subjects variable
was the outcome of the variance investigation decision (OUTCOME)
with two levels; that is, the production process is in control or
out of control. The between-subjects variable was the precision
of the probability describing the production process being out of
control (PRECISION). Subjects in the precise probability manipulation
were presented with the following data related to a variance investigation
decision:
Assume
that you are a VP-Manufacturing for a medium-sized firm. Your firm
uses a standard costing system with currently attainable standards.
You have several production managers working for you. Each one runs
his/her own production plant.
Pat
Smith is one of these production managers. Pat's plant had a large
direct labor efficiency variance this year and he had to decide
whether to investigate the variance further. Pat's staff collected
all the relevant cost information and provided him only with the
following summary:
Critical
probability: 10%
Probability
of the process being out of control cannot be determined with certainty,
but was estimated to be 20%.
You,
as VP-Manufacturing, received a report from Pat which included the
above summary information, along with Pat's decision.
You
need to evaluate Pat's performance for the period under two situations
discussed below. Please give a judgment or opinion for both the
situations on the scales below. Although Pat is not held accountable
for direct material or overhead variances, the direct labor variances
and investigation decisions are Pat's responsibility.
Situation
1: Pat made the investigation and found the process to be in control.
Pat's performance for this period was (make a slash on the scale
below)2:
Situation
2: Pat made the investigation and found the process to be out of
control. Pat's performance for this period was (make a slash on
the scale below):
Subjects
in the imprecise probability manipulation had the same information
as above, except that the information on the probability of the
process being out of control was stated as follows:
Probability
of the process being out of control cannot be determined with certainty.
It ranges between 12% and 28%, and the likelihood of any of these
individual probabilities occurring is equal.
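Note that the two manipulations are informationally matched: a uniform range from 12% to 28% has the same mean (20%) as the precise-condition estimate, and even the low end of the range exceeds the 10% critical probability, so Pat's decision to investigate is warranted under either description. A quick check (illustrative code, not from the article):

```python
low, high = 0.12, 0.28     # imprecise-condition probability range
critical_p = 0.10          # critical probability given in the case

mean_p = (low + high) / 2  # mean of a uniform range is its midpoint
print(round(mean_p, 2))    # 0.2 -- equals the precise-condition estimate of 20%
print(low > critical_p)    # True -- every probability in the range justifies investigating
```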
The
dependent variable is the performance evaluation rating (EVALUATION)
in each of the two situations, namely, when the investigation found
the production process to be in control and when it was found to be
out of control.
Results
One
of the questions posed to the students in the debriefing questionnaire
was on the clarity of the instructions received for the experiment.
The mean (s.d.) of the responses from the question was 9.11 (1.32).
Further, the responses from the precise probability manipulation
group were not significantly different from the responses in the
imprecise probability manipulation group (T=0.461; p=0.8906).3 In
addition, analyses of the evaluation ratings indicated that they
were not affected by subjects' age, graduating major, gender, or
work experience.
Hypothesis
H1 documents the outcome effect by predicting that performance ratings
will be higher when the variance investigation decision reveals
the production process is out of control than when it is in control.
Hypothesis H2 predicted that the outcome effect would be greater
when the probability of the production process is stated precisely than
when it is stated imprecisely. Both the hypotheses were tested by
using a single one-way repeated-measures ANOVA model. In the model,
the repeated-measures factor was the outcome of the variance investigation
decision (OUTCOME) with two levels, the dependent variable was the
EVALUATION associated with each of the two levels of OUTCOME, and
the independent (between-subjects) variable was the precision of
the probability describing the production process (PRECISION). The
results are presented in Table 1.
The
within-subjects analysis (the only portion of Part A relevant
for this study), however, indicates that all the variables
were significant. Specifically, EVALUATION differed significantly
between the two levels of the variance investigation decision OUTCOME
(i.e., production process in control versus out of control)
(F=139.21; p=0.0001). Further, the interaction of OUTCOME
with PRECISION was also significant (F=45.87; p=0.0001). This suggests
that not only was the EVALUATION for the two OUTCOME levels different
from each other but this difference was not the same when the probability
describing the production process was stated precisely compared
to when the probability was not stated precisely.4
To
better understand the results from the within-subjects analysis,
the mean EVALUATION ratings by OUTCOME were compared separately
for the two levels of PRECISION (refer to Part B of Table 1).
When the probability of the production process was stated precisely,
the mean evaluation for the out-of-control OUTCOME was 67.17 and the
mean evaluation for the in-control OUTCOME was 53.45; these
evaluations were significantly different from each other (T=6.819;
p=0.0001). Similarly, when the probability was stated imprecisely,
the mean evaluation for the out-of-control OUTCOME was 61.53 and the
mean evaluation for the in-control OUTCOME was 57.82; these evaluations
were also significantly different from each other (T=2.875; p=0.0001).
The above results document the outcome effect and provide evidence
in support of hypothesis H1 that ratings will be higher when the
variance investigation decision reveals the production process is
out of control than when it is in control.
Next,
the interaction of OUTCOME and PRECISION is examined in Part C of
the Table and illustrated in Figure II. When the probability describing
the production process was stated precisely, DMs' EVALUATION when
the production process was found to be in control averaged 53.45,
which increased by 13.72 to 67.17 if the process was found to be
out of control. In contrast, when the probability was stated imprecisely,
DMs' average evaluation ratings were 57.82 for in control production
process, which increased by only 3.71 to 61.53 for out of control
production process. Furthermore, these increases (i.e., 13.72 and
3.71) were significantly different from each other (T=4.424; p=0.0001).
The results show that, consistent with hypothesis H2, the outcome
effect is significantly greater when the probability of the production
process is stated precisely compared to when the probability is
stated imprecisely.
Summary
and Discussion
The
noted philosopher Karl Popper once remarked that, "it is always
undesirable to make an effort to increase precision for its own
sake . . . one should never try to be more precise than the problem
situation demands" (1974: 284). In discussing the communication
function in accounting, Hayes (1983) states that too much precision
does not help and, in fact, it may harm communication because excessive
precision limits obtaining insight into the qualitative characteristics
of the data.
Because
the evaluation of the DM involves the cognitive process of imagining
an alternative outcome to the decision made by the DM, the evaluator
bases evaluations on the simulation heuristic, which depends on
the ease with which alternatives can be constructed or imagined
(Kahneman and Tversky, 1982). This is facilitated by communicating
to the evaluator a more accurate assessment of the uncertainty facing
the DM, thereby loosening the connection between the initial situation
facing the DM and the final outcome. Consequently, the
outcome effect in performance evaluation of the DM's variance investigation
decision is significantly reduced. The principal conclusion of this
study is that the outcome effect can be mitigated by providing the
evaluator more accurate information about the nature and extent
of uncertainty in the form of a range of probabilities rather than
as a point or precise estimate. The results underscore an observation
by DeNisi (1996) that, since evaluation is a cognitive process, the
accuracy of an evaluation depends on the information available
to the evaluator, and this is a critical step in the entire appraisal
process.
The
results of this research have practical implications for mitigating
the outcome effect. First, communicate to the evaluator the decision
process and the decision outcome. For example, along with the material
price variance, the assumptions that the DM had to make in order
to estimate the initial budgeted price (quality of raw material,
consumption patterns, market demands, supply sources, etc.) should
also be reported. This information would allow the evaluator to
assess the validity of the critical assumptions and the uncertainty
faced by the DM in budgeting material price. And second, since the
outcome effect is a bias, efforts to reduce biases of supervisors
in training programs should be considered. Such training sensitizes
supervisors to a potential problem in evaluating subordinates and
may help reduce the bias.
There
also are important managerial implications for using a probability
range to describe uncertainty when it is warranted. For example,
reporting an uncertainty range may facilitate job selection. Some selection
decisions could perhaps be automatic; for example, everyone above
a particular score on a test is offered a job and everyone below
the score is rejected. But someone would have to make a judgment
about what that score should be. Judgment is an integral part of
most selection procedures. The question is not whether there is
subjectivity in selection decisions but whether the subjectivity
is recognized and understood. Reporting a range of acceptable scores
may ameliorate this problem, thereby reducing the dependency on
a single score to make selection decisions and, instead, encourage
adopting additional methods (e.g., a structured interview) to augment
that test score.
As
with any research effort, limitations exist, and the results of
the present experiment must be considered in light of those limitations.
For example, the evaluators in this study were all students with,
perhaps, no prior experience or formal training in doing performance
ratings. Thus, experienced raters may have reacted differently to
the uncertainty information compared to the subjects in the current
study. Further, although generalizability is from the theory and
not from the data, nevertheless, additional research needs to be
done to determine whether the findings from the study would generalize
to contexts other than the very specific one studied here. Also,
the results are parameterized by the features of the experimental
design, such as the operationalization of the nature of uncertainty
about the production process. And, finally, all the cost information
was arbitrary; thus, it is unknown if the competing treatments contained
too little or too much information.
Future
research should further examine the subtleties of the outcome effect
and performance evaluations. For example, this study can be replicated
using tasks where prescriptive models or policies are used, such
as inventory purchase (EOQ), capital budgeting decisions, and
buy-or-make decisions. Also, Fischhoff's (1994) research suggests
that decisions are affected by accountability and reputation;
thus, future research
on decision making could examine their role along with the nature
of information provided to the evaluator. Finally, future research
should study how various organizational factors (e.g., budgetary
participation, technology, compensation structure) along with the
nature of information might influence the level of outcome effect.
Dipankar Ghosh
Associate Professor of Accounting
University of Oklahoma

Manash R. Ray
Associate Professor of Business
Lehigh University
Footnotes:
* The comments of Margie Boldt, Steve Butler, Jim Largay, Marlys Lipe,
the two anonymous reviewers and the Editor are gratefully acknowledged.
1 It should be noted that a hypothesis similar to this hypothesis was
also posited and supported in Lipe (1993). However, it is included
here since it was deemed necessary to first replicate the outcome
effect in the context of the current study and then demonstrate
an outcome effect differential due to reasons discussed next and
stated in hypothesis H2.
2 Half of the subjects received these questions in reverse order;
statistical analysis confirmed that there were no order effects.
4 Part A of the Table (between-subjects analysis) also shows the
PRECISION variable to be insignificant. It should be noted that for
between-subjects analysis, the repeated-measures procedure averages
the dependent variable (EVALUATION) across all levels of the
repeated-measures factor (OUTCOME). In this study, for the two levels
of OUTCOME from the investigation decision, average EVALUATION was
60.31 when the probability of the production process being out of
control was stated precisely and 59.69 when the probability was
stated imprecisely. The insignificance of PRECISION is not central
to this study since there was no hypothesis (or theory) which
suggests that average EVALUATION for the two levels of OUTCOME should
be more or less when the probability describing the production
process is stated precisely or imprecisely. Furthermore, since the
outcome effect is measured as the difference of EVALUATION associated
with the two levels of OUTCOME, averaging the EVALUATION has no meaning.
References:
Ashcraft, M.H. 1994. Human Memory and Cognition. New York, NY: Harper Collins.

Baron, J. and J. Hershey. 1988. "Outcome Bias in Decision Evaluation." Journal of Personality and Social Psychology 54: 569-579.

Bazerman, M. 1994. Judgment in Managerial Decision Making, 3rd ed. New York, NY: John Wiley and Sons, Inc.

Borman, W.C. 1991. "Job Behavior, Performance, and Effectiveness." In Handbook of Industrial and Organizational Psychology, 2nd ed., Vol. 2. Eds. M.D. Dunnette and M. Hough. Palo Alto, CA: Consulting Psychologists Press. pp. 271-326.

Brown, C.E. and I. Solomon. 1987. "Effects of Outcome Information on Evaluations of Managerial Decisions." The Accounting Review 62 (3): 564-577.

Brown, C.E. and I. Solomon. 1993. "An Experimental Investigation of Explanations for Outcome Effects on Appraisals of Capital Budgeting Decisions." Contemporary Accounting Research 10 (Fall): 83-111.

Budescu, D.V. and T.S. Wallsten. 1987. "Subjective Estimation of Precise and Vague Uncertainties." In Judgmental Forecasting. Eds. G. Wright and P. Ayton. Chichester, NY: Wiley.

Caldwell, D.F. and C.A. O'Reilly. 1982. "Responses to Failures: The Effects of Choices and Responsibility on Impression Management." Academy of Management Journal 25: 121-136.

DeNisi, A.S. 1996. Cognitive Approach to Performance Appraisal. New York, NY: Routledge.

Dyckman, T. 1969. "The Investigation of Cost Variances." Journal of Accounting Research 7: 215-244.

Edwards, W. 1984. "How to Make Good Decisions." Acta Psychologica 56: 5-27.

Einhorn, H.J. and R.M. Hogarth. 1985. "Ambiguity and Uncertainty in Probabilistic Inference." Psychological Review 92: 433-461.

Fischhoff, B. 1994. "What Forecasts (Seem to) Mean." International Journal of Forecasting 10: 387-403.

Fisher, J. and T.I. Selling. 1993. "The Outcome Effect in Performance Evaluation: Decision Process Observability and Consensus." Behavioral Research in Accounting 5: 58-77.

Ghosh, D. and T.L. Crain. 1993. "Structure of Uncertainty and Decision Making: An Experimental Investigation." Decision Sciences 24 (4): 789-807.

Ghosh, D. and M.R. Ray. 1997. "Risk, Ambiguity, and Decision Choice: Some Additional Evidence." Decision Sciences 28 (1): 81-104.

Goguen, J. 1974. "On Fuzzy Robot Planning." Memo #1 on Artificial Intelligence. Los Angeles, CA: University of California, Los Angeles.

Hawkins, S.A. and R. Hastie. 1990. "Hindsight: Biased Judgements of Past Events After the Outcomes are Known." Psychological Bulletin 107: 311-327.

Hayes, D. 1983. "Accounting for Accounting: A Story of Managerial Accounting." Accounting, Organizations and Society 8: 241-249.

Hershey, J.C. and J. Baron. 1992. "Judgment by Outcomes: When is it Justified?" Organizational Behavior and Human Decision Processes 53: 89-93.

Horngren, C.T., G. Foster and S.M. Datar. 1997. Cost Accounting, 9th ed. Englewood Cliffs, NJ: Prentice-Hall.

Kahneman, D. and A. Tversky. 1982. "The Simulation Heuristic." In Judgment Under Uncertainty: Heuristics and Biases. Eds. D. Kahneman, P. Slovic and A. Tversky. Cambridge: Cambridge University Press. pp. 201-208.

Kaplan, R. 1975. "The Significance and Investigation of Cost Variances: Survey and Extensions." Journal of Accounting Research 13: 311-337.

Lipe, M.G. 1993. "Analyzing the Variance Investigation Decision: The Effects of Outcomes, Mental Accounting, and Framing." The Accounting Review 68 (October): 748-764.

Lipshitz, R. 1989. "Either a Medal or a Corporal: The Effects of Success and Failure on the Evaluation of Decision Making and Decision Makers." Organizational Behavior and Human Decision Processes 44: 380-395.

Lord, R.G. 1985. "Accuracy in Behavioral Measurement: An Alternative Definition Based on Raters' Cognitive Schema and Signal Detection." Journal of Applied Psychology 70: 66-71.

Mitchell, T. and L. Kalb. 1981. "Effects of Outcome Knowledge and Outcome Valence on Supervisors' Evaluations." Journal of Applied Psychology 66: 604-612.

Peters, D.L. and E.J. McCormick. 1966. "Comparative Reliability of Numerically Versus Job-Task Anchored Rating Scales." Journal of Applied Psychology 50: 92-96.

Popper, K. 1974. "Autobiography of Karl Popper." In The Philosophy of Karl Popper. Ed. P. Schilpp. La Salle, IL: Open Court. pp. 1-181.

Swalm, R.O. 1966. "Utility Theory: Insights into Risk Taking." Harvard Business Review 44: 123-136.

Tversky, A. and D. Kahneman. 1981. "The Framing of Decisions and the Psychology of Choice." Science 211 (January): 453-458.

Wallsten, T.S. 1990. "The Cost and Benefits of Vague Information." In Insights in Decision Making: A Tribute to Hillel J. Einhorn. Ed. R.M. Hogarth. Chicago, IL: The University of Chicago Press.

Zebda, A. 1991. "The Problem of Ambiguity and Vagueness in Accounting." Behavioral Research in Accounting 3: 117-145.