
Sales Performance: Timing of Measurement and Type of Measurement Make a Difference

Lawrence B. Chonko; Terry N. Loe; James A. Roberts; John F. Tanner

Journal of Personal Selling & Sales Management, Winter 2000, pp. 23-36

Copyright (c) 2000 Bell & Howell Information and Learning Company. All rights reserved.
Copyright Journal of Personal Selling & Sales Management Winter 2000

This paper investigates how the timing of measurement and the type of variable used to measure sales performance can affect the results of sales performance studies. Information is presented indicating that performance measures taken at different times are not highly related. Further, the relationships between different performance measures are also not strong. Finally, the type of performance measure used and when the measure was taken affected the relationship of sales performance to a set of predictor variables. The results provide empirical support for previously published calls for researchers to exercise caution in using performance measures that are readily available from a host firm or easily created by the researcher.

Four sales managers representing the same firm are discussing how they measure a salesperson's performance. These managers are responsible for salespeople who sell similar products to similar customers and represent four different sales districts. The sales managers do not agree on what constitutes the "best" measure of salesperson performance. Moreover, they are discussing how the timing of the performance measurement relates to the timing of a company survey, sent to all salespeople, on various aspects of the sales job. Each advocates a different measure, as illustrated below:

Sales manager 1 uses a ten-item self-report performance measure that was collected at the same time the survey data were collected from the salespeople.

Sales manager 2 uses the percent salary increase awarded salespeople three months after the sales force survey data was collected.

Sales manager 3 uses a single item self-report performance measure that was collected at the same time the sales force survey data was collected.

Sales manager 4 uses the percent salary increase awarded salespeople three months prior to the sales force survey data being collected.

Each of the four performance measures advocated by our sales managers exhibits face validity, as each represents an aggregate measure of a salesperson's performance. Indeed, the second and fourth performance measures are identical except for the time at which they were measured. The four performance measures are also typical of those used in much sales force research: they are either easily created, as in the case of the self-report measures, or readily available from company file data, as in the case of the percent and dollar salary increases.

The sales managers began their discussion in an effort to gain insight into performance appraisal and review processes that employ such aggregate performance data. The use of such performance data is common in industry and is also common in the sales literature as shown in Table 1 in which excerpts from ten recent sales performance studies published in Journal of Personal Selling and Sales Management are provided. Each excerpt represents the complete description of the performance measure used in the selected study, although citations and other references to the study are omitted to preserve anonymity. Each study identified in Table 1 relied on either readily available (the company uses the measure and is willing to provide it to researchers) or easily created (the researchers create them) performance measures. In only one case (study 1 in Table 1) was performance data collected over multiple time periods and, in this particular case, the three measures were averaged.

The primary purpose of this paper is to provide further evidence to support the frequent calls for improved measures of sales performance (e.g., Avila et al. 1988; Oliver and Anderson 1995). This paper examines performance in several ways:

1.  The same performance measure taken at different time periods is compared for consistency in rating a group of salespeople.

2.  Different performance measures, taken at the same time, that purport to measure sales performance are compared for consistency in rating a group of salespeople.

3.  The performance measures are then related to two predictor variables, role conflict and role ambiguity, to ascertain the consistency in relationships between these predictors and the various measures of performance.

The Timing of Sales Performance Measurement

Differences in sales performance due to the timing of measurement have been largely ignored. The work of Sager and Wilson (1995) provides an appropriate analogy. They note that job stress, an outcome variable, can be either chronic or episodic when a temporal perspective is introduced into the study of stress. They also note that chronic job stress relates to environmental considerations (e.g., market conditions), while episodic job stress is more event-related (e.g., a bad day at the office, a fight with one's spouse). If differing performance environments exist for different salespeople, it follows that measurement of performance at a single point in time may not adequately capture the nature of the sales environment in which each individual salesperson is operating. In other words, the point in time at which a sales outcome is measured may have an impact on the overall rating of a salesperson and on the relationship of that outcome variable to other variables.

In Figure 1, we illustrate the changing nature of the performance of one salesperson. A chronic environmental condition (e.g., the appearance of a new competitor, long term illness of the salesperson) may have a longer term influence on sales performance. However, several significant episodic events may also occur during the course of that same time period, the first (e.g., the loss of a key account) being at t^sub 2^ when the salesperson's increasing level of performance slows. A second episodic event (e.g., a natural disaster in the salesperson's territory) occurs at t^sub 3^ and the salesperson's performance begins to decline. Finally, a third episodic event (e.g., securing several new accounts) occurs at t^sub 4^ and the salesperson's performance seems to level off. If performance is measured at t^sub 5^, and the chronic environmental condition is also measured, researchers would develop some statistical relationship between the chronic condition and performance. But what happens if our researchers measure performance in t^sub 4^, t^sub 3^, or t^sub 1^? The chronic condition remains the same, but the performance level of the salesperson varies greatly depending on the timing of the measurement of that performance and on episodic events. A key issue, then, for sales managers and researchers is the effect of the timing of the measurement of sales performance on the overall assessment of the salesperson's performance and on the relationships of performance measures to other sales-related variables.

Epstein (1980), in his seminal work on behavior stability, attempts to provide some insight into this question. He advances the hypothesis that stability can be demonstrated over a wide range of variables so long as the behavior in question is averaged over a sufficient number of occurrences. For sales managers, such aggregation has simplified the performance appraisal and review process. For researchers, aggregation accomplishes two purposes: reducing measurement error and broadening the range of generalization of research findings. Therefore, as Epstein notes, aggregation should represent an excellent procedure for establishing replicable generalizations.
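Epstein's aggregation argument can be illustrated with a small simulation, not drawn from the paper: the true performance level, the noise level, and the number of occasions below are all hypothetical, but the sketch shows why averaging a behavior over more occasions yields a more stable estimate of its underlying level.

```python
import numpy as np

# Illustrative sketch (hypothetical values): averaging a noisy behavior over
# more occasions shrinks the spread of the aggregate roughly by the square
# root of the number of occasions averaged.
rng = np.random.default_rng(1)
true_level = 100.0   # a salesperson's "true" performance level (assumed)
noise_sd = 20.0      # occasion-to-occasion variability (assumed)

for occasions in (1, 4, 16):
    # simulate 10,000 salespeople, each measured on `occasions` occasions
    samples = true_level + noise_sd * rng.normal(size=(10_000, occasions))
    spread = samples.mean(axis=1).std()  # variability of the aggregate score
    print(f"{occasions:2d} occasions -> spread of aggregate ~ {spread:.1f}")
```

With these assumed values the spread falls from about 20 for a single occasion toward about 5 for sixteen occasions, which is the stabilizing effect Epstein attributes to aggregation.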

Ideally, applying Epstein's thesis would yield a single good measure sufficing for all sales performance appraisal and review processes in a company whose salespeople perform similar tasks. But, as the four sales managers in the opening scenario articulated, any particular sales performance measure is subject to question, since each manager uses a different aggregate measure to evaluate sales performance. Further, it is unclear which of several performance measures should be used and which of those measures is "best." Finally, discussions of aggregation and validity do not directly incorporate the timing of the aggregate performance measurement relative to the timing of the measurement of other predictor variables.

To illustrate this point, refer to Figure 2. Here we have the unlikely, but possible, situation of five salespeople performing at the same level at time t^sub 3^ but having very different performance histories (times t^sub 1^ and t^sub 2^). Factors such as changes in territories, sales seasonality, and competitive activity can affect results, unbeknownst to sales managers who rely on aggregate performance measures. Measuring sales performance at only that point in time (t^sub 3^) may not adequately identify the chronic or episodic nature of the sales environment in which the salesperson had been operating prior to the measurement (even using aggregate measures). Similarly, if our sales managers commissioned a survey of job satisfaction among salespeople, would the five salespeople described in Figure 2 exhibit the same satisfaction-performance relationship? It is not likely, as the paths to the same level of performance at time t^sub 3^ are different. This suggests that different events have led salespeople to their existing level of performance. Nevertheless, both managers and researchers working with our five different salespeople might reach inaccurate conclusions about performance-predictor relationships if performance is measured only at time t^sub 3^.

The Timing and Nature of Sales Performance Measurement: An Illustrative Example of Some Problems

To illustrate several of the problems associated with the timing and type of performance measures, we present an example that incorporates multiple sales performance measures that are readily available from a company or easily created by managers or researchers. In this example, ten different measures of sales performance obtained from either company file data or self report data are compared to each other and then related to role conflict and role ambiguity predictor variables. These ten measures are described in Table 2. Our purpose in presenting this example is to address the following questions:

Does an individual salesperson's performance change over time when the same measure of sales performance, measured at different points in time, is used?

Does an individual's relative performance vary depending on the type of sales performance measure used?

What impact does the timing of the measurement of sales performance and the type of performance measure used have on the relationships of performance to a specified predictor variable?

Three comments are warranted. First, our purpose is to examine the impact of the timing of measurement and the type of measurement on a salesperson's relative performance ranking. We do this in a way that imitates typical sales force research. That is, we employ performance measures that were readily available from a firm and easily created by the researchers, providing information about performance measures in a manner similar to that provided in the studies identified in Table 1. Second, a comprehensive review of criterion validity is beyond the scope of this paper. Suffice it to say that research measuring sales performance should take into account such factors as contamination and deficiency of measures (cf. Borman 1991). Further, researchers should consider examining prior meta-analyses that compare performance-related validities across a variety of measures (e.g., Bommer et al. 1995; Heneman 1986). In addition, potential rater bias should be considered (cf. Campbell 1990).

Third, we selected role conflict and role ambiguity as predictor variables used in the demonstration study because of the frequency with which these two constructs have been related to sales performance and because role ambiguity has demonstrated a negative relationship to sales performance (e.g., Brown and Peterson 1993). Our purpose is not to shed further light on these relationships. Rather, we wish to illustrate the potential variations in reported statistical relationships that might result depending upon either the timing of performance measurement or nature of performance measures used in research. Since role ambiguity has a demonstrated negative relationship with performance, assessing its relationship to ten different measures of performance for the same salespeople should shed some light on the impact of timing of performance measurement and type of performance measure used.

The Sample

Data were obtained from a sample of industrial salespeople and from company accounting records. The sample consists of 143 sales representatives employed by one industrial products firm. A total of 121 (85%) salespeople responded to the survey: 86 (60%) responded to the first mailing, and 35 (24%) responded to the second mailing. No significant response differences were found between the two mailings (Armstrong and Overton 1978), suggesting that non-response bias is not a major concern in this study. It should also be noted that nine salespeople were recent hires and have only one managerial performance observation; all other salespeople in the study have three.
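The response figures above can be verified with simple arithmetic; the following sketch just rechecks the percentages from the counts quoted in the text.

```python
# Arithmetic check of the reported response figures (counts from the text).
sample_size = 143
first_mailing, second_mailing = 86, 35
respondents = first_mailing + second_mailing

print(f"total respondents: {respondents} ({respondents / sample_size:.0%})")  # 121 (85%)
print(f"first mailing:  {first_mailing / sample_size:.0%}")                   # 60%
print(f"second mailing: {second_mailing / sample_size:.0%}")                  # 24%
```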


Performance Measures

Several measures of sales performance were examined in this study. The managerial performance evaluations were readily available from the company or created from those provided by the company. The self-report measures were collected as part of a larger survey of salesperson reactions to various aspects of their jobs.

Managerial Performance Evaluations (Readily Available Measures). Company records provided the management evaluations of salesperson performance, which consisted of a single measure, percent salary increase, based on the prior six months of sales performance. Sales managers assigned each sales representative a percentage increase in salary. This percentage increase was based on a number of objective criteria (including sales volume, sales vs. quota, and number of sales calls made) derived from the sales job description. Percent salary increase data were collected for three performance review periods for each salesperson: current performance, performance six months prior to current performance, and performance 12 months prior to current performance. Since six-month dollar salary adjustments were also readily available for each of these three time periods, dollar salary increase was used as a measure of performance as well.

Self-Report Performance (Easily Created Measures). One self-report measure, created by the researchers, consisted of a single-item scale that asked each salesperson to rate his or her overall performance in comparison to other salespeople. A second self-report performance measure consisted of ten aspects of the sales job derived from the sales job description provided by the host firm. Each salesperson in the firm was asked to rate him or herself on the ten criteria in comparison to all other sales representatives in the firm doing a similar job. The ten criteria were: (1) sales volume, (2) ability to reach quotas, (3) customer relations, (4) expense accounts, (5) company knowledge, (6) customer knowledge, (7) product knowledge, (8) competitor knowledge, (9) time management, and (10) planning.

Salespeople were asked to evaluate their performance on both self report measures by checking one of the following categories:

·     Near the top (better than 75%)

·     Above average (better than 50%)

·     About average (better than 25%)

·     Below average (below 25%)


Correlations among Ten Performance Measures

Table 3 presents the correlation matrix for the ten performance measures examined in this example. Briefly, the highlights of Table 3 are:

The two easily created self-report measures (X^sub 1^, X^sub 2^) had a correlation of .43.

Readily available company provided performance data (X^sub 3^ through X^sub 6^: current percent salary increase, percent salary increase six months prior, percent salary increase twelve months prior, current dollar salary increase) were not very highly correlated among themselves (r=.02 to .31).

Most recent percent salary increase (X^sub 3^) was significantly correlated with both self-report measures (r=.21 to .31).

There is a wide range of correlations (.01 to .79) among the four performance measures (X^sub 7^ to X^sub 10^) created from company file data. Not surprisingly, many of these "created" performance measures are also highly correlated with the company measures (X^sub 3^ to X^sub 6^) from which they were derived. However, they were not highly correlated with the self-report performance measures (.05 to .20).

In viewing Table 3, the highest correlation between performance measures is .80, indicating that only about 64 percent of the variation between the two measures is accounted for. Most of the correlations (33 of 45; 73 percent) in Table 3 are below .20, indicating very little common variance among the performance measures. The preponderance of low correlations among the performance measures indicates that the different performance measures may be measuring different phenomena.
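The shared-variance reading used in the paragraph above comes from squaring the correlation coefficient. A minimal sketch, with illustrative r values echoing the range reported in Table 3:

```python
# Squaring r gives the proportion of variance two measures have in common
# (the coefficient of determination). Values below are illustrative.
for r in (0.80, 0.43, 0.20, 0.02):
    print(f"r = {r:.2f} -> common variance = {r ** 2:.0%}")
```

Even the highest correlation in the table, .80, leaves roughly a third of the variance unshared, which is the point the text draws from Table 3.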

How Sales Performance Can Change over Time or between Measures

The data in Tables 4a through 4e compare the performance of the sample of salespeople on different measures of sales performance or the same sales performance measure taken at different time periods. In the following analysis, salespeople were categorized as high, medium, or low performers based on each of the performance measures identified in Table 4. For each performance measure, approximately one-third of the responding sales representatives were placed in each of the three performance categories. Our objective was to see if salespeople who were rated in high, medium or low performance categories using one performance measure would be rated similarly when other performance measures were observed. In other words, are salespeople classified differently depending upon either the time a performance measure was obtained or the nature of the performance measure?

Before presenting the results, note that Tables 4a through 4e can be interpreted as crosstabulations. In each case, two performance measures are compared, each converted to one of the three categories mentioned in the preceding paragraph. Since ten different performance measures were used, presenting all possible combinations was judged impractical for purposes of this illustrative presentation. In each table, nine combinations of performance are presented, ranging from salespeople who are high on both measures of performance to salespeople who are rated low on both, with all seven other performance-level combinations represented. If both performance measures in each table (4a to 4e) are measuring the same thing, we would expect the majority of observations to fall on the main diagonal (high-high, etc.) in each table. As can be seen, this is not the case.
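The procedure behind Tables 4a through 4e can be sketched in a few lines. All data below are simulated and every name is hypothetical; only the mechanics mirror the tables: split salespeople into thirds on each of two measures, cross-tabulate, and count the main diagonal.

```python
import numpy as np
import pandas as pd

# Hypothetical sketch of the Table 4 procedure with simulated data:
# categorize salespeople into low/medium/high thirds on two performance
# measures, then count how many land on the main diagonal of the crosstab.
rng = np.random.default_rng(0)
n = 120
measure_a = rng.normal(size=n)                    # e.g., a percent salary increase
measure_b = 0.3 * measure_a + rng.normal(size=n)  # a weakly related second measure

cat_a = pd.qcut(measure_a, 3, labels=["low", "medium", "high"])
cat_b = pd.qcut(measure_b, 3, labels=["low", "medium", "high"])

table = pd.crosstab(cat_a, cat_b)               # 3 x 3 classification table
agreement = np.trace(table.to_numpy()) / n      # share on the main diagonal
print(table)
print(f"classified the same by both measures: {agreement:.0%}")
```

With weakly related measures, well under half of the simulated salespeople fall on the diagonal, the same qualitative pattern the tables report.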

Table 4a. The main diagonal in Table 4a suggests that only a small percentage of salespeople (30.0%) were classified similarly based on current (X^sub 3^) and six months prior (X^sub 4^) salary increases. In Table 4a, 9 salespeople who were in the low performance category according to their current percent salary increase (X^sub 3^) were also in the low performance category according to their percent salary increase awarded 6 months prior (X^sub 4^). However, 10 salespeople in the low performance category on X^sub 3^ were rated in the middle performance category on X^sub 4^, and 20 salespeople in the low performance category on X^sub 3^ were rated in the high performance category on X^sub 4^. In other words, of the 39 salespeople rated in the low performance category on current percent salary increase (X^sub 3^), only 9 were rated in the low performance category on the percent salary increase awarded 6 months prior (X^sub 4^). Thirty of our "low performing" salespeople (when X^sub 3^ was the performance measure) were rated in different performance categories on X^sub 4^.

As noted previously, we would expect the main diagonal to have the majority of the salespeople if the two performance measures X^sub 3^ and X^sub 4^ are consistent in their rating of salespeople. In Table 4a, only 34 (30%) of our salespeople are rated similarly by both measures (9 in the low-low category, 10 in the medium-medium category, and 15 in the high-high category). The relative performance position of these salespeople did not change from one time period to the next. Thirty-five (31%) salespeople increased their performance over time (X^sub 4^ to X^sub 3^) and forty-three (39%) salespeople declined in performance over six months time (X^sub 4^ to X^sub 3^). Put another way, the classification of seventy-eight salespeople (70%) changed in a six month period of time. The relative performance of most salespeople (70%) changed in a six month time period when the same measure of performance was used. The timing of sales performance measurement makes a difference. There are significant implications if researchers and managers used the performance measure from only one of these time periods, given the dramatic changes in performance that are observed in Table 4a.
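The Table 4a figures above can be recomputed directly from the counts quoted in the text; the following sketch just rechecks the 30/70 split.

```python
# Recomputing the Table 4a agreement figures from counts quoted in the text.
diagonal = [9, 10, 15]   # low-low, medium-medium, high-high
total = 112              # salespeople with both observations
same = sum(diagonal)

print(f"{same} of {total} rated the same: {same / total:.0%}")       # 34, 30%
print(f"changed category: {(total - same) / total:.0%}")             # 70%
```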

Table 4b. A similar pattern exists in Table 4b when two different performance measures are used: current dollar salary increase (X^sub 6^) vs. current percent salary increase (X^sub 3^). In Table 4b, 39.6 percent (44 of 111) of the salespeople stayed in the same performance category. Again, the difference in relative performance position of salespeople is dramatic when the two performance measures are compared, with 60.4 percent of salespeople being categorized differently. What are the implications if two studies examined exactly the same predictor-performance relationships, but each used only one of these measures of performance?

In summary, data in Table 4b identify the following facts about our sample of salespeople:

Forty-four salespeople (36%) were rated similarly on the two performance measures.

Thirty-nine salespeople (32%) received improved performance ratings when X^sub 3^ was used as a performance measure instead of X^sub 6^.

Thirty-eight salespeople (32%) received improved performance ratings when X^sub 6^ was used as a performance measure instead of X^sub 3^.

Seventy-seven salespeople (64%) were classified differently when their performance categories on each of two performance measures (X^sub 3^ and X^sub 6^) were compared.

Table 4c. Table 4c compares two more performance measures: current percent salary increase (X^sub 3^) and the 10-item self-report performance measure (X^sub 1^). These data were obtained approximately three months apart. Only 26.2 percent (32 of 122) of the salespeople fall in the same performance category on both measures; ninety salespeople (73.8 percent) were classified in different performance-level categories when these two performance measures are compared. Even two performance measures taken in close proximity to each other yield differences in the relative performance positions of salespeople. Do these differences exist because of time, because of the different measures employed, or some combination of both?

As a summary the data in Table 4c reveal the following:

Thirty-two salespeople (26%) were rated similarly on two performance measures (X^sub 3^ and X^sub 1^).

Forty-two salespeople (35%) received improved performance ratings when X^sub 3^ was used as a measure of performance instead of X^sub 1^.

Forty-seven salespeople (39%) received improved performance ratings when X^sub 1^ was used as a measure of performance instead of X^sub 3^.

Eighty-nine salespeople were classified differently when their performance categories on each of the two performance measures (X^sub 3^ and X^sub 1^) were compared.

Table 4d. In Table 4d, forty salespeople (35.7%) were rated similarly on both change in dollar salary (X^sub 8^ = X^sub 6^ [current dollar salary increase] minus the dollar salary increase awarded 6 months earlier) and change in percent salary increase (X^sub 7^ = X^sub 3^ minus X^sub 4^) over the same period. Again, considerable discrepancy exists: 64.3 percent of salespeople are classified differently. As a summary of Table 4d, the data reveal the following:

Forty salespeople (36%) were rated similarly on two performance measures (X^sub 8^ and X^sub 7^).

Thirty-seven salespeople (33%) received improved performance ratings when X^sub 8^ was used as a measure of performance instead of X^sub 7^.

Thirty-five salespeople (31%) received improved performance ratings when X^sub 7^ was used as a measure of performance instead of X^sub 8^.

Seventy-two salespeople (64%) were classified differently when their performance categories on each of two performance measures (X^sub 7^ and X^sub 8^) were compared.

Table 4e. In Table 4e, only 33.9 percent of salespeople are in the same self-report performance (X^sub 1^) category as their change in percent salary adjustment (X^sub 7^) category. As a summary, the data in Table 4e reveal the following facts about sales performance:

Thirty-eight salespeople (34%) were rated similarly on two performance measures (X^sub 7^ and X^sub 1^).

Forty salespeople (36%) received improved performance ratings when X^sub 1^ was used as a performance measure instead of X^sub 7^.

Thirty-four salespeople (30%) received improved performance ratings when X^sub 7^ was used as a performance measure instead of X^sub 1^.

Seventy-four salespeople (68%) were classified differently when their performance categories on each of two performance measures (X^sub 1^ and X^sub 7^) were compared.

To summarize, the relative performance position of salespeople changes dramatically over a short period of time or depending upon the nature of the performance measure used. In Tables 4a to 4e, the percentage of salespeople rated similarly by the two performance measures ranges from roughly 26 to 40 percent; in every comparison, at least 60 percent of the sales representatives are rated differently depending on the timing of the measurement of the performance variable or the type of performance variable used. This change in relative performance position was evident when a current measure of sales performance (percent salary increase) was compared to three other distinct measures of sales performance: prior percent salary increase, a self-report performance measure, and total dollar salary increase. One performance measure compared against itself from one time period to the next also resulted in dramatic differences in the relative performance position of the sampled salespeople. The data presented in Tables 4a through 4e suggest two conclusions about the use of sales performance data:

Salesperson rank on a particular measure of performance can change dramatically over a short time span, in this case six months.

Salesperson rank on each of two performance measures taken at the same time can vary dramatically.

Relating Performance to Role Conflict and Role Ambiguity

Role conflict and role ambiguity data were related to the ten sales performance measures used in this study. Two sets of role conflict and ambiguity scales were used. The first was developed by Rizzo, House, and Lirtzman (1970); the individual conflict and ambiguity scales and the combined scales were compared to the performance measures. The second set of scales is that reported by Chonko and Burnett (1982), consisting of five sources of conflict and five sources of ambiguity. Table 5 shows the correlations between the performance measures and role conflict and role ambiguity. Briefly, in Table 5:

Significant correlations between role conflict and role ambiguity and performance occur primarily with the self-report measures of performance (X^sub 1^ and X^sub 2^). The range of correlations for each conflict and ambiguity measure varies considerably across the ten performance measures, with the magnitudes of the ranges running from .19 to .55.

Based on the data in Table 5, the timing and type of sales performance measurement appear to be critical considerations in sales force research. While many of the correlations in Table 5 are not significant, some are. However, the significance of the correlations is not the key aspect of this table. Rather, its focal point is the range of correlations between each predictor variable and the ten different measures of performance: these ranges run from a low of .19 to a high of .55, suggesting that the timing and type of performance measure used make a difference. While not within the scope of this paper, the correlations presented in this demonstration study between role conflict, role ambiguity, and the various measures of sales performance are in the same general direction and magnitude as those reported by Brown and Peterson (1993) in their meta-analysis.

Discussion and Implications for Sales Management

Research efforts in the area of sales performance have taken three basic approaches. First, a number of empirical studies have been concerned with factors (e.g., role conflict, role ambiguity, motivation) related to sales success (e.g., Bagozzi 1978, 1980a, 1980b; Behrman and Perreault 1982; Churchill et al. 1985; Cravens et al. 1993; Cron and Levy 1987; Futrell, Swan, and Todd 1976; Kohli 1989; Oliver 1974; Oliver and Anderson 1995; Small and Rosenberg 1977). This line of research has been the focus of this paper. A second approach has been the development of sales force decision models (e.g., Cravens, Woodruff, and Stamper 1972; Hess and Samuels 1971; Ryans and Weinberg 1979). Third, several conceptual models of sales performance have appeared in the literature (e.g., Enis and Chonko 1978; Sujan, Weitz, and Sujan 1988; Walker, Ford, and Churchill 1977).

The underlying theme of these efforts is an attempt to develop an improved understanding of the nature of sales performance and the ultimate measures used to evaluate individual sales performance. Historically, most of the methodological rigor of these efforts has been directed toward the determinants of sales performance. The typical criterion measure used has been readily available from a sponsoring company (e.g., sales volume, percent salary increase) or easily created by researchers (e.g., self-report performance measures). Often, such readily available or easily created performance measures have not been subjected, conceptually or empirically, to the same analytical rigor as have the determinants of performance. Avila, Fern, and Mann (1988, p. 53) comment on the results of a sales performance assessment project: "It appears that overall performance is a multidimensional concept. For diagnostic purposes, some combination of sales behavior measures may be necessary. Single measures simply do not provide rich enough explanations of overall performance." The implication is that sales researchers must seek, as some are now doing, to improve sales performance measurement. Moreover, sales managers must use caution in presuming that any single composite measure of sales performance indicates that the salesperson has really performed the desired job activities.

In line with Avila, Fern, and Mann's (1988) assertion, a fourth approach to the study of sales performance has emerged. In this approach, the development of performance measures proceeds well beyond the simple use of sales performance measures that are readily available or easily created. In an effort that preceded the call of Avila et al. (1988), Weitz (1981) delineated four behavioral dimensions of sales effectiveness.

Similarly, Churchill, Ford, and Walker (1985) identified specific selling behaviors that were antecedents to performance. Anderson and Oliver (1987) presented a behavior vs. outcome taxonomy based on methods of monitoring sales effort, subsequently providing evidence of the existence of several combinations of behavior and outcome based control (Oliver and Anderson 1995). Cravens, LaForge, Pickett and Young (1993) advocate the use of a quality improvement perspective in the development of sales performance measures. Finally, Boles, Donthu, and Lohtia (1995) propose a data envelope analysis in which salesperson performance is measured in terms of performance efficiency.

While there is considerable merit in all these efforts, particularly those examining the nature of sales performance measures, the element of time has been largely overlooked. Weitz (1961) provided some benchmark work on criterion development. In his work he listed time, type, and level as "criterional dimensions" that must be considered in the development and understanding of criterion measures. Time, according to Weitz (1961), refers to when the measure was taken. Studies of the impact of the timing of measurement (when performance measurement actually occurs) of sales performance in relation to the timing of the measurement of the variables hypothesized to be related to sales performance may provide valuable insight into performance-related results obtained in sales force research.

Managerial Implications

The development of performance criteria is, of course, the problem that led to the current manuscript; there remains the question of what constitutes suitable performance criteria. From a managerial perspective, the first step in the development of suitable sales performance criteria would be the definition of the job situation. Specifically, in what type of sales activity are we seeking to determine success? Such activity must be defined as precisely as possible to avoid confusion. Obviously, sales volume is one measure of success, but this measure does not speak to job activity. However, other measures of success are also pertinent, including: (1) the ability to develop and sustain long-term customer relationships, (2) the ability to identify and convert new business, and (3) the ability to "grow" old business. All of these are acceptable salesperson goals that may not be captured by the sales volume criterion alone. What is suggested, then, is to specify the purposes and objectives of various sales behaviors before asking the question of why some salespeople are successful and others are not. Such a question leads to the second step in the performance criteria development process, that of analyzing sales activities. This step involves asking several questions, including the following.

1.  What is the purpose of each activity?

2.  What are the goals of each activity?

3.  What does a salesperson working at this activity have to do?

4.  What standards of performance for the activity should be stated?

5.  What levels of performance should be required for the various behaviors of the activity?

6.  What is the relative importance of the various behaviors?

Researchers, too, can make a contribution here by simply providing as much detail as possible about sales job factors such as: 1) the nature of the sales job (e.g., transactional vs. relationship oriented), 2) the type of customers called on, 3) buying decision factors (e.g., individual vs. buying center), and 4) the competitive structure of the industry. Such information would allow readers of research to ascertain how well a selected performance measure captures the essence of the sales job being undertaken by salespeople participating in the study.

A third step in the criteria development process is to define success. Managers must develop a list of elements of an activity that differentiate a successful salesperson from an unsuccessful salesperson (e.g., Morris et al. 1994). Closely tied to the definition of success is the question of why some salespeople are successful and others are not. The critical incident approach may be one way of assessing the determinant aspects of each activity that lead to success. After a thorough analysis of the activities described in Step 2, managers should be able to identify aspects of each activity that are related to success. Again, researchers would also aid readers by providing insights into this issue.

Finally, criteria to measure each of the elements of sales success must be developed. All criterion measures must be subject to the analysis of relevancy and reliability to determine their acceptability for use. If managers are unable to develop criteria for some success elements, then a deficiency in the ability to capture the complete nature of the sales job exists. The created criterion variable will be unable to capture the complete essence of some of the aspects of the "ultimate" criterion. Similarly, some criteria may inject measures of success elements that are not included in the "ultimate" criterion. From a research perspective, it is incumbent upon researchers to spend considerable time examining the likely sources of contamination. Such contamination exists when a measure taps variance irrelevant to performance requirements. For example, sales volume may be a function of the ability of the salesperson and the ease of sales in a district (Borman 1991). To the extent that such is the case, sales volume, as a measure of performance, is contaminated. Unfortunately, since most performance measures employed are readily available from companies or easily created, little is ever discussed concerning such contamination factors. The consequence may be that research results concerning the reporting of relationships with outcomes are, at best, flawed.
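The contamination argument can be made concrete with a small simulation of our own (not from the paper, and with invented parameters): if sales volume reflects both salesperson ability and the ease of sales in a district, the district component is variance irrelevant to the salesperson, and the larger it is, the weaker volume becomes as an indicator of ability.

```python
# Illustrative sketch only: sales volume as a "contaminated" criterion.
# Volume is modeled as salesperson ability plus district ease, so
# variance irrelevant to the salesperson leaks into the measure.
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n = 500
ability = [random.gauss(0, 1) for _ in range(n)]

# As district-to-district variation grows, volume says less about ability.
for district_sd in (0.0, 1.0, 3.0):
    volume = [a + random.gauss(0, district_sd) for a in ability]
    print(f"district sd={district_sd}: corr(ability, volume)={pearson(ability, volume):.2f}")
```

With no district variation the correlation is perfect by construction; as the irrelevant variance grows, the same criterion tells managers progressively less about the salesperson.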

Once components of sales performance have been defined, these components should be empirically related to other variables from a theoretical point of view. The notion of causal priorities is important here. In general, it is assumed that performance is a dependent variable and that various other factors cause performance. For example, until recently it was contended that satisfaction causes performance. Indeed, the reverse may be the case: Better performers may be more satisfied with their jobs (see, for example, the Williams and Livingstone [1994] meta-analysis).

Undoubtedly, the above approach involves considerable effort. The work of Campbell (1990) is illustrative of the effort required. At the risk of minimizing Campbell's work, we briefly mention that he conceptualized the determinants of job performance as consisting of the following three elements:

Declarative Knowledge: facts, principles, goals, and self-knowledge;

Procedural Knowledge and Skill: cognitive, psychomotor, physical, self-management, and interpersonal;

Motivation: choice to perform, level of effort, and persistence of effort.

Campbell (1990) asserts that the major implication of all this is that performance is directly determined by some combination of these three elements. However, the combinations of these elements that predict performance for individuals in different job settings are likely to be different. Campbell went further, proposing a taxonomy of performance. The taxonomy consists of a hierarchical model with eight factors: job-specific task proficiency, non-job-specific task proficiency, written and oral communication tasks, demonstrating effort, maintaining personal discipline, facilitating peer and team performance, supervision, and management/administration. Campbell's taxonomy indicates the difficulty in describing jobs and the factors that impact the performance of those jobs.

Our findings support those of Churchill et al. (1985), who state, "Perhaps the most important finding in our analysis is the observation that the type of product sold influences the correlations between the various predictors and performance" (p. 117). We agree with their contention that the determinants of sales performance include a job-specific component. In line with this, it is incumbent on sales researchers examining performance to provide information concerning the nature of the sales job. Churchill et al. (1985) also assert that influencing characteristics are more likely to be job specific and are more likely to be measured with instruments that take into account specific aspects of the job. In other words, there are job elements, both chronic and episodic, that will impact performance. When those situations occur close to the measurement of salesperson performance, they can have a great impact on reported relationships and on the relative standing of salespeople among their peers. Weeks and Kahle (1990) also provide evidence to suggest that specific events that take place during a salesperson-customer interaction can be influential in the ultimate performance of the salesperson. Managers who evaluate sales performance must be conscious of this. Our data suggest that the relative performance position of a large majority of salespeople can change in a relatively short period of time or depending on the type of performance measure used.

Certainly, an overall managerial rating or an objective piece of information such as sales volume is much easier and much less time-consuming to obtain. However, the procedure outlined above is likely to lead to much more relevant criteria than such overall ratings achieve. The Campbell taxonomy includes the built-in hope that researchers can continuously improve on the criteria used to evaluate sales performance. Similarly, the elements that go into the criterion measure can provide a useful diagnostic instrument. Such success elements can be invaluable in suggesting the training needs of salespeople.

Research Implications

One might infer from this paper that all sales force performance studies are suspect because of the inadequacy of sales performance measures. This is not the case. Rather, readers must view performance studies in the context of the sales situation being examined. Thus, in any performance study, it is paramount that researchers provide extensive detail on the nature of the job being examined. In single-firm studies or studies that sample a few firms, such documentation should be relatively easy to provide. Of course, such documentation is much more difficult to provide in research that examines national samples. In such cases we suggest researchers articulate how the sample was selected (e.g., consumer products firms, medical supply firms) and provide as much sample detail as possible so that readers can make inferences about the nature of the performance relationships presented and draw their own conclusions as to the generalizability of the findings.

Our results suggest that consideration must be given to the timing of the collection of performance data. In fact, as a short-term remedy, performance data might be collected before, during, and after other data collection procedures. At a minimum, having such performance data would allow researchers and managers to examine performance trends and derive some implications about relationships between performance and predictor variables. Further, collection of multiple performance points might shed some light on the types of events (chronic or episodic) that impact a salesperson's performance and the degree of impact on that performance.
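The value of multiple measurement points can be sketched with a hypothetical simulation (our illustration, with invented parameters, not the study's data): when each period's performance figure mixes a stable (chronic) component with period-specific (episodic) shocks, a salesperson's relative standing can shift substantially between measurement points.

```python
# Hypothetical sketch: performance in each period mixes a chronic
# (stable) component with episodic (period-specific) shocks, so the
# same salesperson can land in different quartiles at different times.
import random

random.seed(7)
n = 100
chronic = [random.gauss(0, 1) for _ in range(n)]  # stable ability

def period_scores(episodic_sd):
    """One period's observed performance: chronic plus episodic noise."""
    return [c + random.gauss(0, episodic_sd) for c in chronic]

def quartile(scores):
    """Assign each salesperson a performance quartile, 0 (low) to 3 (high)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    q = [0] * len(scores)
    for rank, i in enumerate(order):
        q[i] = rank * 4 // len(scores)
    return q

before = quartile(period_scores(1.5))
after = quartile(period_scores(1.5))
moved = sum(b != a for b, a in zip(before, after)) / n
print(f"share of salespeople changing quartile between periods: {moved:.0%}")
```

With episodic noise comparable to the chronic component, a large share of the force changes quartile between two measurements of the very same aggregate, which is the pattern the paper reports across its three measurement periods.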

In many sales performance studies, the single criterion of sales volume has been used as a measure of performance. The use of sales volume, as is the case with other aggregate performance measures, rests on the assumption that, as an empirical indicator, it can be used to represent more complex phenomena. Aggregate performance measures, like sales volume, are presumed to mean a great deal, but the underlying structure of such measures is often unspecified. Moreover, aggregate measures like sales volume are not consistent with the element of a single act criterion as specified by Fishbein and Ajzen (1975). Sales volume does not indicate whether a salesperson performed a specified behavior. Rather, a sale usually results from a series of behaviors (e.g., prospecting, phone calls, sales calls) as well as buyer behavior and system and environmental related variation, none of which is specified by any single criterion measure. As an example, sales volume, as an aggregate measure consistent with those described by Epstein (1980), represents the accumulation of results over time. This paper used several aggregate measures observed at different points in time. In the case of the single aggregate, there may be substantive factors related to the context of the individual salesperson's job that account for differences in performance. We hope that our illustrative study enhances this argument by revealing that many performance patterns occur over three distinct measurement periods during the course of one year when the same aggregate is measured in multiple time periods.

The percent salary change performance measure used in this study is, in fact, a composite of a series of results, consistent with the assertions of Epstein (1980) that composite measures eliminate randomness over time. In this study, salespeople were awarded salary adjustments that were based on the presumption that sales results occurred because the "right" specific behaviors were directed toward different customers, in different situations, at different times, and with different responses. In other words, percent salary increase is a composite measure of performance. However, observations of this composite measure over time, while purportedly able to generalize across customers and situations, did not provide consistent ratings of salespeople.

Compared to the research in the sales management area on the psychometric properties of predictor variables, a paucity of research exists on criterion variables in the field. In the sales literature, we validate our measures by testing various predictor variables against a criterion (e.g., sales). Sales, often used as a criterion variable, provides a metric criterion variable for various predictors to determine their relationship to sales success. As predictor variables are related to the criterion, these predictors are selected as "key information" used in a variety of other sales organization activities (e.g., recruiting). In the current study, sales volume was not employed, but several other single criterion measures of performance were. Our results differed across each of these sales performance criteria, suggesting that the choice of a performance variable makes a difference in the outcome of sales research. However, we cannot comment on which, if any, of the criterion measures used in this study is the best. Each has some face validity, either by means of use by the company or based on existing subjective performance measures in the sales literature. And, each was readily available or easily created. Yet, the relationships between these criterion measures and role conflict and role ambiguity were quite different (correlations range from as low as .19 to as high as .55 when role conflict and role ambiguity are correlated with ten different performance measures). Those involved in improving the performance of salespeople would want to know which of these relationships is the correct one before spending time, effort, and resources on changing aspects of the sales job in order to enhance performance.
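That the choice of criterion changes reported relationships can be illustrated with a small simulation of our own (the measure names and noise levels below are invented, not taken from the study): a single predictor is correlated against several criteria that all tap the same underlying performance but carry different amounts of criterion-specific noise.

```python
# Illustrative sketch only: one predictor, several easily created
# criteria.  Each criterion adds a different amount of noise to the
# same underlying performance, so the predictor-criterion correlation
# depends on which criterion the researcher happens to pick.
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(3)
n = 400
true_perf = [random.gauss(0, 1) for _ in range(n)]
predictor = [p + random.gauss(0, 1) for p in true_perf]  # e.g., a survey scale

# Hypothetical criteria, differing only in criterion-specific noise.
criteria = {
    "manager rating": [p + random.gauss(0, 0.5) for p in true_perf],
    "percent salary": [p + random.gauss(0, 1.5) for p in true_perf],
    "self report":    [p + random.gauss(0, 3.0) for p in true_perf],
}
for name, ys in criteria.items():
    print(f"{name:15s} r = {pearson(predictor, ys):.2f}")
```

The same predictor looks strong against one criterion and weak against another, purely as a function of how each criterion was constructed, which mirrors the .19 to .55 spread reported above.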

We agree with others who argue that the subjective-objective performance measure distinction has been given too much attention (Campbell 1990). This is not to say that this is an unimportant issue and that there is no distinction. Rather, it is to say that researchers would serve their publics well by taking time to develop improved performance measures, be they subjective or objective, using some taxonomy as a basis. A taxonomy such as Campbell's provides a checklist against which to compare planned measures of performance and reduce the incidence of error that can occur by simply using readily available or easily created measures of sales performance.

Conceptual advances in the development of performance criteria have been substantial. But the actual measurement of sales performance criteria continues to be a problem. Researchers and raters alike seem limited in their ability to obtain or create accurate measures of performance. Objective measures are almost always deficient (Borman 1991). Nevertheless, the effort to improve performance measurement is worth continuing.

The sales literature heavily emphasizes the development of predictor variables. However, predictor variables are always subordinate to the criterion. Only through their relationships with the criterion variable do predictor variables achieve significance. If the criterion changes, or if a different criterion is used, a predictor variable may gain or lose significance; yet if a predictor variable changes, it does not necessarily follow that the criterion variable will change. And if no criteria were ever used, researchers would know nothing about the validity of their predictors. Consequently, sales research can be no better than the criterion variables used. Therefore, we must endeavor to thoroughly understand the nature of the criterion variable in use. As noted by Avila, Fern, and Mann (1988), "Using single indicators of a multidimensional construct not only oversimplifies a complex phenomenon but may account for the lack of significant relationships in prior research" (p. 53). Through an examination of the timing and nature of various sales performance measures, this study brings us one small step closer to understanding the importance of this quote and the need to provide much more information in research projects that undertake to measure sales performance.

Lawrence B. Chonko (Ph.D., University of Houston), Holloway Professor of Marketing, is author or co-author of five books, Direct Marketing, Direct Selling, and the Mature Consumer, Professional Selling, Managing Salespeople, Business, the Economy and World Affairs, and Ethics and Marketing Decision Making. He has also served as editor of the Journal of Personal Selling and Sales Management. Author of over 100 papers, his articles have appeared in leading journals.

Terry W. Loe (Ph.D., University of Memphis) is an Assistant Professor of Marketing and the Associate Director of the Center for Professional Selling at Baylor University. His research has appeared in the Journal of Business Ethics, the Journal of Marketing Management, the Journal of Contemporary Business Issues, and Proceedings in a number of regional, national and international conferences.

James A. Roberts (Ph.D., University of Nebraska-Lincoln) is W.A. Maya Professor of Entrepreneurship and Associate Professor of Marketing at Baylor University. He has had articles published in numerous journals. Areas of research include selling and sales force management, compulsive buying, socially and ecologically conscious consumer behavior, and advertising-related issues. Current research efforts focus on the marketing/entrepreneurship interface.

John F. (Jeff) Tanner, Jr. (Ph.D., University of Georgia) is Associate Dean for Undergraduate Programs at Baylor University. He is coauthor or author of ten books and over thirty journal articles, appearing in such journals as the Journal of Marketing.


References

Armstrong, J. Scott and Terry S. Overton (1977), "Estimating Nonresponse Bias in Mail Surveys," Journal of Marketing Research, 14 (August), 396-402.

Anderson, Erin and Richard Oliver (1987), "Perspectives on Behavior-Based Versus Outcome-Based Salesforce Control Systems," Journal of Marketing, 51 (October), 76-88.

Avila, Ramon A., Edward F. Fern and O. Karl Mann (1988), "Unraveling Criteria for Assessing the Performance of Salespeople: A Causal Analysis," Journal of Personal Selling and Sales Management, 8, 45-54.

Bagozzi, Richard P. (1978), "Salesforce Performance and Satisfaction as a Function of Individual Difference, Interpersonal, and Situational Factors," Journal of Marketing Research, 15, 517-531.

(1980a), "Performance and Satisfaction in an Industrial Sales Force: An Examination of Their Antecedents and Their Simultaneity," Journal of Marketing, 44 (No. 2), 65-77.

(1980b), "The Nature and Causes of Self Esteem, Performance and Satisfaction in the Sales Force: A Structural Equation Approach," Journal of Business, 53 (No. 3), 315-331.

Behrman, Douglas N. and William D. Perreault, Jr. (1982), "Measuring the Performance of Industrial Salespersons," Journal of Business Research, 10, 355-370.

Boles, James S., Naveen Donthu and Ritu Lohtia (1995), "Salesperson Evaluation Using Relative Performance Efficiency: The Application of Data Envelopment Analysis," The Journal of Personal Selling and Sales Management, 15, No. 3, 319.

Bommer, William H., Jonathan L. Johnson, Gregory A. Rich, Philip M. Podsakoff and Scott B. MacKenzie (1995), "On the Interchangeability of Objective and Subjective Measures of Employee Performance: A Meta-Analysis," Personnel Psychology, 48 (Autumn), 587-605.

Borman, Walter C. (1991), "Job Behavior, Performance and Effectiveness," in Marvin D. Dunnette and Leaetta Hough (eds.), Handbook of Industrial and Organizational Psychology, Vol. 2, 271-326.

Brown, Stephen P. and Robert A. Peterson (1993), "Antecedents and Consequences of Salesperson Job Satisfaction: A Meta-Analysis," Journal of Marketing Research, 30 (1), 63-77.

Campbell, John P. (1990), "Modeling the Performance Prediction Problem in Industrial and Organizational Psychology," in Marvin D. Dunnette and Leaetta Hough (eds.), Handbook of Industrial and Organizational Psychology, Vol. 2, 687-732.

Chonko, Lawrence B. and John J. Burnett, (1982), "Measuring the Importance of Ethical Situations as a Source of Role Conflict: A Survey of Salespeople, Sales Managers and Support Personnel," Journal of Personal Selling and Sales Management, (May), 14, 156-168.

Churchill, Gilbert A., Neil Ford, Steven W. Hartley and Orville C. Walker (1985), "The Determinants of Salesperson Performance: A Meta-Analysis," Journal of Marketing Research, (May), 103-118.

Cravens, David W., Robert W. Woodruff and Joe C. Stamper (1972), "An Analytical Approach for Evaluating Sales Territory Performance," Journal of Marketing, 36 (January), 31-37.

Raymond W. LaForge, Gregory M. Pickett, and Clifford E. Young (1993), "Incorporating a Quality Improvement Perspective into Measures of Salesperson Performance," Journal of Personal Selling and Sales Management, 13 (No. 1), 1-14.

Cron, William L. and Michael Levy (1987), "Sales Management Performance Evaluation: A Residual Income Perspective,"Journal of Personal Selling and Sales Management, 7, 57-66.

Enis, Ben M. and Lawrence B. Chonko (1978), "A Review of Personal Selling: Implications for Managers and Researchers," in G. Zaltman and T. Bonoma (eds.), Review of Marketing, (American Marketing Association, Chicago).

Epstein, Seymour (1980), "The Stability of Behavior, II: Implications for Psychological Research," American Psychologist, 35, 790-806.

Fishbein, M. and I. Ajzen (1975), Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Reading, MA: Addison-Wesley.

Futrell, Charles M., John E. Swan and John T. Todd (1976), "Job Performance Related to Management Control Systems for Pharmaceutical Salesmen," Journal of Marketing Research, 13, 25-33.

Heneman, R.L. (1986), "The Relationship Between Supervisory Ratings and Results-Oriented Measures of Performance: A Meta-analysis," Personnel Psychology, 39, 811-826.

Hess, Sidney W. and Stuart A. Samuels (1971), "Experiences with a Sales Districting Model," Management Science, 108, 41-54.

Kohli, Ajay K. (1989), "Effects of Supervisory Behavior: The Role of Individual Differences Among Salespeople," Journal of Marketing, 53 (No. 4), 40-50.

Morris, Michael H., Raymond W. LaForge and Jeffrey Allen (1994), "Salesperson Failure: Definition, Determinants, and Outcomes," Journal of Personal Selling and Sales Management, 14 (Winter), 1-16.

Oliver, Richard L. (1974), "Expectancy Theory Predictions of Salesmen's Performance," Journal of Marketing Research, 11, 243-253.

and Erin Anderson (1995), "Behavior- and Outcome-Based Sales Control Systems: Evidence and Consequences of Pure-Form and Hybrid Governance," Journal of Personal Selling and Sales Management, (Fall), No. 4, 1-16.

Rizzo, J.R., R.J. House and S.I. Lirtzman (1970), "Role Conflict and Role Ambiguity in Complex Organizations," Administrative Science Quarterly, 15, 150-163.

Ryans, Adrian B. and Charles B. Weinberg (1979), "Territory Sales Response," Journal of Marketing Research, 16, 453-465.

Sager, Jeffrey and Philip H. Wilson (1995), "Clarification of the Meaning of Job Stress in the Context of Sales Force Research," Journal of Personal Selling and Sales Management, 15 (Summer), 51-64.

Small, Robert J. and Larry J. Rosenberg (1977), "Determining Job Performance in the Industrial Sales Force," Industrial Marketing Management, 6 (No. 2), 99-102.

Smith, P.C. (1976), "Behaviors, Results and Organizational Effectiveness: The Problem of Criteria," in Marvin D. Dunnette (ed.), Handbook of Industrial and Organizational Psychology, Chicago: Rand McNally.

Sujan, Harish, Burton A. Weitz and Mita Sujan (1988), "Increasing Sales Productivity by Getting Salespeople to Work Smarter," Journal of Personal Selling and Sales Management, (August), 9-19.

Walker, Orville C., Jr., Gilbert A. Churchill, Jr., and Neil M. Ford (1977), "Motivation and Performance in Industrial Selling: Present Knowledge and Needed Research," Journal of Marketing Research, 14 (May), 156-168.

Weeks, William A. and Lynn R. Kahle (1990), "Salespeople's Time Use and Performance," Journal of Personal Selling and Sales Management, 10, 1 (Winter), 29-38.

Weitz, Burton A. (1981), "Effectiveness in Sales Interactions: A Contingency Framework," Journal of Marketing, 45, 85-103.

Weitz, J. (1961), "Criteria for Criteria," American Psychologist, 16, 228-231.
