
Multisource Assessment Programs in Organizations: An insider's perspective

Stephane Brutus; Mehrdad Derayeh
07/01/2002
Human Resource Development Quarterly

Copyright (c) 2002 ProQuest Information and Learning. All rights reserved.
Copyright Jossey-Bass, Incorporated Summer 2002

This study is an overview of multisource assessment (MSA) practices in organizations. As a performance evaluation process, MSA can take various forms and can be complex for an organization to use. Although the literature on MSA is extensive, little information exists on how these programs are perceived by the individuals responsible for their implementation and maintenance. The purpose of this study was twofold: to describe the current MSA practices used in organizations and to assess the issues associated with implementation and management of these practices from the perspective of the individual responsible for managing an MSA program. One hundred one companies located in Canada were surveyed for the study; almost half of these organizations (43 percent) were using MSA. Interviews with managers responsible for MSA in various organizations and some archival data on these organizations were the main sources of data for the study. The study revealed that the use of MSA differs widely from one company to another. In addition, results show that, once implemented, MSA requires a number of adjustments. The sources of these adjustments centered on employee resistance, lack of strategic purpose for MSA, poor design of the instrument, and problems with the technology used to support MSA. These results are discussed and a proposed research agenda is outlined.

Of the many trends that have swept organizations in the past decade, few have had the impact of multisource assessment (MSA). Romano (1994) estimated that, in 1992 alone, companies spent more than $150 million on developing and implementing MSA programs. Although the origins of MSA can be traced back to the late 1960s (Hedge, Borman, and Birkeland, 2001), its widespread application in organizations is relatively recent. The rise in the number of consulting groups specializing in MSA is an indication that interest in these programs is at an all-time high and shows no sign of leveling off.

Multisource assessment, also known as 360 degree feedback, refers to the process by which performance evaluations are collected from many individuals: supervisors, peers, subordinates, and customers (London and Smither, 1995; Dunnette, 1993; Tornow, 1993). Collection of performance information from multiple sources assumes that different evaluation perspectives offer unique information and thus add incremental validity to assessing individual performance (Borman, 1998). MSA embraces a dynamic and multidimensional view of individual performance, one that is best captured by these multiple perspectives (Borman, 1998; London and Smither, 1995). The shift in evaluation duties from the supervisor, the traditional bearer of evaluative tasks, to all relevant individuals has profound implications for individuals and organizations alike. Accordingly, MSA is a phenomenon that has taken center stage in the practice of and research on performance evaluation.

On a theoretical level, MSA stems from the argument that, in terms of reliability (reduction in measurement error) and validity (greater coverage of the individual performance domain), assessment of individual performance benefits from using multiple evaluators. One could argue that the popularity of MSA is due, in large part, to the intuitive appeal of this argument. However, empirical support for the impact of MSA on the reliability and the validity of performance evaluations has been somewhat mixed (for example, Mount and others, 1998; Greguras and Robie, 1998).

After roughly a decade of widespread adoption of MSA in organizations, the question "Does it work?" has been on the minds of many, researchers and practitioners alike (Antonioni, 1996; Borman, 1998; McLean, 1997; Nowack, Hartley, and Bradley, 1999; Tornow, 1993). Naturally, the answer to this question can be found at a number of levels. A majority of the empirical work conducted on MSA has focused on individual-level outcomes. This approach has helped clarify the affect, cognitions, and behaviors of the many actors directly involved in the process, mainly raters and ratees. Research has focused on the reactions of raters (London, Wohlers, and Gallagher, 1990), the reactions of ratees (Levy, Cawley, and Foti, 1998; Smither, Wohlers, and London, 1995), and the impact of MSA feedback on subsequent behavior (Atwater and Yammarino, 1997; Smither and others, 1995).

However, the impact of MSA is probably best addressed at the organizational level. Performance evaluation systems are among the most important human resource components of an organization (Judge and Ferris, 1993; Cleveland, Murphy, and Williams, 1989; Smith, Hornsby, and Shirmeyer, 1996); the belief that MSA yields performance information superior to that resulting from traditional performance appraisal systems may explain its popularity. The utility of feedback-based developmental practices and of any administrative decisions is intimately linked to the quality of the individual performance information collected. However, there is more to this appeal than measurement quality. An important consequence of MSA stems from the dissociation of evaluative duties from hierarchical standing. The democratization of performance appraisal is believed to have a substantial impact on organizational culture in terms of promoting participation, increasing the level of trust and satisfaction, and facilitating communication between employees (Hall, Leidecker, and DiMarco, 1996; Waldman and Atwater, 1998). For many, the latter may well be the most important outcome of MSA.

Interestingly, there is an increasing amount of evidence cautioning against an overly optimistic view toward the organizational benefits of MSA. Waldman, Atwater, and Antonioni (1998), for example, suggested that some of the motives behind adopting MSA might be misguided. Using institutional theory to support their argument, the authors propose that the desire to imitate competing organizations greatly influences an organization's decision to implement MSA. Criticism pertaining to the lack of strategic alignment of performance evaluation systems (Schneier, Shaw, and Beatty, 1991) has also been leveled against users of MSA. Finally, cultural factors have often been identified as a barrier to successful adoption of MSA (Waldman and Atwater, 1998).

The difficulty with evaluating MSA is that it is not a "categorically unique method" (London and Smither, 1995, p. 804). MSA is a process that can take various shapes. Reliance on multiple raters exacerbates issues such as rater anonymity and participants' trust; it renders this practice a lot more complex to manage than more traditional performance appraisal systems. Practical issues such as maintenance of anonymity and the method by which raters are selected represent tricky, but unavoidable, issues in the use of MSA.

Probably the most pressing decision for an organization is the choice between using MSA information for developmental or for administrative purposes. Most organizations use MSA as a tool to facilitate employee development (London and Smither, 1995; Waldman and Atwater, 1998). Implementing MSA for developmental purposes (commonly referred to as 360 degree feedback) centers on communicating MSA information to the focal employee. This process is aimed at increasing employees' self-awareness, helping them set goals, focusing on areas of development and, ultimately, altering their behavior to improve job performance (London and Smither, 1995). As such, MSA processes are typically embedded in a development system that includes feedback mechanisms, development planning, and support.

In addition to being put to developmental use, the same MSA information can also be fed into existing performance appraisal practices and used for job placement, pay decisions, and downsizing (Fleenor and Brutus, 2000). Scholars have warned against dual-purpose performance evaluations (Cleveland, Murphy, and Williams, 1989), and a similar debate has arisen in regard to the use of MSA (Dalton, 1996; London, 2000). The decision to use MSA for either purpose has been shown to be consequential. For example, research shows that performance ratings differ substantially as a function of the intended use of the process (Bracken, 1994; Timmereck, 1995). In a recent survey, Timmereck and Bracken (1996) reported that 50 percent of those organizations that had adopted MSA for decision-making purposes abandoned it within a year because of poor acceptance by users and rating inflation. The point that we want to make here is not to promote the virtue of a particular use of MSA; rather, it is to highlight the multiple variations of the process available to an organization and the importance of understanding the ramifications of these variations.

Surprisingly, most of the evidence on whether MSA "works" from an organizational standpoint is indirect. For example, the literature is rife with lists of prescriptions and key elements for successful implementation of MSA (Antonioni, 1996; Hall, Leidecker, and DiMarco, 1996; Lepsinger and Lucia, 1998; Nowack, Hartley, and Bradley, 1999; Waldman, Atwater, and Antonioni, 1998). These lists include, among other suggestions, identifying key stakeholders (e.g., Lepsinger and Lucia, 1998), clearly communicating the process to future users (Hall, Leidecker, and DiMarco, 1996), using extensive pilot testing (Waldman, Atwater, and Antonioni, 1998), training all users (Hall, Leidecker, and DiMarco, 1996), identifying measures to evaluate the effectiveness of the program (Nowack, Hartley, and Bradley, 1999), and focusing on appropriate performance dimensions (London and Smither, 1995). These prescriptions are useful but may lack external validity as they are mostly anecdotal in nature and are based on an outsider's perspective of the process (usually that of the consultant).

As more organizations seem to be increasing their use of MSA, there is an urgent need to go beyond the perspective of the individual user (the rater or ratee) and the consultant and take stock of the significant experiences of the individuals responsible for implementing MSA programs. The goals of this study are (1) to describe current MSA practices used in organizations and (2) to assess the issues associated with implementing and managing these practices from the perspective of the individual responsible for managing them.

Methodology

The following section describes the methodology used in this study.

Population and Sample. The targeted population consisted of all Canadian organizations of more than one thousand employees, a total of 506 organizations at the time of the study. Large organizations were targeted because they are likely to adopt MSA programs (Edwards and Ewen, 1998). These organizations were identified by means of Strategic, an online database provided by the government of Canada to offer comprehensive business and consumer information on Canadian companies. Two hundred six organizations were randomly selected from this list and contacted by a member of the research team. One hundred one individuals (49 percent) agreed to participate in the study.

The results of independent sample t-tests indicated no evidence of sample bias; there was no significant difference in the number of employees or in total sales volume between participating and nonparticipating organizations. The size of the 101 organizations surveyed ranged from 1,010 to 51,000 employees, with a mean of 4,778. Their total sales volume (a categorical variable) was, on average, between CAN$25,000,000 and CAN$49,999,999. The organizations surveyed represented a variety of industries: manufacturing (38 percent), food and agricultural (11 percent), computer (10 percent), mining and forestry (6 percent), consulting (5 percent), education (4 percent), aerospace (4 percent), health (3 percent), energy (3 percent), finance (2 percent), and others (4 percent).
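To make the sample-bias check above concrete, the sketch below shows how an independent-samples t-test of this kind can be run in Python. It is purely illustrative: the employee counts are hypothetical placeholders, not the study's archival data, and Welch's correction is used here as a common default when group variances may differ.

    # Illustrative only: hypothetical employee counts for participating and
    # nonparticipating organizations (not the study's archival data).
    import numpy as np
    from scipy import stats

    participating = np.array([1200, 3400, 5100, 8000, 2500, 12000])
    nonparticipating = np.array([1500, 2900, 4700, 7600, 3100, 9800])

    # Independent-samples t-test; equal_var=False applies Welch's correction.
    t_stat, p_value = stats.ttest_ind(participating, nonparticipating, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # A p value at or above .05 would be consistent with no evidence of sample bias.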

Data Collection. Interviews were conducted with those individuals responsible for MSA processes in their respective organizations. In addition to the interview data, two archival measures were collected for each organization: number of employees and sales volume.

An interview protocol was developed for this study. The content of the interview was based on a review of the available literature on characteristics of MSA. The protocol was revised as a result of a pilot test conducted with three human resource managers currently overseeing MSA programs. The pilot study also permitted calibration of the interviewers as these three interviews were conducted jointly. The final interview protocol contained eighteen questions pertaining to the characteristics of the MSA in place and the issues related to its usage. The interview protocol included open-ended questions (for example, What decisions are based on the results?) and questions that were more restrictive (such as What type of technology supports the MSA process: Web, phone, paper, or other?). If an organization had more than one MSA process in place, the participant was asked to describe the programs separately. Data collection lasted three months and was completed in the second quarter of the year 2000.

Telephone interviews targeted people who oversaw the MSA initiatives in place in their organization. To locate these individuals, a telephone call was made to each organization to identify the person responsible for the human resource function. This individual was then contacted and asked to refer the researcher to the appropriate MSA representative. Of the 101 organizations surveyed, close to half (44, or 43 percent) used MSA at the time of the interview. If MSA was not used, the data collection process was terminated. Otherwise, the individual responsible for MSA was asked to participate in a telephone interview to be scheduled at a convenient time. For various reasons, 17 of the 44 users of MSA could not be interviewed; a total of 27 interviews were conducted. At the beginning of these interviews, the researcher made clear to participants that their anonymity and that of their organization would be protected. Most participants were enthusiastic about the project and appeared to answer the interview questions quite freely and openly.

A majority of the respondents were from the human resource function: sixteen were vice presidents or directors of human resources (59 percent), six were managers of training or of organizational development (22 percent), two were managers of competencies (7 percent), and one was the manager of recruitment and selection (4 percent) while another managed succession planning (4 percent). Only two respondents (7 percent) were general managers who did not hold a formal human resource function.

Data Analysis. Responses collected from the interviews were compared and aggregated over the course of several meetings. Agreement between the researchers as to the meaning of answers was high as the interview questions were quite specific. When present, disagreement between the two researchers was resolved by consensus. Emergent themes were tabulated according to frequency of occurrence.
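As a small illustration of the tabulation step just described, the sketch below counts how often each emergent theme appears across coded interviews. The theme labels and coded responses are hypothetical placeholders, not the study's data.

    # Illustrative only: hypothetical theme codes assigned to each interview.
    from collections import Counter

    coded_interviews = [
        ["time_and_effort", "lack_of_trust"],
        ["time_and_effort"],
        ["lack_of_strategy", "time_and_effort", "instrument_design"],
    ]

    # Each theme appears at most once per interview, so its count equals the
    # number of interviews in which it was mentioned.
    theme_counts = Counter(theme for interview in coded_interviews for theme in interview)
    for theme, count in theme_counts.most_common():
        print(f"{theme}: mentioned in {count} interview(s)")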

Results

The results of the study are summarized into two broad sections: description of the MSA practices used in organizations and issues involved in their implementation and management.

Description of the MSA Practices Used in Organizations. The first research question focused on the description of the MSA used. The participants were asked to describe, in detail, the MSA program in place in their organization. These results are summarized in Figure 1.

Extent of Use. Forty-three percent of the organizations surveyed used MSA. This adoption rate shows a substantial increase from Antonioni's research (1996), in which 20 percent of the organizations surveyed used MSA. Worthy of mention is the fact that nine of the organizations that were not using MSA at the time of the interview were planning to implement it in the near future. To investigate organizational trends in adopting MSA, comparison was made between the forty-four organizations using MSA and the fifty-seven not using it. Although no difference was found in terms of sales figures, a significant effect was found in terms of the number of employees (t(93) = -3.18, p < .01). Organizations using MSA had significantly more employees (average of 8,001) than those not using it (average of 2,362 employees).

Number of Programs. A large majority of organizations using MSA had, at the time of the interview, implemented a single program (twenty-four, or 77 percent). Two organizations used two distinct programs; just one had three. These programs involved different instruments and were targeted at different segments of the organization. In total, the survey allowed the research team to investigate the characteristics of thirty-one separate MSA programs.

Age. The "oldest" program described was introduced ten years ago; on average, the programs were initiated 3.5 years ago. These results point to the increasing popularity of MSA.

Developmental or Administrative Purpose. A large majority of the MSA programs investigated claimed to be for developmental purposes only (twenty-three, or 74 percent). However, many respondents mentioned confusion regarding these developmental programs. In certain situations, MSA information collected for a developmental purpose somehow found its way to individuals responsible for making administrative decisions. For example, a majority of these so-called development programs used immediate supervisors to facilitate the link with developmental planning (twelve of the twenty-three, or 52 percent). Many respondents commented on the dilemma of having the same person who facilitates MSA also being the one responsible for the performance management process. In one extreme example, a copy of the "developmental" feedback report was sent directly to the manager and reviewed with the senior management team.

The remaining programs, with the exception of one, were used for both development and administrative purposes (seven, or 23 percent). Two organizations used an original way to skirt this thorny issue, combining developmental and administrative use through two separate sets of questions: one for development (whose results were sent to the employee) and another for administrative purposes (whose results were also sent to the supervisor).

Links with Development Support. The thirty programs that were used, as a whole or in part, for developmental purposes were investigated further in terms of their links with developmental support. Some programs used an external facilitator (separate from the recipient's immediate work unit) to manage the feedback process (eleven, or 37 percent). A few of these facilitators were external consultants (eight, or 27 percent) while others relied on internal HR staff to do the facilitation (three, or 10 percent). As stated in the previous section, many organizations used the recipient's supervisor to facilitate the feedback. The extent to which the feedback was linked to a development plan tended to vary. A majority of programs had a formal link with developmental planning activities (twenty-one, or 70 percent).

Target Employees. The group of employees targeted by the MSA program varied. Although a few processes (five, or 16 percent) were used for every employee in the organization, most were specifically intended for managerial positions (nineteen, or 61 percent). When the program was used for managers, the most common target was the total managerial population (fourteen, or 45 percent) while others focused only on high-potentials (four, or 13 percent); one of these programs was used for corporate trainees only. Finally, some programs (seven, or 23 percent) were reserved exclusively for executives.

Internal Versus External Design and Management. As stated in the introduction, MSA is a complex system to develop and manage. Results show that a majority of MSA programs rely on external consultants (twenty-six, or 83 percent). In some instances, an external group used their own instrument and administered the whole process (eleven, or 35 percent), while in others consultants shaped the process according to the organization's competency models (fifteen, or 48 percent). Relatively few organizations did not use external assistance for their MSA programs (five, or 16 percent).

Use of Technology. MSA programs still rely on traditional paper administration for data collection (twenty, or 65 percent) but other technologies were also used: Web (four, or 13 percent), e-mail (five, or 16 percent), and computer diskette (two, or 6 percent).

Frequency of Administration. In terms of frequency of administration, a majority of organizations used MSA cyclically: annually (twelve, or 38 percent), every two years (three, or 9 percent), or more than once a year (three, or 9 percent). Many used MSA infrequently, as needed (seven, or 24 percent); these organizations used it to develop high-potentials, to address performance problems with some employees, or only if requested by an employee. Interestingly, some organizations had used MSA only once and had no plan to use it in the future (six, or 19 percent).

Rater Selection. A majority of programs gave complete freedom to employees in choosing who would rate them (twenty-seven, or 87 percent). Two programs had supervisors sanction the employee's chosen list of raters, in that employees had to submit their list of raters for approval prior to distribution of the surveys. Only one program had the supervisor choose the raters. In an interesting variation on this theme, two programs specified the percentage of raters to be selected by the employee and by the supervisor (for instance, fifty-fifty).

Special Features of Program. Many programs used mechanisms that are worthy of mention because of their uniqueness. One program required employees to select raters one year prior to completing the evaluation. This was aimed at raising raters' awareness of their evaluative duties and augmenting the quality of their evaluation. In another organization, supervisors managed the whole process; they selected all raters for their subordinates and received the MSA report directly. Moreover, ratings included in the reports were not anonymous (the supervisor knew who provided the ratings) and supervisors could use their discretion to "screen" the information or seek additional data from selected raters.

Issues Involved with MSA Usage. In this section, we focus on the issues that arose as a result of MSA, and on the solutions that were proposed by the various respondents. The strongest theme to emerge from the interviews was the resistance that various constituencies offered to the MSA process. This resistance appears to stem from three sources: time and effort required, lack of trust, and lack of strategy.

Resistance Owing to Time and Effort. Twenty-one (out of twenty-seven) respondents mentioned that the time and effort associated with implementing and managing MSA may be a deterrent to future use. These comments focused on the burden of introducing MSA. As one participant stated, "The process is very expensive in terms of infrastructure; it is also very time-consuming; there is a two-to-three-month time lag between the assessment and the feedback period." Another said, "It is simply not worth it; it is way too labor-, time-, cost-, and energy-consuming."

Furthermore, time requirements on the part of the user also arose as a significant issue regarding MSA. Seventeen respondents (63 percent) mentioned this as being a problem. One participant stated that "many employees received a lot of surveys, and the process simply overloaded a few managers." Another respondent gave the example of a manager who received more than three thousand feedback requests! This issue seems especially problematic for high-level managers; respondents used expressions such as "being bombarded" and "flooded" to describe the number of evaluation requests submitted to them.

The proposed solutions to rater overload took various forms. Five respondents (18 percent) mentioned the transition from paper-based to Web-based administration as a way to increase efficiency. Four respondents (15 percent) also mentioned using intensive follow-up to get the surveys completed. Some (three, or 11 percent) discussed modifying the survey itself (for example, reducing the number of performance items) while others (three, also 11 percent) lowered the number of raters required for completion of the process. Two organizations tied completion of the process to the compensation of key managers. In both cases, however, these incentives were perceived to be harmful because they diminished the legitimacy of the process. Other solutions to rater overload were less frequent administration or staggered administration of the process, offering MSA to fewer employees (only high-potentials, only "red flag" employees), and allowing a longer time frame to complete the surveys.

Resistance Owing to Lack of Trust. Another emerging theme related to participant resistance is the user's trust (or lack thereof) in the process. Nine respondents (33 percent) commented on the fact that raters viewed MSA suspiciously. These negative attitudes appeared to have a significant impact on the process. Despite the promise of rater anonymity, individuals feared identification by a peer or supervisor. This led to rating inflation or plain refusal to participate in the process. One respondent said, "Employees did not want to evaluate their direct supervisors for fear that their feedback could be traced back to them."

An additional repercussion of the lack of trust was political maneuvering in rater selection. Four respondents (15 percent) commented on reliance on "wrong" or "safe" raters and use of "I scratch your back, you scratch mine" strategies. Interestingly, the use of anonymity appears to also have a downside in that it offers cover for certain counterproductive behaviors. One respondent commented, "Some raters abused the system and used it to stab colleagues." In three instances, respondents commented that anonymity ran counter to the culture of openness present in their organizations.

The solutions offered to counter the lack of trust focused on the need to communicate, clearly and extensively, the purpose of the process. A respondent commented that "when people do not understand the benefits, either of being critical in evaluating or really trying to use the feedback, they do not buy into it." Communication efforts seem particularly important before the implementation and when the feedback results are obtained. Prior to implementation, sessions targeting all raters (not simply the ratee) focused on clarifying the purpose of the process, explaining the steps involved, training future raters on how to use the instrument, and putting the emphasis on how anonymity would be protected. Other sessions were conducted at the feedback stage and targeted mostly the ratees and their supervisors. These focused on report interpretation, use of the information, and the future use of the process. Finally, three respondents (11 percent) mentioned the importance of being successful with the first group of participants in gaining the trust of subsequent users. One comment highlights this issue: "The first wave of participants, who were the top echelons of the organization, never really took the process seriously. This had a snowball effect for the rest of the organization."

Resistance Owing to Lack of Strategy. As stated earlier, the lack of strategic alignment of performance evaluations with organizational objectives is a common weakness (Schneier, Shaw, and Beatty, 1991). The absence of links between MSA and other developmental systems was mentioned by five respondents (19 percent). One participant commented that "we do not have a clear strategy for using MSA. We use it almost randomly. We need to integrate it with the other systems in place." Note that seventeen organizations surveyed (63 percent) used their own competency model to design their MSA. In other words, this lack of strategy did not refer to the content of the competencies measured; rather, it had more to do with the use of MSA and its integration with other systems. The strategic issues focused on participant selection (who gets it), coherent incorporation with formal performance appraisal practices, and the need to follow up with developmental plans and other appropriate developmental support (such as coaching).

Design Issues. The need to anchor an MSA process with well-designed surveys is critical to its success (London and Beatty, 1993). Many respondents questioned the relevance of the MSA instruments used. A few (four, or 15 percent) commented that MSA instruments were not detailed enough or contained poorly phrased items, while others (three, or 7 percent) mentioned the lack of face validity of the instrument as a deterrent for users who failed to see the relevance of many items. This was especially evident when the same instrument was used for employees at varying organizational levels, an issue raised by four respondents. One respondent commented, "If the raters see no link between the questions used in the surveys and our competencies, how are they supposed to take the feedback seriously?"

As a solution to this issue, a few respondents identified the need to use separate instruments for each organizational level. Also, using behavioral anchors for each item was suggested. Two respondents argued for reliance on more qualitative feedback; written statements were considered a good alternative for addressing some of these design concerns.

Technical Issues. Although advances in technology have made implementing and managing MSA a lot easier (Bracken, Summers, and Fleenor, 1998; Coates, 1998), technology was also mentioned as the cause of unique issues. Five participants experienced major technical difficulties during administration of MSA. Technology failure was identified as the cause of substantial delays in implementation. Other technological glitches occurred more sporadically throughout the process: feedback reports sent to the wrong person, data processing errors, and the spread of a virus via diskette administration.

Other Trends. Analysis of the relationship between the perceived success of MSA usage and characteristics of the processes revealed four general themes. Two were made salient by contrasting the results of the twenty organizations that reported meeting all of their objectives through the use of MSA with the five stating that none or only part of their objectives were met. Note that two respondents felt it was too early to evaluate the program in place.

First, there appears to be a relationship between the degree to which MSA relied on external consultants and the perceived utility of the process. Five of the twenty organizations that met their objectives relied on external consultants for design and implementation of MSA (25 percent); four of the five that did not meet their objectives relied on external consultants (80 percent). Second, a majority of those organizations that were successful used feedback facilitation (twelve, or 63 percent). Interestingly, none of the organizations that failed to meet their objectives with MSA facilitated the feedback process. In general, these observations are indicative of the importance of planning and strategic use of MSA. Successful companies were less reactive; they had greater initial planning and anticipated potential problems before they arose.

Discussion

Although much has been written on MSA, the perspective of the organization using the process has rarely been considered in the literature. The purpose of this study was to describe MSA programs currently used in organizations and to outline some of the lessons that could be derived from the experience of individuals responsible for these processes. The results of the study confirm the widespread usage of MSA previously reported in the literature (London and Smither, 1995; Romano, 1994; Timmereck and Bracken, 1996), as more than half of the companies surveyed used MSA at the time of the study or were considering using it in the near future. Note that these results are conservative since our criterion for inclusion was fairly restrictive and only those programs that used the full 360 degree evaluations (self, peers, subordinates, and supervisor) were considered. More limited applications of multisource assessment, such as relying solely on subordinate or customer evaluations, are also prevalent in organizations (Walker and Smither, 1999).

If anything, this study confirmed the fact that MSA is not a unitary process (London and Smither, 1995). On the contrary, the diversity of MSA practices is considerable. This study demonstrated that MSA practices differed widely in their stated purpose, the group of employees targeted, whether external consultancies were relied upon, the process used for rater selection, and the type of technological platform used. The breadth of this practice places substantial demands on the manager in terms of choosing or designing an equitable program for the organization. In our opinion, selecting the right MSA is critical because, despite the overall positive views held of MSA, our results also showed variations in the perceived success of MSA. Many of the organizations surveyed actually curtailed their use of MSA as a result of poor implementation.

Basically, MSA is an expensive system; it requires a substantial amount of time and resources to implement it successfully. Although some may argue that any new HRD initiative requires such effort, what distinguishes MSA from other initiatives may be the energy to sustain the process over time. The number of constituents involved in producing the expected developmental or administrative outcomes is high; it comprises those in top management in charge of communicating the process, the subordinates or peers or supervisors solicited to evaluate each focal employee, the recipients of these evaluations expected to put these results to good use, and all the personnel used to support them in their development (supervisors, coaches, and so on). The need to obtain the commitment of all these constituents is critical to the success of MSA.

The issue of accountability is the key to obtaining the commitment of all these actors. For raters, accountability is in direct opposition to provision of anonymity, since the presence of the latter severely limits the former (London, Smither, and Adsit, 1997). On the one hand, the protection afforded by anonymity is essential in producing valid evaluations (Dalton, 1996). On the other hand, this same protection allows users to skirt their responsibility. Apparently, organizations have yet to produce a satisfying solution to this paradox. Instead, many have blurred the distinction in use of MSA, as is the case when a supervisor handles both the developmental and administrative aspects of the process.

This may well jeopardize the integrity of MSA and its impact. Those who have committed their MSA practices to development and purposefully isolated them from decision-making processes appear to be the most satisfied. Clearly establishing the purpose of the process, and making its use consistent with that purpose, is important since the success of MSA rests on the tacit trust that exists between users and the organization. Organizational readiness for MSA has been discussed by Waldman and Atwater (1998). They made the distinction between using MSA to instill trust and cooperation in an organization, which they refer to as the "push" phenomenon, and using it to formalize an already existing trusting and collaborative culture (the "pull" phenomenon). HRD professionals need to be cognizant of the maturity of their organization before engaging in MSA.

The lack of strategic alignment of MSA programs with other activities was surprising. Despite its associated costs, many organizations have engaged in this process without considering its alignment with other organizational processes (Waldman and Atwater, 1998). Not surprising was the fact that those who did mention integration of MSA with existing organizational competencies, developmental planning, and targeted employee groups reported the most success.

Although many have attributed the surge of MSA to advances in technology (Coates, 1998; Bracken, Summers, and Fleenor, 1998), this study revealed many problems associated with technological support. It may be true that, without technology, MSA would just be too cumbersome to administer. However, that same technology appears to also create some issues for organizations.

Limitations

In answering the question "Does MSA work?" multiple perspectives have to be considered, a conclusion in the true spirit of MSA! As stated earlier, MSA involves a large number of individuals; the respondents used in this study present only one of many valid perspectives on this process. Although our reliance on those individuals responsible for MSA addresses a perspective yet untapped by the literature, it also creates issues. For one, some programs may have been left undetected. Accordingly, our results may represent a conservative estimate of the adoption of MSA programs. In addition, other "managerial" perspectives, such as that of the supervisor using MSA for his or her employees, were not addressed in this study. Finally, much of the data on which this study is based are subjective in nature (issues related to MSA, success of MSA, and so forth). Relying on a single respondent did not allow us to capture the within-organization variation in perception that was most likely present.

It is important to note that, although human resource practices in Canada closely resemble those in place in the United States, the use of Canadian organizations may limit the generalizability of our findings. Brutus, Leslie, and McDonald (2001) have commented on the multitude of issues involved in applying nontraditional performance appraisal in various cultures. The extent to which MSA is used outside North America certainly deserves further exploration.

Conclusion

This study highlighted the dynamic nature of MSA programs and the need for those responsible for their implementation and administration to be aware of the implications of using such a system. A cursory look at the existing literature on MSA paints an overly optimistic picture of the effectiveness of these programs; the reality of those involved in designing and administering the programs is a more nuanced story. Organizations need to engage in the MSA process with clear and realistic expectations in terms of the time and energy required to bring the process to fruition, the state of readiness of their employees, and the degree of fit between MSA and existing HRD practices.

Authors:

  • Stephane Brutus is assistant professor at the John Molson School of Business, Concordia University.
  • Mehrdad Derayeh is a graduate student in industrial-organizational psychology at the University of Waterloo.

References:

Antonioni, D. (1996). Designing an effective 360 degree appraisal feedback process. Organizational Dynamics, 25, 24-38.

Atwater, L., & Yammarino, F. J. (1997). Antecedents and consequences of self-other rating agreement: A review and model. In G. Ferris (Ed.), Research in Personnel and Human Resources Management. Greenwich, CT: JAI Press.

Borman, W. C. (1998). 360 degree ratings: An analysis of assumptions and a research agenda for evaluating their validity. Human Resource Management Review, 7, 299-315.

Bracken, D. W. (1994, Sept.). Straight talk about multirater feedback. Training and Development, 44-51.

Bracken, D. W., Summers, L., & Fleenor, J. (1998, Aug.). High-tech 360. Training and Development, 42-45.

Brutus, S., Leslie, J., & McDonald, D. M. (2000). Cross-cultural issues in multisource feedback. In D. Bracken, C. W. Timmereck, & A. Church (Eds.), Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Cleveland, J. N., Murphy, K. R., & Williams, R. E. (1989). Multiple uses of performance appraisal: Prevalence and correlates. Journal of Applied Psychology, 74, 130-135.

Coates, D. E. (1998). Breakthrough in multisource feedback software. Human Resources Professional, 6, 7-11.

Dalton, M. (1996). Multirater feedback and conditions for change. Consulting Psychology Journal, 48, 12-16.

Dunnette, M. D. (1993). My hammer or your hammer? Human Resource Management, 32, 373-384.

Edwards, M. R., & Ewen, A. J. (1998). Multisource assessment survey of industry practice. Paper presented at the 360 Degree Feedback Conference, Orlando.

Fleenor, J. W., & Brutus, S. (2000). The use of multisource feedback for succession planning, staffing, and severance. In D. Bracken, C. W. Timmereck, & A. Church (Eds.), Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Greguras, G. J., & Robie, C. (1998). A new look at within-source interrater reliability of 360-degree feedback ratings. Journal of Applied Psychology, 6, 960-968.

Hall, J. L., Leidecker, J. K., & DiMarco, C. (1996). What we know about upward appraisal of management: Facilitating the future use of UPAs. Human Resource Development Quarterly, 7, 209-226.

Hedge, J. W., Borman, W. C., & Birkeland, S. A. (2001). History and development of multisource feedback as a methodology. In D. Bracken, C. W. Timmereck, & A. Church (Eds.), Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Judge, T. A., & Ferris, G. R. (1993). Social context of performance evaluation decisions. Academy of Management Journal, 36, 80-105.

Lepsinger, R., & Lucia, A. L. (1998, Feb.). Creating champions for 360 degree feedback. Training and Development, 49-52.

Levy, P. E., Cawley, B. D., & Foti, R. J. (1998). Reactions to appraisal discrepancies: Performance ratings and attributions. Journal of Business and Psychology, 12, 437-455.

London, M. (2000). The great debate: Should 360 be used for administration or development only? In D. Bracken, C. W. Timmereck, & A. Church (Eds.), Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

London, M., & Beatty, R. W. (1993). A feedback approach to management development. Human Resource Management, 32, 353-372.

London, M., & Smither, J. W. (1995). Can multisource feedback change perceptions of goal accomplishment, self-evaluations, and performance-related outcomes? Theory-based applications and direction for research. Personnel Psychology, 48, 803-839.

London, M., Smither, J. W., & Adsit, D. J. (1997). Accountability: The Achilles' heel of multisource feedback. Group and Organization Management, 22, 162-184.

London, M., Wohlers, A. J., & Gallagher, P. (1990). 360 feedback surveys: A source of feedback to guide managerial development. Journal of Management Development, 9, 17-31.

McLean, G. N. (1997). Multirater 360 feedback. In L. J. Bassi & D. Russ-Eft (Eds.), What works: Assessment, development, and measurement. Alexandria, VA: American Society for Training and Development.

Mount, M., Judge, T., Scullen, S. E., Systma, M. R., & Hezlett, S. A. (1998). Trait, rater, and level effects in 360-degree performance ratings. Personnel Psychology, 51, 557-576.

Nowack, K. M., Hartley, J., & Bradley, W. (1999, Apr.). How to evaluate your 360 feedback efforts. Training and Development, 48-53.

Romano, C. (1994). Conquering the fear of feedback. HR Focus, 71, 9-19.

Schneier, C. E., Shaw, D., & Beatty, R. W. (1991). Performance measurement and management: A new tool for strategy execution. Human Resource Management, 30, 279-301.

Smith, B. N., Hornsby, J. S., & Shirmeyer, R. (1996, Summer). Current trends in performance appraisal: An examination of managerial practice. SAM Advanced Management Journal, 10-15.

Smither, J. W., London, M., Vasilopoulos, N. L., Reilly, R. R., Millsap, R. E., & Salvemini, N. (1995). An examination of the effects of an upward feedback program over time. Personnel Psychology, 48, 1-34.

Smither, J. W., Wohlers, A. J., & London, M. (1995). A field study of reactions to normative versus individualized upward feedback. Group and Organization Management, 20, 61-89.

Timmereck, C. W. (1995). Upward feedback in the trenches: Challenges and realities. Paper presented in May at the Tenth Annual Conference of the Society for Industrial and Organizational Psychology, Orlando.

Timmereck, C. W., & Bracken, D. (1996). Multisource assessment: Reinforcing the preferred "means" to the end. Paper presented in April at the meeting of the Society for Industrial and Organizational Psychology, San Diego.

Tornow, W. W. (1993). Perceptions or reality: Is multi-perspective measurement a means or an end? Human Resource Management, 32, 221-230.

Waldman, D. A., & Atwater, L. E. (1998). The power of 360-degree feedback: How to leverage performance evaluations for top productivity. Houston: Gulf.

Waldman, D. A., Atwater, L. E., & Antonioni, D. (1998). Has 360-degree feedback gone amok? Academy of Management Executive, 12, 86-94.

Walker, A. G., & Smither, J. W. (1999). A five-year study of upward feedback: What managers do with their results matters. Personnel Psychology, 52, 393-423.
