One of the key benefits of 360 degree feedback is the ability to gain perspective from a wide variety of sources. As a result, the selection of raters for 360 degree feedback is important to ensure the acceptance of feedback and ownership over future development.
- Self ratings should always be included.
- The participant's Manager/s should be required to provide feedback.
- The participant's Direct Reports should all be invited to provide feedback. Inviting all Direct Reports sends the message that feedback from all members of the team is equally important, even when spans of control are large.
How to Select Other 360 Raters:
Beyond Self, Manager and Direct Reports, other rater group roles can include:
- Peers
- Internal Customers/Business Partners
- External Customers
While not all three of these additional groups need to be included, most participants will have a group of Peers with whom they interact on a frequent basis. Peers can provide insightful feedback on interpersonal relations and teamwork behaviors not observed by Direct Reports. Similarly, including External Customers in the feedback process can add a unique perspective on behaviors related to service quality.
Raters for these groups are typically selected in one of four ways:
- Raters are selected by the participant based upon a set of guidelines,
- Raters are selected by the participant's direct Manager/s,
- Raters are selected by HR based upon a set of specific guidelines (e.g., all peers must be included), or
- Raters are selected using a collaborative process where participant and manager agree upon the list of raters.
Some research has shown that when used for developmental purposes, allowing participants to select their own raters may enhance the acceptance of feedback without reducing the accuracy of ratings (Nieman-Gonder, 2006). However, keep in mind that additional input and even formal rater list approval from a participant's manager can be very beneficial in providing a balanced perspective on performance.
Whether 360 degree feedback is being integrated with performance evaluation or is being used for strictly developmental purposes, the fourth option - a collaborative process - is the optimal choice. Ideally, the participant selects a group of raters, and the manager then reviews the list and makes suggestions before it is finalized. This facilitates transparency in the process and ensures a well-rounded list of feedback providers.
Interaction is important to consider.
The nature of and amount of interaction that a rater has with the participant plays an important role in the accuracy and helpfulness of the feedback that is provided.
- Raters should have known and worked with the participant for a minimum of 4 to 6 months.
- Raters should have frequent work-related interactions with the participant.
- Raters should understand the nature of the participant's role and job duties.
- Managers should assist participants in selecting raters with whom they work well, as well as individuals with whom they have not worked well.
If possible, it is best to give each rater an opportunity to opt out of the ratings process should he/she feel unable to provide accurate ratings due to limited time working with the participant.
A flexible 360 degree feedback solution can make all the difference in helping participants easily select their raters, managing the approval process, and inviting raters to provide feedback. Click here to download viaPeople's Ultimate Guide to Selecting a 360 Vendor.