Guest Contributor:
Scott Erker, PhD
SVP Selection Solutions – DDI
“Champions eat feedback for breakfast.” In a business world where learning and continuous improvement are necessary for survival, feedback (and the subsequent actions people take to improve!) is central to success. Many organizations have turned to 360-degree or multi-source feedback programs as standard procedure for upping their game on feedback. The hope is that making feedback a standard operating procedure will increase timeliness, accuracy, and focus so that the information gained will drive insight and higher levels of performance. Unfortunately, the gap between hope and reality is wide.

There are a number of common problems with multi-source feedback programs that diminish the value and impact of the practice:

  • Unreliable evaluation of competencies by raters (i.e., managers, direct reports, peers, others)
  • Overinflated or underinflated evaluation of competencies by raters because of personal relationships and concern about how the data will be used
  • Subjects see feedback as a “report card” rather than a road map for development
  • Subjects often have difficulty accepting the need for action on their multi-source feedback
  • Subjects (and managers!) are not skilled at interpreting the reports
  • Difficulty translating feedback into meaningful development plans

When implementation goes poorly, the damage from a multi-source survey can be extreme: inaccurate data for making talent and development decisions, mistrust in management over the purpose of the program, apathy toward participation, no meaningful action taken to improve performance, and a perceived waste of time by business partners.

For a multi-source feedback program to meet its intended purpose, a number of important factors should be considered. These factors will shape subjects’, raters’, and stakeholders’ expectations for the program:

  • Purpose of the effort and business context – how will success of the program be measured?
  • Data sharing policy – who will see the data and what decisions will be made based on results?
  • Training and calibration required for subjects and raters – how much time will it take?
  • Next steps after the data are gathered – will subjects get feedback and in what manner? What support will be provided for follow-up (e.g., development planning)?
  • Program evaluation – who will know if the program is successful? How will they know? What are the implications if it is or is not?

These factors might make perfect intuitive sense, but the importance of tailoring them to the specifics of your multi-source feedback program cannot be overstated. By taking the time upfront to answer these questions, and to communicate transparently with participants, you’re much more likely to have a successful implementation.

At the Qualtrics Insight Summit, I’ll be presenting on this very issue – how 360-degree feedback goes wrong and what organizations can do to fix it. I’ll be joined at the conference by others talking about employee feedback, leadership development and employee engagement. There are few insights as important to an organization as the insights required to help employees improve, and I’m excited to be part of the Summit to share my experience.