Innovation Spotlight

Humans vs AI: The Battle for Reliable Language Testing

November 17, 2021 | 11:00 AM MDT 

Summary

There are a lot of things that humans are good at, and a lot of things they’re not so good at. Is language testing one of the things we’re not very good at? Join industry experts as they debate whether humans or technology deliver more effective language skills testing.

Highlights

  • It’s possible to use human evaluators for language ability screening, but it’s not a great solution when you’re hiring frequently or screening many candidates at once.
  • When human evaluators conduct language interviews, their results are often biased by factors like fatigue, distraction, and hiring quotas.
  • Our customer data has shown that AI evaluates candidates far more consistently and accurately based on their actual language ability.
  • Humans are best at evaluating candidate qualities like personability, warmth, and ability to connect with customers—all important qualities for agent success.

"Raters aren’t independent of the draws on their time and their attention and their emotions, and you stop seeing the [systematic benefits of mass language assessment efforts] you would expect as you try to scale a human-style interaction."
Judson Hart
Director of Language Assessments, Emmersion