Smartphones with emotional intelligence might not be as far-fetched as they sound, according to engineers at the University of Rochester. They’ve developed a computer program that gauges human emotion from speech considerably more accurately than existing approaches.
“The research is still in its early days,” said Wendi Heinzelman, a professor of electrical and computer engineering, “but it is easy to envision a more complex app that could use this technology for everything from adjusting the colors displayed on your mobile to playing music fitting to how you’re feeling after recording your voice.”
Heinzelman and her team are collaborating with Rochester psychologists Melissa Sturge-Apple and Patrick Davies, who are studying interactions between teenagers and their parents.
“A reliable way of categorizing emotions could be very useful in our research,” Sturge-Apple said. “It would mean that a researcher doesn’t have to listen to the conversations and manually input the emotion of different people at different stages.”
Before developing a computer program that can recognize different human emotions, researchers need to first understand how people recognize them.
“You might hear someone speak and think, ‘Oh, he sounds angry!’” said Sturge-Apple. “But what is it that makes you think that?”
Volume, pitch and even harmonics of speech give clues to the speaker’s emotion, she said.
“We don’t pay attention to these features individually; we have just come to learn what angry sounds like – particularly for people we know,” said Sturge-Apple.
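The article doesn’t specify how the Rochester team measured these cues, but two of them, volume and pitch, can be computed from a raw waveform with very simple signal statistics. As a hedged illustration (not the researchers’ actual code), the sketch below estimates loudness from the root-mean-square amplitude and pitch from the zero-crossing rate of a synthetic pure tone:

```python
import math

def rms_volume(samples):
    """Root-mean-square amplitude: a simple proxy for loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_pitch(samples, sample_rate):
    """Crude pitch estimate: a pure tone of frequency f crosses
    zero 2*f times per second, so count the sign changes."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

# Synthesize one second of a 220 Hz tone at amplitude 0.5.
rate = 8000
tone = [0.5 * math.sin(2 * math.pi * 220 * t / rate) for t in range(rate)]

print(rms_volume(tone))               # about 0.354 (amplitude / sqrt(2))
print(zero_crossing_pitch(tone, rate))  # close to 220
```

Real systems use far more robust estimators (autocorrelation or cepstral pitch tracking, frame-by-frame energy), but the principle is the same: each cue becomes one number per stretch of speech, and those numbers feed the classifier.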
After using recordings to identify and categorize 12 specific features that can indicate a speaker’s feelings, the researchers developed a program to test on new voice recordings.
They found the program could determine with 81 percent accuracy the emotion of a speaker whose voice it had analyzed before. But the results dropped to around 30 percent when the program was used to analyze an unfamiliar speaker’s voice.
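That gap between familiar and unfamiliar speakers is easy to reproduce in miniature. The article doesn’t describe the team’s classifier, so the following sketch uses a deliberately minimal stand-in, a nearest-centroid classifier over two invented features (pitch, volume), with all speaker baselines and emotion shifts made up for illustration. Because cues like pitch are relative to a speaker’s own baseline, a model trained on one voice misreads another:

```python
import random
import statistics

random.seed(0)

def make_utterances(base_pitch, emotions, n=20):
    """Hypothetical speaker model: each utterance is a (pitch, volume)
    pair, shifted per emotion around the speaker's own baseline."""
    shift = {"neutral": (0, 0.0), "angry": (40, 0.2)}
    data = []
    for emotion in emotions:
        dp, dv = shift[emotion]
        for _ in range(n):
            data.append((
                (base_pitch + dp + random.gauss(0, 5),
                 0.3 + dv + random.gauss(0, 0.03)),
                emotion,
            ))
    return data

def train_centroids(data):
    """Mean feature vector per emotion (a minimal stand-in classifier)."""
    by_label = {}
    for feats, label in data:
        by_label.setdefault(label, []).append(feats)
    return {
        label: tuple(statistics.mean(dim) for dim in zip(*rows))
        for label, rows in by_label.items()
    }

def classify(centroids, feats):
    """Pick the emotion whose centroid is nearest (squared distance)."""
    return min(
        centroids,
        key=lambda lab: sum((a - b) ** 2
                            for a, b in zip(feats, centroids[lab])),
    )

def accuracy(centroids, data):
    hits = sum(classify(centroids, f) == lab for f, lab in data)
    return hits / len(data)

emotions = ["neutral", "angry"]
speaker_a = make_utterances(base_pitch=120, emotions=emotions)  # trained on
speaker_b = make_utterances(base_pitch=210, emotions=emotions)  # never seen

model = train_centroids(speaker_a)
print("same speaker:", accuracy(model, speaker_a))
print("new speaker:", accuracy(model, speaker_b))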
The team is now looking at ways to minimize this effect, for example, by “training” the computer system on voices from the same age group and of the same gender as the voice it will analyze.
“There are still challenges to be resolved if we want to use this system in an environment resembling a real-life situation, but we do know that the algorithm we developed is more effective than previous attempts,” Heinzelman said.