Engineers at the University of Rochester have developed an experimental program that gauges the emotional state of a speaker based on "the volume, pitch, and even the harmonics of their speech." The program identifies the speaker's emotion with 81 percent accuracy, "a significant improvement on earlier studies that only achieved about 55 percent accuracy."
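The article doesn't describe the researchers' actual pipeline, but the two features it names -- volume and pitch -- are easy to illustrate. Here's a minimal sketch of how they might be extracted from a short audio frame, using RMS energy for loudness and a simple autocorrelation search for pitch; the sample rate, frequency range, and function names are my own assumptions, not details from the study.

```python
import numpy as np

SR = 16000  # sample rate in Hz (assumption, typical for speech)

def rms_volume(frame):
    """Root-mean-square energy of a frame -- a simple proxy for loudness."""
    return float(np.sqrt(np.mean(frame ** 2)))

def pitch_autocorr(frame, sr=SR, fmin=80, fmax=400):
    """Estimate fundamental frequency (pitch) via autocorrelation.

    Searches lags corresponding to 80-400 Hz, a typical range for
    speech. Real systems use more robust trackers, but the idea is
    the same: find the lag at which the signal best matches itself.
    """
    frame = frame - frame.mean()
    # Keep only non-negative lags of the full autocorrelation.
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

# Synthetic 200 Hz "voiced" frame (50 ms) to exercise the functions.
t = np.arange(0, 0.05, 1 / SR)
frame = 0.5 * np.sin(2 * np.pi * 200 * t)

vol = rms_volume(frame)    # ~0.35 for a 0.5-amplitude sine
f0 = pitch_autocorr(frame) # recovers ~200 Hz
```

A real classifier would compute these features over many frames and feed the statistics (means, ranges, contours) to a trained model; the 81-versus-30-percent gap the article mentions reflects how speaker-specific such statistics are.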
When I got to that part, I started to worry that this technology could be used to monitor the emotions of your friends on the phone, which strikes me intuitively as invasive. But when the program is used on a voice other than the one it was trained on, its accuracy drops from 81 percent to about 30 percent. The researchers are trying to fix that, but the limitation seems like a feature to me -- it'd be great to have a cell phone app that keeps track of how I'm feeling, but that doesn't let my friends monitor my emotions the same way.
I also wonder how this technology might affect society if it becomes ubiquitous. Would people use it to train themselves out of expressing emotion, getting instant feedback from the phone on how to minimize the signs of sadness or anger -- or how to fake them? I'm sure some people would.
But maybe it would lead people to get more in touch with their emotions -- it's easier to keep track of a part of your life like that when you can gather objective statistics about it. If your phone can tell you "You were sadder than usual this past week," you can get a clearer view of what kinds of things make you sad.