
Neural Imaging Reveals Secret Conversational Cues

Studying human conversation is no simple task. When people talk to one another, they coordinate their speech very tightly: they rarely talk over one another, and they rarely leave long silent gaps. A conversation is like a dance with no choreography and no music: spontaneous but structured. To support this coordination, the people having the conversation begin to align their breathing, their eye gaze, their speech melody, and their gestures.

To understand this complexity, studying research participants in a lab looking at computer screens (the traditional setup of psychology experiments) isn't enough. We need to study how people behave naturally in the real world, using novel measurement techniques that capture their neural and physiological responses. For instance, Antonia Hamilton, a neuroscientist at University College London, has recently used motion capture to identify a pattern of very rapid nods that listeners make to show they are paying attention when someone is speaking. Hamilton showed that these subtle signals improve the interaction; what's also fascinating is that speakers pick up on this information even though the nods are too small and fast to be discernible to the naked eye.
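Hamilton's actual analysis pipeline isn't described here, but the general idea of pulling rapid, barely visible nods out of a motion-capture trace can be sketched roughly. Everything below is an illustrative assumption rather than a detail from the study: the function name, the amplitude and duration thresholds, and the sampling setup are all hypothetical.

```python
# Hypothetical sketch: finding rapid, low-amplitude nods in a
# motion-capture head-pitch trace (degrees, sampled at fs_hz).
# Thresholds are illustrative, not taken from Hamilton's study.

def detect_rapid_nods(pitch_deg, fs_hz, min_amp=0.5, max_amp=3.0, max_dur_s=0.3):
    """Return start indices of rapid nods: brief, shallow dips in head
    pitch that are too small and fast to notice by eye."""
    nods = []
    i = 1
    n = len(pitch_deg)
    while i < n - 1:
        if pitch_deg[i] < pitch_deg[i - 1]:  # pitch starts falling
            start = i - 1
            # follow the dip down to its minimum
            while i < n - 1 and pitch_deg[i + 1] < pitch_deg[i]:
                i += 1
            trough = i
            # follow the recovery back up
            while i < n - 1 and pitch_deg[i + 1] > pitch_deg[i]:
                i += 1
            amp = pitch_deg[start] - pitch_deg[trough]
            dur = (i - start) / fs_hz
            # keep only dips that are both shallow and brief
            if min_amp <= amp <= max_amp and dur <= max_dur_s:
                nods.append(start)
        i += 1
    return nods
```

With these example thresholds, a 1-degree dip lasting 40 milliseconds at 100 Hz would register as a rapid nod, while a slow 10-degree nod of the kind we consciously notice would be filtered out.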

In 2023, we will also finally be able to start capturing neural data while people are moving and talking to each other. This isn't easy: brain imaging techniques such as functional magnetic resonance imaging (fMRI) involve placing participants inside 12-ton brain scanners. A recent study nevertheless managed this with a cohort of autistic participants. That paper represents a terrific achievement, but until fMRI machines become much smaller and more mobile, it will not be possible to see how the neural data relates to the patterns of movement and speech in conversations, ideally from both participants at once. A different technique, functional near-infrared spectroscopy (fNIRS), can however be used while people move around naturally. fNIRS measures the same index of neural activity as fMRI, via optodes that shine light through the scalp and analyze the reflected light. fNIRS has already been deployed while people performed tasks outdoors in central London, showing that the method can gather neural data in parallel with movement and speech data while people interact naturally.
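To give a rough sense of how fNIRS turns reflected light into a neural signal: analysis pipelines commonly apply the modified Beer-Lambert law to convert light-intensity changes at two wavelengths into changes in oxygenated (HbO) and deoxygenated (HbR) hemoglobin concentration, the same blood-oxygenation index fMRI tracks. The sketch below uses illustrative extinction coefficients and path-length values, not calibrated ones, and the function names are hypothetical.

```python
import math

# Hypothetical sketch of the modified Beer-Lambert law used in fNIRS
# analysis. Coefficients and geometry below are illustrative only.

def delta_od(i_baseline, i_current):
    """Change in optical density, from detected light intensity."""
    return -math.log10(i_current / i_baseline)

def hb_changes(dod_760, dod_850,
               eps=((1.5, 0.6),   # (HbR, HbO) extinction at 760 nm (illustrative)
                    (0.8, 1.2)),  # (HbR, HbO) extinction at 850 nm (illustrative)
               distance_cm=3.0, dpf=6.0):
    """Solve the 2x2 system dOD = eps @ (dHbR, dHbO) * distance * dpf,
    where dpf is the differential path-length factor accounting for
    light scattering through the scalp and skull."""
    path = distance_cm * dpf
    (a, b), (c, d) = eps
    det = (a * d - b * c) * path
    d_hbr = (d * dod_760 - b * dod_850) / det
    d_hbo = (a * dod_850 - c * dod_760) / det
    return d_hbr, d_hbo
```

Two wavelengths are needed because HbO and HbR absorb light differently on either side of roughly 800 nm, which is what makes the two-unknown system solvable.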

In 2023, we will also for the first time be able to look at how this works in large-group conversations, which tend to reach their natural limit at around five people. This is a big challenge, as conversations are so flexible and open-ended, but it's essential if we want to understand how the participants' brains coordinate these finely timed conversational dances.

These breakthroughs will represent great strides in the scientific study of human conversation, one of the most fascinating areas of cognitive neuroscience and psychology. Of course, I'm slightly biased: I have studied human speech perception and production for decades, and I think conversations are where our linguistic, social, and emotional brain processes come together. Conversations are universal, and they are the main way humans manage social interactions and connections. They matter hugely to our mental and physical health. When we can fully crack the science of conversations, we'll have come a long way toward understanding ourselves.