Cocktail party effect

The cocktail party effect refers to the phenomenon wherein the brain focuses a person's attention on a particular stimulus, usually auditory. This focus excludes a range of other stimuli from conscious awareness, as when a partygoer follows a single conversation in a noisy room. The ability is widely distributed among humans: most listeners can, more or less easily, partition the totality of sound detected by the ears into distinct streams and then decide which streams are most pertinent, excluding all or most others.

It has been proposed that a person's sensory memory subconsciously parses all stimuli and identifies discrete portions of these sensations according to their salience. This allows most people to tune effortlessly into a single voice while tuning out all others. The phenomenon is often described as "selective attention" or "selective hearing". The term may also describe a similar phenomenon in which one immediately detects words of importance originating from unattended stimuli, for instance hearing one's name amid a wide range of auditory input.

A person who lacks the ability to segregate stimuli in this way is often said to display the cocktail party problem or cocktail party deafness. This may also be described as auditory processing disorder or King-Kopetzky syndrome.

Neurological basis and binaural processing
Auditory attention with regard to the cocktail party effect primarily occurs in the left hemisphere of the superior temporal gyrus, a non-primary region of auditory cortex; a fronto-parietal network involving the inferior frontal gyrus, superior parietal sulcus, and intraparietal sulcus also accounts for attention-shifting, speech processing, and attention control. Both the target stream (the more important information being attended to) and competing/interfering streams are processed in the same pathway within the left hemisphere, but fMRI scans show that target streams are treated with more attention than competing streams.

Furthermore, activity in the superior temporal gyrus (STG) toward the target stream decreases when competing stimulus streams (which typically hold significant value) arise. The "cocktail party effect" – the ability to detect significant stimuli in multi-talker situations – has also been labeled the "cocktail party problem", because the ability to selectively attend simultaneously interferes with the effectiveness of attention at a neurological level.

The cocktail party effect works best as a binaural effect, which requires hearing with both ears. People with only one functioning ear seem much more distracted by interfering noise than people with two typical ears. The benefit of using two ears may be partially related to the localization of sound sources. The auditory system is able to localize at least two sound sources and assign the correct characteristics to these sources simultaneously. As soon as the auditory system has localized a sound source, it can extract the signals of this sound source out of a mixture of interfering sound sources. However, much of this binaural benefit can be attributed to two other processes, better-ear listening and binaural unmasking. Better-ear listening is the process of exploiting the better of the two signal-to-noise ratios available at the ears. Binaural unmasking is a process that involves a combination of information from the two ears in order to extract signals from noise.
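The better-ear advantage described above can be illustrated with a minimal numerical sketch. The example below is not from the original text; the power values are invented for illustration, and the head-shadow advantage is modeled simply as reduced noise power at the ear facing the talker.

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels."""
    return 10 * math.log10(signal_power / noise_power)

def better_ear_snr(left, right):
    """Better-ear listening: exploit whichever ear has the higher SNR.

    Each ear is given as a (signal_power, noise_power) pair.
    """
    return max(snr_db(*left), snr_db(*right))

# A talker to the listener's right: the right ear gets a head-shadow
# advantage (less noise power reaches it), so its SNR is the one used.
print(better_ear_snr(left=(1.0, 1.0), right=(1.0, 0.25)))  # ~6.02 dB
```

Binaural unmasking goes beyond this sketch: rather than picking one ear, it combines the two ears' signals to extract the target from noise, which is why two functioning ears outperform even the better single ear.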

Early work
Much of the early attention research of the 1950s can be traced to problems faced by air traffic controllers. At that time, controllers received messages from pilots over loudspeakers in the control tower, and hearing the intermixed voices of many pilots over a single loudspeaker made the controller's task very difficult. The effect was first defined and named the "cocktail party problem" by Colin Cherry in 1953. Cherry conducted attention experiments in which participants listened to two different messages from a single loudspeaker at the same time and tried to separate them; this was later termed a dichotic listening task. His work revealed that the ability to separate sounds from background noise is affected by many variables, such as the sex of the speaker, the direction from which the sound is coming, the pitch, and the rate of speech.

Cherry developed the shadowing task in order to further study how people selectively attend to one message amid other voices and noises. In a shadowing task participants wear a special headset that presents a different message to each ear. The participant is asked to repeat aloud the message (called shadowing) that is heard in a specified ear (called a channel). Cherry found that participants were able to detect their name from the unattended channel, the channel they were not shadowing. Later research using Cherry's shadowing task was done by Neville Moray in 1959. He concluded that almost none of the rejected message penetrates the block that is set up, except for subjectively "important" messages.

More recent work
Selective attention appears at all ages. Starting in infancy, babies begin to turn their heads toward a sound that is familiar to them, such as their parents' voices, showing that infants selectively attend to specific stimuli in their environment. Furthermore, reviews of selective attention indicate that infants favor "baby" talk over speech with an adult tone, a preference indicating that infants can recognize physical changes in the tone of speech. However, accuracy in noticing these physical differences, like tone, amid background noise improves over time. Infants may simply ignore stimuli because something like their name, while familiar, holds no higher meaning to them at such a young age; however, research suggests that the more likely scenario is that infants do not understand that the noise presented to them amid distracting noise is their own name, and thus do not respond. The ability to filter out unattended stimuli reaches its prime in young adulthood. In reference to the cocktail party phenomenon, older adults have a harder time than younger adults focusing on one conversation when competing stimuli, like "subjectively" important messages, make up the background noise.

Some examples of messages that catch people's attention include personal names and taboo words. The ability to selectively attend to one's own name has been found in infants as young as 5 months of age and appears to be fully developed by 13 months. Along with multiple experts in the field, Anne Treisman states that people are permanently primed to detect personally significant words, like names, and theorizes that they may require less perceptual information than other words to trigger identification. Another stimulus that reaches some level of semantic processing while in the unattended channel is taboo words. These words often contain sexually explicit material that triggers an alert system in people, leading to decreased performance in shadowing tasks. Taboo words do not affect children's selective attention until they develop a strong vocabulary and an understanding of language.

Selective attention begins to waver as we get older. Older adults have longer latency periods in discriminating between conversation streams, which is typically attributed to the general decline of cognitive ability with old age (as exemplified by memory, visual perception, higher-order functioning, etc.).

Even more recently, modern neuroscience techniques are being applied to study the cocktail party problem. Some notable examples of researchers doing such work include Edward Chang, Nima Mesgarani, and Charles Schroeder using electrocorticography; Jonathan Simon, Mounya Elhilali, Adrian KC Lee, Shihab Shamma, Barbara Shinn-Cunningham, Daniel Baldauf, and Jyrki Ahveninen using magnetoencephalography; Jyrki Ahveninen, Edmund Lalor, and Barbara Shinn-Cunningham using electroencephalography; and Jyrki Ahveninen and Lee M. Miller using functional magnetic resonance imaging.

Models of attention
Not all the information presented to us can be processed. In theory, the selection of what to pay attention to can be random or nonrandom. For example, when driving, drivers are able to focus on the traffic lights rather than on other stimuli present in the scene. In such cases it is necessary to select which portion of the presented stimuli is important. A basic question in psychology is when this selection occurs; the issue has developed into the early versus late selection controversy. The basis for the controversy can be found in Cherry's dichotic listening experiments: participants were able to notice physical changes, like pitch or a change in the gender of the speaker, and certain stimuli, like their own name, in the unattended channel. This raised the question of whether the meaning (semantics) of the unattended message was processed before selection. In early selection attention models, very little information is processed before selection occurs; in late selection models, more information, such as semantics, is processed before selection occurs.

Broadbent
The earliest work in exploring mechanisms of early selective attention was performed by Donald Broadbent, who proposed a theory that came to be known as the filter model. This model was established using the dichotic listening task. His research showed that most participants were accurate in recalling information that they actively attended to, but far less accurate in recalling information that they had not attended to. This led Broadbent to conclude that there must be a "filter" mechanism in the brain that can block out information that is not selectively attended to. The filter model was hypothesized to work in the following way: as information enters the brain through the sensory organs (in this case, the ears), it is stored in sensory memory, a buffer memory system that holds an incoming stream of information long enough for us to pay attention to it. Before information is processed further, the filter mechanism allows only attended information to pass through. The selected information is then passed into working memory, the set of mechanisms that underlies short-term memory and communicates with long-term memory. In this model, auditory information can be selectively attended to on the basis of its physical characteristics, such as location and volume. Others suggest that information can be attended to on the basis of Gestalt features, including continuity and closure. For Broadbent, this explained the mechanism by which people can choose to attend to only one source of information at a time while excluding others. However, Broadbent's model failed to account for the observation that words of semantic importance, for example the individual's own name, can be instantly attended to despite having been in an unattended channel.
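Broadbent's early-selection filter can be sketched as a toy simulation (not from the original text; the data structure and channel labels are invented for illustration). The key property is that, at the filter stage, only physical attributes such as the ear of arrival are available; meaning plays no role in selection.

```python
# Toy simulation of Broadbent's early-selection filter model: items are
# selected purely on a physical attribute (here, which ear they arrived at),
# before any semantic analysis takes place.

def broadbent_filter(sensory_buffer, attended_channel):
    """Pass only items whose physical channel matches the attended one."""
    return [item for item in sensory_buffer if item["ear"] == attended_channel]

sensory_buffer = [
    {"ear": "left", "word": "dear"},
    {"ear": "right", "word": "three"},
    {"ear": "left", "word": "own-name"},  # semantics are invisible at this stage
]

working_memory = broadbent_filter(sensory_buffer, attended_channel="right")
print([item["word"] for item in working_memory])  # ['three']
```

Note that the sketch blocks "own-name" in the unattended ear along with everything else, which is exactly the prediction that the own-name observation contradicted.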

Shortly after Broadbent's experiments, Oxford undergraduates Gray and Wedderburn repeated his dichotic listening tasks, altered so that monosyllabic words presented across the two ears could form meaningful phrases. For example, the words "Dear, one, Jane" were sometimes presented in sequence to the right ear, while the words "three, Aunt, six" were presented in a simultaneous, competing sequence to the left ear. Participants were more likely to remember "Dear Aunt Jane" than to remember the numbers; they were also more likely to remember the words in phrase order than to remember the numbers in the order they were presented. This finding goes against Broadbent's theory of complete filtration, because the filter mechanism would not have time to switch between channels, and it suggests that meaning may be processed first.

Treisman
In a later addition to the existing theory of selective attention, Anne Treisman developed the attenuation model. In this model, information, when processed through a filter mechanism, is not completely blocked out as Broadbent might suggest. Instead, the information is weakened (attenuated), allowing it to pass through all stages of processing at an unconscious level. Treisman also suggested a threshold mechanism whereby some words, on the basis of semantic importance, may grab one's attention from the unattended stream. One's own name, according to Treisman, has a low threshold value (i.e. a high level of meaning) and thus is recognized more easily. The same principle applies to words like "fire", directing our attention to situations that may immediately require it. The only way this can happen, Treisman argued, is if information were being processed continuously in the unattended stream.
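The attenuation-plus-threshold idea can be sketched numerically. All values below (the attenuation factor and the per-word thresholds) are invented for illustration; the point is only that unattended input is weakened rather than blocked, so low-threshold words can still break through.

```python
# Toy sketch of Treisman's attenuation model: unattended streams are
# attenuated (not removed), and a word reaches awareness when its
# possibly-attenuated activation exceeds its recognition threshold.

ATTENUATION = 0.3          # unattended input is weakened to 30% strength
THRESHOLDS = {             # lower threshold = higher personal significance
    "own name": 0.2,
    "fire": 0.25,
    "table": 0.9,
}

def reaches_awareness(word: str, attended: bool, activation: float = 1.0) -> bool:
    strength = activation if attended else activation * ATTENUATION
    return strength >= THRESHOLDS.get(word, 0.9)

print(reaches_awareness("table", attended=False))     # False: effectively blocked
print(reaches_awareness("own name", attended=False))  # True: low threshold
```

An ordinary word in the unattended stream falls below its threshold after attenuation, while one's own name, with its low threshold, still breaks through.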

Deutsch and Deutsch
Diana Deutsch, best known for her work in music perception and auditory illusions, has also made important contributions to models of attention. In order to explain in more detail how words can be attended to on the basis of semantic importance, Deutsch & Deutsch and Norman proposed a model of attention which includes a second selection mechanism based on meaning. In what came to be known as the Deutsch-Norman model, information in the unattended stream is not processed all the way into working memory, as Treisman's model would imply. Instead, information on the unattended stream is passed through a secondary filter after pattern recognition. If the unattended information is recognized and deemed unimportant by the secondary filter, it is prevented from entering working memory. In this way, only immediately important information from the unattended channel can come to awareness.
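The late-selection sequence described above can be sketched as follows (illustrative only; the importance scores and the threshold are invented). The defining feature is the ordering: pattern recognition runs on everything first, and the selection on importance happens second.

```python
# Sketch of the Deutsch-Norman late-selection model: all inputs undergo
# pattern recognition first; a secondary filter then admits only items
# judged important into working memory.

IMPORTANCE = {"own name": 0.95, "fire": 0.9, "weather": 0.1}

def recognize(word: str) -> float:
    """Pattern recognition runs on every stream, attended or not."""
    return IMPORTANCE.get(word, 0.0)

def secondary_filter(unattended_words, threshold=0.8):
    """Only recognized-and-important items enter working memory."""
    return [w for w in unattended_words if recognize(w) >= threshold]

print(secondary_filter(["weather", "own name", "lunch"]))  # ['own name']
```

Unlike the Broadbent sketch, selection here happens after recognition, so an important word in the unattended channel can reach awareness.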

Kahneman
Daniel Kahneman also proposed a model of attention, but it differs from previous models in that he describes attention not in terms of selection but in terms of capacity. For Kahneman, attention is a resource to be distributed among various stimuli, a proposition which has received some support. This model describes not when attention is focused, but how it is focused. According to Kahneman, attention is generally determined by arousal, a general state of physiological activity. The Yerkes-Dodson law predicts that arousal will be optimal at moderate levels: performance will be poor when one is over- or under-aroused. Of particular relevance, Narayan et al. discovered a sharp decline in the ability to discriminate between auditory stimuli when background noises were too numerous and complex, evidence of the negative effect of overarousal on attention. Thus, arousal determines our available capacity for attention. An allocation policy then acts to distribute our available attention among a variety of possible activities. Those deemed most important by the allocation policy receive the most attention. The allocation policy is affected by enduring dispositions (automatic influences on attention) and momentary intentions (a conscious decision to attend to something). Momentary intentions requiring a focused direction of attention rely on substantially more attention resources than enduring dispositions. Additionally, there is an ongoing evaluation of the particular demands of certain activities on attention capacity. That is to say, activities that are particularly taxing on attention resources lower attention capacity and influence the allocation policy: if an activity is too draining on capacity, the allocation policy will likely cease directing resources to it and instead focus on less taxing tasks.
Kahneman's model explains the cocktail party phenomenon in that momentary intentions might allow one to expressly focus on a particular auditory stimulus, but that enduring dispositions (which can include new events, and perhaps words of particular semantic importance) can capture our attention. It is important to note that Kahneman's model doesn't necessarily contradict selection models, and thus can be used to supplement them.
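The capacity-and-allocation structure of Kahneman's model can be sketched numerically. Everything quantitative here is invented: the inverted-U formula is a stand-in for the Yerkes-Dodson relationship, and the priority weights are arbitrary illustrations of how momentary intentions and enduring dispositions might weight activities.

```python
# Toy sketch of Kahneman's capacity model: arousal sets available
# attention capacity (an inverted-U, per Yerkes-Dodson), and an
# allocation policy shares that capacity among activities in proportion
# to their weighted priority.

def capacity(arousal: float) -> float:
    """Inverted-U: capacity peaks at moderate arousal (arousal in [0, 1])."""
    return 4 * arousal * (1 - arousal)

def allocate(arousal: float, priorities: dict) -> dict:
    """Distribute available capacity in proportion to each activity's priority."""
    total = sum(priorities.values())
    pool = capacity(arousal)
    return {task: pool * p / total for task, p in priorities.items()}

# Moderate arousal, with a momentary intention to follow one conversation.
shares = allocate(0.5, {"target conversation": 3.0, "background chatter": 1.0})
print(shares)  # the target conversation gets 3/4 of a full unit of capacity
```

Over- or under-arousal shrinks the pool itself, which is how the sketch mirrors the predicted drop in performance at the extremes.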

Visual correlates
Some research has demonstrated that the cocktail party effect may not be simply an auditory phenomenon, and that relevant effects can be obtained when testing visual information as well. For example, Shapiro et al. were able to demonstrate an "own name effect" with visual tasks, where subjects were able to easily recognize their own names when presented as unattended stimuli. They adopted a position in line with late selection models of attention such as the Treisman or Deutsch-Norman models, suggesting that early selection would not account for such a phenomenon. The mechanisms by which this effect might occur were left unexplained.

Effect in animals
Animals that communicate in choruses, such as frogs, insects, songbirds, and other animals that communicate acoustically, can experience the cocktail party effect when multiple signals or calls occur concurrently. As in humans, acoustic mediation allows animals to listen for what they need within their environments. For bank swallows, cliff swallows, and king penguins, acoustic mediation allows for parent-offspring recognition in noisy environments. Amphibians also demonstrate this effect: female frogs can listen for and differentiate male mating calls, while males can mediate other males' aggression calls. There are two leading theories as to why acoustic signaling evolved among different species. Receiver psychology holds that the development of acoustic signaling can be traced back to the nervous system and the processing strategies it uses, specifically, how the physiology of auditory scene analysis affects how a species interprets and gains meaning from sound. Communication network theory states that animals can gain information by eavesdropping on signals exchanged between others of their species; this is especially true among songbirds.

Hearables for the cocktail party effect
Hearable devices like noise-canceling headphones have been designed to address the cocktail party problem. Such devices could provide wearers with a degree of control over the sound sources around them.

Deep learning headphone systems like target speech hearing have been proposed to give wearers the ability to hear a target person in a crowded room with multiple speakers and background noise. This technology uses real-time neural networks to learn the voice characteristics of an enrolled target speaker, which are later used to focus on that speaker's speech while suppressing other speakers and noise. Semantic hearing headsets also use neural networks to enable wearers to hear specific sounds, such as birds tweeting or alarms ringing, based on their semantic description, while suppressing other ambient sounds in the environment.
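At a high level, the enrollment-then-filtering idea can be sketched without any neural network. This is a conceptual illustration only, not any real system's pipeline or API: the "embeddings" below are toy two-dimensional vectors standing in for learned voice characteristics, and the similarity threshold is invented.

```python
# Conceptual sketch of target speech hearing: (1) enroll a target
# speaker's voice embedding, then (2) keep sound frames whose embedding
# matches the enrolled one and suppress the rest.

import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def target_speech_filter(frames, enrolled, threshold=0.9):
    """Keep frames whose speaker embedding matches the enrolled target."""
    return [f for f in frames if cosine(f["embedding"], enrolled) >= threshold]

enrolled = [1.0, 0.0]  # toy embedding of the enrolled target speaker
frames = [
    {"speaker": "target", "embedding": [0.98, 0.05]},
    {"speaker": "other",  "embedding": [0.10, 0.99]},
]
print([f["speaker"] for f in target_speech_filter(frames, enrolled)])  # ['target']
```

In a real system the embeddings come from a learned speaker model and the suppression is a synthesis step rather than a hard drop, but the enroll-compare-suppress structure is the same.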

These devices could benefit individuals with hearing loss, sensory processing disorders, and misophonia, as well as people whose jobs require focused listening, such as health-care and military personnel, or factory and construction workers.