
Cocktail party effect

From Wikipedia, the free encyclopedia
[Image: A crowded cocktail bar]

The cocktail party effect is the phenomenon whereby the brain focuses a person's attention on a particular stimulus, usually auditory, while excluding a range of other stimuli from conscious awareness, as when a partygoer follows a single conversation in a noisy room.[1][2] This ability is widely distributed among humans: most listeners can, more or less easily, parse the totality of sound detected by the ears into distinct streams and then decide which streams are most pertinent, excluding all or most others.[3]

It has been proposed that a person's sensory memory subconsciously parses all stimuli and identifies discrete portions of these sensations according to their salience.[4] This allows most people to tune effortlessly into a single voice while tuning out all others. The phenomenon is often described as "selective attention" or "selective hearing". The term may also describe a related phenomenon in which words of importance in unattended stimuli are detected immediately, for instance one's own name heard among a wide range of auditory input.[5][6]

A person who lacks the ability to segregate stimuli in this way is often said to display the cocktail party problem[7] or cocktail party deafness.[8] This may also be described as auditory processing disorder or King-Kopetzky syndrome.

Neurological basis and binaural processing

Auditory attention, with regard to the cocktail party effect, primarily occurs in the left hemisphere of the superior temporal gyrus, a non-primary region of auditory cortex; a fronto-parietal network involving the inferior frontal gyrus, superior parietal sulcus, and intraparietal sulcus also supports attention-shifting, speech processing, and attention control.[9][10] Both the target stream (the more important information being attended to) and competing/interfering streams are processed in the same pathway within the left hemisphere, but fMRI scans show that target streams receive more attention than competing streams.[11]

Furthermore, activity in the superior temporal gyrus (STG) toward the target stream is reduced when competing stimulus streams (typically ones holding significant value) arise. The "cocktail party effect", the ability to detect significant stimuli in multi-talker situations, has also been labeled the "cocktail party problem", because the ability to attend selectively simultaneously interferes with the effectiveness of attention at a neurological level.[11]

The cocktail party effect works best as a binaural effect, which requires hearing with both ears. People with only one functioning ear seem much more distracted by interfering noise than people with two functioning ears.[12] The benefit of using two ears may be partially related to the localization of sound sources. The auditory system is able to localize at least two sound sources and assign the correct characteristics to these sources simultaneously. As soon as the auditory system has localized a sound source, it can extract the signals of this sound source out of a mixture of interfering sound sources.[13] However, much of this binaural benefit can be attributed to two other processes, better-ear listening and binaural unmasking.[12] Better-ear listening is the process of exploiting the better of the two signal-to-noise ratios available at the ears. Binaural unmasking is a process that involves a combination of information from the two ears in order to extract signals from noise.
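As a rough numerical illustration, better-ear listening amounts to exploiting whichever ear happens to have the higher signal-to-noise ratio. The sketch below is a toy calculation, not a model of the auditory system; all power values are hypothetical.

```python
import numpy as np

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels."""
    return 10 * np.log10(signal_power / noise_power)

# Hypothetical acoustic powers for a target talker on the listener's left:
# the left ear receives more target energy, the right ear more masker energy.
left_snr = snr_db(signal_power=1.0, noise_power=0.5)   # about +3 dB
right_snr = snr_db(signal_power=0.4, noise_power=0.8)  # about -3 dB

# Better-ear listening: exploit the better of the two per-ear SNRs,
# here roughly +3 dB instead of -3 dB.
better_ear_snr = max(left_snr, right_snr)
```

Binaural unmasking, by contrast, combines the signals from both ears rather than selecting one, and can improve intelligibility even when the two per-ear SNRs are identical.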

Early work

Much of the early attention research of the 1950s can be traced to problems faced by air traffic controllers. At the time, controllers received messages from pilots over loudspeakers in the control tower, and hearing the intermixed voices of many pilots over a single loudspeaker made the controller's task very difficult.[14] The effect was first defined and named "the cocktail party problem" by Colin Cherry in 1953.[7] Cherry conducted attention experiments in which participants listened to two different messages from a single loudspeaker at the same time and tried to separate them; this was later termed a dichotic listening task.[15] His work revealed that the ability to separate sounds from background noise is affected by many variables, such as the sex of the speaker, the direction from which the sound is coming, the pitch, and the rate of speech.[7]

Cherry developed the shadowing task to further study how people selectively attend to one message amid other voices and noises. In a shadowing task, participants wear a special headset that presents a different message to each ear. The participant is asked to repeat aloud the message (called shadowing) heard in a specified ear (called a channel).[15] Cherry found that participants were able to detect their name in the unattended channel, the channel they were not shadowing.[16] Later research using Cherry's shadowing task was done by Neville Moray in 1959. He concluded that almost none of the rejected message is able to penetrate the attentional block, except subjectively "important" messages.[16]

More recent work

Selective attention is evident across all ages. In infancy, babies begin to turn their heads toward familiar sounds, such as their parents' voices.[17] This shows that infants selectively attend to specific stimuli in their environment. Furthermore, reviews of selective attention indicate that infants favor "baby" talk over speech with an adult tone.[15][17] This preference indicates that infants can recognize physical changes in the tone of speech, although their accuracy in noticing these physical differences amid background noise improves over time.[17] Infants may simply ignore stimuli such as their name because, while familiar, it holds no higher meaning to them at such a young age. However, research suggests the more likely scenario is that infants do not understand that the noise presented to them amid distracting noise is their own name, and thus do not respond.[18] The ability to filter out unattended stimuli peaks in young adulthood. With reference to the cocktail party phenomenon, older adults have a harder time than younger adults focusing on one conversation when competing stimuli, like "subjectively" important messages, make up the background noise.[17]

Some examples of messages that catch people's attention include personal names and taboo words. The ability to selectively attend to one's own name has been found in infants as young as 5 months of age and appears to be fully developed by 13 months.[18] Along with multiple experts in the field, Anne Treisman states that people are permanently primed to detect personally significant words, like names, and theorizes that they may require less perceptual information than other words to trigger identification.[19] Another stimulus that reaches some level of semantic processing while in the unattended channel is taboo words.[20] These words often contain sexually explicit material that triggers an alerting response, leading to decreased performance in shadowing tasks.[21] Taboo words do not affect children's selective attention until they develop a strong vocabulary and an understanding of language.

Selective attention begins to waver as we get older. Older adults have longer latency periods in discriminating between conversation streams, which is typically attributed to the fact that general cognitive ability begins to decay with old age (as exemplified by memory, visual perception, higher-order functioning, etc.).[9][22]

More recently, modern neuroscience techniques have been applied to study the cocktail party problem. Notable examples of researchers doing such work include Edward Chang, Nima Mesgarani, and Charles Schroeder using electrocorticography; Jonathan Simon, Mounya Elhilali, Adrian KC Lee, Shihab Shamma, Barbara Shinn-Cunningham, Daniel Baldauf, and Jyrki Ahveninen using magnetoencephalography; Jyrki Ahveninen, Edmund Lalor, and Barbara Shinn-Cunningham using electroencephalography; and Jyrki Ahveninen and Lee M. Miller using functional magnetic resonance imaging.

Models of attention

Not all of the information presented to us can be processed. In theory, the selection of what to pay attention to can be random or nonrandom.[23] For example, when driving, drivers are able to focus on the traffic lights rather than on other stimuli present in the scene. In such cases it is necessary to select which portion of the presented stimuli is important. A basic question in psychology is when this selection occurs.[15] This issue has developed into the early versus late selection controversy, which has its basis in Cherry's dichotic listening experiments. Participants were able to notice physical changes, like pitch or a change in the gender of the speaker, and stimuli, like their own name, in the unattended channel. This raised the question of whether the meaning (semantics) of the unattended message was processed before selection.[15] In early selection attention models, very little information is processed before selection occurs; in late selection attention models, more information, like semantics, is processed before selection occurs.[23]

Broadbent

The earliest work in exploring mechanisms of early selective attention was performed by Donald Broadbent, who proposed a theory that came to be known as the filter model.[24] This model was established using the dichotic listening task. His research showed that most participants were accurate in recalling information that they actively attended to, but were far less accurate in recalling information that they had not attended to. This led Broadbent to the conclusion that there must be a "filter" mechanism in the brain that could block out information that was not selectively attended to. The filter model was hypothesized to work in the following way: as information enters the brain through the sensory organs (in this case, the ears) it is stored in sensory memory, a buffer memory system that holds an incoming stream of information long enough for us to pay attention to it.[15] Before information is processed further, the filter mechanism allows only attended information to pass through. The selected information is then passed into working memory, the set of mechanisms that underlies short-term memory and communicates with long-term memory.[15] In this model, auditory information can be selectively attended to on the basis of its physical characteristics, such as location and volume.[24][25][26] Others suggest that information can be attended to on the basis of Gestalt features, including continuity and closure.[27] For Broadbent, this explained the mechanism by which people can choose to attend to only one source of information at a time while excluding others. However, Broadbent's model failed to account for the observation that words of semantic importance, for example the individual's own name, can be instantly attended to despite having been in an unattended channel.

Shortly after Broadbent's experiments, Oxford undergraduates Gray and Wedderburn repeated his dichotic listening tasks, altered so that monosyllabic words that could form meaningful phrases were divided across the ears.[28] For example, the words "Dear, one, Jane" were sometimes presented in sequence to the right ear, while the words "three, Aunt, six" were presented in a simultaneous, competing sequence to the left ear. Participants were more likely to remember "Dear Aunt Jane" than to remember the numbers; they were also more likely to remember the words in the phrase order than to remember the numbers in the order presented. This finding goes against Broadbent's theory of complete filtration, because the filter mechanism would not have time to switch between channels. This suggests that meaning may be processed first.

Treisman

In a later addition to this existing theory of selective attention, Anne Treisman developed the attenuation model.[29] In this model, information, when processed through a filter mechanism, is not completely blocked out as Broadbent might suggest. Instead, the information is weakened (attenuated), allowing it to pass through all stages of processing at an unconscious level. Treisman also suggested a threshold mechanism whereby some words, on the basis of semantic importance, may grab one's attention from the unattended stream. One's own name, according to Treisman, has a low threshold value (i.e. a high level of meaning) and thus is recognized more easily. The same principle applies to words like "fire", directing our attention to situations that may immediately require it. The only way this can happen, Treisman argued, is if information were being processed continuously in the unattended stream.
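The attenuation-plus-threshold mechanism can be caricatured in a few lines of code. This is a toy sketch, not an empirical model: the attenuation factor, the word list, and all threshold values are hypothetical, chosen only to illustrate the logic.

```python
# Toy sketch of Treisman's attenuation model: unattended input is weakened
# rather than blocked, and a word still reaches awareness if its attenuated
# strength exceeds that word's recognition threshold.

ATTENUATION = 0.3      # unattended channel passes at reduced strength

# Lower threshold = higher personal significance (easier to recognize).
THRESHOLDS = {
    "fire": 0.2,       # immediately relevant word
    "alice": 0.2,      # the listener's own name (hypothetical listener)
    "table": 0.8,      # neutral word
}

def recognized(word: str, attended: bool, signal: float = 1.0) -> bool:
    """Does the word reach awareness, given which channel it arrived on?"""
    strength = signal if attended else signal * ATTENUATION
    return strength >= THRESHOLDS.get(word, 0.8)
```

In this sketch an attended word always clears its threshold, while in the unattended stream only low-threshold words such as one's own name break through; setting `ATTENUATION` to zero would recover Broadbent's complete filter.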

Deutsch and Deutsch

Diana Deutsch, best known for her work in music perception and auditory illusions, has also made important contributions to models of attention. In order to explain in more detail how words can be attended to on the basis of semantic importance, Deutsch & Deutsch[30] and Norman[31] proposed a model of attention which includes a second selection mechanism based on meaning. In what came to be known as the Deutsch-Norman model, information in the unattended stream is not processed all the way into working memory, as Treisman's model would imply. Instead, information on the unattended stream is passed through a secondary filter after pattern recognition. If the unattended information is recognized and deemed unimportant by the secondary filter, it is prevented from entering working memory. In this way, only immediately important information from the unattended channel can come to awareness.

Kahneman

Daniel Kahneman also proposed a model of attention, but it differs from previous models in that he describes attention not in terms of selection but in terms of capacity. For Kahneman, attention is a resource to be distributed among various stimuli,[32] a proposition which has received some support.[6][4][33] This model describes not when attention is focused, but how it is focused.

According to Kahneman, attention is generally determined by arousal, a general state of physiological activity. The Yerkes-Dodson law predicts that arousal will be optimal at moderate levels: performance will be poor when one is over- or under-aroused. Of particular relevance, Narayan et al. discovered a sharp decline in the ability to discriminate between auditory stimuli when background noises were too numerous and complex, evidence of the negative effect of overarousal on attention.[4] Thus, arousal determines our available capacity for attention.

An allocation policy then distributes our available attention among a variety of possible activities; those deemed most important by the allocation policy receive the most attention. The allocation policy is affected by enduring dispositions (automatic influences on attention) and momentary intentions (a conscious decision to attend to something). Momentary intentions requiring a focused direction of attention rely on substantially more attention resources than enduring dispositions.[34] Additionally, there is an ongoing evaluation of the particular demands of certain activities on attention capacity.[32] That is to say, activities that are particularly taxing on attention resources lower attention capacity and influence the allocation policy: if an activity is too draining on capacity, the allocation policy will likely cease directing resources to it and instead focus on less taxing tasks.
Kahneman's model explains the cocktail party phenomenon in that momentary intentions might allow one to expressly focus on a particular auditory stimulus, while enduring dispositions (which can include new events, and perhaps words of particular semantic importance) can capture our attention. It is important to note that Kahneman's model does not necessarily contradict selection models, and thus can be used to supplement them.

Visual correlates

Some research has demonstrated that the cocktail party effect may not be simply an auditory phenomenon, and that relevant effects can be obtained when testing visual information as well. For example, Shapiro et al. were able to demonstrate an "own name effect" with visual tasks, where subjects were able to easily recognize their own names when presented as unattended stimuli.[35] They adopted a position in line with late selection models of attention such as the Treisman or Deutsch-Norman models, suggesting that early selection would not account for such a phenomenon. The mechanisms by which this effect might occur were left unexplained.

Effect in animals

Animals that communicate in choruses, such as frogs, insects, songbirds, and other animals that communicate acoustically, can experience the cocktail party effect as multiple signals or calls occur concurrently. As in humans, acoustic mediation allows these animals to listen for what they need within their environments. For bank swallows, cliff swallows, and king penguins, acoustic mediation allows for parent/offspring recognition in noisy environments. Amphibians also demonstrate this effect: female frogs can listen for and differentiate male mating calls, while males can mediate other males' aggression calls.[36] There are two leading theories as to why acoustic signaling evolved among different species. Receiver psychology holds that the development of acoustic signaling can be traced back to the nervous system and the processing strategies it uses, specifically how the physiology of auditory scene analysis affects how a species interprets and gains meaning from sound. Communication network theory states that animals can gain information by eavesdropping on signals between others of their species; this is especially true among songbirds.[36]

Hearables for the cocktail party effect

Hearable devices like noise-canceling headphones have been designed to address the cocktail party problem.[37][38] These types of devices could provide wearers with a degree of control over the sound sources around them.[39][40][41]

Deep learning headphone systems like target speech hearing have been proposed to give wearers the ability to hear a target person in a crowded room with multiple speakers and background noise.[37] This technology uses real-time neural networks to learn the voice characteristics of an enrolled target speaker, which are later used to focus on their speech while suppressing other speakers and noise.[39][42] Semantic hearing headsets also use neural networks to enable wearers to hear specific sounds, such as birds tweeting or alarms ringing, based on their semantic description, while suppressing other ambient sounds in the environment.[38] Real-time neural networks have also been used to create programmable sound bubbles on headsets, allowing all speakers within the bubble to be audible while suppressing speakers and noise outside the bubble.[43][41]
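The sound-bubble idea can be caricatured as distance-gated mixing. Real systems perform neural source separation and per-source distance estimation on the device itself; the sketch below sidesteps all of that and simply assumes the separated signals and their estimated distances are already given. Every source name, distance, and gain value here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16000  # one second of audio at a 16 kHz sample rate

# Hypothetical separated sources with estimated distances in meters.
sources = {
    "partner": (rng.standard_normal(N), 0.8),
    "far_talker": (rng.standard_normal(N), 3.5),
    "traffic": (rng.standard_normal(N), 10.0),
}

BUBBLE_RADIUS_M = 1.5

def render_bubble(sources, radius):
    """Mix the sources, keeping those inside the bubble at full level
    and strongly attenuating those outside it."""
    out = np.zeros(N)
    for signal, distance in sources.values():
        gain = 1.0 if distance <= radius else 0.01  # about -40 dB outside
        out += gain * signal
    return out

mix = render_bubble(sources, BUBBLE_RADIUS_M)
```

With these numbers only the nearby "partner" source survives at full level, while the distant talker and traffic are pushed far below it, which is the behavior the sound-bubble headsets aim for.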

These devices could benefit individuals with hearing loss, sensory processing disorders, and misophonia, as well as people whose jobs require focused listening, such as health-care, military, factory, or construction workers.


References

  1. ^ Bronkhorst, Adelbert W. (2000). "The Cocktail Party Phenomenon: A Review on Speech Intelligibility in Multiple-Talker Conditions". Acta Acustica United with Acustica. 86: 117–128. Retrieved 2020-11-16.
  2. ^ Shinn-Cunningham BG (May 2008). "Object-based auditory and visual attention" (PDF). Trends in Cognitive Sciences. 12 (5): 182–6. doi:10.1016/j.tics.2008.02.003. PMC 2699558. PMID 18396091. Archived from the original (PDF) on 2015-09-23. Retrieved 2014-06-20.
  3. ^ Marinato G, Baldauf D (February 2019). "Object-based attention in complex, naturalistic auditory streams". Scientific Reports. 9 (1): 2854. Bibcode:2019NatSR...9.2854M. doi:10.1038/s41598-019-39166-6. PMC 6393668. PMID 30814547.
  4. ^ a b c Narayan R, Best V, Ozmeral E, McClaine E, Dent M, Shinn-Cunningham B, Sen K (December 2007). "Cortical interference effects in the cocktail party problem". Nature Neuroscience. 10 (12): 1601–7. doi:10.1038/nn2009. PMID 17994016. S2CID 7857806.
  5. ^ Wood N, Cowan N (January 1995). "The cocktail party phenomenon revisited: how frequent are attention shifts to one's name in an irrelevant auditory channel?". Journal of Experimental Psychology: Learning, Memory, and Cognition. 21 (1): 255–60. doi:10.1037/0278-7393.21.1.255. PMID 7876773.
  6. ^ a b Conway AR, Cowan N, Bunting MF (June 2001). "The cocktail party phenomenon revisited: the importance of working memory capacity". Psychonomic Bulletin & Review. 8 (2): 331–5. doi:10.3758/BF03196169. PMID 11495122.
  7. ^ a b c Cherry EC (1953). "Some Experiments on the Recognition of Speech, with One and with Two Ears" (PDF). The Journal of the Acoustical Society of America. 25 (5): 975–79. Bibcode:1953ASAJ...25..975C. doi:10.1121/1.1907229. hdl:11858/00-001M-0000-002A-F750-3. ISSN 0001-4966.
  8. ^ Pryse-Phillips W (2003). Companion to Clinical Neurology (2nd ed.). Oxford: Oxford University Press. p. 206. ISBN 0-19-515938-1.
  9. ^ a b Getzmann S, Jasny J, Falkenstein M (February 2017). "Switching of auditory attention in "cocktail-party" listening: ERP evidence of cueing effects in younger and older adults". Brain and Cognition. 111: 1–12. doi:10.1016/j.bandc.2016.09.006. PMID 27814564. S2CID 26052069.
  10. ^ de Vries IE, Marinato G, Baldauf D (August 2021). "Decoding object-based auditory attention from source-reconstructed MEG alpha oscillations". The Journal of Neuroscience. 41 (41): 8603–8617. doi:10.1523/JNEUROSCI.0583-21.2021. PMC 8513695. PMID 34429378.
  11. ^ a b Evans S, McGettigan C, Agnew ZK, Rosen S, Scott SK (March 2016). "Getting the Cocktail Party Started: Masking Effects in Speech Perception". Journal of Cognitive Neuroscience. 28 (3): 483–500. doi:10.1162/jocn_a_00913. PMC 4905511. PMID 26696297.
  12. ^ a b Hawley ML, Litovsky RY, Culling JF (February 2004). "The benefit of binaural hearing in a cocktail party: effect of location and type of interferer" (PDF). The Journal of the Acoustical Society of America. 115 (2): 833–43. Bibcode:2004ASAJ..115..833H. doi:10.1121/1.1639908. PMID 15000195. Archived from the original (PDF) on 2016-10-20. Retrieved 2013-07-21.
  13. ^ Fritz JB, Elhilali M, David SV, Shamma SA (August 2007). "Auditory attention--focusing the searchlight on sound". Current Opinion in Neurobiology. 17 (4): 437–55. doi:10.1016/j.conb.2007.07.011. PMID 17714933. S2CID 11641395.
  14. ^ Sorkin, Robert D.; Kantowitz, Barry H. (1983). Human factors: understanding people-system relationships. New York: Wiley. ISBN 978-0-471-09594-1. OCLC 8866672.
  15. ^ a b c d e f g Revlin R (2007). Human Cognition : Theory and Practice. New York, NY: Worth Pub. p. 59. ISBN 9780716756675. OCLC 779665820.
  16. ^ a b Moray N (1959). "Attention in dichotic listening: Affective cues and the influence of instructions" (PDF). Quarterly Journal of Experimental Psychology. 11 (1): 56–60. doi:10.1080/17470215908416289. ISSN 0033-555X. S2CID 144324766.
  17. ^ a b c d Plude DJ, Enns JT, Brodeur D (August 1994). "The development of selective attention: a life-span overview". Acta Psychologica. 86 (2–3): 227–72. doi:10.1016/0001-6918(94)90004-3. PMID 7976468.
  18. ^ a b Newman RS (March 2005). "The cocktail party effect in infants revisited: listening to one's name in noise". Developmental Psychology. 41 (2): 352–62. doi:10.1037/0012-1649.41.2.352. PMID 15769191.
  19. ^ Driver J (February 2001). "A selective review of selective attention research from the past century" (PDF). British Journal of Psychology. 92 Part 1: 53–78. doi:10.1348/000712601162103. PMID 11802865. Archived from the original (PDF) on 2014-05-21. Retrieved 2013-07-21.
  20. ^ Straube ER, Germer CK (August 1979). "Dichotic shadowing and selective attention to word meaning in schizophrenia". Journal of Abnormal Psychology. 88 (4): 346–53. doi:10.1037/0021-843X.88.4.346. PMID 479456.
  21. ^ Nielsen SL, Sarason IG (1981). "Emotion, personality, and selective attention". Journal of Personality and Social Psychology. 41 (5): 945–960. doi:10.1037/0022-3514.41.5.945. ISSN 0022-3514.
  22. ^ Getzmann S, Näätänen R (November 2015). "The mismatch negativity as a measure of auditory stream segregation in a simulated "cocktail-party" scenario: effect of age". Neurobiology of Aging. 36 (11): 3029–3037. doi:10.1016/j.neurobiolaging.2015.07.017. PMID 26254109. S2CID 25443567.
  23. ^ a b Cohen A (2006). "Selective Attention". Encyclopedia of Cognitive Science. doi:10.1002/0470018860.s00612. ISBN 978-0470016190.
  24. ^ a b Broadbent DE (March 1954). "The role of auditory localization in attention and memory span". Journal of Experimental Psychology. 47 (3): 191–6. doi:10.1037/h0054182. PMID 13152294.[dead link]
  25. ^ Scharf B (1990). "On hearing what you listen for: The effects of attention and expectancy". Canadian Psychology. 31 (4): 386–387. doi:10.1037/h0084409.
  26. ^ Brungart DS, Simpson BD (January 2007). "Cocktail party listening in a dynamic multitalker environment". Perception & Psychophysics. 69 (1): 79–91. doi:10.3758/BF03194455. PMID 17515218.
  27. ^ Haykin S, Chen Z (September 2005). "The cocktail party problem". Neural Computation. 17 (9): 1875–902. doi:10.1162/0899766054322964. PMID 15992485. S2CID 207575815.
  28. ^ Gray JA, Wedderburn AA (1960). "Grouping strategies with simultaneous stimuli". Quarterly Journal of Experimental Psychology. 12 (3): 180–184. doi:10.1080/17470216008416722. S2CID 143819583. Archived from the original on 2015-01-08. Retrieved 2013-07-21.
  29. ^ Treisman AM (May 1969). "Strategies and models of selective attention". Psychological Review. 76 (3): 282–99. doi:10.1037/h0027242. PMID 4893203.
  30. ^ Deutsch JA, Deutsch D (January 1963). "Some theoretical considerations". Psychological Review. 70 (I): 80–90. doi:10.1037/h0039515. PMID 14027390.
  31. ^ Norman DA (1968). "Toward a theory of memory and attention". Psychological Review. 75 (6): 522–536. doi:10.1037/h0026699.
  32. ^ a b Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice-Hall.
  33. ^ Dalton P, Santangelo V, Spence C (November 2009). "The role of working memory in auditory selective attention". Quarterly Journal of Experimental Psychology. 62 (11): 2126–32. doi:10.1080/17470210903023646. PMID 19557667. S2CID 17704836.
  34. ^ Koch I, Lawo V, Fels J, Vorländer M (August 2011). "Switching in the cocktail party: exploring intentional control of auditory selective attention". Journal of Experimental Psychology. Human Perception and Performance. 37 (4): 1140–7. doi:10.1037/a0022189. PMID 21553997.
  35. ^ Shapiro KL, Caldwell J, Sorensen RE (April 1997). "Personal names and the attentional blink: a visual "cocktail party" effect". Journal of Experimental Psychology. Human Perception and Performance. 23 (2): 504–14. doi:10.1037/0096-1523.23.2.504. PMID 9104007.
  36. ^ a b Bee MA, Micheyl C (August 2008). "The cocktail party problem: what is it? How can it be solved? And why should animal behaviorists study it?". Journal of Comparative Psychology. 122 (3): 235–51. doi:10.1037/0735-7036.122.3.235. PMC 2692487. PMID 18729652.
  37. ^ a b "Noise-canceling headphones use AI to let a single voice through". MIT Technology Review. Retrieved 2024-05-26.
  38. ^ a b Veluri, Bandhav; Itani, Malek; Chan, Justin; Yoshioka, Takuya; Gollakota, Shyamnath (2023-10-29). "Semantic Hearing: Programming Acoustic Scenes with Binaural Hearables". Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. ACM. pp. 1–15. arXiv:2311.00320. doi:10.1145/3586183.3606779. ISBN 979-8-4007-0132-0.
  39. ^ a b Zmolikova, Katerina; Delcroix, Marc; Ochiai, Tsubasa; Kinoshita, Keisuke; Černocký, Jan; Yu, Dong (May 2023). "Neural Target Speech Extraction: An overview". IEEE Signal Processing Magazine. 40 (3): 8–29. arXiv:2301.13341. Bibcode:2023ISPM...40c...8Z. doi:10.1109/MSP.2023.3240008. ISSN 1053-5888.
  40. ^ "Noise-canceling headphones could let you pick and choose the sounds you want to hear". MIT Technology Review. Retrieved 2024-05-26.
  41. ^ a b Chen, Tuochao; Itani, Malek; Eskimez, Sefik Emre; Yoshioka, Takuya; Gollakota, Shyamnath (2024-11-14). "Hearable devices with sound bubbles". Nature Electronics: 1–12. doi:10.1038/s41928-024-01276-z. ISSN 2520-1131.
  42. ^ Veluri, Bandhav; Itani, Malek; Chen, Tuochao; Yoshioka, Takuya; Gollakota, Shyamnath (2024-05-11). "Look Once to Hear: Target Speech Hearing with Noisy Examples". Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM. pp. 1–16. arXiv:2405.06289. doi:10.1145/3613904.3642057. ISBN 979-8-4007-0330-0.
  43. ^ Ma, Dong (2024-11-14). "Creating sound bubbles with intelligent headsets". Nature Electronics. doi:10.1038/s41928-024-01281-2. ISSN 2520-1131.