By Douglas L. Beck, AuD, and Jackie L. Clark, PhD
The interaction and codependence of cognitive and sensory systems remains a paramount concern for audiologists as we consider our roles and responsibilities with respect to diagnosing and remediating hearing loss and providing amplification.
Sometimes what is old becomes new again. An example is the research looking into the impact of cognition on audition and, conversely, the impact of audition upon cognition. Though we strive to move forward by integrating previously published findings into our collective knowledge base, it is remarkable that research giants (Houtgast, Plomp, Duquesnoy, and others) were addressing audition and cognition more than three decades ago. Fortunately, we have witnessed a resurgence of interest in these same concepts. Many of the recent exciting findings will clearly impact patient care in a positive manner as we integrate new and old concepts regarding cognition and audition.
We know active and passive auditory processing allows us to receive and perceive multiple acoustic signals superimposed upon each other. Incredibly, when our peripheral and central nervous systems function optimally, we extract precise and extraordinary meaning from the cacophony of sounds around us almost effortlessly.
Fortunately audiologists, psychologists, neuroscientists, radiologists, speech-language pathologists, and others continue to explore interactions between sensory and cognitive processes. The interaction and codependence of cognition (top-down) and sensory (bottom-up) systems allows humans to perceive the world around them in ways unique from other beings. Humans are distinguished from all other beings by the ability to apply cognitive processes (knowledge, memory, attention, and intelligence) to sensory input, to communicate, to learn, and to share thoughts and ideas.
Top-Down and Bottom-Up Systems
Top-down (TD) systems receive, evaluate, and interpret bottom-up (BU) input. TD systems assign meaning. They determine psychological and emotional composition, and include cognition, executive functions, speech, language and auditory processing, and other self-driven analyses and interpretations. BU systems transfer and transmit “raw” externally derived sensory input (light, temperature, sound, smells, pressure, etc.) to the central auditory nervous system.
When we test and treat hearing loss (BU) in isolation from listening (TD) we separate the physical sound source from the meaning of the sound. The presence of sound without meaning (i.e., noise) is often a source of frustration and may cause patients to reject amplification. Patients live in a world where cognition, attention, memory, and hearing each play critical roles in listening. As such, when we separate hearing from listening, we measure and treat sensory ability in isolation from a comprehensive communication profile.
Akeroyd (2008) surveyed 20 articles regarding speech reception in noise and cognition. He noted there is a link between hearing and cognition. However, the specific “hinge-pin” that consistently relates the two has yet to be determined. Nonetheless, working memory appears more highly related to audition, while general scholastic tests appear less related. Akeroyd noted that in difficult and acoustically challenging situations, listeners use previously acquired knowledge and learned rules to “fill in the blanks.” Therefore, when speech signals are degraded, missing, or ambiguous, people use top-down skills to better resolve the acoustic input (see Gatehouse et al, 2003, 2006, 2006a; Lunner, 2003; Lunner & Sundewall-Thorén, 2007; Rönnberg et al, 2008).
Pichora-Fuller (2007) reported that audiologists are keenly interested in the link between audition and cognition because measures of visual letter or digit monitoring, working memory, and IQ are significantly correlated with performance on speech tests. She further noted that landmark studies (see Gatehouse et al, 2003, 2006; Lunner, 2003) are addressing important and pragmatic connections between cognition and hearing aid outcomes. More critically, Pichora-Fuller (2008a) suggests important predictors of successful hearing aid use are closely related to cognitive level, vocabulary, and working memory. These specific abilities allow hearing aid wearers to “fill in the blanks” by tapping into cognitive and lexical reserves when suboptimal BU auditory signals occur. Although cognitive performance is maximized when listening is effortless, memory and comprehension are reduced as the BU signal is degraded via masking or temporal distortion. As listening effort increases, fewer cognitive resources are available for listening and reasoning, resulting in reduced retention and comprehension (Pichora-Fuller, 2008b). Research has shown older listeners need a 3 dB more favorable signal-to-noise ratio (SNR) than younger adults on speech-in-noise tests due to auditory aging, and possibly slower auditory temporal processing (Pichora-Fuller & Singh, 2006). To accurately perceive rapid speech one must sustain attention, then divide attention between monitoring the incoming signal and analysis, and integration of the speech elements (Rawool, 2007).
Listening as the Foundation of Learning
The “blank slate” (tabula rasa) argument states that we are born essentially in a neutral state of being, and as our unique world experience is written upon us, our ultimate personal cognitive identity emerges. Although the core philosophical discussion is beyond the scope of this article, we believe it is fair to say BU processes “drive” cognitive processes. That is, one cannot process that which is not perceived.
Most of what children learn about language and societal rules is through passive listening. Flexer (2005) reminds us that “data input precedes data processing.” However, BU and TD processes are codependent. Audition is so vitally important to education that the entire premise of the educational system is undermined when a child cannot clearly hear spoken instruction (Flexer, 2005).
Newborns and babies perceive sensory stimuli via BU processes; they internalize information and then apply TD processes. This learning process likely involves “mirror neurons” as the foundation upon which learning occurs (Beck, 2008a). For example, children hear “ma, ma, ma, ma….” After repeated exposure the sound is “internalized” and meaning is assigned; eventually (much to their mother’s delight) they repeat it expressively. This same process is repeated as additional sensory input is perceived and cognitive ability is applied. For example, children learn to recognize letters and assign meaning to specific alphabetic shapes, and then, after appreciating the sensory input, they eventually learn to write. Again, the process is driven by sensory input (BU) first, resulting in TD processes second. Over time, the child’s cognitive ability recognizes patterns, applies learned rules, and interprets novel sensory stimuli based on prior acquired knowledge. Perhaps “feature extraction” (see Levitin, 2006) plays a role as cognitive systems develop. That is, when a child notices the sky is blue and the big round ball is blue, then “blue” takes on meaning as a color and can be integrated and applied to other novel shapes. Likewise, the child realizes the dog bark is loud, as is the car horn, and then loudness too can take on independent cognitive value.
Neuroplasticity (also called cortical plasticity and neural plasticity) involves organizational changes of the brain in response to added or reduced sensations or experiences. Neuroplastic activity is maximal in early life (Yoshinaga-Itano et al, 1998; Eggermont, 2008) but is nonetheless apparent across the life span. When the brain has been deprived of sound to any degree, “cross-modal (plasticity) reorganization” occurs. Indeed, input from other sensory systems can be received by the deprived auditory areas, thus altering the auditory capacity of the brain (Lee et al, 2001; Sharma et al, 2002). When the auditory system is functioning normally, BU input finely tunes the form and function of the central auditory nervous system (CANS). Billings and Tremblay (2007) noted that people with hearing loss (and essentially all people seeking evaluation for hearing aids or cochlear implants) have auditory systems that have already undergone significant changes, primarily from auditory deprivation. Additionally, the successful amplification candidate will undergo further changes secondary to effective amplification. A plethora of research has demonstrated that the absence of auditory stimulation can arrest cortical development in children.
When the BU system (i.e., hearing) is compromised, the TD system (i.e., cognition) must work harder to make sense of the input. To compensate for reduced BU information the TD system reallocates resources to increase attention, attends more to context, maximizes short-term memory, and applies knowledge previously acquired. These reallocations of energy and resources likely slow and reduce processing ability. People with hearing loss must dig more deeply into their cognitive abilities to make sense of a compromised auditory input. When multiple acoustic signals are superimposed upon each other, individuals with hearing loss may appear to have problems with recall, comprehending language, or other cognitive deficits due to a compromised BU input, secondary to hearing loss.
Negative Synergy (Cognitive Decline and Hearing Loss)
Palmer and colleagues (1999) reported that diseases can interact such that the comorbid result is worse than either disease in isolation. In 2008, Schum and Beck argued that “negative synergy” occurs when aging and cognitive decline occur in tandem with hearing loss (i.e., presbycusis), resulting in a worse situation than the individual components might indicate. Negative synergy can occur secondary to common presentations of age-related dementia. In 2007, the National Institutes of Health (NIH, 2007) estimated that for people 65 to 74 years of age, about five percent have Alzheimer’s Disease (AD). For those between 75 and 84 years, about 20 percent have AD, and almost half of all people aged 85 years and older have AD. An extraordinarily high prevalence of hearing loss occurs in individuals with cognitive deficits as compared to healthy, similarly aged cohorts (Gold et al, 1996).
Hearing loss negatively impacts social and behavioral interactions. Arlinger (2003) noted hearing loss initiates a number of negative consequences and disabilities. Negative consequences include poorer quality of life from isolation, reduced social activity, depression, feelings of being excluded, as well as loss of cognitive function; further, uncorrected hearing loss may contribute to cognitive decline. Indeed, research on aging indicates that compromised auditory input does exacerbate cognitive deficits.
Allen and colleagues (2003) noted cognitively normal elderly volunteers with hearing loss exhibited increased effort for speech recognition tasks due to their hearing loss. Thus, increased TD effort resulted in diminished cognitive reserve. The authors noted that higher IQs tended to mitigate the impact of hearing loss and speculated that this was perhaps due to higher processing speeds. For patients identified with dementia, who already had slower processing speeds, hearing loss further compromised their cognitive ability. In fact, twice as many subjects with dementia had hearing loss as would otherwise be expected.
Another compelling longitudinal research program compared cognition changes as a function of hearing loss in a cohort of elderly adults diagnosed with dementia (Peters et al, 1988). All subjects had comparable living arrangements, medical illnesses, years of education, mood, number of drugs taken, and baseline cognitive function. Alarmingly, cognitive decline was greater in the hearing-impaired individuals. For those with Alzheimer’s Disease, hearing impairment predicted a more rapid cognitive decline.
Aural Rehab: Train the Brain to Improve Cognition
It seems that, regardless of age, a listener is more able to use the information that has been heard if the quality of input is better. Thus, cognitive performance is optimal when listening is effortless and (cognitive performance) is reduced when listening is effortful. (Pichora-Fuller, 2008b)
Human brains are highly amenable to training, habilitation, and rehabilitation due to neuroplasticity. Though cognitive rehabilitation is more apparent in the speech-language pathology, physical therapy, and occupational therapy literature, cognitive rehabilitation strategies offer value and benefit as applied to aural rehabilitation (AR) programs. AR programs strive to improve audibility and provide accurate acoustic representation of the sound signal. However, AR also strives to maximize the individual’s abilities with respect to hearing and listening based on the type and degree of hearing loss, and the specific amplification system(s) recommended.
Palmer and colleagues (1999) suggested that hearing loss may accelerate cognitive decline, and sensory intervention could potentially reduce cognitive deficits, such as dementia. In a recent investigation on the relationship between effortful cognitive function (via working memory and verbal information processing in the presence of noise) and experimental hearing aid use, Lunner (2003) found a strong correlation between performance in demanding listening situations and cognitive function with and without hearing aids. Subjects with high working memory capacity were better able to identify and report specific processing effects of experimental hearing aids. Lunner suggested that careful attention should be paid to the cognitive status of the listeners, as it may have a significant influence on their ability to use hearing aids. Clearly, the stakes are high when audiologists select amplification and AR protocols.
Negative behaviors in elderly subjects with dementia and AD may be reduced with amplification. Allen and colleagues (2003) demonstrated global improvements in mental state in 40 percent of subjects, and improvements in speech intelligibility in 33 percent of subjects after 24 weeks of monaural hearing aid fittings. The authors reported that patients with dementia tolerated routine audiometry; almost half of their patients with mild-moderate SNHL and dementia improved when hearing was restored, and some benefited from simple cerumen removal.
Palmer et al (1999) reported a small cohort project encompassing hearing-impaired AD subjects. They found between one and four problem behaviors were significantly reduced for all subjects undergoing hearing aid treatment. Additionally, caregivers and spouses reported that the subjects’ hearing handicap had been significantly reduced, and all subjects were wearing their hearing aids between four and 13 hours daily.
Implications for Aural Rehabilitation
As noted earlier, when we test and treat hearing loss (BU) in isolation from listening (TD), we separate the physical sound source from the meaning of the sound. The presence of sound without meaning (i.e., noise) is often a source of frustration and may cause patients to reject amplification. Patients live in a world where cognition, attention, memory, and hearing each play critical roles in listening. As such, when we only measure hearing without regard to listening, we fail to provide a comprehensive patient-based communication profile, and we may inadvertently “miss the forest for the trees.”
Research indicates that measures of hearing loss (BU) in isolation from measures of listening (TD) ability are less than ideal. As we assemble and implement AR programs, we should remain mindful of and address (within our scope of practice and expertise) the interaction and codependence of these two systems (BU and TD).
Rawool (2007) offers six specific considerations for aural rehabilitation of older adults:
- Amplification can compensate for hearing loss (sensory deprivation) and can deliver auditory information so as to reduce cognitive demand.
- Auditory training can improve neural timing in the auditory brainstem.
- Processing speed can improve through practice.
- Plasticity can be used to improve cognition.
- Context training can be useful.
- Healthy habits such as exercise can assist in rehabilitation.
Measures of cognition can be incorporated into routine diagnostic audiologic assessments and will be the topic of an upcoming article. An individual’s ability to listen in noise places significant demands on cognitive reserve. Consequently, assessing performance in noise appears to be a viable and valuable diagnostic indicator and may serve as an outcome measure following treatment.
Conclusion: Cognition-Friendly Amplification
It can be argued that the goal of AR is to provide the best possible sound quality delivered at the best possible signal-to-noise ratio to ease the cognitive burden. For individuals with hearing loss and cognitive deficits, high quality “cognition-friendly” amplification choices would be ideal. Fortunately, audiologists have sophisticated features and hearing aid programmability at their disposal to offer more than simply making sounds louder. Of course, making sounds louder is the core goal of amplification, and indeed, audibility addresses a significant portion of BU deficits.
However, for many patients, reducing the cognitive burden while listening requires more than basic amplification. It seems reasonable that amplification features which facilitate ease-of-listening should likewise reduce cognitive burden. For example, binaural amplification generally facilitates appreciation of interaural differences for improved auditory processing (Bronkhorst and Plomp, 1988; Duquesnoy, 1983) and allows appreciation of spatial cues via interaural timing and loudness perception. Features such as directionality to improve the signal-to-noise ratio (SNR) have proven beneficial in multiple situations. FM systems substantially improve the signal-to-noise ratio while reducing or eliminating background noise and reverberation (Beck, 2008b, and see Beck, Doty-Tomasula, and Sexton, 2006). It is well known that effective noise reduction programs in modern hearing aids provide an easier-to-listen-to signal in annoying and noisy situations. Further, modern amplification provides improved representation of spatial separation between the target and competing sounds while allowing interaural timing and interaural latency parameters to be better preserved and delivered, ultimately facilitating improved spatial perceptions. Wireless connectivity allows multiple sound sources to be delivered to both ears, improving sound quality as well as SNRs. Extended bandwidths present high fidelity, information-packed speech, music, and sound cues (Beck and Olsen, 2008). Open fittings help preserve many natural acoustic cues (Beck, 2008b) and likely enhance spatial fidelity. Feedback management through phase cancellation facilitates more high frequency gain, i.e., speech intelligibility cues, and helps preserve (prescribed) high frequency speech sounds.
Therefore, “cognition friendly” amplification is a concept that recognizes we have extraordinary hearing aids and fitting tools at our disposal. As outcome-based and translational research supports and avails new knowledge and tools to us, we should consider the potential cognitive benefits of advanced technology and AR alternatives. Appropriately employing advanced and proven hearing aid features and AR strategies will reduce the cognitive burden that accompanies compromised bottom-up processing.
Evidence is mounting that our historic BU view of audition may need further consideration. Although we advocate consideration of “human connectivity” (Beck and Harvey, 2009) and “cognition friendly” hearing aid and alternative amplification fittings as concepts, it is indeed too early to say that hearing aid XYZ offers more “cognitive relief” than hearing aid ABC.
Nonetheless, people with hearing loss must dig deeply into their cognitive reserve and abilities to make sense of a world delivered to them via compromised auditory input. Once the auditory system experiences reduction in BU information, TD’s reallocation of resources (i.e., neuroplasticity) is cause enough to consider “cognition friendly” amplification, and to include AR in the equation to assess and treat the “whole patient.” Lastly, it appears audition matters more as cognitive ability decreases, and conversely, cognition matters more as auditory ability decreases.
Special thanks to Thomas Lunner, PhD (Oticon A/S Research Centre Eriksholm, Denmark, and Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Sweden), for his thoughtful review and comments on this article.
Douglas L. Beck, AuD, is the director of professional relations for Oticon Inc, Somerset, NJ. Jackie L. Clark, PhD, is a clinical assistant professor at the School of Behavioral & Brain Sciences, UT Dallas/Callier Center Speech & Hearing Therapy, and at U Witwatersrand, Johannesburg.
References and Recommendations
Akeroyd MA (2008). “Are individual differences in speech reception related to individual differences in cognitive ability? A survey of 20 experimental studies with normal and hearing-impaired adults,” International Journal of Audiology; 47 (Suppl. 2): S125–S143.
Allen NH, Burns A, Newton V, Hickson F, Ramsden R, Rogers, et al. “The effect of improving hearing in dementia,” Age and Ageing, (2003) 32 (2), p 189–193.
Arlinger S. “Negative Consequences of Uncorrected Hearing Loss – A Review,” International Journal of Audiology, July 2003, 42 Supplement 2: 2S17–20.
Beck DL. (2008a). Auditory Reflections: Mirror Neurons, www.audiology.org/news/pages/20081215a.aspx.
Beck DL (2008b): Interview with Michael Valente, PhD, www.audiology.org/news/pages/20081218a.aspx.
Beck DL and Olsen J. (2008): Extended Bandwidth in Hearing Aids. Hearing Review, October 2008. www.hearingreview.com/issues/articles/2008-10_02.asp.
Beck DL and Harvey M. “Traditional and Nontraditional Communication and Connectivity,” Hearing Review, January 2009. www.hearingreview.com/issues/articles/2009-01_04.asp.
Beck DL, Doty-Tomasula M, and Sexton J. (2006) FM Made Friendly. www.oticonusa.com/eprise/main/sitegen/oticon/content/professionals/library/new_from_oticon_/fm_made_friendly.html.
Billings CT, Tremblay KA. “Hearing Aids and the Brain: What’s the Connection?” ASHA Leader, May 29, 2007, p 5.
Bronkhorst AW, Plomp R. “Binaural speech intelligibility in noise for hearing-impaired listeners,” J Acoust Soc Am, (1988) 86, p 1374–1383.
Duquesnoy AJ. “The intelligibility of sentences in quiet and in noise in aged listeners,” J Acoust Soc Am, (1983) 74, p 1136–1144.
Eggermont JJ. “The Role of Sound in Adult and Developmental Auditory Cortical Plasticity,” Ear & Hearing, December 2008. Vol 29, No 6.
Flexer C. Rationale for the Use of Sound Field Systems in Classrooms: The Basis of Teacher In-Services. In: Sound Field Amplification—Application to Speech Perception and Classroom Acoustics, 2nd Edition. Editors; Crandell, CC., Smaldino, JJ. & Flexer, C. Thomson Delmar Learning. (2005) ISBN 1-4018-5145-2.
Gatehouse S, Naylor G, Elberling C. “Benefits from hearing aids in relation to the interaction between the user and the environment,” International Journal of Audiology, (2003) 42: (Suppl 1), S77–S85.
Gatehouse S, Naylor G, & Elberling C. “Linear and nonlinear hearing aid fittings – 2. Patterns of candidature,” International Journal of Audiology, (2006) 45, p 153–171.
Gatehouse S, Naylor G, Elberling C. (2006a) Linear and nonlinear hearing aid fittings—1. Patterns of benefit. International Journal of Audiology, 45, 130–152.
Gold M, Lightfoot LA, and Hnath-Chisolm T. “Hearing loss in memory disorders. A specially vulnerable population,” Arch Neurol. (1996) Sep; 53(9): 922–8.
Harvey M. In Odyssey of Hearing Loss – Tales of Triumph. DawnSignPress. San Diego, California, (1998), ISBN 1-58121-006-X.
Lee DS, Lee JS, Oh SH, Kim SK, Kim JW, Chung JK, et al. “Cross-modality plasticity and cochlear implants,” Nature, (2001) 409: p 149–150.
Levitin DJ, This is Your Brain on Music—The Science of a Human Obsession. Plume Books, ISBN 978-0-452-28852-2, (2006), p 233.
Lunner T. “Cognitive Function in Relation to Hearing Aid Use,” International Journal of Audiology, 2003; 42:S49-S58.
Lunner T, Sundewall-Thorén E. “Interactions between cognition, compression, and listening conditions: effects on speech-in-noise performance in a two-channel hearing aid,” Journal of the American Academy of Audiology, (2007), 18, p 539–552.
NIH (2007): Alzheimer’s Disease Fact Sheet, www.nia.nih.gov/alzheimers/publications/adfact.htm
Palmer CV, Adams SW, Bourgeois M, Durrant J, Rossi M. “Reduction in Caregiver-Identified Problem Behaviours in Patients with Alzheimer Disease Post Hearing Aid Fitting,” Journal of Speech, Language and Hearing Research, April, 1999, Vol 42, p 312–328.
Pichora-Fuller MK. “Audition and cognition: What audiologists need to know about listening,” In C. Palmer & R. Seewald (eds.) Hearing Care for Adults. Stäfa, Switzerland: Phonak, (2007). p 71–85.
Pichora-Fuller MK & Singh G (2006). “Effects of Age on Auditory and Cognitive Processing: Implications for Hearing Aid Fitting and Audiologic Rehabilitation.” Trends Amplif; 10; p 29–59.
Pichora-Fuller MK. (2008a) quoted in Convention News, “Celebrating 20 Years, AAA is Hear to Stay” from: Advance for Speech-Language Pathologists and Audiologists. By Jason Mosheim, speech-languagepathology-audiology.advance.web.com/editorial.
Pichora-Fuller MK. (2008b) “Audition and Cognition: Where the Lab Meets Clinic,” The ASHA Leader, 13(10), 14–17, August 12, 2008.
Peters CA, Potter JF, Scholer SG. “Hearing impairment as a predictor of cognitive decline in dementia,” J Am Geriatric Society, (1988), 37 (5): p 489–90.
Rawool VW. “The Aging Auditory System, part 3: Slower Processing, Cognition and Speech Recognition,” The Hearing Review. September, 2007.
Rönnberg J, Rudner M, Foo C, and Lunner T. “Cognition Counts: a working memory system for ease of language understanding (ELU),” International Journal of Audiology, 47 (Suppl. 2): S99–S105. November 2008.
Schum DJ, Beck DL. “Negative Synergy—Hearing Loss and Aging,” Republished in the Oticon Library, (2008).
Sharma A, Dorman MF, Spahr AJ. “A sensitive period for the development of the central auditory system in children with cochlear implants: Implications for age of implantation,” Ear and Hearing, (2002), 23 (6): p 532–539.
Yoshinaga-Itano C, Sedey A, Coulter DK, Mehl AL. “Language of early and later identified children with hearing loss,” Pediatrics, (1998), 102: 1161-1171.
Republished from Audiology Today, March/April 2009, pages 48-59.