Spoken Word Recognition
The core question that spoken word recognition research attempts to address is: How does a phonological word-form activate the corresponding lexical representation that is stored in the mental lexicon? While speech perception research (see the separate Oxford Bibliographies in Linguistics article “Speech Perception”) focuses on the mapping of the highly variable acoustic signal onto more abstract phonological units, spoken word recognition focuses on the mapping of phonological information onto lexical and semantic representations—the repository of linguistic knowledge stored in a “mental dictionary” or the mental lexicon (see the separate Oxford Bibliographies in Linguistics article “Mental Lexicon”).
Earlier theoretical work considered the following three stages as fundamental to spoken word recognition. First, there is activation of multiple word forms that share some phonological similarity to the auditory input. Second, there is a selection stage whereby activated word forms compete with each other for recognition. Finally, when a single lexical candidate remains, its meaning is accessed and is then integrated with higher levels of processing (e.g., with sentential or discourse information).
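To make these three stages concrete, the sketch below processes an input word one segment at a time, activates all lexical candidates still consistent with the input heard so far, lets them compete via normalization, and selects a winner once one candidate dominates. It is a deliberately simplified toy, not an implementation of any published model such as TRACE or Shortlist; the lexicon, threshold, and function names are all invented for illustration.

# Toy sketch of activation, competition, and selection (illustrative only).
LEXICON = ["cat", "cap", "captain", "candle", "candy", "dog"]

def consistent(word, heard):
    # A candidate stays active only while it does not conflict with
    # the input received so far.
    return word.startswith(heard) or heard.startswith(word)

def recognize(segments, threshold=0.75):
    heard = ""
    for seg in segments:
        heard += seg
        # Activation: consistent candidates score by their overlap with
        # the input; conflicting candidates drop out of the set.
        scores = {w: min(len(w), len(heard))
                  for w in LEXICON if consistent(w, heard)}
        if not scores:
            return None, heard
        # Competition: normalization turns raw activations into relative
        # evidence, so candidates suppress one another.
        total = sum(scores.values())
        probs = {w: s / total for w, s in scores.items()}
        # Selection: recognize once a single candidate dominates.
        best = max(probs, key=probs.get)
        if probs[best] >= threshold:
            return best, heard
    return None, heard

print(recognize(list("candle")))  # -> ('candle', 'candl')

Note that the toy recognizes “candle” at “candl,” the point at which its last competitor (“candy”) becomes inconsistent with the input, before the word’s offset. This echoes the incremental nature of recognition discussed below.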
Although these stages of spoken word recognition are presented as being part of a serial process, it is important to note that current theoretical and empirical work in the field emphasizes the highly parallel, incremental, and continuous nature of spoken word recognition. Theories nevertheless continue to differ greatly in their description and conceptualization of these “stages,” and in their computational implementation of competition and lexical selection mechanisms.
The temporal, fleeting nature of the acoustic input creates unique theoretical and empirical challenges, for instance, the challenge of segmenting words from continuous speech and of recognizing words embedded within longer words. Partly as a result, the field has traditionally progressed at a more gradual pace than research in visual word recognition (see the separate Oxford Bibliographies in Linguistics article “Visual Word Recognition”).
Nevertheless, over its nearly sixty-year history, spoken word recognition research has led to the discovery of a number of lexical-semantic and contextual factors that influence the speed and accuracy of recognition.
Lexical-semantic factors refer to the lexical and semantic properties of individual words, for instance, a word’s frequency of occurrence in the language or its degree of phonological similarity to other words in the language.
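To show how these two factors might jointly bear on recognition, the toy computation below scores how easily a word could be identified given its frequency and its phonological neighborhood. It is a simplified, frequency-weighted choice rule loosely in the spirit of Luce and Pisoni’s Neighborhood Activation Model, not that model’s actual formula; the lexicon, frequencies, and function names are invented for illustration.

# Toy frequency-weighted identification score (illustrative only).
def one_off(a, b):
    # Neighbors here: same length, exactly one segment apart. (Standard
    # neighborhood definitions also allow additions and deletions,
    # which this toy omits.)
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def identification_ease(target, freqs):
    neighbors = [w for w in freqs if w != target and one_off(w, target)]
    competition = sum(freqs[w] for w in neighbors)
    # High-frequency words in sparse neighborhoods score near 1;
    # low-frequency words in dense neighborhoods score near 0.
    return freqs[target] / (freqs[target] + competition)

freqs = {"cat": 900, "cap": 300, "cut": 400, "cot": 50, "can": 800, "dog": 700}
print(identification_ease("cat", freqs))  # ~0.37: frequent, but a dense neighborhood
print(identification_ease("dog", freqs))  # 1.0: no neighbors in this toy lexicon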
Contextual factors refer to how characteristics of the talker and listener, as well as environmental features or noise, can create suboptimal conditions for spoken word recognition.
In addition, the robust top-down influences of lexical knowledge on sublexical representations highlight how the integration of top-down information and bottom-up perceptual input forms a crucial feature of models of spoken word recognition.
These empirical findings provide important constraints on the development of models and theories that attempt to explain the cognitive mechanisms that support the retrieval of spoken words from the lexicon.