A Perspective Study on Speech Recognition
Emotions play an extremely important role in human mental life; they are a medium for expressing one's perspective or mental state to others. Speech Emotion Recognition (SER) can be defined as the extraction of the emotional state of a speaker from his or her speech signal. There are a few universal emotions, including Neutral, Anger, Happiness, and Sadness, that any intelligent system with finite computational resources can be trained to identify or synthesize as required. In this work, spectral and prosodic features are used for speech emotion recognition because both carry emotional information. Mel-Frequency Cepstral Coefficients (MFCC) are among the spectral features; fundamental frequency, loudness, pitch, speech intensity, and glottal parameters are the prosodic features used to model different emotions. The potential features are extracted from each utterance to compute the mapping between emotions and speech patterns. Pitch can be detected from the selected features, and gender can then be classified from pitch. The audio signal is processed using a feature extraction technique. In this article, feature extraction for speech recognition and voice classification is analyzed, centered on a comparative analysis of different types of Mel-Frequency Cepstral Coefficient (MFCC) feature extraction methods. The MFCC technique is used for noise reduction in voice signals as well as for voice classification and speaker identification. The statistical results of the different MFCC techniques are discussed, and it is concluded that the delta-delta MFCC feature extraction technique outperforms the other feature extraction techniques.
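Since the abstract's conclusion centers on delta-delta (acceleration) MFCC features, a minimal sketch of how delta and delta-delta coefficients are derived from a base MFCC matrix may help. This assumes the standard regression formula with window half-width N (the convention used by common speech toolkits); the toy `mfcc` matrix below is illustrative, not real speech data:

```python
import numpy as np

def delta(feat, N=2):
    """Compute delta (first-order regression) features from a
    (num_frames, num_coeffs) feature matrix.

    d_t = sum_{n=1..N} n * (c_{t+n} - c_{t-n}) / (2 * sum_{n=1..N} n^2),
    with edge frames replicated at the boundaries.
    """
    num_frames = feat.shape[0]
    denom = 2 * sum(n * n for n in range(1, N + 1))
    # Replicate first/last frames so the window is defined at the edges.
    padded = np.pad(feat, ((N, N), (0, 0)), mode="edge")
    out = np.zeros_like(feat, dtype=float)
    for t in range(num_frames):
        out[t] = sum(n * (padded[t + N + n] - padded[t + N - n])
                     for n in range(1, N + 1)) / denom
    return out

# Toy "MFCC" matrix: 5 frames, 3 coefficients, increasing by 3 per frame.
mfcc = np.arange(15, dtype=float).reshape(5, 3)
d = delta(mfcc)   # delta MFCC
dd = delta(d)     # delta-delta (acceleration) MFCC
print(d.shape, dd.shape)  # (5, 3) (5, 3)
```

Applying the same operator twice yields the delta-delta coefficients the abstract favors; in practice these are concatenated with the static MFCCs to form the final feature vector per frame.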
Related Results
Speech, communication, and neuroimaging in Parkinson's disease: characterisation and intervention outcomes
Most individuals with Parkinson's disease (PD) experience changes in speech, voice or communication. Speech changes often manifest as hypokinetic dysarthria, a m...
The Neural Mechanisms of Private Speech in Second Language Learners’ Oral Production: An fNIRS Study
Background: According to Vygotsky’s sociocultural theory, private speech functions both as a tool for thought regulation and as a transitional form between outer and inner speech. ...
Identifying Links Between Latent Memory and Speech Recognition Factors
Objectives:
The link between memory ability and speech recognition accuracy is often examined by correlating summary measures of performance across various tasks, but i...
Robust speech recognition based on deep learning for sports game review
Abstract
To verify the feasibility of robust speech recognition based on deep learning in sports game review. In this paper, a robust speech recognition model is bui...
Formation of speech culture of primary schoolchildren by means of speech metaphoricity
Modern education and upbringing are characterized by qualitatively new requirements imposed by educational standards, not only for the content of the educational process, but also f...
Research on supplier center speech recognition technology based on artificial intelligence
In response to the lagging speech recognition capabilities in supplier services, this study integrates speech recognition, speech synthesis, and semantic understanding technologies...

