Corpus search results (sorted by the word following the keyword)
Click a serial number to view the corresponding PubMed page.
1 ion cues may be a strong predictor for aided speech recognition.
2 to show the application of these devices in speech recognition.
3 hesized to have additive negative effects on speech recognition.
4 eech sounds and is behaviorally relevant for speech recognition.
5 neighborhoods may shift, adversely affecting speech recognition.
6 cular frequencies, or formants, essential in speech recognition.
7 complex forms of sensory processing, such as speech recognition.
8 ory modalities--face recognition and speaker/speech recognition.
9 ren's multitasking abilities during degraded speech recognition.
10 ask costs while multitasking during degraded speech recognition.
11 s demonstrate that individual differences in speech recognition abilities are reflected in the underl
12 istic modeling methods akin to those used in speech recognition and computational linguistics were us
13 l source separation can be applied to robust speech recognition and hearing aids and may be extended
17 ng human and ASR solutions to the problem of speech recognition, and suggest the potential for furthe
19 ocial cognition and communication (affective speech recognition (ASR), reading the mind in the eyes,
20 ata imply that previously reported emotional speech recognition deficits in basal ganglia patients ma
21 longer-term improvements in the accuracy of speech recognition following perceptual learning resulte
23 eficial in many disciplines including visual speech recognition, for surgical outcome assessment in p
27 ners in everyday conversations, meaning that speech recognition in conventional tests might overestim
32 ognition in quiet, FM significantly enhances speech recognition in noise, as well as speaker and tone
35 mber of spectral bands may be sufficient for speech recognition in quiet, FM significantly enhances s
36 for experimental and clinical assessment of speech recognition, in which good performance can arise
39 onsiderable overlap in the audiograms and in speech recognition performance in the unimplanted ear be
41 This research aims to bridge the gap between speech recognition processes in humans and machines, usi
47 e 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the v
50 and implant experience to undergo adult-type speech recognition tests, surgical series show that thes
51 o and audio processing, computer vision, and speech recognition, their applications to three-dimensio
53 omplementary contributions to support robust speech recognition under realistic listening situations.
54 ramatically improved the state-of-the-art in speech recognition, visual object recognition, object de
55 ols, behavioral improvement in auditory-only speech recognition was based on an area typically involv
56 evealed that FM is particularly critical for speech recognition with a competing voice and is indepen