Corpus search results (sorted by the word one position after the keyword)

Clicking a serial number displays the corresponding PubMed page.
1 ion cues may be a strong predictor for aided speech recognition.
2  to show the application of these devices in speech recognition.
3 hesized to have additive negative effects on speech recognition.
4 eech sounds and is behaviorally relevant for speech recognition.
5 neighborhoods may shift, adversely affecting speech recognition.
6 cular frequencies, or formants, essential in speech recognition.
7 complex forms of sensory processing, such as speech recognition.
8 ory modalities--face recognition and speaker/speech recognition.
9 ren's multitasking abilities during degraded speech recognition.
10 ask costs while multitasking during degraded speech recognition.
11 s demonstrate that individual differences in speech recognition abilities are reflected in the underl
12 istic modeling methods akin to those used in speech recognition and computational linguistics were us
13 l source separation can be applied to robust speech recognition and hearing aids and may be extended
14                                              Speech recognition and language learning can be affected
15  the TFS in natural speech sentences on both speech recognition and neural coding.
16 ic technologies such as machine translation, speech recognition, and speech synthesis.
17 ng human and ASR solutions to the problem of speech recognition, and suggest the potential for furthe
18                                    Automatic Speech Recognition (ASR) systems with near-human levels
19 ocial cognition and communication (affective speech recognition (ASR), reading the mind in the eyes,
20 ata imply that previously reported emotional speech recognition deficits in basal ganglia patients ma
21  longer-term improvements in the accuracy of speech recognition following perceptual learning resulte
22 e for normal hearing listeners and automatic speech recognition for machines.
23 eficial in many disciplines including visual speech recognition, for surgical outcome assessment in p
24 ication, but their relative contributions to speech recognition have not been fully explored.
25  reliable neural representation suitable for speech recognition, however, remains elusive.
26                                              Speech recognition in a single-talker masker differed on
27 ners in everyday conversations, meaning that speech recognition in conventional tests might overestim
28        The masking release (MR; i.e., better speech recognition in fluctuating compared with continuo
29 portant in acoustic communication, including speech recognition in humans.
30                                              Speech recognition in noise can be challenging for older
31                                              Speech recognition in noise was compared for cochlear im
32 ognition in quiet, FM significantly enhances speech recognition in noise, as well as speaker and tone
33 cal representation limits the performance of speech recognition in noise.
34 larly their effectiveness in improving human speech recognition in noise.
35 mber of spectral bands may be sufficient for speech recognition in quiet, FM significantly enhances s
36  for experimental and clinical assessment of speech recognition, in which good performance can arise
37                                              Speech recognition is remarkably robust to the listening
38 ng normal aging in humans, preserving robust speech recognition late into life.
39 onsiderable overlap in the audiograms and in speech recognition performance in the unimplanted ear be
40                      In general, measures of speech recognition performance were well accounted for b
41 This research aims to bridge the gap between speech recognition processes in humans and machines, usi
42      These data, combined with more rigorous speech recognition results in older children, merit a gr
43 modulation is positively correlated with the speech recognition scores of individual subjects.
44                                              Speech recognition starts with representations of basic
45           Reports were then entered into the speech recognition system so that each report was associ
46 abled automatic report population within the speech recognition system.
47 e 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the v
48 re has direct translational implications for speech recognition technology.
49                                              Speech recognition telephone calls to parents in the int
50 and implant experience to undergo adult-type speech recognition tests, surgical series show that thes
51 o and audio processing, computer vision, and speech recognition, their applications to three-dimensio
52                     Originally developed for speech recognition, this method has been used in data mi
53 omplementary contributions to support robust speech recognition under realistic listening situations.
54 ramatically improved the state-of-the-art in speech recognition, visual object recognition, object de
55 ols, behavioral improvement in auditory-only speech recognition was based on an area typically involv
56 evealed that FM is particularly critical for speech recognition with a competing voice and is indepen
57               Each circuit markedly improved speech recognition, with greater improvement observed fo

Technical terms (or usages) not yet included in WebLSD can be submitted via the "新規対訳" (new translation pair) form.