Corpus search results (sorted by the word 1 position after the keyword)
Click a serial number to open the corresponding PubMed page
1 for bench-marking such devices is automatic speech recognition.
2 r unmask speech, thus hindering or improving speech recognition.
3 kthroughs in natural language processing and speech recognition.
4 e-warping algorithm, originally designed for speech recognition.
5 s of the cortical processes supporting human speech recognition.
6 ided speech recognition and postimplantation speech recognition.
7 tures like georeferencing, image tagging and speech recognition.
8 sts an involvement of the sensory thalami in speech recognition.
9 ing input to the auditory cortex facilitates speech recognition.
10 ren's multitasking abilities during degraded speech recognition.
11 ask costs while multitasking during degraded speech recognition.
12 specific tasks, such as image processing and speech recognition.
13 ion cues may be a strong predictor for aided speech recognition.
14 to show the application of these devices in speech recognition.
15 hesized to have additive negative effects on speech recognition.
16 eech sounds and is behaviorally relevant for speech recognition.
17 neighborhoods may shift, adversely affecting speech recognition.
18 cular frequencies, or formants, essential in speech recognition.
19 complex forms of sensory processing, such as speech recognition.
20 ory modalities--face recognition and speaker/speech recognition.
21 l-world tasks, such as vision, language, and speech recognition.
22 s demonstrate that individual differences in speech recognition abilities are reflected in the underl
24 bed spectrotemporal statistics, we show that speech recognition accuracy at a fixed noise level varie
25 erberation can cause severe difficulties for speech recognition algorithms and hearing-impaired peopl
26 istic modeling methods akin to those used in speech recognition and computational linguistics were us
27 ng Lips under face masks, enabling effective speech recognition and fostering conversational accessib
28 l source separation can be applied to robust speech recognition and hearing aids and may be extended
30 ata involved speaker segmentation, automatic speech recognition and machine learning classification.
31 g fully automated methods based on automatic speech recognition and natural language processing algor
33 dynamic behaviors (motifs), as it is done in speech recognition and other data mining applications.
34 hat report comparisons of preoperative aided speech recognition and postimplantation speech recogniti
36 ural networks (RNN) that are widely used for speech-recognition and natural language processing have
37 l to significantly improve music perception, speech recognition, and speech prosody perception in CI
39 ng human and ASR solutions to the problem of speech recognition, and suggest the potential for furthe
40 pectral resolution, temporal resolution, and speech recognition are well defined in adults with cochl
41 sks, like image classification and automatic speech recognition, are now the best predictors of neura
42 omputational tasks, from image processing to speech recognition, artificial intelligence and deep lea
46 By leveraging text-to-speech and automatic speech recognition (ASR) technologies, the cost, time, a
47 ocial cognition and communication (affective speech recognition (ASR), reading the mind in the eyes,
49 mple, nonbiological motion as well as visual speech recognition compared with TMS over the vertex, an
50 ata imply that previously reported emotional speech recognition deficits in basal ganglia patients ma
56 est that tinnitus negatively affected masked speech recognition even in individuals with no measurabl
57 f vocoder CI simulations is assessed through speech recognition experiments with normally-hearing sub
58 longer-term improvements in the accuracy of speech recognition following perceptual learning resulte
59 (n = 7) for actual continuous and fluent lip speech recognition for 93 English sentences, even observ
61 eficial in many disciplines including visual speech recognition, for surgical outcome assessment in p
66 ners in everyday conversations, meaning that speech recognition in conventional tests might overestim
69 o-haptic" stimulation substantially improved speech recognition in multi-talker noise when the speech
72 gnition of time-compressed speech and poorer speech recognition in noise for both younger and older a
74 come Inventory for Hearing Aids (IOI-HA) and speech recognition in noise measured using an abbreviate
78 ognition in quiet, FM significantly enhances speech recognition in noise, as well as speaker and tone
83 o the auditory cortex.SIGNIFICANCE STATEMENT Speech recognition in noisy environments is a challengin
86 esolution were significantly correlated with speech recognition in quiet or noise for children with C
87 mber of spectral bands may be sufficient for speech recognition in quiet, FM significantly enhances s
88 h-frequency pure-tone average (4-12 kHz) and speech-recognition in noise performance measured with WI
89 for experimental and clinical assessment of speech recognition, in which good performance can arise
90 ause of resident-to-attending discrepancies, speech recognition inaccuracies, and large workload.
91 that (1) this task-dependent modulation for speech recognition increases in parallel with the sensor
92 factor suggested to correlate with CI-aided speech recognition is frequency-to-place mismatch, or th
93 er, results suggest that noise adaptation in speech recognition is probably mediated by neural dynami
97 usoidal amplitude modulation detection), and speech recognition (measured via monosyllabic word recog
98 etween 12-month postoperative improvement in speech recognition measures and screening positive or no
99 roach, we utilized an advanced deep learning speech recognition model to investigate the intelligibil
100 Our findings demonstrate the potential of speech recognition models in facilitating auditory resea
101 arious contexts, such as computer vision and speech recognition, multiview learning has not yet been
102 ound in numerous fields, including image and speech recognition, natural language processing, and aut
103 strategy employed in, for example, image or speech recognition or health data evaluations, among oth
104 associated with the degree of improvement of speech recognition or patient-reported outcome measures
105 r understanding of individual differences in speech recognition outcomes and contributes to more comp
107 atch was negatively correlated with CI-aided speech recognition outcomes, but the association was onl
109 amount of task-dependent modulation and the speech recognition performance across participants withi
110 onsiderable overlap in the audiograms and in speech recognition performance in the unimplanted ear be
114 al study, with respect to preoperative aided speech recognition, postoperative cochlear implant outco
115 This research aims to bridge the gap between speech recognition processes in humans and machines, usi
116 These data, combined with more rigorous speech recognition results in older children, merit a gr
117 noise ratio) scores, and association of each speech recognition score change with aided preoperative
119 CIQOL-35 domains had greater improvement in speech recognition scores than patients who did not, but
120 ns between age at implantation and change in speech recognition scores were -0.12 (95% CI, -0.23 to -
124 al synchrony were the strongest predictor of speech recognition, such that poorer synchrony predicted
125 Associations between neural synchrony and speech recognition suggest that individual and age-relat
127 uce a variant model of the WHISPER automatic speech recognition system that flags intonation unit bou
130 e 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the v
133 niculate body, MGB) response is modulated by speech recognition tasks and the amount of this task-dep
134 : there are higher responses in left MGB for speech recognition tasks that require tracking of fast-v
135 for ameliorating hearing loss and improving speech recognition technology in the presence of backgro
137 alian prospective cohort study used advanced speech recognition technology to capture young children'
140 statistically significant improvement in all speech recognition tests postoperatively beyond measurem
142 and implant experience to undergo adult-type speech recognition tests, surgical series show that thes
143 o and audio processing, computer vision, and speech recognition, their applications to three-dimensio
150 -2.0 self-supervised framework for automatic speech recognition to continuous seismic signals emanati
152 omplementary contributions to support robust speech recognition under realistic listening situations.
153 ramatically improved the state-of-the-art in speech recognition, visual object recognition, object de
155 ols, behavioral improvement in auditory-only speech recognition was based on an area typically involv
156 evealed that FM is particularly critical for speech recognition with a competing voice and is indepen