
Corpus search results (sorted by the word 1 position to the right)

Click a serial number to display the corresponding PubMed page.
1  for bench-marking such devices is automatic speech recognition.
2 r unmask speech, thus hindering or improving speech recognition.
3 kthroughs in natural language processing and speech recognition.
4 e-warping algorithm, originally designed for speech recognition.
5 s of the cortical processes supporting human speech recognition.
6 ided speech recognition and postimplantation speech recognition.
7 tures like georeferencing, image tagging and speech recognition.
8 sts an involvement of the sensory thalami in speech recognition.
9 ing input to the auditory cortex facilitates speech recognition.
10 ren's multitasking abilities during degraded speech recognition.
11 ask costs while multitasking during degraded speech recognition.
12 specific tasks, such as image processing and speech recognition.
13 ion cues may be a strong predictor for aided speech recognition.
14  to show the application of these devices in speech recognition.
15 hesized to have additive negative effects on speech recognition.
16 eech sounds and is behaviorally relevant for speech recognition.
17 neighborhoods may shift, adversely affecting speech recognition.
18 cular frequencies, or formants, essential in speech recognition.
19 complex forms of sensory processing, such as speech recognition.
20 ory modalities--face recognition and speaker/speech recognition.
21 l-world tasks, such as vision, language, and speech recognition.
22 s demonstrate that individual differences in speech recognition abilities are reflected in the underl
23 task-dependent modulation is associated with speech recognition abilities.
24 bed spectrotemporal statistics, we show that speech recognition accuracy at a fixed noise level varie
25 erberation can cause severe difficulties for speech recognition algorithms and hearing-impaired peopl
26 istic modeling methods akin to those used in speech recognition and computational linguistics were us
27 ng Lips under face masks, enabling effective speech recognition and fostering conversational accessib
28 l source separation can be applied to robust speech recognition and hearing aids and may be extended
29                                              Speech recognition and language learning can be affected
30 ata involved speaker segmentation, automatic speech recognition and machine learning classification.
31 g fully automated methods based on automatic speech recognition and natural language processing algor
32  the TFS in natural speech sentences on both speech recognition and neural coding.
33 dynamic behaviors (motifs), as it is done in speech recognition and other data mining applications.
34 hat report comparisons of preoperative aided speech recognition and postimplantation speech recogniti
35 chieved spectacular performance in image and speech recognition and synthesis.
36 ural networks (RNN) that are widely used for speech-recognition and natural language processing have
37 l to significantly improve music perception, speech recognition, and speech prosody perception in CI
38 ic technologies such as machine translation, speech recognition, and speech synthesis.
39 ng human and ASR solutions to the problem of speech recognition, and suggest the potential for furthe
40 pectral resolution, temporal resolution, and speech recognition are well defined in adults with cochl
41 sks, like image classification and automatic speech recognition, are now the best predictors of neura
42 omputational tasks, from image processing to speech recognition, artificial intelligence and deep lea
43                                    Automatic Speech Recognition (ASR) systems with near-human levels
44                                    Automated speech recognition (ASR) systems, which use sophisticate
45 d speech technology tools, such as Automatic Speech Recognition (ASR) systems.
46   By leveraging text-to-speech and automatic speech recognition (ASR) technologies, the cost, time, a
47 ocial cognition and communication (affective speech recognition (ASR), reading the mind in the eyes,
48  exceptional performance of SSL in Automatic Speech Recognition (ASR).
49 mple, nonbiological motion as well as visual speech recognition compared with TMS over the vertex, an
50 ata imply that previously reported emotional speech recognition deficits in basal ganglia patients ma
51 d deficits in neural synchrony contribute to speech recognition deficits.
52 aks down in hearing-impaired individuals and speech recognition devices.
53 Ms show potential for automatic detection of speech recognition errors in radiology reports.
54 ative large language models (LLMs) to detect speech recognition errors in radiology reports.
55 MRI reports was assessed by radiologists for speech recognition errors.
56 est that tinnitus negatively affected masked speech recognition even in individuals with no measurabl
57 f vocoder CI simulations is assessed through speech recognition experiments with normally-hearing sub
58  longer-term improvements in the accuracy of speech recognition following perceptual learning resulte
59 (n = 7) for actual continuous and fluent lip speech recognition for 93 English sentences, even observ
60 e for normal hearing listeners and automatic speech recognition for machines.
61 eficial in many disciplines including visual speech recognition, for surgical outcome assessment in p
62                                        Using speech recognition, gaze points corresponding to each le
63 ication, but their relative contributions to speech recognition have not been fully explored.
64  reliable neural representation suitable for speech recognition, however, remains elusive.
65                                              Speech recognition in a single-talker masker differed on
66 ners in everyday conversations, meaning that speech recognition in conventional tests might overestim
67        The masking release (MR; i.e., better speech recognition in fluctuating compared with continuo
68 portant in acoustic communication, including speech recognition in humans.
69 o-haptic" stimulation substantially improved speech recognition in multi-talker noise when the speech
70 ABA levels, greater central gain, and poorer speech recognition in noise (SIN).
71                                              Speech recognition in noise can be challenging for older
72 gnition of time-compressed speech and poorer speech recognition in noise for both younger and older a
73                   In addition, participants' speech recognition in noise improved, with a lower score
74 come Inventory for Hearing Aids (IOI-HA) and speech recognition in noise measured using an abbreviate
75 formance on the adapted cognitive test and a speech recognition in noise task.
76                                              Speech recognition in noise was compared for cochlear im
77                               Improvement of speech recognition in noise was positively associated wi
78 ognition in quiet, FM significantly enhances speech recognition in noise, as well as speaker and tone
79 cal representation limits the performance of speech recognition in noise.
80 larly their effectiveness in improving human speech recognition in noise.
81 fect size r = 0.3 [95% CI, 0.0-0.5]) but not speech recognition in noise.
82 ce of "statistical" adaptation for improving speech recognition in noisy backgrounds.
83 o the auditory cortex.SIGNIFICANCE STATEMENT Speech recognition in noisy environments is a challengin
84 tures in a task-specific manner to deal with speech recognition in noisy environments.
85 ry assistive technologies aimed at enhancing speech recognition in noisy settings.
86 esolution were significantly correlated with speech recognition in quiet or noise for children with C
87 mber of spectral bands may be sufficient for speech recognition in quiet, FM significantly enhances s
88 h-frequency pure-tone average (4-12 kHz) and speech-recognition in noise performance measured with WI
89  for experimental and clinical assessment of speech recognition, in which good performance can arise
90 ause of resident-to-attending discrepancies, speech recognition inaccuracies, and large workload.
91  that (1) this task-dependent modulation for speech recognition increases in parallel with the sensor
92  factor suggested to correlate with CI-aided speech recognition is frequency-to-place mismatch, or th
93 er, results suggest that noise adaptation in speech recognition is probably mediated by neural dynami
94                                              Speech recognition is remarkably robust to the listening
95 n deals with such sensory uncertainty during speech recognition is to-date missing.
96 ng normal aging in humans, preserving robust speech recognition late into life.
97 usoidal amplitude modulation detection), and speech recognition (measured via monosyllabic word recog
98 etween 12-month postoperative improvement in speech recognition measures and screening positive or no
99 roach, we utilized an advanced deep learning speech recognition model to investigate the intelligibil
100    Our findings demonstrate the potential of speech recognition models in facilitating auditory resea
101 arious contexts, such as computer vision and speech recognition, multiview learning has not yet been
102 ound in numerous fields, including image and speech recognition, natural language processing, and aut
103  strategy employed in, for example, image or speech recognition or health data evaluations, among oth
104 associated with the degree of improvement of speech recognition or patient-reported outcome measures
105 r understanding of individual differences in speech recognition outcomes and contributes to more comp
106                                              Speech recognition outcomes with a cochlear implant (CI)
107 atch was negatively correlated with CI-aided speech recognition outcomes, but the association was onl
108 nificantly associated with improved CI-aided speech recognition outcomes.
109  amount of task-dependent modulation and the speech recognition performance across participants withi
110 onsiderable overlap in the audiograms and in speech recognition performance in the unimplanted ear be
111                      In general, measures of speech recognition performance were well accounted for b
112  of the neuromorphic hardware to the overall speech recognition performance.
113 s where NH and HI listeners both showed high speech recognition performance.
114 al study, with respect to preoperative aided speech recognition, postoperative cochlear implant outco
115 This research aims to bridge the gap between speech recognition processes in humans and machines, usi
116      These data, combined with more rigorous speech recognition results in older children, merit a gr
117 noise ratio) scores, and association of each speech recognition score change with aided preoperative
118 modulation is positively correlated with the speech recognition scores of individual subjects.
119  CIQOL-35 domains had greater improvement in speech recognition scores than patients who did not, but
120 ns between age at implantation and change in speech recognition scores were -0.12 (95% CI, -0.23 to -
121  processed by a machine learning model using speech recognition software.
122                                              Speech recognition starts with representations of basic
123 mations and the neuromorphic hardware to the speech recognition success rate.
124 al synchrony were the strongest predictor of speech recognition, such that poorer synchrony predicted
125    Associations between neural synchrony and speech recognition suggest that individual and age-relat
126           Reports were then entered into the speech recognition system so that each report was associ
127 uce a variant model of the WHISPER automatic speech recognition system that flags intonation unit bou
128 diobook comprehension with the deep-learning speech recognition system Whisper.
129 abled automatic report population within the speech recognition system.
130 e 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the v
131                                         In a speech recognition task with no overt motor component, m
132          Artificial neural networks excel in speech recognition tasks and offer promising computation
133 niculate body, MGB) response is modulated by speech recognition tasks and the amount of this task-dep
134 : there are higher responses in left MGB for speech recognition tasks that require tracking of fast-v
135  for ameliorating hearing loss and improving speech recognition technology in the presence of backgro
136 uce these performance differences and ensure speech recognition technology is inclusive.
137 alian prospective cohort study used advanced speech recognition technology to capture young children'
138 re has direct translational implications for speech recognition technology.
139                                              Speech recognition telephone calls to parents in the int
140 statistically significant improvement in all speech recognition tests postoperatively beyond measurem
141 month postoperative measures using 1 or more speech recognition tests were studied.
142 and implant experience to undergo adult-type speech recognition tests, surgical series show that thes
143 o and audio processing, computer vision, and speech recognition, their applications to three-dimensio
144                     Originally developed for speech recognition, this method has been used in data mi
145 ts (aged 55 or older underwent pure-tone and speech-recognition thresholding.
146                                              Speech recognition thresholds (SRTs) were measured with
147                        For competing speech, speech recognition thresholds were measured for differen
148                                              Speech recognition thresholds were measured for target s
149 th standardized pure-tone averages (PTA) and speech-recognition thresholds (SRT).
150 -2.0 self-supervised framework for automatic speech recognition to continuous seismic signals emanati
151                                        Human speech recognition transforms a continuous acoustic sign
152 omplementary contributions to support robust speech recognition under realistic listening situations.
153 ramatically improved the state-of-the-art in speech recognition, visual object recognition, object de
154                            Eye tracking with speech recognition was 92% accurate in labeling lesion l
155 ols, behavioral improvement in auditory-only speech recognition was based on an area typically involv
156 evealed that FM is particularly critical for speech recognition with a competing voice and is indepen
157               Each circuit markedly improved speech recognition, with greater improvement observed fo
