Acoustical Science and Technology 45(5) 293-297, September 2024
The medial olivocochlear reflex (MOCR) is reported to be modulated by the predictability of an upcoming sound occurrence. Here, the relationship between the MOCR and internal confidence in temporal anticipation, evaluated by reaction time (RT), was examined. The timing predictability of the MOCR elicitor was manipulated by adding jitter to the preceding sounds. MOCR strength and RT were unchanged in the small (10%) jitter condition, whereas MOCR strength decreased and RT increased significantly in the largest (40%) jitter condition compared with the no-jitter condition. This parallel pattern of MOCR strength and RT indicates that MOCR strength reflects confidence in temporal anticipation, and that the predictive control of the MOCR and response execution share a common neural mechanism.
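A minimal sketch of how such a timing-jitter manipulation could be generated; the base interval, number of preceding sounds, and the uniform jitter distribution are illustrative assumptions, not details taken from the abstract.

import numpy as np

def jittered_onsets(n_sounds=5, base_interval=0.5, jitter_frac=0.4, rng=None):
    """Onset times for a sound sequence whose inter-stimulus intervals
    are perturbed by uniform jitter of +/- jitter_frac * base_interval.

    jitter_frac=0.0 reproduces the fully predictable (no-jitter) condition;
    0.1 and 0.4 correspond to the small and largest jitter conditions.
    """
    rng = np.random.default_rng() if rng is None else rng
    jitter = rng.uniform(-jitter_frac, jitter_frac, size=n_sounds) * base_interval
    intervals = base_interval + jitter
    return np.cumsum(intervals)

# Onsets of the preceding sounds before the MOCR elicitor:
print(jittered_onsets(jitter_frac=0.1))  # 10% jitter condition
print(jittered_onsets(jitter_frac=0.4))  # 40% jitter condition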
The Journal of the Acoustical Society of America 156(1) 610-622, July 1, 2024
Fluid-filled fractures involving kinks and branches give rise to complex interactions between Krauklis waves (highly dispersive and attenuating pressure waves guided within the fracture) and the body waves in the surrounding medium. To study these interactions, we introduce an efficient 2D time-harmonic elastodynamic boundary element method. Instead of modeling the domain within a fracture as a finite-thickness fluid layer, this method employs zero-thickness, poroelastic linear-slip interfaces to model the low-frequency, local fluid-solid interaction. Using this method, we examine the scattering of Krauklis waves by a single kink along a straight fracture and the radiation of body waves generated by Krauklis waves within complex fracture systems.
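A sketch of the zero-thickness representation the abstract refers to, written here in the standard Schoenberg-type linear-slip form; the compliance expression for a thin fluid layer is a textbook result and an assumption on my part, and the poroelastic version used in the paper additionally accounts for fluid behavior along the fracture, which is not captured below.

% Linear-slip interface: tractions are continuous across the
% zero-thickness interface, while displacements may jump in
% proportion to the traction.
\[
  [\![ \boldsymbol{\tau} ]\!] = \mathbf{0}, \qquad
  [\![ \mathbf{u} ]\!] = \mathbf{Z}\,\boldsymbol{\tau} .
\]
% For a thin layer of inviscid fluid with thickness h and bulk
% modulus K_f, the normal compliance reproducing the layer's
% low-frequency response is
\[
  Z_N = \frac{h}{K_f},
\]
% while the shear compliance is unbounded, since an inviscid
% fluid cannot sustain shear traction.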
European Signal Processing Conference 1546-1550, 2024
Brain-computer interfaces based on speech imagery have attracted attention in recent years as more flexible tools for machine control and communication. Classifiers of imagined speech are often trained for each individual because of individual differences in brain activity. However, the amount of brain activity data that can be measured from a single person is often limited, making it difficult to train a model with high classification accuracy. In this study, to improve the performance of the per-individual classifiers, we trained variational autoencoders (VAEs) on magnetoencephalographic (MEG) data recorded from seven participants during speech imagery. The trained VAE encoders were transferred to EEGNet, which classified speech-imagery MEG data from another participant. We also trained conditional VAEs to augment the training data for the classifiers. The results showed that transfer learning improved classifier performance for some participants, and that data augmentation improved it for most participants. These results indicate that VAE feature representations learned from the MEG data of multiple individuals can improve the classification accuracy of imagined speech for a new individual, even when only a limited amount of MEG data is available from that individual.
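A minimal PyTorch sketch of the transfer scheme described above. The layer sizes, latent dimension, input dimensionality, and the simple linear head are illustrative assumptions; in the paper the transferred encoder feeds EEGNet, which is not reproduced here.

import torch
import torch.nn as nn

LATENT_DIM = 32  # assumed latent size, not taken from the paper

class Encoder(nn.Module):
    """VAE encoder pretrained on speech-imagery MEG from multiple participants."""
    def __init__(self, n_features, latent_dim=LATENT_DIM):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.mu = nn.Linear(128, latent_dim)      # posterior mean
        self.logvar = nn.Linear(128, latent_dim)  # posterior log-variance

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.logvar(h)

class TransferredClassifier(nn.Module):
    """Classifier for a new participant: the pretrained encoder supplies the
    feature representation; only a small head is trained from scratch."""
    def __init__(self, encoder, n_classes, freeze_encoder=True):
        super().__init__()
        self.encoder = encoder
        if freeze_encoder:
            for p in self.encoder.parameters():
                p.requires_grad = False
        self.head = nn.Linear(LATENT_DIM, n_classes)

    def forward(self, x):
        mu, _ = self.encoder(x)  # use the posterior mean as features
        return self.head(mu)

# Usage: pretrain the encoder as part of a VAE on the seven participants' MEG,
# then train only the head on the new participant's limited data.
encoder = Encoder(n_features=204)  # e.g., flattened MEG features (assumed)
clf = TransferredClassifier(encoder, n_classes=5)
logits = clf(torch.randn(8, 204))  # batch of 8 trials
print(logits.shape)  # torch.Size([8, 5])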