Research Achievements

中川 誠司

Nakagawa Seiji (Seiji NAKAGAWA)

Basic Information

Affiliation
Professor, Center for Frontier Medical Engineering, Chiba University
(Concurrently) Professor, Graduate School of Engineering
(Concurrently) Professor and Course Chair, Medical Engineering Course, Division of Fundamental Engineering, Graduate School of Science and Engineering
(Concurrently) Professor and Course Chair, Medical Engineering Course, Department of Engineering, Faculty of Engineering
(Concurrently) Professor, University Hospital
Visiting Researcher, National Institute of Advanced Industrial Science and Technology (AIST)
Visiting Researcher, Graduate School of Medicine, The University of Tokyo
Visiting Scholar, University of Washington
Visiting Researcher, National Institutes for Quantum Science and Technology (QST)
Degree
Doctor of Engineering (March 1999, The University of Tokyo)

Contact
s-nakagawachiba-u.jp
J-GLOBAL ID
200901063867675418
researchmap Member ID
5000005804


We use noninvasive neurophysiological measurement (particularly brain-function measurement), psychological measurement, physical measurement, and computer simulation to elucidate the mechanisms of perception and cognition, with a focus on hearing. We also apply the findings of this perception and cognition research to applied work such as the development of assistive and medical devices, including bone-conduction hearing aids and bone-conduction smartphones, the optimization of audio-visual environments in rooms, and making noise sound more pleasant.


Papers

 323
  • Irwansyah, Sho Otsuka, Seiji Nakagawa
    HardwareX 21 e00618-e00618 2024年12月  
    Thanks to affordable 3D printers, creating complex designs like anatomically accurate dummy heads is now accessible. This study introduces dummy heads with 3D-printed skulls and silicone skins to explore crosstalk cancellation in bone conduction (BC). Crosstalk occurs when BC sounds from a transducer on one side of the head reach the cochlea on the opposite side. This can disrupt binaural cues essential for sound localization and speech understanding in noise for individuals using BC hearing devices. We provide a step-by-step guide to constructing the dummy head and demonstrate its application in canceling crosstalk. The 3D models used in this study are freely available for replication and further research. Several dummy heads were 3D-printed using ABS for the skull and silicone skins of varying hardness, with a 3-axis accelerometer at the cochlea location to simulate inner ear response. Since the cochlea is inaccessible in humans, we targeted crosstalk cancellation at the mastoid, assessing if this cancellation extended to the cochlea within the dummy heads. We compared these results with our previous experiments conducted on seven human subjects, who had their hearing thresholds measured with and without crosstalk cancellation, to evaluate if the dummy heads could reliably replicate human crosstalk cancellation effects.
  • Yuki Ishizaka, Sho Otsuka, Seiji Nakagawa
    Acoustical Science and Technology 45(5) 293-297 2024年9月  
    The medial olivocochlear reflex (MOCR) is reported to be modulated by the predictability of an upcoming sound. Here, the relationship between the MOCR and internal confidence in temporal anticipation, evaluated by reaction time (RT), was examined. The timing predictability of the MOCR elicitor was manipulated by adding jitter to the preceding sounds. MOCR strength and RT were unchanged in the small (10%) jitter condition, while MOCR strength decreased and RT increased significantly in the largest (40%) jitter condition compared with the no-jitter condition. This parallel behavior indicates that MOCR strength reflects confidence in anticipation, and that the predictive control of the MOCR and response execution share a common neural mechanism.
  • Yuki Ishizaka, Sho Otsuka, Seiji Nakagawa
    PLOS ONE 19(7) e0304027 2024年7月17日  
    Rhythms are the most natural cue for temporal anticipation because many sounds in our living environment have rhythmic structures. Humans have cortical mechanisms that can predict the arrival of the next sound based on rhythm and periodicity. Herein, we showed that temporal anticipation, based on the regularity of sound sequences, modulates peripheral auditory responses via efferent innervation. The medial olivocochlear reflex (MOCR), a sound-activated efferent feedback mechanism that controls outer hair cell motility, was inferred noninvasively by measuring the suppression of otoacoustic emissions (OAE). First, OAE suppression was compared between conditions in which sound sequences preceding the MOCR elicitor were presented at regular (predictable condition) or irregular (unpredictable condition) intervals. We found that OAE suppression in the predictable condition was stronger than that in the unpredictable condition. This implies that the MOCR is strengthened by the regularity of preceding sound sequences. In addition, to examine how many regularly presented preceding sounds are required to enhance the MOCR, we compared OAE suppression within stimulus sequences with 0-3 preceding tones. The OAE suppression was strengthened only when there were at least three regular preceding tones. This suggests that the MOCR was not automatically enhanced by a single stimulus presented immediately before the MOCR elicitor, but rather that it was enhanced by the regularity of the preceding sound sequences.
  • Seiji Nakagawa
    The Journal of the Acoustical Society of America 156(1) 610-622 2024年7月1日  
    Fluid-filled fractures involving kinks and branches result in complex interactions between Krauklis waves (highly dispersive and attenuating pressure waves within the fracture) and the body waves in the surrounding medium. For studying these interactions, we introduce an efficient 2D time-harmonic elastodynamic boundary element method. Instead of modeling the domain within a fracture as a finite-thickness fluid layer, this method employs zero-thickness, poroelastic Linear-Slip Interfaces to model the low-frequency, local fluid-solid interaction. Using this method, the scattering of Krauklis waves by a single kink along a straight fracture and the radiation of body waves generated by Krauklis waves within complex fracture systems are examined.
  • Hajime Yano, Ryoichi Takashima, Tetsuya Takiguchi, Seiji Nakagawa
    European Signal Processing Conference 1546-1550 2024年  
    Brain computer interfaces based on speech imagery have attracted attention in recent years as more flexible tools of machine control and communication. Classifiers of imagined speech are often trained for each individual due to individual differences in brain activity. However, the amount of brain activity data that can be measured from a single person is often limited, making it difficult to train a model with high classification accuracy. In this study, to improve the performance of the classifiers for each individual, we trained variational autoencoders (VAEs) using magnetoencephalographic (MEG) data from seven participants during speech imagery. The trained encoders of VAEs were transferred to EEGNet, which classified speech imagery MEG data from another participant. We also trained conditional VAEs to augment the training data for the classifiers. The results showed that the transfer learning improved the performance of the classifiers for some participants. Data augmentation also improved the performance of the classifiers for most participants. These results indicate that the use of VAE feature representations learned using MEG data from multiple individuals can improve the classification accuracy of imagined speech from a new individual even when a limited amount of MEG data is available from the new individual.
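
The transfer-learning setup described in the last entry above can be pictured with a minimal sketch. The code below is a hypothetical illustration, not the authors' implementation: the encoder architecture, channel and latent dimensions, and the plain linear head (standing in for EEGNet) are assumptions; only the idea of freezing an encoder pretrained on other participants' MEG data and training a small classifier on a new participant's limited data follows the abstract.

```python
# Hypothetical sketch of encoder transfer for imagined-speech MEG classification.
# Architecture, dimensions, and the linear head are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Stand-in for a VAE encoder pretrained on MEG data from multiple participants."""
    def __init__(self, n_channels: int = 160, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),    # collapse the time axis
            nn.Flatten(),
            nn.Linear(64, latent_dim),  # latent features reused for transfer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, MEG channels, time samples)
        return self.net(x)

def transfer_classifier(pretrained: Encoder, latent_dim: int, n_classes: int) -> nn.Module:
    """Freeze the transferred encoder and attach a trainable classification head."""
    for p in pretrained.parameters():
        p.requires_grad = False         # keep the multi-participant features fixed
    return nn.Sequential(pretrained, nn.Linear(latent_dim, n_classes))

# Usage: only the head is trained on the new participant's small MEG dataset.
encoder = Encoder()                     # in practice, load pretrained encoder weights here
clf = transfer_classifier(encoder, latent_dim=32, n_classes=5)
logits = clf(torch.randn(8, 160, 250))  # 8 trials, 160 channels, 250 time samples
```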

MISC

 1017
  • 岡安 唯, 中川 誠司, 西村 忠己, 山下 哲範, 吉田 悠加, 長谷 芳樹, 細井 裕司
    Audiology Japan 55(5) 311-312 2012年9月  
  • Seiji Nakagawa, Chika Fujiyuki, Takayuki Kagomiya
    Japanese Journal of Applied Physics 51(7) 07GF22.1-07GF22.5 2012年7月  
    Bone-conducted ultrasound (BCU) is perceived even by the profoundly sensorineural deaf. A novel hearing aid using the perception of amplitude-modulated BCU (BCU hearing aid: BCUHA) has been developed; however, further improvements are needed, especially in terms of articulation and sound quality. In this study, the intelligibility and sound quality of BCU speech with several types of amplitude modulation [double-sideband with transmitted carrier (DSB-TC), double-sideband with suppressed carrier (DSB-SC), and transposed modulation] were evaluated. The results showed that DSB-TC and transposed speech were more intelligible than DSB-SC speech, and transposed speech was closer than the other types of BCU speech to air-conducted speech in terms of sound quality. These results provide useful information for further development of the BCUHA. (An illustrative sketch of these modulation schemes is given after this list.)
  • Takuya Hotehama, Seiji Nakagawa
    Japanese Journal of Applied Physics 51(7) 07GF21.1-07GF21.5 2012年7月  
  • 中川 誠司, 保手浜 拓也
    日本生体磁気学会誌 25(1) 76-77 2012年6月  
  • Shunsuke Ishimitsu, Takayuki Arai, Kensuke Fujinoki, Seiji Nakagawa, Yoshiharu Soeta
    ICIC Express Letters 6 891-897 2012年4月1日  
    The notion that sound is a normal part of product operation is widespread in society. As a result, much attention has been directed at designing various sounds that are generally treated as noise. Car drivers detect variations in sound characteristics between different buttons, such as the pitch, tone color, loudness and duration of button sounds, and these can affect the desirability of both a car and its audio system. In this study, we focused on the sound design of transient signals and evaluated this design using 11 different button sounds. We examined the subjective evaluations of the sounds by the subjects using the semantic differential method and neuronal activity in their auditory cortices evoked by these stimuli using magnetoencephalography. We found that button sound characteristics significantly affect subjective impressions and neuronal activity in the auditory cortex.
  • Masashi Nakayama, Shunsuke Ishimitsu, Seiji Nakagawa
    ICIC Express Letters 6 1013-1018 2012年4月1日  
    Because speech signals are easily influenced by noise in the air, speech recognition techniques cannot achieve high performance in noisy environments. Consequently, many researchers are investigating speech retrieval/extraction methods in environments where noise reduction or signal extraction methods are used. These methods perform well in high signal-to-noise ratio (SNR) environments, but they are not good at measuring speech in low-SNR environments such as below 20 dB SNR. Body-conducted speech (BCS) is measured from the human body with an accelerometer made from magnetic materials, which cannot be used in high magnetic fields such as in a Magnetic Resonance Imaging (MRI) room, where about 80 dB of Sound Pressure Level (SPL) noise exists. For speech communication between a patient and an operator in this environment, we have investigated sound-quality improvement of body-conducted speech measured by an Optical Fiber Bragg Grating (OFBG) microphone, which is made solely from non-magnetic materials. In this study, we investigated the estimation of clear signals from signals measured by an OFBG microphone using our signal retrieval method. Its effectiveness was proven by time-frequency analysis comparisons with the original signal.
  • 伊藤一仁, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2012 ROMBUNNO.3-Q-20 2012年3月6日  
  • 添田喜治, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2012 ROMBUNNO.3-Q-22 2012年3月6日  
  • 大塚明香, 湯本真人, 栗城眞也, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2012 ROMBUNNO.3-Q-24 2012年3月6日  
  • 名越隼人, 石光俊介, 山中貴弘, 福井和敏, 籠宮隆之, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2012 ROMBUNNO.1-Q-16 2012年3月6日  
  • 保手浜拓也, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2012 ROMBUNNO.2-P-3 2012年3月6日  
  • 中川誠司, 川村智
    日本音響学会研究発表会講演論文集(CD−ROM) 2012 ROMBUNNO.3-Q-21 2012年3月6日  
  • 籠宮隆之, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2012 ROMBUNNO.3-Q-19 2012年3月6日  
  • 松井 淑恵, 下倉 良太, 斎藤 修, 福田 芙美, 西村 忠己, 細井 裕司, 中川 誠司
    電子情報通信学会技術研究報告. SP, 音声 111(471) 13-18 2012年3月1日  
    To determine whether profoundly hearing-impaired persons can perceive speech with a bone-conducted ultrasonic hearing aid, monosyllable identification experiments were conducted with two profoundly hearing-impaired listeners and speech intelligibility was measured. When both the auditory cues provided by the hearing aid and the visual cues obtained by lip-reading were used, the intelligibility of certain consonants improved, resulting in higher intelligibility than with either cue alone.
  • Taichi Kato, Hiroyuki Maehara, Ian Miller, Tomohito Ohshima, Enrique de Miguel, Kenji Tanabe, Kazuyoshi Imamura, Hidehiko Akazawa, Nanae Kunitomi, Ryosuke Takagi, Mikiha Nose, Franz-Josef Hambsch, Seiichiro Kiyota, Elena P. Pavlenko, Aleksei V. Baklanov, Oksana I. Antonyuk, Denis Samsonov, Aleksei Sosnovskij, Kirill Antonyuk, Maksim V. Andreev, Etienne Morelle, Pavol A. Dubovsky, Igor Kudzej, Arto Oksanen, Gianluca Masi, Thomas Krajci, Roger D. Pickard, Richard Sabo, Hiroshi Itoh, William Stein, Shawn Dvorak, Arne Henden, Shinichi Nakagawa, Ryo Noguchi, Eriko Iino, Katsura Matsumoto, Hiroki Nishitani, Tomoya Aoki, Hiroshi Kobayashi, Chihiro Akasaka, Greg Bolt, Jeremy Shears, Javier Ruiz, Sergey Yu. Shugarov, Drahomir Chochol, Nikolai A. Parakhin, Berto Monard, Kazuhiko Shiokawa, Kiyoshi Kasai, Bart Staels, Atsushi Miyashita, Donn R. Starkey, Yenal Oegmen, Colin Littlefield, Natalia Katysheva, Ivan M. Sergey, Denis Denisenko, Tamas Tordai, Robert Fidrich, Vitaly P. Goranskij, Jani Virtanen, Tim Crawford, Jochen Pietz, Robert A. Koff, David Boyd, Steve Brady, Nick James, William N. Goff, Koh-ichi Itagaki, Hideo Nishimura, Youichirou Nakashima, Seiichi Yoshida, Rod Stubbings, Gary Poyner, Yutaka Maeda, Stanislav A. Korotkiy, Kirill V. Sokolovsky, Seiji Ueda
    PUBLICATIONS OF THE ASTRONOMICAL SOCIETY OF JAPAN 64(1) 2012年2月  
    Continuing the project described by Kato et al. (2009, PASJ, 61, S395), we collected the times of superhump maxima for 51 SU UMa-type dwarf novae, mainly observed during the 2010-2011 season. Although most of the new data for systems with short superhump periods basically confirmed the findings by Kato et al. (ibid.) and Kato et al. (2010, PASJ, 62, 1525), the long-period system GX Cas showed an exceptionally large positive-period derivative. An analysis of public Kepler data of V344 Lyr and V1504 Cyg yielded less-striking stage transitions. In V344 Lyr, there was a prominent secondary component growing during the late stage of superoutbursts, and this component persisted for at least two more cycles of successive normal outbursts. We also investigated the superoutburst of two conspicuous eclipsing objects: HT Cas and the WZ Sge-type object SDSS J080434.20+510349.2. Strong beat phenomena were detected in both objects, and late-stage superhumps in the latter object had an almost constant luminosity during repeated rebrightenings. The WZ Sge-type object SDSS J133941.11+484727.5 showed a phase reversal around the rapid fading from the superoutburst. The object showed a prominent beat phenomenon, even after the end of the superoutburst. A pilot study of superhump amplitudes indicated that the amplitude of superhumps is strongly correlated with the orbital period, and the dependence on the inclination is weak in systems with inclinations smaller than 80 degrees.
  • 中川誠司
    島津科学技術振興財団研究成果報告書 2010 29-32 2012年1月31日  
  • Masashi Nakayama, Shunsuke Ishimitsu, Seiji Nakagawa
    2012 ICME International Conference on Complex Medical Engineering, CME 2012 Proceedings 147-153 2012年  
    Speech is a human instrument for communication, and many applications of speech have been proposed. However, the sound quality of speech is reduced by noise in air, and many researchers and engineers have investigated speech measurement methods in noisy environments in terms of signal processing and the use of noise-robust microphones. Conventional approaches to signal measurement can measure speech in an environment with a high signal-to-noise ratio; however, these approaches do not work well in an environment with a low signal-to-noise ratio. By contrast, body-conducted speech, which is speech conducted through the bone and skin of the human body, can be measured in such an environment. However, the frequency characteristics of body-conducted speech are poorer than those of air-conducted speech, and the sound quality needs to be improved for speech applications and conversations. Against this background, the paper discusses and investigates sound-quality improvements for the sentence unit of body-conducted speech using differential acceleration, a signal retrieval method employed to improve sound quality. The performance of the method is investigated in terms of signal retrieval from body-conducted speech recorded by an accelerometer and an Optical Fiber Bragg Grating microphone.
  • Takuya Hotehama, Seiji Nakagawa
    2012 ICME International Conference on Complex Medical Engineering, CME 2012 Proceedings 412-417 2012年  
    Ultrasound with a frequency greater than 20 kHz, which is generally recognized as a sound beyond the upper limit of human hearing, can be heard via bone conduction. Such "audible" ultrasound through bone conduction is referred to as bone-conducted ultrasound (BCU). It has been reported that profoundly hearing-impaired people can also hear BCU and recognize part of the information on the modulating signal from amplitude-modulated BCU. These perceptual characteristics were utilized in the development of a new hearing-aid system, the Bone-Conducted Ultrasonic Hearing Aid (BCUHA), for the profoundly hearing-impaired. In this study, to verify the feasibility of "binaural" BCUHAs, we investigated whether listeners can use interaural time differences in the amplitude envelopes (envelope-ITDs) and interaural intensity differences (IIDs) as cues for lateralization of BCUs. Results showed that listeners can detect changes in envelope-ITDs or IIDs of BCUs. Also, the discrimination thresholds of the polarities of IIDs were compensated by envelope-ITDs, i.e., time-intensity trading was observed in BCU perception. These findings indicate that the auditory system has the ability to lateralize bilaterally presented BCUs using envelope-ITDs and IIDs. Further, they suggest that bilaterally presented BCUs are processed in the auditory pathway associated with lateralization in a similar manner to high-frequency amplitude-modulated sounds.
  • NAKAGAWA Seiji, ITO Kazuhito
    日本生体医工学会大会プログラム・論文集(CD-ROM) 51st ROMBUNNO.O3-03-1 2012年  
  • 中川 誠司, 藤幸 千賀, 籠宮 隆之
    超音波エレクトロニクスの基礎と応用に関するシンポジウム講演論文集 33 499-500 2012年  
  • 保手浜 拓也, 中川 誠司
    超音波エレクトロニクスの基礎と応用に関するシンポジウム講演論文集 33 191-192 2012年  
  • Masashi Nakayama, Shunsuke Ishimitsu, Hayato Nagoshi, Seiji Nakagawa, Kazutoshi Fukui
    40th International Congress and Exposition on Noise Control Engineering 2011, INTER-NOISE 2011 2 1599-1603 2011年12月1日  
    Speech signals are affected by noise, which degrades their quality, so speech recognition does not perform well in noisy environments. Consequently, many researchers are investigating speech retrieval/extraction methods that use conventional noise reduction methods for noisy speech and other background noise, and signal extraction methods using devices such as microphone arrays. These methods generally work in high signal-to-noise ratio (SNR) environments. However, they do not work in low-SNR environments, typically below -20 dB SNR. In previous research we proposed a body-conducted speech recognition system to measure speech in noisy environments, in which the signal was measured by an accelerometer placed on the upper lip. The speech recognition performance achieved was about 95% when the acoustic model was used to estimate the speech. However, the conventional body-conducted speech microphone is made from magnetic materials, so it cannot be used in high magnetic fields such as in a magnetic resonance imaging (MRI) room, where about 80 dB of sound pressure level noise exists. For communication in an MRI room between an operator and a patient, we proposed a new body-conducted speech microphone using an Optical Fiber Bragg Grating (OFBG) microphone, which is made from only non-magnetic materials. In this paper we discuss the estimation of clear body-conducted speech from the OFBG microphone signal with our signal retrieval method, which uses differential acceleration and conventional noise reduction methods. Its effectiveness is proven by time-frequency analysis when it is compared with the non-processed signal from a conventional accelerometer.
  • Masashi Nakayama, Shunsuke Ishimitsu, Hayato Nagoshi, Seiji Nakagawa, Kazutoshi Fukui
    Proceedings of Forum Acusticum 101-104 2011年12月1日  
    Speech recognition does not perform well in noisy environments. Consequently, researchers are investigating speech retrieval/extraction methods that include noise reduction for noisy speech and other background noise, and signal-extraction methods using devices such as microphone arrays. These methods generally work well in high signal-to-noise ratio (SNR) environments. However, they do not work in low-SNR environments, typically at SNRs below -20 dB. To measure speech in noisy environments, we have previously proposed a body-conducted speech recognition system where the signal is measured by an accelerometer placed on the upper lip. The speech recognition performance achieved was about 95% when an acoustic model was used to estimate the speech. However, a conventional body-conducted speech microphone is made from magnetic materials and thus cannot be used in strong magnetic fields such as those in a magnetic resonance imaging (MRI) room, where the sound pressure level is about 80 dB. For communication in an MRI room between the operator and patient, we proposed a new body-conducted speech microphone using an optical fiber Bragg grating, which is made from only nonmagnetic materials. In this research, we evaluated the effectiveness of the new body-conducted speech microphone in time-frequency analysis. The performance of isolated-word recognition was also evaluated using a speech recognition system with an acoustic model constructed with nonspecific speech.
  • Shunsuke Ishimitsu, Hiromi Nishikawa, Kenji Takami, Seiji Nakagawa, Yoshiharu Soeta, Takuya Hotehama
    Proceedings of Forum Acusticum 1091-1095 2011年12月1日  
    The focus of production concepts regarding car engine sound has changed in recent years, from finding a solution to noise to designing specific sounds. Although many studies have been conducted to create subjectively pleasant car-engine sounds, the psychoacoustic effects of accelerating engine sounds are still unclear. The present study investigated auditory impressions of internal car noise using psychological and neurophysiological methods. To represent the half-order sound of an engine, we used harmonic complex tones that simulate acceleration noise with a sinusoidal model as stimuli. Neuronal activity in the auditory cortex evoked by these stimuli was measured using magnetoencephalography (MEG). Subjective evaluations were examined using paired-comparison tests. At the same time, we recorded responses in the MEG alpha-wave range (8-13 Hz), and analyzed them using the autocorrelation function (ACF). The results of a jury test confirmed that changes in the level of half-order sound were associated with changes in subjective annoyance, and everyday drivers preferred sound that included a high level of half-order sounds. MEG analyses revealed a relationship between subjective annoyance and the effective duration of the ACF in MEG activity between 8 and 13 Hz.
  • Shunsuke Ishimitsu, Hiromi Nishikawa, Seiji Nakagawa, Yoshiharu Soeta, Takuya Hotehama
    40th International Congress and Exposition on Noise Control Engineering 2011, INTER-NOISE 2011 4 3422-3427 2011年12月1日  
    In recent years, the focus of production concepts regarding car engine sound has changed from finding a solution to noise, to designing specific sounds. Although many studies have been conducted to create subjectively pleasant car-engine sounds, the psychoacoustic effects of the time-varying rate of accelerating engine sounds are still unclear. The present study investigated auditory impressions of internal car noise using psychological and neurophysiological methods. To represent the half-order sound of an engine, we used harmonic complex tones that simulate acceleration noise with a sinusoidal model as stimuli. Neuronal activity in the auditory cortex evoked by these stimuli was measured using magnetoencephalography (MEG). Subjective evaluations were examined using paired-comparison tests. At the same time, we recorded responses in the MEG alpha-wave range (8-13 Hz), and analyzed them using the autocorrelation function (ACF). The results of a jury test confirmed that changes in the level of half-order sound were associated with changes in subjective annoyance, and everyday drivers preferred engine noise that included a high level of half-order sound. MEG analyses revealed a relationship between subjective annoyance and the effective duration of the ACF in MEG activity between 8 and 13 Hz.
  • 中川 誠司, 中川 あや
    宇宙航空環境医学 48(4) 72-72 2011年12月  
  • 中川誠司, 保手浜拓也
    村田学術振興財団年報 (25) 46-52 2011年12月  
  • HOTEHAMA Takuya, NAKAGAWA Seiji
    超音波エレクトロニクスの基礎と応用に関するシンポジウム講演予稿集 32nd 427-428 2011年11月8日  
  • ITO Kazuhito, NAKAGAWA Seiji
    超音波エレクトロニクスの基礎と応用に関するシンポジウム講演予稿集 32nd 425-426 2011年11月8日  
  • NAKAGAWA Seiji
    超音波エレクトロニクスの基礎と応用に関するシンポジウム講演予稿集 32nd 429-430 2011年11月8日  
  • 中川誠司, 川村智, 藤幸千賀
    日本音響学会研究発表会講演論文集(CD−ROM) 2011 ROMBUNNO.1-Q-15 2011年9月13日  
  • 伊藤一仁, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2011 ROMBUNNO.1-Q-10 2011年9月13日  
  • 籠宮隆之, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2011 ROMBUNNO.1-Q-11 2011年9月13日  
  • 大塚明香, 湯本真人, 栗城眞也, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2011 ROMBUNNO.1-Q-6 2011年9月13日  
  • 鈴木茉莉緒, 鈴木茉莉緒, 籠宮隆之, 神崎素樹, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2011 ROMBUNNO.3-Q-16 2011年9月13日  
  • 岡安 唯, 西村 忠己, 山下 哲範, 中川 誠司, 吉田 悠加, 柳井 修一, 長谷 芳樹, 細井 裕司
    Audiology Japan 54(5) 411-412 2011年9月  
  • 中山仁史, 石光俊介, 中川誠司
    中谷電子計測技術振興財団年報 (25) 130-135 2011年8月10日  
  • 矢倉 晴子, 外池 光雄, 岩木 直, 中川 誠司, 荻野 敏, 村田 勉
    電子情報通信学会技術研究報告. SP, 音声 111(153) 17-22 2011年7月14日  
    To clarify the early stages of auditory information processing during listening to speech carrying specific emotions such as "joy" and "sadness", measurements were made with magnetoencephalography, which offers high temporal and spatial resolution. Three task conditions were set: (1) an emotional-voice recognition task, (2) a word-initial phoneme recognition task, and (3) a speech-ignoring task, and the latencies related to emotional processing in the early auditory information processing associated with (1) were statistically tested. The results suggest that the region around the posterior part of the right superior temporal gyrus is involved in processing the emotional component at latencies of 0.16-0.18 s after stimulus onset.
  • 梶山 誠司, 中川 五男, 日高 昌三
    麻酔 = The Japanese journal of anesthesiology : 日本麻酔科学会準機関誌 60(7) 835-839 2011年7月  
  • Kazuhito Ito, Seiji Nakagawa
    Japanese Journal of Applied Physics 50(7) 1-7 2011年7月  
  • 石光俊介, 西川裕美, 高見健治, 中川誠司, 添田喜治, 保手浜拓也
    システム制御情報学会研究発表講演会講演論文集(CD−ROM) 55th ROMBUNNO.W37-6 2011年5月17日  
  • Shunsuke Ishimitsu, Kenji Takami, Seiji Nakagawa, Yoshiharu Soeta
    Proceedings of the AES International Conference 206-214 2011年4月14日  
    The production concept of car engine sound has been changing from finding a solution to noise to designing sound. Although many studies have been conducted on creating comfortable car-engine sound, the psychoacoustic effects of the time-varying rate of accelerating-engine sounds are still unclear. Thus, we investigated the effects of increasing the frequency rate of car interior noise on auditory impressions using psychological and neurophysiological methods. Harmonic complex tones that simulate acceleration noise were used as stimuli. First, the relationship between the "sporty" feeling evoked by the dynamic characteristics of the engine sound and brain magnetic fields was investigated. In this investigation, subjective evaluations were examined using the semantic differential (SD) method, and neuronal activities of the auditory cortex evoked by these stimuli were measured by magnetoencephalography (MEG). The results indicated that the time-varying rate has a significant effect on subjective impressions and on neuronal activities of the auditory cortex. Second, we investigated the relationships between subjective preference and brain magnetic fields for accelerating car interior noise. Subjective evaluations were examined using the paired-comparison method. At the same time, MEG alpha-wave range (8-13 Hz) measurements were made and analyzed using the autocorrelation function (ACF). The results indicate that the effective duration of the ACF of the MEGs between 8 and 13 Hz lengthens after the presentation of a preferred sound.
  • 中川誠司, 藤幸千賀, 籠宮隆之
    日本音響学会研究発表会講演論文集(CD−ROM) 2011 ROMBUNNO.3-P-50(A) 2011年3月2日  
  • 鈴木茉莉緒, 小田伸午, 籠宮隆之, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2011 ROMBUNNO.3-P-36(B) 2011年3月2日  
  • 伊藤一仁, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2011 ROMBUNNO.3-P-16(B) 2011年3月2日  
  • 保手浜拓也, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2011 ROMBUNNO.3-P-52(B) 2011年3月2日  
  • 籠宮隆之, 中川誠司
    日本音響学会研究発表会講演論文集(CD−ROM) 2011 ROMBUNNO.3-P-46(A)-594 2011年3月2日  
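
As referenced in the Nakagawa, Fujiyuki, and Kagomiya (2012) entry above, the bone-conducted ultrasonic hearing aid work compares several amplitude-modulation schemes (DSB-TC, DSB-SC, and transposed modulation). The following is a minimal numpy sketch, not taken from the paper, that illustrates the three schemes; the 30 kHz carrier, 0.8 modulation depth, sampling rate, and low-pass cutoff are assumed values chosen only for illustration.

```python
# Illustrative (assumed parameters) comparison of the AM schemes evaluated for the
# bone-conducted ultrasonic hearing aid: DSB-TC, DSB-SC, and transposed modulation.
import numpy as np
from scipy.signal import butter, lfilter

fs = 192_000                                   # sampling rate (assumed)
fc = 30_000                                    # ultrasonic carrier frequency (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
speech = np.sin(2 * np.pi * 200 * t)           # toy stand-in for a speech signal
carrier = np.sin(2 * np.pi * fc * t)

dsb_sc = speech * carrier                      # double-sideband, suppressed carrier
dsb_tc = (1.0 + 0.8 * speech) * carrier        # double-sideband, transmitted carrier (m = 0.8)

# Transposed modulation: half-wave rectify the signal, low-pass it, then modulate the carrier.
b, a = butter(4, 8_000 / (fs / 2))             # 8 kHz low-pass (assumed cutoff)
envelope = lfilter(b, a, np.maximum(speech, 0.0))
transposed = envelope * carrier
```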

Books and Other Publications

 8

Teaching Experience (Courses)

 28

Research Projects (Joint Research and Competitive Funding)

 27