Research Achievements

中口 俊哉

Toshiya Nakaguchi

Basic Information

Affiliation
Professor, Center for Frontier Medical Engineering, Chiba University
Degree
Doctor of Engineering (Sophia University)

J-GLOBAL ID
200901090860522117
researchmap Member ID
5000048018

Papers (207)

  • Rizki Nurfauzi, Ayaka Baba, Taka-Aki Nakada, Toshiya Nakaguchi, Yukihiro Nomura
    Biomedical Physics & Engineering Express 11(2) 025026-025026 February 6, 2025
    Abstract: Traumatic injury remains a leading cause of death worldwide, with traumatic bleeding being one of its most critical and fatal consequences. The use of whole-body computed tomography (WBCT) in trauma management has rapidly expanded, but interpreting WBCT images within the limited time available before treatment is particularly challenging for acute care physicians. Our group previously developed an automated bleeding detection method for WBCT images; however, further reduction of false positives (FPs) is necessary for clinical application. To address this issue, we propose a novel automated detection method for traumatic bleeding in CT images using deep learning and multi-organ segmentation. Methods: The proposed method integrates a three-dimensional U-Net model for bleeding detection with an FP-reduction approach based on multi-organ segmentation. The segmentation targets the bone, kidney, and vascular regions, where FPs primarily arise during bleeding detection. We evaluated the method on a dataset of delayed-phase contrast-enhanced trauma CT images collected from four institutions. Results: Our method detected 70.0% of bleedings at 76.2 FPs/case, with a processing time of 6.3 ± 1.4 min. Compared with our previous approach, the proposed method significantly reduced the number of FPs while maintaining detection sensitivity. (A minimal sketch of the FP-reduction step appears after this list.)
  • Ryo Oka, Bochong Li, Seiji Kato, Takanobu Utsumi, Takumi Endo, Naoto Kamiya, Toshiya Nakaguchi, Hiroyoshi Suzuki
    Current Urology February 3, 2025
    Abstract: Background: With the rising incidence of prostate cancer (PCa), there is global demand for assistive tools that aid the diagnosis of high-grade PCa. This study aimed to develop a diagnostic support system for high-grade PCa using innovative magnetic resonance imaging (MRI) sequences in conjunction with artificial intelligence (AI). Materials and methods: We examined image sequences of 254 patients with PCa obtained from diffusion-weighted and T2-weighted imaging, using novel MRI sequences before prostatectomy, to elucidate the characteristics of the three-dimensional (3D) image sequences. The presence of PCa was determined from the final pathological diagnosis after prostatectomy. A 3D deep convolutional neural network (3DCNN) was used as the AI for image recognition, and data augmentation was conducted to enlarge the image dataset. High-grade PCa was defined as Gleason grade group 4 or higher. Results: We developed a learning system using a 3DCNN as a diagnostic support system for high-grade PCa; the sensitivity and area under the curve were 85% and 0.82, respectively. Conclusions: The 3DCNN-based AI diagnostic support system, developed in this study using innovative 3D multiparametric MRI sequences, has the potential to help identify patients at higher risk of high-grade PCa before treatment. (A toy 3D-CNN sketch appears after this list.)
  • Zhe Li, Aya Kanazuka, Atsushi Hojo, Yukihiro Nomura, Toshiya Nakaguchi
    Electronics 13(19) 3882-3882 September 30, 2024  Peer-reviewed
    The COVID-19 pandemic has significantly disrupted traditional medical training, particularly in critical areas such as the injection process, which require expert supervision. To address the challenges posed by reduced face-to-face interaction, this study introduces a multi-modal fusion network designed to evaluate the timing and motion aspects of injection training in medical education. The proposed framework integrates 3D reconstructed data and 2D images of hand movements during the injection process. The 3D data are preprocessed and encoded by a Long Short-Term Memory (LSTM) network to extract temporal features, while a Convolutional Neural Network (CNN) processes the 2D images to capture detailed image features. These encoded features are then fused and refined through a proposed multi-head self-attention module, which enhances the model's ability to capture and weigh important temporal and image dynamics in the injection process; a classifier module performs the final classification. The model's performance was rigorously evaluated on video data from 255 subjects, with assessments made by professional physicians according to the Objective Structured Assessment of Technical Skill - Global Rating Score (OSATS-GRS) [B] criteria for time and motion evaluation. The experimental results demonstrate that the proposed data-fusion model achieves an accuracy of 0.7238, an F1-score of 0.7060, a precision of 0.7339, a recall of 0.7238, and an AUC of 0.8343. These findings highlight the model's potential as an effective tool for providing objective feedback in medical injection training, offering a scalable solution for the post-pandemic evolution of medical education. (A schematic sketch of this two-stream fusion appears after this list.)
  • Yusako Morishita, Yuya Oura, Hiroshi Oyama, Izumi Usui, Yukihiro Nomura, Toshiya Nakaguchi
    The Japanese Journal for Medical Virtual Reality 21(1) 1-1 September 2024  Peer-reviewed
  • Ping Xuan, Xiuqiang Chu, Hui Cui, Toshiya Nakaguchi, Linlin Wang, Zhiyuan Ning, Zhiyu Ning, Changyang Li, Tiangang Zhang
    Computers in Biology and Medicine 177 108640-108640 July 2024
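
Illustrative sketches for the abstracts above follow. The first paper describes a two-stage design: a 3D U-Net proposes bleeding candidates, and multi-organ segmentation of bone, kidney, and vascular regions suppresses candidates where FPs concentrate. The authors' code is not reproduced here; the Python sketch below covers only the FP-reduction idea, and both the boolean voxel-mask interface and the overlap threshold are assumptions of this illustration, not values from the paper.

    # Hypothetical FP-reduction step: drop candidate components that lie
    # mostly inside organ masks (bone, kidney, vessels) where FPs concentrate.
    import numpy as np
    from scipy import ndimage

    def reduce_false_positives(candidate_mask, organ_masks, overlap_thresh=0.5):
        # candidate_mask: 3D boolean array from the detection network
        # organ_masks:    iterable of 3D boolean arrays (bone, kidney, vessels)
        # overlap_thresh: assumed fraction, not a value from the paper
        combined = np.zeros_like(candidate_mask, dtype=bool)
        for mask in organ_masks:
            combined |= mask
        labeled, n = ndimage.label(candidate_mask)  # connected candidate regions
        kept = np.zeros_like(candidate_mask, dtype=bool)
        for i in range(1, n + 1):
            comp = labeled == i
            overlap = np.count_nonzero(comp & combined) / np.count_nonzero(comp)
            if overlap < overlap_thresh:  # keep candidates outside organ regions
                kept |= comp
        return kept

Component-wise overlap suppression is one plausible reading of the abstract; the published method may use a different rule.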
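
The second paper trains a 3D deep convolutional network on multiparametric MRI. The sketch below shows the general shape of such a classifier; the two-channel DWI + T2 input and all layer sizes are assumptions of this illustration, not the authors' architecture.

    # Hypothetical 3D CNN for binary high-grade-PCa classification (PyTorch).
    import torch
    import torch.nn as nn

    class Toy3DCNN(nn.Module):
        def __init__(self, in_channels=2):  # assumed: DWI + T2 volumes stacked
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
            )
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, 1)
            )

        def forward(self, x):  # x: (batch, channels, depth, height, width)
            return self.head(self.features(x))

    logits = Toy3DCNN()(torch.randn(1, 2, 32, 64, 64))  # one synthetic volume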
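
The third paper fuses an LSTM stream (temporal features from 3D reconstructed hand data) with a CNN stream (2D frames) and refines the fused features with multi-head self-attention before classification. The sketch mirrors that topology only; the additive fusion, the 63-dimensional keypoint input, and every module size are assumptions rather than the published model.

    # Hypothetical two-stream fusion network (PyTorch).
    import torch
    import torch.nn as nn

    class FusionNet(nn.Module):
        def __init__(self, pose_dim=63, d_model=128, n_classes=2):
            super().__init__()
            self.lstm = nn.LSTM(pose_dim, d_model, batch_first=True)
            self.cnn = nn.Sequential(  # per-frame 2D feature extractor
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(16 * 16, d_model),
            )
            self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
            self.classifier = nn.Linear(d_model, n_classes)

        def forward(self, pose_seq, frames):
            # pose_seq: (B, T, pose_dim) 3D hand keypoints; frames: (B, T, 3, H, W)
            temporal, _ = self.lstm(pose_seq)                 # (B, T, d_model)
            B, T = frames.shape[:2]
            spatial = self.cnn(frames.flatten(0, 1)).view(B, T, -1)
            tokens = temporal + spatial                       # assumed additive fusion
            fused, _ = self.attn(tokens, tokens, tokens)      # self-attention refinement
            return self.classifier(fused.mean(dim=1))         # (B, n_classes)

    scores = FusionNet()(torch.randn(2, 30, 63), torch.randn(2, 30, 3, 64, 64))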

MISC (186)

Books and Other Publications (3)

Presentations (577)

Joint Research and Competitive Funding Projects (18)