Research Achievements

Takayuki Okamoto (岡本 尚之)

Basic Information

Affiliation
Assistant Professor, Center for Frontier Medical Engineering, Chiba University
Degree
Doctor of Engineering (September 2022, Chiba University)

J-GLOBAL ID
202201020660944348
researchmap Member ID
R000044612

Awards

 4

Papers

 10
  • Xingyu Zhou, Chen Ye, Takayuki Okamoto, Yuma Iwao, Naoko Kawata, Ayako Shimada, Hideaki Haneishi
    Japanese Journal of Radiology, August 3, 2024  Peer-reviewed
  • Takayuki Okamoto, Hiroki Okamura, Takehito Iwase, Tomohiro Niizawa, Yuto Kawamata, Hirotaka Yokouchi, Takayuki Baba, Hideaki Haneishi
    Optics Continuum 3(7) 1132-1148, July 15, 2024  Peer-reviewed, Lead author, Corresponding author
  • Naoki Ikezawa, Takayuki Okamoto, Yoichi Yoshida, Satoru Kurihara, Nozomi Takahashi, Taka-aki Nakada, Hideaki Haneishi
    Scientific Reports 14(1), February 10, 2024  Peer-reviewed, Corresponding author
    A stroke is a medical emergency and thus requires immediate treatment. Paramedics should accurately assess suspected stroke patients and promptly transport them to a hospital with stroke care facilities; however, current assessment procedures rely on subjective visual assessment. We aim to develop an automatic evaluation system for central facial palsy (CFP) that uses RGB cameras installed in an ambulance. This paper presents two evaluation indices, the symmetry of mouth movement and the difference in mouth shape, extracted from video frames. These evaluation indices allow us to quantitatively evaluate the degree of facial palsy. A classification model based on these indices can discriminate patients with CFP. The results of experiments using our dataset show that the values of the two evaluation indices are significantly different between healthy subjects and CFP patients. Furthermore, our classification model achieved an area under the curve of 0.847. This study demonstrates that the proposed automatic evaluation system has great potential for quantitatively assessing CFP patients based on two evaluation indices.
  • Xingyu Zhou, Chen Ye, Yuma Iwao, Takayuki Okamoto, Naoko Kawata, Ayako Shimada, Hideaki Haneishi
    Diagnostics 13(20) 3261, October 20, 2023  Peer-reviewed
    Background: Chronic obstructive pulmonary disease (COPD) typically causes airflow blockage and breathing difficulties, which may result in the abnormal morphology and motion of the lungs or diaphragm. Purpose: This study aims to quantitatively evaluate respiratory diaphragm motion using a thoracic sagittal magnetic resonance imaging (MRI) series, including motion asynchronization and limitations. Method: First, the diaphragm profile is extracted using a deep-learning-based field segmentation approach. Next, by measuring the motion waveforms of each position in the extracted diaphragm profile, obvious differences in the independent respiration cycles, such as the period and peak amplitude, are verified. Finally, focusing on multiple breathing cycles, the similarity and amplitude of the motion waveforms are evaluated using the normalized correlation coefficient (NCC) and absolute amplitude. Results and Contributions: Compared with normal subjects, patients with severe COPD tend to have lower NCC and absolute amplitude values, suggesting motion asynchronization and limitation of their diaphragms. Our proposed diaphragmatic motion evaluation method may assist in the diagnosis and therapeutic planning of COPD.
  • Takayuki Okamoto, Toshio Kumakiri, Hideaki Haneishi
    Radiological Physics and Technology 15(3) 206-223, May 27, 2022  Peer-reviewed, Lead author, Corresponding author
    Micro-computed tomography (micro-CT) enables the non-destructive acquisition of three-dimensional (3D) morphological structures at the micrometer scale. Although it is expected to be used in pathology and histology to analyze the 3D microstructure of tissues, micro-CT imaging of tissue specimens requires a long scan time. A high-speed imaging method, sparse-view CT, can reduce the total scan time and radiation dose; however, it causes severe streak artifacts on tomographic images reconstructed with analytical algorithms due to insufficient sampling. In this paper, we propose an artifact reduction method for 3D volume projection data from sparse-view micro-CT. Specifically, we developed a patch-based lightweight fully convolutional network to estimate full-view 3D volume projection data from sparse-view 3D volume projection data. We evaluated the effectiveness of the proposed method using physically acquired datasets. The qualitative and quantitative results showed that the proposed method achieved high estimation accuracy and suppressed streak artifacts in the reconstructed images. In addition, we confirmed that the proposed method requires both short training and prediction times. Our study demonstrates that the proposed method has great potential for artifact reduction for 3D volume projection data under sparse-view conditions.
  • Takayuki Okamoto, Takashi Ohnishi, Hideaki Haneishi
    IEEE Transactions on Radiation and Plasma Medical Sciences 6(8) 859-873, April 20, 2022  Peer-reviewed, Lead author, Corresponding author
    Sparse-view computed tomography (CT), an imaging technique that reduces the number of projections, can reduce the total scan duration and radiation dose. However, sparse data sampling causes streak artifacts on images reconstructed with analytical algorithms. In this paper, we propose an artifact reduction method for sparse-view CT using deep learning. We developed a light-weight fully convolutional network to estimate a fully sampled sinogram from a sparse-view sinogram by enlargement in the vertical direction. Furthermore, we introduced the band patch, a rectangular region cropped in the vertical direction, as an input image for the network based on the sinogram’s characteristics. Comparison experiments using a swine rib dataset of micro-CT scans and a chest dataset of clinical CT scans were conducted to compare the proposed method, improved U-net from a previous study, and the U-net with band patches. The experimental results showed that the proposed method achieved the best performance and the U-net with band patches had the second-best result in terms of accuracy and prediction time. In addition, the reconstructed images of the proposed method suppressed streak artifacts while preserving the object’s structural information. We confirmed that the proposed method and band patch are useful for artifact reduction for sparse-view CT.
  • Tin Tin Khaing, Takayuki Okamoto, Chen Ye, Md Abdul Mannan, Gen Miura, Hirotaka Yokouchi, Kazuya Nakano, Pakinee Aimmanee, Stanislav S. Makhanov, Hideaki Haneishi
    Artificial Life and Robotics 27(1) 70-79, January 29, 2022  Peer-reviewed, Corresponding author
    Retinitis pigmentosa (RP) is a group of genetic disorders characterized by degeneration of photoreceptor cells, which is the main cause of choroidal thinning. It is one of the leading causes of blindness worldwide. Thus, an investigation of choroidal changes is required for a better understanding and diagnosis of RP. In this paper, we propose an automatic technique for measuring choroidal parameters in optical coherence tomography (OCT) images of eyes with RP. The parameters include the total choroidal area (TCA), luminal area (LA), stromal area (SA), and choroidal thickness (CT). We applied our recently proposed dense dilated U-Net segmentation model, called ChoroidNET, for segmenting the choroid layer and choroidal vessels in our RP dataset. Choroid segmentation is an important task since the measurement results depend on it. Comparison with other state-of-the-art models shows that ChoroidNET provides better quantitative and qualitative segmentation of the choroid layer and choroidal vessels. Next, we measure the choroidal parameters based on the segmentation results of ChoroidNET. The proposed method achieves high reliability, with intraclass correlation coefficients of 0.961, 0.940, 0.826, and 0.916 for TCA, LA, SA, and CT, respectively.
  • Tin Tin Khaing, Takayuki Okamoto, Chen Ye, Md Abdul Mannan, Hirotaka Yokouchi, Kazuya Nakano, Pakinee Aimmanee, Stanislav S. Makhanov, Hideaki Haneishi
    IEEE Access 9 150951-150965, November 2, 2021  Peer-reviewed, Corresponding author
    Understanding the changes in choroidal thickness and vasculature is important to monitor the development and progression of various ophthalmic diseases. Accurate segmentation of the choroid layer and choroidal vessels is critical to better analyze and understand the choroidal changes. In this study, we develop a dense dilated U-Net model (ChoroidNET) for segmenting the choroid layer and choroidal vessels in optical coherence tomography (OCT) images. The performance of ChoroidNET is evaluated using an OCT dataset that contains images with various retinal pathologies. Overall Dice coefficients of 95.1 ± 0.4 and 82.4 ± 2.4 were obtained for choroid layer and vessel segmentation, respectively. Comparisons show that, among state-of-the-art models, ChoroidNET, which produces results consistent with the ground truths, is the most robust segmentation framework.
  • Changhee Han, Takayuki Okamoto, Koichi Takeuchi, Dimitris Katsios, Andrey Grushnikov, Masaaki Kobayashi, Antoine Choppin, Yutaka Kurashina, Yuki Shimahara
    Medical Imaging and Information Sciences 38(2) 73-75, July 6, 2021  Invited
  • Takayuki Okamoto, Takashi Ohnishi, Hiroshi Kawahira, Olga Dergachyava, Pierre Jannin, Hideaki Haneishi
    Signal, Image and Video Processing 13(2) 405-412, March 12, 2019  Peer-reviewed, Lead author
    Laparoscopic surgery allows reduction in surgical incision size and leads to faster recovery compared with open surgery. When bleeding takes place, hemostasis treatment is planned according to the state and location of the bleeding. However, it is difficult to find the bleeding source due to low visibility caused by the narrow field of view of the laparoscope. In this paper, we propose the concept of a hemostasis support system that automatically identifies blood regions and indicates them to the surgeon. We mainly describe a blood region identification method, which is one of the technical challenges in realizing the support system. The proposed method is based on a machine learning technique called the support vector machine and works in real time. Within this method, all the pixels in the image are classified as either blood or non-blood pixels based on color features (e.g., a combination of RGB and HSV values). The suitable combination of feature values used for the classification is determined by a simple feature selection method. Three feature values were determined to identify the blood region. We then validated the proposed method with ten sequences of laparoscopic images by cross-validation. The average accuracy exceeded 95% with a processing time of about 12.6 ms/frame. The proposed method was able to accurately identify blood regions and was suitable for real-time applications.

MISC

 24

Lectures and Oral Presentations

 42

Research Projects (Joint Research and Competitive Funding)

 4

Industrial Property Rights

 2