Center for Frontier Medical Engineering

Takayuki Okamoto (岡本 尚之)

Basic Information

Affiliation
Assistant Professor, Center for Frontier Medical Engineering, Chiba University
Degree
Doctor of Engineering (Chiba University, September 2022)

J-GLOBAL ID
202201020660944348
researchmap Member ID
R000044612

Awards (4)

Major Papers (10)
  • Takayuki Okamoto, Hiroki Okamura, Takehito Iwase, Tomohiro Niizawa, Yuto Kawamata, Hirotaka Yokouchi, Takayuki Baba, Hideaki Haneishi
    Optics Continuum 3(7) 1132-1148, July 15, 2024. Peer-reviewed, lead author, corresponding author.
  • Takayuki Okamoto, Toshio Kumakiri, Hideaki Haneishi
    Radiological Physics and Technology 15(3) 206-223, May 27, 2022. Peer-reviewed, lead author, corresponding author.
    Micro-computed tomography (micro-CT) enables the non-destructive acquisition of three-dimensional (3D) morphological structures at the micrometer scale. Although it is expected to be used in pathology and histology to analyze the 3D microstructure of tissues, micro-CT imaging of tissue specimens requires a long scan time. A high-speed imaging method, sparse-view CT, can reduce the total scan time and radiation dose; however, it causes severe streak artifacts on tomographic images reconstructed with analytical algorithms due to insufficient sampling. In this paper, we propose an artifact reduction method for 3D volume projection data from sparse-view micro-CT. Specifically, we developed a patch-based lightweight fully convolutional network to estimate full-view 3D volume projection data from sparse-view 3D volume projection data. We evaluated the effectiveness of the proposed method using physically acquired datasets. The qualitative and quantitative results showed that the proposed method achieved high estimation accuracy and suppressed streak artifacts in the reconstructed images. In addition, we confirmed that the proposed method requires both short training and prediction times. Our study demonstrates that the proposed method has great potential for artifact reduction for 3D volume projection data under sparse-view conditions.
  • Takayuki Okamoto, Takashi Ohnishi, Hideaki Haneishi
    IEEE Transactions on Radiation and Plasma Medical Sciences 6(8) 859-873, April 20, 2022. Peer-reviewed, lead author, corresponding author.
    Sparse-view computed tomography (CT), an imaging technique that reduces the number of projections, can reduce the total scan duration and radiation dose. However, sparse data sampling causes streak artifacts on images reconstructed with analytical algorithms. In this paper, we propose an artifact reduction method for sparse-view CT using deep learning. We developed a lightweight fully convolutional network to estimate a fully sampled sinogram from a sparse-view sinogram by enlargement in the vertical direction. Furthermore, we introduced the band patch, a rectangular region cropped in the vertical direction, as an input image for the network based on the sinogram’s characteristics. Comparison experiments using a swine rib dataset of micro-CT scans and a chest dataset of clinical CT scans were conducted to compare the proposed method, an improved U-net from a previous study, and a U-net with band patches. The experimental results showed that the proposed method achieved the best performance and the U-net with band patches had the second-best result in terms of accuracy and prediction time. In addition, the reconstructed images of the proposed method suppressed streak artifacts while preserving the object’s structural information. We confirmed that the proposed method and the band patch are useful for artifact reduction in sparse-view CT.
  • Takayuki Okamoto, Takashi Ohnishi, Hiroshi Kawahira, Olga Dergachyava, Pierre Jannin, Hideaki Haneishi
    Signal, Image and Video Processing 13(2) 405-412, March 12, 2019. Peer-reviewed, lead author.
    Laparoscopic surgery allows a reduction in surgical incision size and leads to faster recovery compared with open surgery. When bleeding takes place, hemostasis treatment is planned according to the state and location of the bleeding. However, it is difficult to find the bleeding source due to the low visibility caused by the narrow field of view of the laparoscope. In this paper, we propose the concept of a hemostasis support system that automatically identifies blood regions and indicates them to the surgeon. We mainly describe a blood region identification method, which is one of the technical challenges in realizing the support system. The proposed method is based on a machine learning technique, the support vector machine, and works in real time. Within this method, all the pixels in the image are classified as either blood or non-blood pixels based on color features (e.g., a combination of RGB and HSV values). The suitable combination of feature values used for the classification is determined by a simple feature selection method. Three feature values were determined to identify the blood region. We then validated the proposed method with ten sequences of laparoscopic images by cross-validation. The average accuracy exceeded 95% with a processing time of about 12.6 ms/frame. The proposed method was able to accurately identify blood regions and was suitable for real-time applications.
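The band-patch idea in the sparse-view CT papers above can be sketched roughly in plain Python: crop vertical bands of a sinogram as network inputs, and enlarge the sparse-view data along the view axis. In this minimal sketch, linear interpolation between adjacent views stands in for the learned network mapping, and the function names, band width, stride, and array sizes are illustrative assumptions, not the papers' actual configuration:

```python
import random

def extract_band_patches(sinogram, band_width, stride):
    """Crop rectangular band patches spanning the full view (vertical) axis.

    sinogram: list of rows, one per projection view; each row is a list of
    detector samples. Returns a list of patches, each n_views x band_width."""
    n_det = len(sinogram[0])
    return [
        [row[s:s + band_width] for row in sinogram]
        for s in range(0, n_det - band_width + 1, stride)
    ]

def upsample_views(sparse_sinogram, factor):
    """Enlarge a sparse-view sinogram along the view axis by linear
    interpolation between adjacent views -- a stand-in for the learned
    mapping from sparse-view to full-view projection data."""
    full = []
    for i in range(len(sparse_sinogram) - 1):
        a, b = sparse_sinogram[i], sparse_sinogram[i + 1]
        for k in range(factor):
            t = k / factor
            full.append([x + t * (y - x) for x, y in zip(a, b)])
    full.append(list(sparse_sinogram[-1]))
    return full

# Example: 90 views x 64 detector bins, interpolated toward a full scan.
random.seed(0)
sparse = [[random.random() for _ in range(64)] for _ in range(90)]
full = upsample_views(sparse, factor=4)
patches = extract_band_patches(full, band_width=16, stride=8)
print(len(full), len(full[0]))          # 357 64
print(len(patches), len(patches[0][0])) # 7 16
```

Operating on narrow band patches rather than whole sinograms keeps the network input small, which is consistent with the short training and prediction times reported above.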
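The per-pixel color-feature classification in the laparoscopic bleeding paper can likewise be sketched. Here a simple perceptron stands in for the support vector machine, and the synthetic "blood"/"non-blood" pixel colors and the particular feature combination (normalized red, normalized green, HSV saturation) are invented for illustration; the paper's actual three selected features are not listed in the abstract, so none of these names reflect the real implementation:

```python
import colorsys
import random

def color_features(r, g, b):
    """Per-pixel features: normalized red, normalized green, and HSV
    saturation (an illustrative combination, not the paper's three)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return (r / 255, g / 255, s)

def train_perceptron(samples, labels, epochs=200, lr=0.1):
    """Online linear classifier -- a stand-in for the paper's SVM."""
    w, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            bias += lr * err
    return w, bias

def classify(w, bias, feat):
    return 1 if sum(wi * xi for wi, xi in zip(w, feat)) + bias > 0 else 0

# Synthetic, linearly separable training pixels: reddish = blood (1).
random.seed(0)
blood = [color_features(random.randint(180, 255), random.randint(0, 60),
                        random.randint(0, 60)) for _ in range(200)]
other = [color_features(random.randint(0, 120), random.randint(120, 255),
                        random.randint(0, 255)) for _ in range(200)]
samples, labels = blood + other, [1] * 200 + [0] * 200
w, bias = train_perceptron(samples, labels)
acc = sum(classify(w, bias, x) == y
          for x, y in zip(samples, labels)) / len(samples)
print(acc)  # 1.0 on this separable toy data
```

A per-pixel linear decision of this kind is cheap enough to suggest how millisecond-per-frame rates like those quoted above are reachable; the real system's SVM training and feature selection are described in the paper itself.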

MISC (24)

Lectures and Oral Presentations (42)

Joint Research and Competitively Funded Projects (4)

Industrial Property Rights (2)