Research Achievements

中口 俊哉

ナカグチ トシヤ  (Toshiya Nakaguchi)

Basic Information

Affiliation
Professor, Center for Frontier Medical Engineering, Chiba University
Degree
Doctor of Engineering (Sophia University)

J-GLOBAL ID
200901090860522117
researchmap Member ID
5000048018

External Links

Papers

 209
  • Zhe Li, Aya Kanazuka, Atsushi Hojo, Yukihiro Nomura, Toshiya Nakaguchi
    Measurement, May 2025
  • Huitao Wang, Takahiro Nakajima, Kohei Shikano, Yukihiro Nomura, Toshiya Nakaguchi
    Tomography, February 27, 2025
  • Rizki Nurfauzi, Ayaka Baba, Taka-Aki Nakada, Toshiya Nakaguchi, Yukihiro Nomura
    Biomedical Physics & Engineering Express 11(2) 025026-025026, February 6, 2025
    Abstract Traumatic injury remains a leading cause of death worldwide, with traumatic bleeding being one of its most critical and fatal consequences. The use of whole-body computed tomography (WBCT) in trauma management has rapidly expanded. However, interpreting WBCT images within the limited time available before treatment is particularly challenging for acute care physicians. Our group has previously developed an automated bleeding detection method for WBCT images. However, further reduction of false positives (FPs) is necessary for clinical application. To address this issue, we propose a novel automated detection method for traumatic bleeding in CT images using deep learning and multi-organ segmentation. Methods: The proposed method integrates a three-dimensional U-Net model for bleeding detection with an FP reduction approach based on multi-organ segmentation. The multi-organ segmentation method targets the bone, kidney, and vascular regions, where FPs are primarily found during the bleeding detection process. We evaluated the proposed method using a dataset of delayed-phase contrast-enhanced trauma CT images collected from four institutions. Results: Our method detected 70.0% of bleedings with 76.2 FPs/case. The processing time for our method was 6.3 ± 1.4 min. Compared with our previous approach, the proposed method significantly reduced the number of FPs while maintaining detection sensitivity.
  • Ryo Oka, Bochong Li, Seiji Kato, Takanobu Utsumi, Takumi Endo, Naoto Kamiya, Toshiya Nakaguchi, Hiroyoshi Suzuki
    Current Urology, February 3, 2025
    Abstract Background With the rising incidence of prostate cancer (PCa), there is a global demand for assistive tools that aid in the diagnosis of high-grade PCa. This study aimed to develop a diagnostic support system for high-grade PCa using innovative magnetic resonance imaging (MRI) sequences in conjunction with artificial intelligence (AI). Materials and methods We examined image sequences of 254 patients with PCa obtained from diffusion-weighted and T2-weighted imaging, using novel MRI sequences before prostatectomy, to elucidate the characteristics of the 3-dimensional (3D) image sequences. The presence of PCa was determined based on the final diagnosis derived from pathological results after prostatectomy. A 3D deep convolutional neural network (3DCNN) was used as the AI for image recognition. Data augmentation was conducted to enhance the image dataset. High-grade PCa was defined as Gleason grade group 4 or higher. Results We developed a learning system using a 3DCNN as a diagnostic support system for high-grade PCa. The sensitivity and area under the curve values were 85% and 0.82, respectively. Conclusions The 3DCNN-based AI diagnostic support system, developed in this study using innovative 3D multiparametric MRI sequences, has the potential to assist in identifying patients at higher risk of high-grade PCa before treatment.
  • Zhe Li, Aya Kanazuka, Atsushi Hojo, Yukihiro Nomura, Toshiya Nakaguchi
    Electronics 13(19) 3882-3882, September 30, 2024  Peer-reviewed
    The COVID-19 pandemic has significantly disrupted traditional medical training, particularly in critical areas such as the injection process, which require expert supervision. To address the challenges posed by reduced face-to-face interactions, this study introduces a multi-modal fusion network designed to evaluate the timing and motion aspects of the injection training process in medical education. The proposed framework integrates 3D reconstructed data and 2D images of hand movements during the injection process. The 3D data are preprocessed and encoded by a Long Short-Term Memory (LSTM) network to extract temporal features, while a Convolutional Neural Network (CNN) processes the 2D images to capture detailed image features. These encoded features are then fused and refined through a proposed multi-head self-attention module, which enhances the model’s ability to capture and weigh important temporal and image dynamics in the injection process. The final classification of the injection process is conducted by a classifier module. The model’s performance was rigorously evaluated using video data from 255 subjects with assessments made by professional physicians according to the Objective Structured Assessment of Technical Skill—Global Rating Score (OSATS-GRS)[B] criteria for time and motion evaluation. The experimental results demonstrate that the proposed data fusion model achieves an accuracy of 0.7238, an F1-score of 0.7060, a precision of 0.7339, a recall of 0.7238, and an AUC of 0.8343. These findings highlight the model’s potential as an effective tool for providing objective feedback in medical injection training, offering a scalable solution for the post-pandemic evolution of medical education.
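The false-positive reduction step described in the Nurfauzi et al. entry above (suppressing detector responses that fall inside segmented bone, kidney, and vascular regions) can be illustrated with a minimal sketch. This is a hypothetical simplification, not the published implementation: the voxel-wise masking, the 0.5 threshold, and all names here are assumptions for illustration.

```python
import numpy as np

def suppress_false_positives(detection_prob, organ_masks, threshold=0.5):
    """Drop detector output inside organ regions where false positives
    concentrate (e.g. bone, kidney, vessels). Voxel-wise simplification."""
    detections = detection_prob >= threshold        # binarise detector output
    fp_region = np.zeros(detections.shape, dtype=bool)
    for mask in organ_masks:                        # union of organ segmentations
        fp_region |= mask.astype(bool)
    return detections & ~fp_region                  # keep detections outside organs

# Toy 1x4x4 "volume": two detections, one of which lies inside a bone mask.
prob = np.zeros((1, 4, 4))
prob[0, 0, 0], prob[0, 2, 2] = 0.9, 0.8
bone = np.zeros((1, 4, 4), dtype=bool)
bone[0, 2, 2] = True
kept = suppress_false_positives(prob, [bone])
print(int(kept.sum()))  # 1: only the detection outside the bone mask survives
```

In the paper the suppression is evaluated per detected lesion against three organ segmentations; a connected-component overlap test would replace the voxel-wise mask used in this sketch.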

MISC

 186

Books and Other Publications

 3

Presentations (Lectures and Oral Presentations)

 577
  • 橋本賢介, 藤原伊純, 津村徳道, 中口俊哉
    2010春期 応用物理学関係連合講演会, 17a-J-7, March 17, 2010
  • 澤邉暢志, 平井経太, 津村徳道, 中口俊哉, 山本昇志
    2010春期 応用物理学関係連合講演会, 17a-J-6, March 17, 2010
  • 三上 俊彰, 平井 経太, 中口 俊哉, 津村 徳道
    電子情報通信学会 総合大会, AS-4-6, March 16, 2010, 一般社団法人電子情報通信学会
  • 前田 未友, 岡本 隆太郎, 山本 昇志, 津村 徳道, 中口 俊哉, 下山 一郎, 三宅 洋一
    電子情報通信学会 総合大会, AS-4-7, March 16, 2010, 一般社団法人電子情報通信学会
  • Masayuki Ukishima, Yoshinori Suzuki, Norimichi Tsumura, Toshiya Nakaguchi, Martti Mäkinen, Jussi Parkkinen
    TAGA 62nd Annual Technical Conference, pp.61-62, San Diego, U.S.A., March 2010
    A method is proposed to separately model the mechanical dot gain and the optical dot gain. First, an iterative algorithm is proposed to estimate the spatio-spectral transmittance of ink layer from the spatio-spectral reflectance of color halftone print measured with the reflection optical microscope attached with the liquid crystal tunable filter (LCTF). The spatio-spectral transmittance of ink layer is not affected by the optical dot gain and is only affected by the mechanical dot gain. Next, a model is proposed to estimate the effective dot coverage using the estimated spatio-spectral transmittance of ink layer. It corresponds to the analysis of mechanical dot gain. Next, a model is proposed to estimate Yule-Nielsen's n parameter using the effective dot coverage. It corresponds to the analysis of optical dot gain. Finally, the prediction accuracy of the proposed model is evaluated by the ΔE 94 between the measured and predicted spectral reflectance of offset printing images with cyan and magenta inks. The prediction accuracy of the proposed model was significant since the average ΔE94 and the maximum ΔE94 of all samples between the measured spectral reflectance and the predicted spectral reflectance were 0.62 and 1.37, respectively.
  • Ahmed Afifi, Toshiya Nakaguchi, Norimichi Tsumura
    SPIE Medical Imaging 2010, 7623-153, San Diego, U.S.A., February 2010, SPIE-INT SOC OPTICAL ENGINEERING
    In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable, and its accuracy is of special interest. However, the automatic segmentation of these organs is a challenging task owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, and considering the image information alone often leads to meager image segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, the segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformation and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to its high-level texture features extracted using the over-complete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters for novel images. In this paper, we demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.
  • Satoshi Yamamoto, Norimichi Tsumura, Toshiya Nakaguchi, Takao Namiki, Yuji Kasahara, Katsutoshi Terasawa, Yoichi Miyake
    5th European Conference on Colour in Graphics, Imaging, and Vision and 12th International Symposium on Multispectral Colour Science, CGIV 2010/MCS'10, Joensuu, Finland, June 14-17, 2010, IS&T - The Society for Imaging Science and Technology
  • Masayuki Ukishima, Yoshinori Suzuki, Norimichi Tsumura, Toshiya Nakaguchi, Martti Mäkinen, Shinichi Inoue, Jussi Parkkinen
    5th European Conference on Colour in Graphics, Imaging, and Vision and 12th International Symposium on Multispectral Colour Science, CGIV 2010/MCS'10, Joensuu, Finland, June 14-17, 2010, IS&T - The Society for Imaging Science and Technology
  • Satoshi Yamamoto, Yasumasa Itakura, Masashi Sawabe, Gimpei Okada, Norimichi Tsumura, Toshiya Nakaguchi
    2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, CVPRW 2010, 2010, IEEE Computer Society
    In this article, we propose an efficient and accurate compressive sensing-based method for estimating the light transport characteristics of real world scenes. Although compressive sensing allows efficient estimation of a high-dimensional signal with a sparse or near-to-sparse representation from a small number of samples, the computational cost of compressive sensing in estimating the light transport characteristics is relatively high. Moreover, although these methods require relatively fewer images than other techniques, they still need 500-1000 images to estimate an accurate light transport matrix. Our proposed method - precomputed ROMP (Regularized Orthogonal Matching Pursuit) - improves the performance of compressive sensing by providing an appropriate initial state, which allows us to estimate the matrix more accurately with fewer images. This improvement was achieved through two steps: 1) pseudo-single pixel projection by multi-line projection - measuring coarse light transport characteristics to use as an initial state; 2) ROMP with initial signal - refining the coarse light transport characteristics with compressive sensing theory starting from the initial signal. Precomputed ROMP was carried out by parallel processing. With these two steps, we were able to estimate the light transport characteristics more accurately, much faster, and with fewer images. © 2010 IEEE.
  • Satoshi Yamamoto, Norimichi Tsumura, Keiko Ogawa-Ochiai, Toshiya Nakaguchi, Yuji Kasahara, Takao Namiki, Yoichi Miyake
    2010 ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2010, IEEE
    In this article, we propose an effective color-processing algorithm to analyze the hyperspectral image of the tongue and its application to preventive medicine by the concept of Japanese traditional herbal medicine (Kampo medicine). Kampo medicine contains a number of concepts useful for preventive medicine such as "Mibyou" - disease-oriented state - signs of abnormalities. Hyperspectral images of the tongue were taken with the system with an integrating sphere, and tongue area without coating was eliminated automatically. Then, spectral information of the tongue area without coating was analyzed by principal component analysis, and the component vector best representing the clinical symptom was found by rotating the vector on a plane spanned by two arbitrary principal component vectors.
  • Ahmed Afifi, Toshiya Nakaguchi, Norimichi Tsumura
    2nd International Conference on Agents and Artificial Intelligence (ICAART2010), vol.1, pp.291-297, Rome, Italy, January 2010, INSTICC-INST SYST TECHNOLOGIES INFORMATION CONTROL & COMMUNICATION
    Image segmentation is the first and essential process in many medical applications. This process is traditionally performed by radiologists or medical specialists manually tracing the objects on each image. In almost all of these applications, the medical specialists have to access a large number of images, which is a tedious and time-consuming process. On the other hand, automatic segmentation is still challenging because of low image contrast and ill-defined boundaries. In this work, we propose a fully automated medical image segmentation framework. In this framework, the segmentation process is constrained by two prior models: a shape prior model and a texture prior model. The shape prior model is constructed from a set of manually segmented images using principal component analysis (PCA), while wavelet packet decomposition is utilized to extract the texture features. The Fisher linear discriminant algorithm is employed to build the texture prior model from the set of texture features and to perform a preliminary segmentation. Furthermore, the particle swarm optimization (PSO) algorithm is used to refine the preliminary segmentation according to the shape prior model. In this work, we tested the proposed technique on the segmentation of the liver from abdominal CT scans, and the obtained results show the ability of the proposed technique to accurately delineate the desired objects.
  • 浮島正之, 鈴木芳徳, 津村徳道, 中口俊哉, Martti Mäkinen, Jussi Parkkinen
    第104回日本画像学会研究討論会, pp.97-100, December 4, 2009
  • Keiichi Ochiai, Norimichi Tsumura, Toshiya Nakaguchi, Yoichi Miyake
    SIGGRAPH Asia 2009, Posters, Yokohama, Japan, December 2009
  • 板倉安将, 津村徳道, 中口俊哉
    OPJ2009, 26aD3, November 26, 2009
  • 山本昇志, 上三垣さゆり, 津村徳道, 中口俊哉, 三宅洋一
    OPJ2009, 26aD4, November 26, 2009
  • 鈴木芳徳, 浮島正之, 中口俊哉, 津村徳道, 三宅洋一
    OPJ2009, 26pP29, November 26, 2009
  • 佐々木麻衣, 津村徳道, 中口俊哉, 三宅洋一
    OPJ2009, 25aE3, November 25, 2009
  • Satoshi Sugiyama, Toshiya Nakaguchi, Satoshi Yamamoto, Takao Namiki, Norimichi Tsumura, Yoichi Miyake
    1st Congress of Fatty Liver and Metabolic Syndrome, p.625, Budapest, Hungary, November 2009
  • 石川裕也, 中口俊哉, 山本智史, 並木隆雄, 津村徳道, 三宅洋一
    生体医工学シンポジウム2009 (BMES2009), 3-3-18, September 19, 2009
  • 杉山慧, 中口俊哉, 山本智史, 並木隆雄, 津村徳道, 三宅洋一
    生体医工学シンポジウム2009 (BMES2009), 4-1-18, September 19, 2009
  • 菊地綾乃, 平井経太, 中口俊哉, 津村徳道, 三宅洋一
    第3回イメージメディアクオリティとその応用ワークショップ (JIQA2009), pp.84-87, September 9, 2009
  • 浮島正之, 中口俊哉, 津村徳道, Martti Mäkinen, Jussi Parkkinen, 三宅洋一
    第3回イメージメディアクオリティとその応用ワークショップ (JIQA2009), pp.88-92, September 9, 2009
  • 岩波 琢也, 菊地 綾乃, 平井 経太, 金子 毅, 中口 俊哉, 伊藤 典男, 津村 徳道, 三宅 洋一
    映像情報メディア学会 2009年次大会, 14-7, August 26, 2009, 一般社団法人 映像情報メディア学会
    In the present paper, we have studied the relationship between ambient illumination and psychological factors in viewing video images. Four kinds of video images with changing layouts of illumination were observed and analyzed by the Semantic Differential method. It became clear that the factor scores were increased by illumination from around the display and the ceiling.
  • 中口 俊哉, 館 真吾, 津村 徳道, 三宅 洋一
    第28回 日本医用画像工学会 (JAMIT2009), P12, August 4, 2009, 日本医用画像工学会
    Presenting the three-dimensional shape of organs and tissues in CT images is highly useful for diagnostic support and medical education, but it requires extracting the 3D surface of each organ. The pancreas is considered especially difficult to extract because it is surrounded by other organs and its boundary is indistinct. In this study, we propose an interactive extraction method that makes simple use of the user's knowledge. First, the user roughly specifies a region containing the whole pancreas in the CT image. Then, each point on the boundary of this region is connected to its nearest point on the region's centerline, the derivative of the CT-value profile along that segment is computed, and the maximum derivative is defined as the edge confidence in that direction. Using the resulting edge-confidence distribution in an automatic extraction method, we realized a new shape extraction that suppresses leakage errors at indistinct contours. Applied to level-set extraction with the Chan-Vese model, it yielded the pancreas shape stably and with high accuracy. (Author abstract)
  • 山本 昇志, 上三垣 さゆり, 津村 徳道, 中口 俊哉, 三宅 洋一
    映像情報メディア学会技術報告, Vol.33, No.33, pp.21-24, August 2009, 一般社団法人 映像情報メディア学会
    We are developing an appearance reproduction system that visually simulates how material appearance changes with operations performed at the manufacturing stage. For such a system, appearance reproduction linked to the operation is essential, and coordination with real work, especially hand motion, is indispensable. In this study, we therefore developed a method for robustly measuring the position and motion of a hand during desktop work. Hand position is detected with a structured pattern projected by an infrared-only projector; by detecting only positions within a fixed height above the desk, the hand is captured only while working. The hand shape needed for motion classification is measured by the space-encoding method; by restricting the measurement range to the computed vicinity of the hand, the processing time for measurement and shape reconstruction is reduced dramatically. This report describes these image-processing methods and the performance of a prototype system.
  • Masayuki Ukishima, Martti Mäkinen, Toshiya Nakaguchi, Norimichi Tsumura, Jussi Parkkinen, Yoichi Miyake
    Lecture Notes in Computer Science (SCIA2009), Vol.5575, pp.607-616, Oslo, Norway, June 2009
  • Tatsuya Namae, Takeshi Koishi, Toshiya Nakaguchi, Norimichi Tsumura, Yoichi Miyake
    Int'l Journal of Computer Assisted Radiology and Surgery (CARS2009), Vol.4 Supplement 1, p.S276, June 2009
  • Takeshi Koishi, Ginpei Okada, Tatsuya Namae, Toshiya Nakaguchi, Norimichi Tsumura, Yoichi Miyake
    Int'l Journal of Computer Assisted Radiology and Surgery (CARS2009), Vol.4 Supplement 1, p.S280, June 2009
  • 後上慧人, 細岡信介, 牧野貴雄, 中口俊哉, 津村徳道, 三宅洋一
    2009年度日本写真学会年次大会, pp.29-30, May 2009
  • 平井経太, ジャンバル トゥムルゴー, 菊地綾乃, 中口俊哉, 津村徳道, 三宅洋一
    第3回新画像システム・情報フォトニクス研究討論会, pp.28-29, May 2009
  • 浮島正之, 中口俊哉, 津村徳道, Markku Hauta-Kasari, Jussi Parkkinen, 三宅洋一
    2009年度日本写真学会年次大会, pp.27-28, May 2009
  • Takuro Ishii, Tatsuo Igarashi, Satoki Zenbutsu, Masashi Sekine, Toshiya Nakaguchi, Yukio Naya, Harufumi Makino
    Society of American Gastrointestinal and Endoscopic Surgeons (SAGES 2009), Phoenix, U.S.A., April 2009
  • 生江達哉, 小石毅, 中口俊哉, 津村徳道, 三宅洋一
    第56回 応用物理学関係連合講演会, 31p-ZX-12, p.1052, March 31, 2009
  • 鈴木 芳徳, 浮島 正之, 中口 俊哉, 津村 徳道, 三宅 洋一
    電子情報通信学会総合大会, AS-3-9, pp.S-53-54, March 17, 2009, 一般社団法人電子情報通信学会
  • 浮島 正之, Mäkinen Martti, 中口 俊哉, 津村 徳道, Parkkinen Jussi, 三宅 洋一
    電子情報通信学会総合大会, AS-3-8, pp.S-51-52, March 17, 2009, 一般社団法人電子情報通信学会
  • トゥムルトゴー ジャンバル, 平井 経太, 菊池 綾乃, 中口 俊哉, 津村 徳道, 三宅 洋一
    電子情報通信学会総合大会, AS-3-7, pp.S-49-50, March 17, 2009, 一般社団法人電子情報通信学会
  • 中口 俊哉
    日本計算数理工学会誌, No.2009-1, pp.5-10, March 27, 2009, 日本計算数理工学会
  • Keiji Nishimura, Takeshi Koishi, Toshiya Nakaguchi, Sinya Morita, Norimichi Tsumura, Yoichi Miyake
    Proceedings of SPIE, Medical Imaging, Vol.7261, 7261-78, Orlando, USA, February 2009
  • Mai Sasaki, Takeshi Koishi, Toshiya Nakaguchi, Norimichi Tsumura, Yoichi Miyake
    Proceedings of SPIE, Medical Imaging, Vol.7261, 7261-10, Orlando, USA, February 2009, SPIE
    In recent years, various kinds of endoscopes have been developed and are widely used for endoscopic biopsy, surgery, and examination. The size of an inflammatory lesion is important in determining the method of medical treatment. However, it is not easy to measure the absolute size of a lesion such as an ulcer, cancer, or polyp from an endoscopic image, so measuring the size of such lesions during endoscopy is required. In this paper, we propose a new method to measure the absolute straight-line length between two arbitrary points, based on photogrammetry, using an endoscope with a magnetic tracking sensor that gives the camera position and angle. In this method, the stereo-corresponding points between two endoscopic images are determined by the endoscopist without any projection apparatus or computation to find the stereo correspondences; the absolute length can then be calculated on the basis of photogrammetry. An evaluation experiment using a checkerboard showed that the measurement errors are less than 2% of the target length when the baseline is sufficiently long. © 2009 SPIE.
  • Shinji Nakagawa, Toshiya Nakaguchi, Norimichi Tsumura, Yoichi Miyake
    Proceedings of SPIE, Vol.7242, 7242-46, San Jose, USA, January 2009
  • 平井 経太, 津村 徳道, 中口 俊哉
    映像情報メディア学会 冬季大会, 4-4, December 16, 2009, 一般社団法人 映像情報メディア学会
    In this paper, spatio-velocity contrast sensitivities were measured by using red-green and blue-yellow grating stimuli. The results show the contrast sensitivities at the spatial frequencies over one cycle-per-degree decrease gradually as the velocities of the stimuli increase.
  • Keita Hirai, Jambal Tumurtogoo, Ayano Kikuchi, Toshiya Nakaguchi, Norimichi Tsumura, Yoichi Miyake
    SEVENTEENTH COLOR IMAGING CONFERENCE - COLOR SCIENCE AND ENGINEERING SYSTEMS, TECHNOLOGIES, AND APPLICATIONS, 2009, SOC IMAGING SCIENCE & TECHNOLOGY
    In this paper, we proposed and validated SV-CIELAB, a video quality assessment (VQA) method using a spatio-velocity contrast sensitivity function (SV-CSF). The SV-CSF describes the relationship among contrast sensitivities, spatial frequencies and velocities of stimuli. We used the SV-CSF for filtering original and distorted videos. The criteria in our method are obtained by calculating CIELAB color differences between the filtered videos. The experimental results for the validation showed that SV-CIELAB is a more efficient VQA method than conventional methods such as the CIELAB color difference, Spatial-CIELAB and so on.
  • Sayuri Kamimigaki, Shoji Yamamoto, Keita Hirai, Norimichi Tsumura, Toshiya Nakaguchi, Yoichi Miyake
    SEVENTEENTH COLOR IMAGING CONFERENCE - COLOR SCIENCE AND ENGINEERING SYSTEMS, TECHNOLOGIES, AND APPLICATIONS, 2009, SOC IMAGING SCIENCE & TECHNOLOGY
    Rapid prototyping of products is an important technology to which computer science can contribute. In particular, the evaluation of realistic appearance decides the quality of the product in the final process. As an industrial design application, we have been developing an appearance-based display system using the radiance control of a projector and mixed-reality techniques. In this paper, we propose a novel reproduction system for three-dimensional appearance with glossiness, controlled by two or more projection images. R. Raskar et al. proposed a 2-pass rendering method to consider the 3D geometry. However, view-dependent shading, such as specular highlights, is not considered in this method. Because the shading view is assumed to be the same as the rendering view or the user's viewpoint, view-dependent shading could not ensure consistency. Therefore, in this paper, we divide the 3D geometry into object shape, virtual light direction (shading direction), user's viewpoint (view-dependent rendering view) and projector position (view-independent rendering view). The 3D geometry of the viewpoint and the object can be calibrated by the space-encoding method with a projector-camera system. According to the geometry calibration, the partial responsibility and compensation of intensity errors such as occlusions, overlaps and attenuation for each projection is automatically decided. As a result, this system can generate the real appearance of gloss, color and shade on the mock object's surface.
  • 中口俊哉
    第14回日本バーチャルリアリティ学会大会, 2B2-3, 2009
  • Kenji Kamimura, Norimichi Tsumura, Toshiya Nakaguchi, Hideto Motomura, Yoichi Miyake
    Proceedings of SPIE, Vol.7242, 7242-41, San Jose, USA, January 2009
  • Shoji Yamamoto, Sayuri Kamimigaki, Norimichi Tsumura, Toshiya Nakaguchi, Yoichi Miyake
    Proceedings of SPIE, Vol.7251, 7251-02, San Jose, USA, January 2009
  • Takeshi Koishi, Mai Sasaki, Toshiya Nakaguchi, Norimichi Tsumura, Yoichi Miyake
    Technical Report of IEICE, MI2008-71, Taiwan, January 2009
  • Takuya Iwanami, Ayano Kikuchi, Takashi Kaneko, Keita Hirai, Natsumi Yano, Toshiya Nakaguchi, Norimichi Tsumura, Yasuhiro Yoshida, Yoichi Miyake
    Proceedings of SPIE, Vol.7241, 7241-20, San Jose, USA, January 2009, SPIE
    In this paper, we have clarified the relationship between ambient illumination and psychological factors in viewing display images. Psychological factors were obtained by factor analysis of the results of the semantic differential (SD) method. In the psychological experiments, subjects evaluated the impressions of displayed images under changing ambient illuminating conditions. The illumination conditions were controlled by a fluorescent ceiling light and a color LED illumination located behind the display. We experimented under two kinds of conditions. One was the experiment with changing brightness of the ambient illumination. The other was the experiment with changing the colors of the background illumination. In the results of the experiment, two factors, "realistic sensation, dynamism" and "comfortable," were extracted under different brightness of the ambient illumination of the display surroundings. It was shown that "comfortable" was improved by the brightness of the display surroundings. On the other hand, when the illumination color of the surroundings was changed, three factors, "comfortable," "realistic sensation, dynamism" and "activity," were extracted. It was also shown that the values of "comfortable" and "realistic sensation, dynamism" increased when the display surroundings were illuminated by the average color of the image contents. © 2009 SPIE-IS&T.
  • Kenji Kamimura, Norimichi Tsumura, Toshiya Nakaguchi, Hideto Motomura, Yoichi Miyake
    Proceedings of SPIE, Vol.7242, 7242-41, January 2009
    In recent years, the resolution of display devices has increased dramatically. The resolution of video cameras (except very expensive ones), however, is much lower than that of displays, since it is difficult to achieve high spatial resolution at a given frame rate (e.g. 30 frames per second) due to the limited bandwidth. The resolution of an image can be increased by interpolation, such as bi-cubic interpolation, but with this method it is known that the edges of the image are blurred. To create plausible high-frequency details in the blurred image, super-resolution techniques have been studied for a long time. In this paper, we propose a new algorithm for video super-resolution based on a multi-sensor camera system. The multi-sensor camera can capture two types of video sequences as follows: (a) a high-resolution, low-frame-rate luminance sequence, and (b) low-resolution, high-frame-rate color sequences. The training pairs for super-resolution are obtained from these two sequences. The relationships between the high- and low-resolution frames are trained using a pixel-based feature named "texton" and stored in the database with their spatial distribution. The low-resolution sequences are then represented with textons, and each texton is substituted by searching the trained database to create high-resolution features in the output sequences. The experimental results showed that the proposed method can well reproduce both the detailed regions and sharp edges of the scene. It was also shown that the PSNR of the image obtained by the proposed method is improved compared to the image produced by bi-cubic interpolation. © 2009 SPIE-IS&T.
  • 小石毅, 中口俊哉, 林秀樹, 津村徳道, 三宅洋一
    2008年度 日本写真学会秋季研究報告会, pp.3-4, December 8, 2008
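The stereo measurement described in the Sasaki et al. SPIE entry above (two endoscope poses from a magnetic tracker, two user-picked corresponding points, and an absolute length computed by photogrammetry) reduces to triangulating each point and taking the Euclidean distance. Below is a minimal sketch under idealized assumptions: noise-free unit rays already expressed in the tracker's world frame, and hypothetical function names not taken from the paper.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of closest approach between two viewing rays.
    o1, o2: camera centres; d1, d2: unit ray directions (world frame)."""
    # Solve t1*d1 - t2*d2 = o2 - o1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)                 # 3x2 system for (t1, t2)
    t1, t2 = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return (o1 + t1 * d1 + o2 + t2 * d2) / 2.0

def unit(v):
    return v / np.linalg.norm(v)

# Two endoscope centres 10 cm apart; two scene points 2 cm apart (metres).
o1, o2 = np.array([-0.05, 0.0, 0.0]), np.array([0.05, 0.0, 0.0])
p_true, q_true = np.array([0.0, 0.0, 0.10]), np.array([0.02, 0.0, 0.10])

# Ideal rays toward each picked point, one per camera pose.
p_hat = triangulate_midpoint(o1, unit(p_true - o1), o2, unit(p_true - o2))
q_hat = triangulate_midpoint(o1, unit(q_true - o1), o2, unit(q_true - o2))
length = np.linalg.norm(p_hat - q_hat)
print(round(length, 4))  # ≈ 0.02 m between the two picked points
```

With real tracker data the rays come from the sensor's pose and the endoscope's intrinsic calibration, and the sub-2% error reported in the abstract depends on a sufficiently long baseline, as stated there.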

Research Projects (Joint Research, Competitive Funding, etc.)

 18