Research Achievements

川村 和也

カワムラ カズヤ  (Kazuya Kawamura)

Basic Information

Affiliation
Associate Professor, Center for Frontier Medical Engineering, Chiba University
(Concurrent) Associate Professor, Graduate School of Engineering / Faculty of Engineering, Department of Engineering, Medical Engineering Course
(Concurrent) Associate Professor, Graduate School of Engineering / Graduate School of Science and Engineering, Division of Fundamental Engineering
Degree
Doctor of Engineering (March 2009, Waseda University)

Researcher Number
50449336
ORCID ID
 https://orcid.org/0000-0002-2736-1311
J-GLOBAL ID
200901068885312227
researchmap Member ID
6000000438

External Links

Papers

 120
  • Kazuya Kawamura, Ayaka Matsui, Ryoichi Nakamura, Nobuyoshi Otori
    Journal of Japan Society of Computer Aided Surgery 25(4) 278-286 2024  Peer-reviewed, Lead author
  • Kazuya Kawamura, Yuma Shimura
    Journal of Robotics and Mechatronics 34(6) 1277-1283 December 2022  Peer-reviewed, Lead author
  • Kazuya Kawamura, Ryu Ebata, Ryoichi Nakamura, Nobuyoshi Otori
    International journal of computer assisted radiology and surgery 18(1) 9-16 September 23, 2022  Peer-reviewed, Invited, Lead author
    PURPOSE: Endoscopic sinus surgery (ESS) is widely used to treat chronic sinusitis. However, it involves the use of surgical instruments in a narrow surgical field in close proximity to vital organs, such as the brain and eyes. Thus, an advanced level of surgical skill is expected of surgeons performing this surgery. In a previous study, endoscopic images and surgical navigation information were used to develop an automatic situation recognition method in ESS. In this study, we aimed to develop a more accurate automatic surgical situation recognition method for ESS by improving the method proposed in our previous study and adding post-processing to remove incorrect recognition. METHOD: We examined the training model parameters and the number of long short-term memory (LSTM) units, modified the input data augmentation method, and added post-processing. We also evaluated the modified method using clinical data. RESULT: The proposed improvements improved the overall scene recognition accuracy compared with the previous study. However, phase recognition did not exhibit significant improvement. In addition, the application of the one-dimensional median filter significantly reduced short-time false recognition compared with the time series results. Furthermore, post-processing was required to set constraints on the transition of the scene to further improve recognition accuracy. CONCLUSION: We suggested that the scene recognition could be improved by considering the model parameter, adding the one-dimensional filter and post-processing. However, the scene recognition accuracy remained unsatisfactory. Thus, a more accurate scene recognition and appropriate post-processing method is required.
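    The median-filter post-processing described in this entry can be sketched as follows; this is a minimal illustration (the window length and labels are hypothetical choices, not values from the paper), assuming the per-frame scene predictions are encoded as ordinal integer labels.

    import numpy as np
    from scipy.signal import medfilt

    def smooth_scene_labels(labels, window=9):
        """Apply a 1-D median filter to a per-frame integer scene-label sequence."""
        # medfilt needs an odd window; short runs of misrecognized labels are replaced
        # by the surrounding majority, which is the effect described in the abstract.
        return medfilt(np.asarray(labels, dtype=float), kernel_size=window).astype(int)

    # Example: an isolated one-frame misrecognition (label 2) inside a run of label 0 is removed.
    frames = np.array([0, 0, 0, 0, 2, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
    print(smooth_scene_labels(frames, window=5))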
  • Satoshi Miura, Taisei Kaneko, Kazuya Kawamura, Yo Kobayashi, Masakatsu G. Fujie
    The International Journal of Medical Robotics and Computer Assisted Surgery 18(2) April 2022  Peer-reviewed
  • Satoshi Miura, Masaki Seki, Yuta Koreeda, Yang Cao, Kazuya Kawamura, Yo Kobayashi, Masakatsu G. Fujie, Tomoyuki Miyashita
    Advanced Biomedical Engineering 11 87-97 2022  
    Laparoscopic surgery holds great promise in medicine but remains challenging for surgeons because it is difficult to perceive depth while suturing. In addition to binocular parallax, such as three-dimensional vision, shadow is essential for depth perception. This paper presents an augmented reality system that draws virtual shadows to aid depth perception. On the visual display, the system generates shadows that mimic actual shadows by estimating shadow positions using image processing. The distance and angle between the forceps tip and the surface were estimated to evaluate the accuracy of the system. To validate the usefulness of this system in surgical applications, novices performed suturing tasks with and without the augmented reality system. The system error and delay were sufficiently small, and the generated shadows were similar to actual shadows. Furthermore, the suturing error decreased significantly when the augmented reality system was used. The shadow-drawing system developed in this study may help surgeons perceive depth during laparoscopic surgery.
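    A minimal geometric sketch of the virtual-shadow idea above: the instrument tip is projected onto a locally planar surface patch along a light direction. The plane, light direction, and tip coordinates below are hypothetical placeholders; the actual system estimates these quantities from the endoscopic image.

    import numpy as np

    def project_shadow(tip, plane_point, plane_normal, light_dir):
        """Intersect the ray tip + t * light_dir with the plane (point, normal)."""
        n = plane_normal / np.linalg.norm(plane_normal)
        d = light_dir / np.linalg.norm(light_dir)
        denom = np.dot(n, d)
        if abs(denom) < 1e-9:                 # light parallel to the surface: no shadow point
            return None
        t = np.dot(n, plane_point - tip) / denom
        return tip + t * d

    tip = np.array([0.0, 0.0, 10.0])          # forceps tip position (mm), hypothetical
    surface_pt = np.array([0.0, 0.0, 0.0])    # a point on the tissue plane
    surface_n = np.array([0.0, 0.0, 1.0])     # plane normal
    light = np.array([0.3, 0.0, -1.0])        # assumed light direction
    print(project_shadow(tip, surface_pt, surface_n, light))   # -> [3. 0. 0.]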
  • 江幡 龍, 中村 亮一, 川村 和也, 鴻 信義
    日本コンピュータ外科学会誌 23(2) 48-55 April 2021  Peer-reviewed, Corresponding author
  • Satoshi Miura, Ryutaro Ohta, Yang Cao, Kazuya Kawamura, Yo Kobayashi, Masakatsu G. Fujie
    IEEE Transactions on Human-Machine Systems 51(4) 1-8 2021  Peer-reviewed
  • Satoshi Miura, Masaki Seki, Yuta Koreeda, Yang Cao, Kazuya Kawamura, Yo Kobayashi, Masakatsu G. Fujie, Tomoyuki Miyashita
    2020 8th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob) 2020-November 205-211 November 2020  Peer-reviewed
    Laparoscopic surgery enables minimally invasive surgery. However, it is difficult for surgeons to perceive depth during suturing. A binocular endoscope helps surgeons perceive depth, but wearing a head-mounted display makes it harder for them to grasp their surroundings. Because shadows aid depth perception, in this paper we developed a virtual shadow drawing system. The system renders realistic shadows by estimating the forceps position, the surface shape, and the shadow position. We tested the accuracy of the system by evaluating the estimated distance and angle between the forceps and the surface. The error and delay were small enough to draw realistic shadows. Furthermore, participants performed a suturing task while looking at the shadow. The experiment was carried out with a variety of shadow directions. As a result, the mean and variance of the suturing error were smallest at 270 deg. In conclusion, the most appropriate shadow direction appears to be perpendicular to the wound.
  • 山田亮太, 川平洋, 下村義弘, 川村和也
    日本コンピュータ外科学会誌 22(2) 131-137 2020  Peer-reviewed, Corresponding author
  • 井上 淳, 飯岡 俊光, 川村 和也, 花崎 泉
    看護理工学会誌 7 99-106 2020  Peer-reviewed
    In this study, we examined how a cane-walking training device of a new shape that we have been developing, which differs from conventional walkers, affects gait. The device is intended to let patients safely perform quantitative cane-walking training on their own. To date, we have confirmed its safety by measuring changes in stride length, step width, and pelvic sway during use. However, to examine how the device affects human gait and its operability, the delay time during walking needs to be evaluated. In this paper, we evaluated the influence of the training device by deriving the delay time between the two bodies from the cross-correlation of the forward acceleration of the user and the device. The results showed that high-frequency vibration of the device was absorbed by the harness and that no dangerous motion pulling on the user's body occurred. We also found that the delay time increased as the device size increased. (Author abstract)
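    The cross-correlation step described in this abstract can be sketched as below; the signals and sampling rate are synthetic placeholders, not measurement data from the study.

    import numpy as np

    def estimate_delay(a, b, fs):
        """Delay (s) of signal b relative to signal a, from the cross-correlation peak."""
        a = a - np.mean(a)
        b = b - np.mean(b)
        xcorr = np.correlate(b, a, mode="full")       # lags from -(N-1) to +(N-1)
        lag = np.argmax(xcorr) - (len(a) - 1)
        return lag / fs

    fs = 100.0                                        # sampling rate in Hz (illustrative)
    t = np.arange(0, 5, 1 / fs)
    human = np.sin(2 * np.pi * 1.0 * t)               # forward acceleration of the user (synthetic)
    walker = np.roll(human, 8)                        # training device lagging by 8 samples
    print(estimate_delay(human, walker, fs))          # ~0.08 s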
  • 山田 亮太, 川平 洋, 下村 義弘, 川村 和也
    日本内視鏡外科学会雑誌 24(7) MO332-7 December 2019  Peer-reviewed, Last author, Corresponding author
  • Kazuya Kawamura, Hiroto Seno, Yo Kobayashi, Satoshi Ieiri, Makoto Hashizume, Masakatsu G. Fujie
    Applied Sciences 9(19) 4136-4136 October 2, 2019  Peer-reviewed, Lead author
    In pediatric surgery, robotic technology is useful. However, it is difficult to apply this technology due to size-related problems. In our study, we proposed a mechanical design method using a human-in-the-loop type simulator, and the moving volume and invisible area were optimized. We also verified the effectiveness of the optimization of the mechanical parameters by applying the simulator to pediatric surgery. In this experiment, a needle-hooking task was carried out by four subjects with five types of mechanisms using the results of the Pareto optimal solution obtained in the previous research. Moreover, the accuracy of the needle tip manipulation was verified. It was confirmed that the accuracy was higher under the operation of the mechanism that satisfied the Pareto optimal solution in comparison with the other mechanism. As the operation was carried out based on movement in the direction of the arm, the moving volume decreased. Therefore, the accuracy of the hooking was found to improve. It would be useful to optimize the mechanism by verifying the moving volume and invisible area rate for the needle-hooking task. In future work, the optimization of the mechanism for procedures that require both hands will be carried out.
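    The Pareto-optimal selection mentioned above can be sketched as follows, assuming each candidate mechanism has been scored on two objectives to be minimized (moving volume and invisible-area rate); the numbers are illustrative, not values from the study.

    import numpy as np

    def pareto_front(points):
        """Boolean mask of non-dominated rows (smaller is better in every column)."""
        points = np.asarray(points, dtype=float)
        mask = np.ones(len(points), dtype=bool)
        for i in range(len(points)):
            # candidate i is dominated if another row is <= in all objectives and < in at least one
            dominators = np.all(points <= points[i], axis=1) & np.any(points < points[i], axis=1)
            dominators[i] = False
            if dominators.any():
                mask[i] = False
        return mask

    # columns: [moving volume, invisible-area rate] -- illustrative numbers
    candidates = [[120.0, 8.0], [150.0, 5.0], [110.0, 12.0], [160.0, 9.0]]
    print(pareto_front(candidates))                   # [ True  True  True False]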
  • Satoshi Miura, Kazuya Kawamura, Yo Kobayashi, Masakatsu G Fujie
    IEEE transactions on bio-medical engineering 66(8) 2352-2361 August 2019  Peer-reviewed
    GOAL: To realize intuitive, minimally invasive surgery, surgical robots are often controlled using master-slave systems. However, the surgical robot's structure often differs from that of the human body, so the arrangement between the monitor and master must reflect this physical difference. In this study, we validate the feasibility of an embodiment evaluation method that determines the arrangement between the monitor and master. In our constructed cognitive model, the brain's intraparietal sulcus activates significantly when somatic and visual feedback match. Using this model, we validate a cognitively appropriate arrangement between the monitor and master. METHODS: In experiments, we measure participants' brain activation using an imaging device as they control the virtual surgical simulator. Two experiments are carried out that vary the monitor and hand positions. CONCLUSION: There are two common arrangements of the monitor and master at the brain activation's peak: One is placing the monitor behind the master, so the user feels that the system is an extension of his arms into the monitor; the other arranges the monitor in front of the master, so the user feels the correspondence between his own arm and the virtual arm in the monitor. SIGNIFICANCE: From these results, we conclude that the arrangement between the monitor and master impacts embodiment, enabling the participant to feel apparent posture matches in master-slave surgical robot systems.
  • Satoshi Miura, Kazuya Kawamura, Masakatsu Fujie, Shigeki Sugano, Tomoyuki Miyashita
    Proceedings of the International Conference on Engineering Design, ICED 2019-August 3611-3620 2019  Peer-reviewed
    Pipe inspection robots have been developed to reduce the cost and time required for gas pipe inspection. However, these robots have been developed using a scrap and build method and are not used in practice. In this paper, we propose a method of virtual pipe inspection simulation to clarify the parameters that are important in increasing the robot's ease of use. This paper presents the results obtained by a feasibility study with regard to pipe simulation. We developed a virtual pipe by simulating eight actual turns of an external gas pipe, and a robot equipped with camera at the tip. In the experiments, three individuals working in the field of gas inspection carried out the operation. We obtained questionnaire, time, and brain activity data. The results revealed various important points that must be considered in practical simulation and robot design. In conclusion, the virtual pipe simulation can be useful in developing the design of a pipe inspection robot.
  • 川村和也, 井上 淳, 飯岡俊光, 吉村俊昭, 花崎 泉
    電気学会C部門大会2018予稿集 September 2018  Peer-reviewed, Lead author
  • Masashi Sekine, Kazuya Kawamura, Wenwei Yu
    Proceedings of the 4th International Symposium on Wearable Robotics (WeRob2018) 22 425-429 2018  Peer-reviewed
  • Masashi Sekine, Kouki Shiota, Enbo Liu, Kazuya Kawamura, Wenwei Yu
    Proceedings and abstracts book of Advanced Materials World Congress 2018 (AMWC2018) 2018  Peer-reviewed
  • Yang Cao, Li Liu, Satoshi Miura, Masaki Seki, Yo Kobayashi, Kazuya Kawamura, Masakatsu G Fujie
    International journal of computer assisted radiology and surgery 12(11) 2003-2013 November 2017  Peer-reviewed
    PURPOSE: Our purpose is to develop a system based on image processing methods that can inform users of the angular relationship between the needle and the forceps. The user thereby adjusts their needle grasping posture according to the angle information, which leads to an improvement in suturing accuracy. METHODS: The system prototype consists of a camera and an image processing computer. The image captured by the camera is input to the computer, and then, the angular relationship between the forceps and needle is calculated via image processing. Then, the system informs the user of the calculated angular relationship between the needle and forceps in real time. To evaluate whether the system improves suturing accuracy, we invited 12 participants to enroll in an experiment based on a suturing task. RESULTS: The experimental results illustrate that the system allows participants to easily adjust the positional relationship between the needle and the forceps and that this adjusted angular relationship leads to higher suturing accuracy. CONCLUSIONS: Adjustment to holding the needle at a right angle before insertion has a critical effect on suturing quality. Therefore, we developed a system that informs the user of the angular relationship between the needle and the forceps. The results of the evaluation show that the system significantly improves the suturing accuracy of participants via informing them of the angle.
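    A minimal sketch of the core computation behind such a system: the angle between the needle and forceps directions (obtained, for example, by line fitting on segmented image regions) is reported so the user can adjust toward 90 degrees. The direction vectors below are placeholders, not outputs of the authors' image processing.

    import numpy as np

    def angle_between_deg(v1, v2):
        """Unsigned angle in degrees between two 2-D direction vectors (0-90 deg)."""
        v1 = np.asarray(v1, float) / np.linalg.norm(v1)
        v2 = np.asarray(v2, float) / np.linalg.norm(v2)
        cosang = abs(np.dot(v1, v2))                  # axis-like directions: ignore sign
        return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

    forceps_dir = (1.0, 0.1)                          # hypothetical forceps-axis direction in the image
    needle_dir = (-0.1, 1.0)                          # hypothetical needle direction
    print(round(angle_between_deg(forceps_dir, needle_dir), 1))   # ~90.0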
  • Toshimitu Iioka, Jun Inoue, Izumi Hanazaki, Kazuya Kawamura, Yoshifumi Kijima, Toshiro Fujimoto
    Proceedings of The 39th Annual International Conference of the IEEE EMBC2017 2017(1) FrDT8-06.1 July 2017  
  • 井上 淳, 花崎 泉, 川村 和也, 貴嶋 芳文, 藤元 登四郎
    電気学会論文誌. C 137(3) 452-458 2017  
    In this report, we discuss a walker used for crutch-walking exercises for hemiplegic patients as an illustrative case. We developed a new walker that enables a person with hemiplegia to perform crutch-walking training in a hospital ward without supervision. Hemiplegic patients need crutch-walking training until they leave the hospital, but it is difficult for all patients to train sufficiently because of limited rehabilitation time. We designed the walker using a human body measurement database. Our walker has three advantages. First, it does not contact the patient's arm, leg, or crutch during crutch-walking training. Second, when the patient falls, the brake works automatically. Third, it follows the patient's gait without requiring the use of the patient's hands. We investigated the safety of gait with the walker and discuss issues for implementation.
  • 小池 友和, 藤谷 順子, 西垣 有希子, 安藤 武, 關口 相和子, 山下 祥平, 川村 和也, 藤江 正克
    理学療法 - 臨床・研究・教育 24(1) 36-39 2017  
    In respiratory rehabilitation, improving thoracic mobility is expected to increase ventilation volume. Although the relationship between chest expansion difference and vital capacity has been examined by various authors, there are few reports on the relationship between chest expansion difference and tidal volume. In this study, we used a thoracic measurement system to examine the relationship between thoracic mobility and ventilation volume during deep breathing. A correlation was found between chest expansion difference and maximum inspiratory volume: per 1 cm of chest expansion difference, ventilation volume increased by about 168.8 ml in men and about 458.9 ml in women at the level of the xiphoid process, and by about 229.0 ml in men and about 326.0 ml in women at the level of the 10th rib. Although sex differences and other factors need to be considered, the results show that improving thoracic mobility leads to an increase in ventilation volume.
  • Cao Yang, Miura Satoshi, Liu Quanquan, Kobayashi Yo, Kawamura Kazuya, Sugano Shigeki, Fujie Masakatsu G
    MECHANICAL ENGINEERING JOURNAL 4(4) 2017  Peer-reviewed
    In this study, we propose a novel endoscopic manipulation system that is controlled by a surgeon's eye movements. The optical axis direction of the endoscopic manipulator is altered intuitively based on the surgeon's pupil movements. A graphical user interface was developed by dividing the monitor screen into several areas with shape boundaries, so that the movement direction of the endoscope can be identified by the area gazed at by the operator. We used a probabilistic neural network (PNN) to decide the regional distribution proportion to recognize the direction in which the operator would want the endoscopic manipulator to move. The PNN model was trained by individual calibration data. We hypothesized that PNN model training could be completed immediately after calibration, which also determines the boundary of the regional distribution portion (RDP). We designed an experiment and recorded the path of direction change to verify the PNN's effectiveness in our proposed system. All participants, including four who wore glasses, completed the requested task. Moreover, wearing glasses had no significant effect on the performance of the proposed system. Furthermore, the PNN training duration only took 2% of the entire time of the procedure to handle individual differences. We conclude that our method can handle individual differences in operators' eyes through machine learning of personal calibration data over a short time frame, which will not take significant extra preoperative setup time.
  • Yang Cao, Satoshi Miura, Quanquan Liu, Yo Kobayashi, Kazuya Kawamura, Shigeki Sugano, Masakatsu G. Fujie
    ROBOMECH Journal 3(1) December 1, 2016  
    Increased attention has been focused on laparoscopic surgery because of its minimal invasiveness and improved cosmetic properties. However, the procedure of laparoscopic surgery is considerably difficult for surgeons, thus paving the way for the introduction of robotic technology to reduce the surgeon’s burden. Thus, we have developed a single-port surgery assistive robot with a master–slave structure that has two surgical manipulators and a sheath manipulator for the alteration of endoscope direction. During the development of the surgical robotic system, achieving intuitive operation is very important. In this paper, we propose a new laparoscope manipulator control system based on the movement of the pupils to enhance intuitive operability. We achieve this using a webcam and an image processing method. After the pupil movement data are obtained, the master computer transforms these data into an output signal, and then the slave computer receives and uses that signal to drive the robot. The details of the system and the pupil detection procedure are explained. The aim of the present experiment is to verify the effectiveness of the image processing method applied to the alteration of endoscope direction control system. For this purpose, we need to determine an appropriate pupil motion activation threshold to begin the sheath manipulator’s movement. We used four kinds of activation threshold, measuring the time cost of a particular operation: to move the image of the endoscope to a specific target position. Moreover, we identified an appropriate activation threshold that can be used to determine whether the endoscope is moving.
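    The activation-threshold idea described above can be sketched as follows: a direction command is issued only when the pupil has moved beyond a threshold from its calibrated center. The threshold and coordinates are illustrative values, not the thresholds compared in the paper.

    import numpy as np

    def gaze_command(pupil_xy, center_xy, threshold_px=40.0):
        """Map pupil displacement to a coarse direction command, or None below the threshold."""
        dx, dy = np.asarray(pupil_xy, float) - np.asarray(center_xy, float)
        if np.hypot(dx, dy) < threshold_px:
            return None                               # inside the dead zone: keep the endoscope still
        if abs(dx) >= abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"             # image y-axis points downward

    print(gaze_command((365, 242), (320, 240)))       # 'right'
    print(gaze_command((322, 250), (320, 240)))       # None (below threshold)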
  • Yuta Koreeda, Yo Kobayashi, Satoshi Ieiri, Yuya Nishio, Kazuya Kawamura, Satoshi Obata, Ryota Souzaki, Makoto Hashizume, Masakatsu G Fujie
    International journal of computer assisted radiology and surgery 11(10) 1927-1936 October 2016  Peer-reviewed
    PURPOSE: We developed and evaluated a visual compensation system that allows surgeons to visualize obscured regions in real time, such that the surgical instrument appears virtually transparent. METHODS: The system consists of two endoscopes: a main endoscope to observe the surgical environment, and a supporting endoscope to render the region hidden from view by surgical instruments. The view captured by the supporting endoscope is transformed to simulate the view from the main endoscope, segmented to the shape of the hidden regions, and superimposed to the main endoscope image so that the surgical instruments look transparent. A prototype device was benchmarked for processing time and superimposition rendering error. Then, it was evaluated in a training environment with 22 participants performing a backhand needle driving task with needle exit point error as the criterion. Lastly, we conducted an in vivo study. RESULTS: In the benchmark, the mean processing time was 62.4 ms, which was lower than the processing time accepted in remote surgeries. The mean superimposition error of the superimposed image was 1.4 mm. In the training environment, needle exit point error with the system decreased significantly for experts compared with the condition without the system. This change was not significant for novices. In the in vivo study, our prototype enabled visualization of needle exit points during anastomosis. CONCLUSION: The benchmark suggests that the implemented system had an acceptable performance, and evaluation in the training environment demonstrated improved surgical task outcomes in expert surgeons. We will conduct a more comprehensive in vivo study in the future.
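    A minimal sketch of the warp-and-superimpose step described above, assuming the geometric relationship between the two endoscope views is summarized by a homography and the instrument region is given as a mask; both are placeholders here, whereas the actual system estimates them online.

    import cv2
    import numpy as np

    def make_instrument_transparent(main_img, support_img, H, instrument_mask, alpha=0.7):
        """Blend the warped supporting view into the masked (hidden) region of the main view."""
        h, w = main_img.shape[:2]
        warped = cv2.warpPerspective(support_img, H, (w, h))      # supporting view -> main viewpoint
        mask3 = cv2.merge([instrument_mask] * 3).astype(np.float32) / 255.0
        blended = (1 - alpha * mask3) * main_img + (alpha * mask3) * warped
        return blended.astype(np.uint8)

    # Placeholder inputs: identity homography and a rectangular "instrument" mask.
    main_img = np.full((240, 320, 3), 80, np.uint8)
    support_img = np.full((240, 320, 3), 200, np.uint8)
    mask = np.zeros((240, 320), np.uint8)
    mask[100:140, 150:200] = 255                                  # region hidden by the instrument
    out = make_instrument_transparent(main_img, support_img, np.eye(3), mask)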
  • Satoshi Miura, Junichi Takazawa, Yo Kobayashi, Tomoyuki Miyashita, Masakatsu G. Fujie, Kazuya Kawamura
    2016 WORLD AUTOMATION CONGRESS (WAC) 2016  Peer-reviewed
    This paper presents a novel evaluation method for designing an intuitive surgical robot by measuring a user's brain activity. Conventionally, surgical robots have been designed based on their mechanical performance. However, an improvement in a robot's mechanical performance does not necessarily represent the embodiment that the user feels. In this paper, we evaluate intuitive operability based on the user's brain activation. Previously, we used functional near-infrared spectroscopic topography (fNIRS) brain imaging; however, it is better to use a brain measurement technique possessing a high time resolution, as brain activity has a higher time resolution than fNIRS. The objective was to measure changes in brain activity as a function of a change in the slave arm positioning. In the experiment, the brain activity of four participants was measured using fNIRS while they used a hand controller to move the virtual arm of a surgical simulator. The experiment was carried out with the virtual arm in two positions: one easy to control and the other difficult. The spectrum of the brain activity increased at the easy position more than at the difficult position. We conclude that the brain activity changed as the user perceived that the virtual arm belonged to their body.
  • Yang Cao, Yo Kobayashi, Satoshi Miura, Kazuya Kawamura, Masakatsu G. Fujie, Shigeki Sugano
    2016 IEEE International Conference on Robotics and Biomimetics, ROBIO 2016, Qingdao, China, December 3-7, 2016 479-484 2016  Peer-reviewed
  • Satoshi Miura, Yo Kobayashi, Kazuya Kawamura, Masatoshi Seki, Yasutaka Nakashima, Takehiko Noguchi, Yuki Yokoo, Masakatsu G. Fujie
    Computer Aided Surgery 3-15 January 2016  Peer-reviewed
    This paper presents a novel method for evaluating a user’s feelings in a master–slave robotic surgical operation. By measuring brain activity, an engineer can quantify the user’s feelings during the operation from a cognitive science perspective. In contrast with conventional methods, the engineer can consider the user’s feelings in designing a robot with intuitive operability. The brain activity measurement method is well suited to not only surgical robots but also all master–slave robots. The objective of this paper is to determine the optimal distance between the slave and endoscope using brain activity measurement. We find that brain activity shows a significant peak when the user controls the virtual arm in a position matching the most natural hand-eye coordination.
  • Yang Cao, Satoshi Miura, Yo Kobayashi, Kazuya Kawamura, Shigeki Sugano, Masakatsu G. Fujie
    IEEE Robotics Autom. Lett. 1(1) 531-538 2016  Peer-reviewed
  • Kazuya Kawamura, Hiroto Seno, Yo Kobayashi, Satoshi Ieiri, Makoto Hashizume, Masakatsu G. Fujie
    Adv. Robotics 30(7) 476-488 2016  Peer-reviewed
  • 井上淳, 花崎泉, 川村和也, 貴嶋芳文, 藤元登四郎
    平成27年度電気学会C部門大会予稿集 30-33 September 2015  
  • Satoshi Miura, Yo Kobayashi, Kazuya Kawamura, Yasutaka Nakashima, Masakatsu G Fujie
    International journal of computer assisted radiology and surgery 10(6) 783-790 June 2015  Peer-reviewed
    PURPOSE: we present an evaluation method to qualify the embodiment caused by the physical difference between master-slave surgical robots by measuring the activation of the intraparietal sulcus in the user's brain activity during surgical robot manipulation. We show the change of embodiment based on the change of the optical axis-to-target view angle in the surgical simulator to change the manipulator's appearance in the monitor in terms of hand-eye coordination. The objective is to explore the change of brain activation according to the change of the optical axis-to-target view angle. METHODS: In the experiments, we used a functional near-infrared spectroscopic topography (f-NIRS) brain imaging device to measure the brain activity of the seven subjects while they moved the hand controller to insert a curved needle into a target using the manipulator in a surgical simulator. The experiment was carried out several times with a variety of optical axis-to-target view angles. RESULTS: Some participants showed a significant peak (P value = 0.037, F-number = 2.841) when the optical axis-to-target view angle was 75°. CONCLUSIONS: The positional relationship between the manipulators and endoscope at 75° would be the closest to the human physical relationship between the hands and eyes.
  • Masashi Sekine, Le Xie, Kazuya Kawamura, Wenwei Yu
    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS 12(Feb) WEB ONLY February 2015  Peer-reviewed
    Transhumeral and shoulder disarticulation amputees find it difficult to move their prostheses for goal-oriented movement using only their small residual limbs. Thus, spatial accessibility is especially important for shoulder prostheses. Moreover, because responding to external disturbances and absorbing impact using only the viscoelasticity and flexibility of the small residual limb is difficult, the intrinsic viscoelasticity of the shoulder prosthesis is indispensable for safety. In our previous work, we proposed a small pneumatic elastic actuator-driven parallel link mechanism for shoulder prostheses. In this paper, we propose two new devices, a sliding antagonistic mechanism and a soft backbone, to improve the spatial characteristics and disturbance responsiveness. We quantitatively evaluated a prosthetic arm with the two devices. The results showed that the two devices increased the arm's workspace and disturbance responsiveness.
  • Yang Cao, Quanquan Liu, Yo Kobayashi, Kazuya Kawamura, Shigeki Sugano, Masakatsu G. Fujie
    2015 IEEE International Conference on Robotics and Biomimetics, ROBIO 2015, Zhuhai, China, December 6-9, 2015 1637-1642 2015  Peer-reviewed
  • Satoshi Miura, Yuya Matsumoto, Yo Kobayashi, Kazuya Kawamura, Yasutaka Nakashima, Masakatsu G Fujie
    Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 2015 17-20 2015  Peer-reviewed
    This paper presents a method to evaluate the hand-eye coordination of a master-slave surgical robot by measuring the activation of the intraparietal sulcus in the user's brain activity during control of a virtual manipulator. The objective is to examine the changes in activity of the intraparietal sulcus when the user's visual or somatic feedback is passed through or intercepted. The hypothesis is that the intraparietal sulcus activates significantly when both visual and somatic feedback are passed, but deactivates when either is intercepted. The brain activity of three subjects was measured by functional near-infrared spectroscopic topography brain imaging while they used a hand controller to move a virtual arm in a surgical simulator. The experiment was performed several times under three conditions: (i) the user controlled the virtual arm naturally with both visual and somatic feedback passed, (ii) the user moved with closed eyes with only somatic feedback passed, (iii) the user only gazed at the screen with only visual feedback passed. Brain activity was significantly greater when controlling the virtual arm naturally (p<0.05) than when moving with closed eyes or only gazing, across all participants. In conclusion, the brain can activate according to the agreement of visual and somatic sensory feedback.
  • K. Murakami, R. Kishimoto, T. Obata, M. Tsukune, Y. Kobayashi, M. Fujie, K. Kawamura, K. Yoshida, T. Yamaguchi
    2015 IEEE INTERNATIONAL ULTRASONICS SYMPOSIUM (IUS) 2015  
    The frequency dependence of shear wave velocity provides significant information for evaluating the viscoelastic character of tissue related to liver fibrosis. Although the Voigt model has often been used in viscoelastic analysis, several studies have shown that the frequency dependence measured by dynamic mechanical analysis (DMA) testing is not consistent with the theoretical prediction. To experimentally investigate the relationship between changes in tissue structure and tissue viscoelasticity, the shear wave velocity of fatty and fibrotic livers in a rat model was quantitatively measured using shear wave elastography (SWE) and DMA testing. In the DMA test, the shear wave velocity was calculated from the complex elastic modulus, i.e., the storage and loss elastic moduli. The difference in shear wave velocity between fatty and fibrotic livers was evaluated to be 0.27 m/s in SWE and 0.20 m/s in the DMA test.
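    For reference, shear wave velocity can be computed from the storage and loss moduli measured in a DMA test with a commonly used phase-velocity relation for a homogeneous viscoelastic medium; the sketch below uses illustrative numbers, not the study's measurements.

    import numpy as np

    def shear_wave_speed(g_storage, g_loss, density=1000.0):
        """Shear wave phase velocity (m/s): c = sqrt(2|G*|^2 / (rho * (G' + |G*|)))."""
        g_mag = np.hypot(g_storage, g_loss)           # |G*| from storage G' and loss G''
        return np.sqrt(2.0 * g_mag**2 / (density * (g_storage + g_mag)))

    print(shear_wave_speed(2.0e3, 0.5e3))             # ~1.4 m/s for G' = 2 kPa, G'' = 0.5 kPa (example)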
  • 井上 淳, 山口 浩志, 花崎 泉, 川村 和也, 貴嶋 芳文, 藤元 登四郎
    ロボティクス・メカトロニクス講演会講演概要集 2015 _1P1-F02_1-_1P1-F02_3 2015  
    In this paper, we developed a new walker that enables a person with hemiplegia to perform crutch-walking training in a hospital ward without supervision. Hemiplegic patients need crutch-walking training until they leave the hospital, but it is difficult for all patients to train sufficiently because of limited rehabilitation time. Our walker has three advantages. First, it does not contact the patient's arm, leg, or crutch during crutch-walking training; we designed the walker using a human body measurement database. Second, when the patient falls, the brake works automatically. Third, it follows the patient's gait without using the patient's hand. We investigated the safety of gait with the walker.
  • Satoshi Miura, Yo Kobayashi, Kazuya Kawamura, Masatoshi Seki, Yasutaka Nakashima, Takehiko Noguchi, Yuki Yokoo, Masakatsu G. Fujie
    J. Adv. Comput. Intell. Intell. Informatics 19(1) 143-151 2015  Peer-reviewed
  • Yuta Koreeda, Satoshi Obata, Yuya Nishio, Satoshi Miura, Yo Kobayashi, Kazuya Kawamura, Ryota Souzaki, Satoshi Ieiri, Makoto Hashizume, Masakatsu G. Fujie
    Int. J. Comput. Assist. Radiol. Surg. 10(5) 619-628 2015  Peer-reviewed
  • 井上淳, 山口浩志, 花崎泉, 川村和也, 貴嶋芳文, 東祐二, 湯地忠彦, 藤元登四郎
    生活生命支援医療福祉工学系学会連合大会2014(LIFE2014)概要集 September 2014  
  • Jun Inoue, Kazuya Kawamura, Masakatsu G. Fujie
    Proceedings of the IASTED International Conference on Biomedical Engineering, BioMed 2014 260-266 2014  
    In this paper, we performed biodynamic verification of a muscular activity model using Bayes estimation. In creating this model, we aimed to enable quantitative selection of lower foot orthoses based on a patient's muscular activity in the lower foot. Because physical models require the use of large-scale measurement systems, which cannot be used clinically, they are not suitable for making these measurements. Therefore, we chose Bayes estimation to construct a model for estimating muscular activity from parameters that can be measured easily, such as joint angle and sole pressure. This model allows for the estimation of not only muscle activity but also another closely related parameter through the change in muscular activity, which is a parent node of the muscle activity node. The three advantages of our model are that it 1) reports the influences on muscle activity, which change throughout the gait cycle, by using 10% level nodes for each factor, 2) expresses the influence of those factors, which differ at low and high muscular activity levels, and 3) compensates for missed predictions by estimating muscle activity in 10% increments. Here, we verify the biodynamic validity of the model parent node for four foot muscles.
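    A highly simplified sketch of the estimation idea above: muscle activity discretized into 10% levels is inferred from easily measured inputs (joint angle, sole pressure) by a Bayesian update. The observation models and all numbers are hypothetical stand-ins; the study builds its network from measured gait data.

    import numpy as np

    levels = np.arange(0, 101, 10)                    # muscle activity levels (%)
    prior = np.full(len(levels), 1.0 / len(levels))   # uniform prior over levels

    def likelihood(obs, mean_per_level, sd):
        """P(observation | level) under a hypothetical Gaussian observation model."""
        return np.exp(-0.5 * ((obs - mean_per_level) / sd) ** 2)

    # Hypothetical expected joint angle (deg) and sole pressure (kPa) for each activity level.
    angle_means = np.linspace(5, 25, len(levels))
    pressure_means = np.linspace(10, 60, len(levels))

    def posterior(angle_obs, pressure_obs):
        post = prior * likelihood(angle_obs, angle_means, 3.0) * likelihood(pressure_obs, pressure_means, 8.0)
        return post / post.sum()

    p = posterior(angle_obs=18.0, pressure_obs=45.0)
    print(levels[np.argmax(p)])                       # most probable activity level (%)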
  • 中島 康貴, 渡邉 峰生, 井上 淳, 川村 和也, 藤江 正克
    バイオメカニズム 22 237-248 2014  
    In gait training for hemiplegic patients, physical therapists (PTs) use a technique called handling, in which they guide the patient's pelvic motion so that it does not deviate from the desired movement. While handling makes it possible to provide assistance adapted to the symptoms of paralysis and individual differences, the physical burden on the PT is large, and quantitative training that repeats accurate motions over a long period is considered difficult. The aim of this study is to quantitatively characterize the mechanical properties of PT handling and to develop a gait training robot based on those properties, so as to combine the precision of a robot with the adaptability to patients that a PT provides. In this paper, based on experimental measurements of handling for hemiplegic patients, we constructed a handling model that takes impedance characteristics into account, verified the accuracy of the model, and evaluated the effectiveness of a robot implementing control based on the model.
  • 三浦 智, 小林 洋, 川村 和也, 関 雅俊, 中島 康貴, 野口 建彦, 横尾 勇樹, 藤江 正克
    日本コンピュータ外科学会誌 16(2) 61-69 2014  
    Surgical robots have undergone considerable improvement in mechanical performance in recent years, but their intuitive operability has not been quantitatively evaluated. This paper presents a brain activity measurement method for determining intuitive operability, with the aim of designing a robot with intuitive operability. The objective of this paper is to validate that a specific brain area, the intraparietal sulcus, activates when the user controls a slave manipulator that is positioned intuitively. In the experiments, while subjects controlled the hand controller to position the tip of the virtual slave manipulator on the target in the surgical simulator, we measured brain activity using functional near-infrared spectroscopy (f-NIRS). We carried out the experiment a number of times with the virtual slave manipulator configured in a variety of ways. The results show that the brain activated significantly with the specific slave manipulator configuration in which the angles matched the human body. We conclude that how strongly the user feels that the manipulator belongs to his or her body affects hand-eye coordination, which is related to visual and somatic sensory feedback.
  • 井上 淳, 川村 和也, 藤江 正克
    ロボティクス・メカトロニクス講演会講演概要集 2014 _3A1-V01_1-_3A1-V01_4 2014  
    In this paper, we performed biodynamic verification of a normal-gait muscular activity model and a gait-with-lower-foot-orthosis muscular activity model using Bayes estimation. In creating these models, we aimed to enable quantitative selection of lower foot orthoses based on a patient's muscular activity in the lower foot. The model allows for the estimation of not only muscle activity but also another closely related parameter through the change in muscular activity, which is a parent node of the muscle activity node. Here, we verify the biodynamic validity of the parent nodes of the normal-gait model and the gait-with-lower-foot-orthosis model for two foot muscles.
  • 豊永 勇樹, 貴嶋 芳文, 湯地 忠彦, 東 祐二, 藤元 登四郎, 渡邉 峰生, 中島 康貴, 東野 達也, 井上 淳, 川村 和也, 藤江 正克
    第35回九州理学療法士・作業療法士合同学会 予稿集 November 2013  
  • Jun Inoue, Kazuya Kawamura, Masakatsu G. Fujie
    Transactions of Japanese Society for Medical and Biological Engineering 51(Supplement) R-298 September 2013  
  • 安藤健, 小島康史, 関雅俊, 川村和也, 二瓶美里, 佐藤春彦, 井上剛伸, 藤江正克
    日本機械学会論文集 C編(Web) 79(802) 2037-2047 July 2013  
    Learning is essential for human beings. In this article, we analyzed human learning strategies in multi-movement discrimination by a learning machine. In particular, discrimination of leg movements using a Self-Organizing Map was taken as an example. Firstly, based on an experiment with ten healthy subjects, learning strategies were divided into two types: a Convergence strategy (improving the repeatability of each movement) and an Independence strategy (conducting different movements). Secondly, a child with severe impairment conducted similar experiments. He selected the Convergence strategy, and his index values evaluating the degree of convergence and independence of his leg movements were quite similar to the healthy subjects' values. The generality of the Convergence and Independence strategies was suggested in the multi-movement discrimination task.
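    A minimal self-organizing map (SOM) sketch in the spirit of the movement-discrimination setup above: feature vectors are mapped onto a 2-D grid so that similar movements land on nearby nodes. The grid size, training schedule, and synthetic data are illustrative; the study's actual features and training are not reproduced here.

    import numpy as np

    def train_som(data, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
        """Train a small SOM with a Gaussian neighborhood and exponentially decaying rates."""
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
        for t in range(iters):
            x = data[rng.integers(len(data))]
            # best-matching unit: the node whose weight vector is closest to the sample
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)), (h, w))
            lr = lr0 * np.exp(-t / iters)
            sigma = sigma0 * np.exp(-t / iters)
            dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            neighborhood = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * neighborhood * (x - weights)
        return weights

    # Two synthetic "movement" clusters; after training their best-matching units group on the map.
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(0.2, 0.05, (50, 3)), rng.normal(0.8, 0.05, (50, 3))])
    som = train_som(data)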

Selected MISC

 80

Books and Other Publications

 2

Presentations

 148

Teaching Experience (Courses)

 14

Professional Memberships

 3

Research Projects (Joint Research, Competitive Funding, etc.)

 20

Industrial Property Rights

 11

Social Contribution Activities

 12