Updated January 20, 2024


PATHAK, Sarthak Mahesh (パトハック, サーサク マヘシ)
Affiliation
Faculty of Science and Engineering, Assistant Professor C
Contact
Email inquiries can be sent via the contact form.

Degrees

  • Doctor of Engineering (The University of Tokyo)

  • Master of Technology (Indian Institute of Technology Madras)

Education

  • September 2017

    The University of Tokyo   Graduate School of Engineering   Department of Precision Engineering   Doctoral program   Completed

  • September 2014

    Indian Institute of Technology Madras   Integrated bachelor's-master's course in Engineering Design   Master's   Completed

  • March 2009

    Kishinchand Chellaram College, Mumbai, India   Graduated

Career

  • April 2021 - Present

    Chuo University   Faculty of Science and Engineering   Assistant Professor

  • April 2020 - Present

    The University of Tokyo   Graduate School of Engineering, Department of Precision Engineering   Project Assistant Professor

  • April 2018 - March 2020

    The University of Tokyo   Graduate School of Engineering, Department of Precision Engineering   JSPS Postdoctoral Fellow (Fellowship for Foreign Researchers)

  • October 2017 - March 2018

    The University of Tokyo   Graduate School of Engineering, Department of Precision Engineering   Project Researcher

Professional Memberships

  • 2019 - Present

    Japan Society for Precision Engineering

  • August 2018 - Present

    Robotics Society of Japan

  • August 2016 - Present

    IEEE Robotics and Automation Society, Signal Processing Society

Research Keywords

  • Robot vision

  • 3D measurement

  • 360-degree cameras

  • Robot self-localization

Papers

  • Experimental Evaluation of Highly Accurate 3D Measurement Using Stereo Camera and Line Laser (peer-reviewed)

    Shunya Nonaka, Sarthak Pathak, Kazunori Umeda

    Journal of Robotics and Mechatronics   35 (5)   1374 - 1384   October 2023

    Type: Research paper (scientific journal)   Publisher: Fuji Technology Press Ltd.

    This paper proposes a method to improve the accuracy of 3D measurement with a stereo camera by marking the measured object using a line laser. Stereo cameras are commonly used for 3D measurement, but their accuracy depends on the amount of texture on the target. Therefore, a new measurement system combining a stereo camera and a line laser is developed. The line laser marks arbitrary points on the measured object and the stereo camera measures the marked points, so accuracy is improved regardless of the amount of texture on the target. Because the laser is only used to mark points on the measurement target, no calibration between the laser and the stereo camera is required. Experimental evaluation showed that the proposed method achieves accuracy on the order of millimeters. (An illustrative sketch follows this entry.)

    DOI: 10.20965/jrm.2023.p1374

    researchmap
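
    The measurement principle above lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation: it assumes a horizontally rectified stereo pair, a red line laser, and a known focal length and baseline; the crude stripe detector and all names are illustrative.

```python
# Sketch under assumptions: rectified stereo pair, red line laser,
# focal length f in pixels, baseline b in meters.
import numpy as np

def laser_peaks(img_bgr):
    """Per-row column of maximum 'redness': a crude laser-stripe detector."""
    redness = img_bgr[:, :, 2].astype(np.float32) - img_bgr[:, :, 1]
    return np.argmax(redness, axis=1)  # one candidate column per image row

def triangulate_stripe(left_bgr, right_bgr, f, baseline):
    cols_l = laser_peaks(left_bgr)
    cols_r = laser_peaks(right_bgr)
    disparity = (cols_l - cols_r).astype(np.float32)
    valid = disparity > 0                         # keep plausible matches only
    rows = np.arange(left_bgr.shape[0])[valid]
    z = f * baseline / disparity[valid]           # depth from disparity
    x = (cols_l[valid] - left_bgr.shape[1] / 2.0) * z / f
    y = (rows - left_bgr.shape[0] / 2.0) * z / f
    return np.stack([x, y, z], axis=1)            # N x 3 laser-marked 3D points
```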

  • Automatic scoring in fencing by using skeleton points extracted from images (peer-reviewed)

    Takehiro Sawahata, Alessandro Moro, Sarthak Pathak, Kazunori Umeda

    Sixteenth International Conference on Quality Control by Artificial Vision   July 2023

    Type: Research paper (international conference proceedings)   Publisher: SPIE

    DOI: 10.1117/12.3000424

    researchmap

  • Anomaly detection from images in pipes using GAN

    Shigeki Yumoto, Takumi Kitsukawa, Alessandro Moro, Sarthak Pathak, Taro Nakamura, Kazunori Umeda

    ROBOMECH Journal   10 (1)   March 2023

    Type: Research paper (scientific journal)   Publisher: Springer Science and Business Media LLC

    Abstract

    In recent years, the number of pipes that have exceeded their service life has increased. For this reason, earthworm-type robots equipped with cameras have been developed to perform regular inspections of sewer pipes. However, inspection methods have not yet been established. This paper proposes a method for anomaly detection from images in pipes using a Generative Adversarial Network (GAN). A model that combines f-AnoGAN and Lightweight GAN detects anomalies by taking the difference between input images and generated images. Since the GANs are trained only on non-defective images, they convert an image containing defects into one without them. Subtraction images are used to estimate the location of anomalies. Experiments were conducted using actual images of cast iron pipes to confirm the effectiveness of the proposed method. It was also validated on Sewer-ML, a public dataset. (An illustrative sketch follows this entry.)

    DOI: 10.1186/s40648-023-00246-y

    researchmap

    Other link: https://link.springer.com/article/10.1186/s40648-023-00246-y/fulltext.html
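
    A minimal sketch of the detection step described above, assuming some already-trained generator that reconstructs a defect-free version of the input image; the differencing and the threshold below are illustrative, not taken from the paper.

```python
# Reconstruction-difference anomaly detection, in the f-AnoGAN style.
import numpy as np

def anomaly_map(image, reconstruction):
    """Pixel-wise absolute difference; large values suggest defects."""
    return np.abs(image.astype(np.float32) - reconstruction.astype(np.float32))

def detect(image, reconstruction, threshold=25.0):
    m = anomaly_map(image, reconstruction)
    return m.mean() > threshold, m   # (anomaly decision, localization map)
```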

  • Performance Improvement of ICP-SLAM by Human Removal Process Using YOLO (peer-reviewed)

    Keigo Akiba, Ryuki Suzuki, Yonghoon Ji, Sarthak Pathak, Kazunori Umeda

    Applied Human Informatics   5 (1)   1 - 13   March 2023

    Type: Research paper (scientific journal)

    researchmap

  • Image-Based Pick-Up Control of Objects Using a Belt-Equipped Gripper [in Japanese] (peer-reviewed)

    磯邉 柚香, Pathak Sarthak, 梅田 和昇, 橋本 裕介, 松山 吉成, 松田 卓, 金田 侑, 池内 宏樹, 多田隈 建二郎

    Journal of the Robotics Society of Japan   41 (2)   187 - 197   March 2023

    Language: Japanese   Type: Research paper (scientific journal)

    DOI: 10.7210/jrsj.41.187

    researchmap

  • Camera-based Progress Estimation of Assembly Work Using Deep Metric Learning

    Takumi Kitsukawa, Sarthak Pathak, Alessandro Moro, Yoshihiro Harada, Hideo Nishikawa, Minori Noguchi, Akifumi Hamaya, Kazunori Umeda

    2023 IEEE/SICE International Symposium on System Integration (SII)   January 2023

    Type: Research paper (international conference proceedings)   Publisher: IEEE

    DOI: 10.1109/sii55687.2023.10039109

    researchmap

  • Estimation of Road Surface Shape and Object Height Focusing on the Division Scale in Disparity Image Using Fisheye Stereo Camera (peer-reviewed)

    Tomoyu Sakuda, Kento Arai, Sarthak Pathak, Kazunori Umeda

    2023 IEEE/SICE International Symposium on System Integration (SII)   January 2023

    Language: English   Type: Research paper (international conference proceedings)   Publisher: IEEE

    DOI: 10.1109/sii55687.2023.10039474

    researchmap

  • PointpartNet++: Accuracy Improvement of Registration of 3D Point Clouds of Different Sizes by Determining Corresponding Points [in Japanese] (peer-reviewed)

    顔 世筍, Sarthak Pathak, 梅田 和昇

    Journal of the Japan Society for Precision Engineering   89 (1)   90 - 98   January 2023

    Language: Japanese   Type: Research paper (scientific journal)

    DOI: 10.2493/jjspe.89.90

    researchmap

  • PointpartNet: 3D point-cloud registration via deep part-based feature extraction

    Shixun Yan, Sarthak Pathak, Kazunori Umeda

    Advanced Robotics   36 (15)   724 - 734   June 2022

    Type: Research paper (scientific journal)   Publisher: Informa UK Limited

    DOI: 10.1080/01691864.2022.2084346

    researchmap

  • Indoor SLAM based on line observation probability using a hand-drawn map

    Ryuki Suzuki, Yonghoon Ji, Sarthak Pathak, Kazunori Umeda

    2022 IEEE/SICE International Symposium on System Integration (SII)   January 2022

    Type: Research paper (international conference proceedings)   Publisher: IEEE

    DOI: 10.1109/sii52469.2022.9708610

    researchmap

  • Polynomial-fitting based calibration for an active 3D sensing system using dynamic light section method

    Mikihiro Ikura, Sarthak Pathak, Atsushi Yamashita, Hajime Asama

    Fifteenth International Conference on Quality Control by Artificial Vision   July 2021

    Type: Research paper (international conference proceedings)   Publisher: SPIE

    DOI: 10.1117/12.2590827

    researchmap

  • Localization in a Semantic Map via Bounding Box Information and Feature Points (peer-reviewed)

    Sarthak Pathak, Irem Uygur, Shize Lin, Renato Miyagusuku, Alessandro Moro, Atsushi Yamashita, Hajime Asama

    Proceedings of the 2021 IEEE/SICE International Symposium on System Integration (SII2021)   126 - 131   January 2021

    Role: Lead author   Type: Research paper (international conference proceedings)   Publisher: IEEE

    DOI: 10.1109/IEEECONF49454.2021.9382719

    researchmap

    Other link: https://dblp.uni-trier.de/db/conf/sii/sii2021.html#PathakULMMYA21

  • Self-supervised optical flow derotation network for rotation estimation of a spherical camera

    Dabae Kim, Sarthak Pathak, Alessandro Moro, Atsushi Yamashita, Hajime Asama

    Advanced Robotics   35 (2)   118 - 128   2021

    Type: Research paper (scientific journal)

    DOI: 10.1080/01691864.2020.1857305

    researchmap

  • Visualization of Dump Truck and Excavator in Bird's-eye View by Fisheye Cameras and 3D Range Sensor

    Yuta Sugasawa, Shota Chikushi, Ren Komatsu, Jun Younes Louhi Kasahara, Sarthak Pathak, Ryosuke Yajima, Shunsuke Hamasaki, Keiji Nagatani, Takumi Chiba, Kazuhiro Chayama, Atsushi Yamashita, Hajime Asama

    Intelligent Autonomous Systems 16 - Proceedings of the 16th International Conference IAS-16 (IAS)   629 - 640   2021

    Type: Research paper (international conference proceedings)   Publisher: Springer

    DOI: 10.1007/978-3-030-95892-3_47

    researchmap

    Other link: https://dblp.uni-trier.de/db/conf/ias/ias2021.html#SugasawaCKKPYHN21

  • AdjustSense: Adaptive 3D Sensing System with Adjustable Spatio-Temporal Resolution and Measurement Range Using High-Speed Omnidirectional Camera and Direct Drive Motor

    Mikihiro Ikura, Sarthak Pathak, Jun Younes Louhi Kasahara, Atsushi Yamashita, Hajime Asama

    Sensors   21 (21)   6975   2021

    Type: Research paper (scientific journal)

    DOI: 10.3390/s21216975

    researchmap

  • Scale Optimization of Structure from Motion for Structured Light-based All-round 3D Measurement

    Momoko Kawata, Hiroshi Higuchi, Sarthak Pathak, Atsushi Yamashita, Hajime Asama

    2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC)   442 - 449   2021

    Type: Research paper (international conference proceedings)   Publisher: IEEE

    DOI: 10.1109/SMC52423.2021.9658793

    researchmap

    Other link: https://dblp.uni-trier.de/db/conf/smc/smc2021.html#KawataHPYA21

  • Pose Estimation of a Spherical Camera Based on Color and 3D Range Information - Continuous Pose Estimation by Updating the Depth Map for Each Keyframe - [in Japanese] (peer-reviewed)

    陽 東旭, 樋口 寛, Sarthak Pathak, Alessandro Moro, 山下 淳, 淺間 一

    Journal of the Japan Society for Precision Engineering   86 (12)   December 2020

  • Robust and efficient indoor localization using sparse semantic information from a spherical camera (peer-reviewed)

    Irem Uygur, Renato Miyagusuku, Sarthak Pathak, Alessandro Moro, Atsushi Yamashita, Hajime Asama

    Sensors (Switzerland)   20 (15)   1 - 21   August 2020

    Type: Research paper (scientific journal)

    Self-localization enables a system to navigate and interact with its environment. In this study, we propose a novel sparse semantic self-localization approach for robust and efficient indoor localization. "Sparse semantic" refers to the detection of sparsely distributed objects such as doors and windows. We use sparse semantic information to self-localize on a human-readable 2D annotated map in the sensor model. Thus, compared to previous works using point clouds or other dense and large data structures, our work uses a small amount of sparse semantic information, which efficiently reduces uncertainty in real-time localization. Unlike complex 3D constructions, the annotated map required by our method can be easily prepared by marking the approximate centers of the annotated objects on a 2D map. Our approach is robust to the partial obstruction of views and geometrical errors on the map. The localization is performed using low-cost, lightweight sensors: an inertial measurement unit and a spherical camera. We conducted experiments to show the feasibility and robustness of our approach. (An illustrative sketch follows this entry.)

    DOI: 10.3390/s20154128

    Scopus

    PubMed

    researchmap
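
    A minimal sketch of how such a bearing-only semantic sensor model could weight particles on a 2D annotated map. The Gaussian bearing-noise model, the max-over-candidates matching (which sidesteps explicit data association), and all names below are assumptions, not the published implementation.

```python
# Particle weighting from sparse semantic detections (doors, windows, ...).
import numpy as np

def predicted_bearing(particle, landmark):
    """Bearing from particle (x, y, theta) to landmark (x, y), robot frame."""
    dx, dy = landmark[0] - particle[0], landmark[1] - particle[1]
    return (np.arctan2(dy, dx) - particle[2] + np.pi) % (2 * np.pi) - np.pi

def particle_weight(particle, detections, landmarks, sigma=0.2):
    """detections: iterable of (class_name, bearing_rad);
    landmarks: dict class_name -> list of (x, y) map positions.
    Each detection is scored against the best-matching landmark of its class."""
    w = 1.0
    for cls, bearing in detections:
        errors = ((bearing - predicted_bearing(particle, lm) + np.pi)
                  % (2 * np.pi) - np.pi for lm in landmarks[cls])
        w *= max(np.exp(-0.5 * (e / sigma) ** 2) for e in errors)
    return w
```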

  • SelfSphNet: Motion Estimation of a Spherical Camera via Self-Supervised Learning (peer-reviewed)

    Dabae Kim, Sarthak Pathak, Alessandro Moro, Atsushi Yamashita, Hajime Asama

    IEEE Access   8   41847 - 41859   March 2020

    Type: Research paper (scientific journal)

    In this paper, we propose SelfSphNet, a self-supervised learning network that estimates the motion of an arbitrarily moving spherical camera without the need for any labeled training data. Recently, numerous learning-based methods for camera motion estimation have been proposed. However, most of these methods require an enormous amount of labeled training data, which is difficult to acquire experimentally. To solve this problem, SelfSphNet employs two loss functions to estimate the frame-to-frame camera motion, giving the network two supervision signals from unlabeled training data. First, a 5-DoF epipolar angular loss, computed from a dense optical flow of spherical images, estimates the 5-DoF motion between two image frames. This loss function utilizes a unique property of spherical optical flow: the rotational and translational components can be decoupled by a derotation operation. This operation derives from the fact that spherical images can be rotated to any orientation without loss of information, making it possible to 'decouple' the dense optical flow between pairs of spherical images to a pure translational state. Next, a photometric reprojection loss estimates the full 6-DoF motion using a depth map generated from the decoupled optical flow. This minimization strategy enables our network to be optimized without any labeled training data. To confirm the effectiveness of the proposed approach, several experiments on estimating the camera trajectory, as well as the camera motion, were conducted in comparison to a previous self-supervised learning approach, SfMLearner, and a fully supervised learning approach whose baseline network is the same as SelfSphNet. Moreover, transfer learning in a new scene was conducted to verify that the proposed method can optimize the network with newly collected unlabeled data.

    DOI: 10.1109/ACCESS.2020.2977109

    Scopus

    researchmap

  • Extrinsic Parameters Calibration of Multiple Fisheye Cameras in Manhattan Worlds (peer-reviewed)

    Weijie Chen, Sarthak Pathak, Alessandro Moro, Atsushi Yamashita, Hajime Asama

    Proceedings of SPIE - The International Society for Optical Engineering   11515   115151X-1 - 115151X-6   January 2020

    Type: Research paper (international conference proceedings)

    With the advantage of a large field of view, fisheye cameras are widely used in many applications. In order to generate a precise view, calibration of the fisheye cameras is very important. In this paper, we propose a method for calibrating the extrinsic parameters of multiple fisheye cameras operating in man-made structures. A Manhattan world assumption is used, which describes man-made structures as sets of planes that are either orthogonal or parallel to each other. The orientation of the cameras is obtained by extracting vanishing points that denote orthogonal principal directions in the different images captured by the cameras at the same time. With the proposed method, calibration of the extrinsic parameters is very convenient, and the system can be recalibrated remotely. (An illustrative sketch follows this entry.)

    DOI: 10.1117/12.2566562

    Scopus

    researchmap
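
    A minimal sketch of the orientation step described above: under a Manhattan world assumption, the three orthogonal vanishing directions approximately form a rotation matrix, which can be projected onto the nearest true rotation via SVD. The function below is illustrative, not the paper's code.

```python
# Camera orientation from three orthogonal vanishing directions.
import numpy as np

def rotation_from_vanishing_directions(v1, v2, v3):
    """v1, v2, v3: unit 3-vectors toward the three principal vanishing points.
    Returns the nearest rotation matrix (world axes -> camera frame)."""
    M = np.stack([v1, v2, v3], axis=1)   # approximately orthonormal
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:             # enforce a proper rotation, det = +1
        U[:, -1] *= -1
        R = U @ Vt
    return R

# Relative orientation of two cameras observing the same principal directions:
# R_rel = rotation_from_vanishing_directions(*dirs_cam1) @ \
#         rotation_from_vanishing_directions(*dirs_cam2).T
```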

  • Accurate All-Round 3D Measurement Using Trinocular Spherical Stereo via Weighted Reprojection Error Minimization (peer-reviewed)

    Wanqi Yin, Sarthak Pathak, Alessandro Moro, Atsushi Yamashita, Hajime Asama

    Proceedings - 2019 IEEE International Symposium on Multimedia, ISM 2019   86 - 93   December 2019

    Type: Research paper (international conference proceedings)

    Compared to perspective cameras, spherical cameras can perform all-round 3D measurement of the environment more efficiently. However, measurement using binocular spherical stereo has two singularity points at the epipoles of each spherical camera: the measurement becomes extremely sensitive to error close to the epipoles and along the epipolar directions. This degrades the accuracy of 3D reconstruction along the epipolar directions. This paper proposes a three-way measurement method using three spherical cameras in a trinocular spherical stereo setup to achieve accurate all-round 3D measurement. Experiments verified that the proposed weighted reprojection-error optimization improves the accuracy of 3D measurement.

    DOI: 10.1109/ISM46123.2019.00021

    Scopus

    researchmap

  • Improving 3D measurement accuracy in epipolar directions via trinocular spherical stereo (peer-reviewed)

    Wanqi Yin, Sarthak Pathak, Alessandro Moro, Atsushi Yamashita, Hajime Asama

    2019 IEEE 8th Global Conference on Consumer Electronics, GCCE 2019   981 - 982   October 2019

    Type: Research paper (international conference proceedings)

    Spherical cameras are suitable for all-round 3D measurement due to their wide field of view. When performing 3D measurement using binocular spherical stereo, areas in the epipolar directions cannot be reconstructed accurately because measurement confidence is low there. By adding one more spherical camera, perpendicular to the epipolar direction, to form a trinocular spherical stereo, all-round 3D measurement can be achieved. In this paper, we propose a method to improve 3D measurement accuracy by minimizing the overall reprojection error. The effectiveness of the proposed method is verified through experiments.

    DOI: 10.1109/GCCE46687.2019.9015551

    Scopus

    researchmap

  • Motion Estimation of a Spherical Stereo Camera Considering the Reliability of Measured Points [in Japanese] (peer-reviewed)

    野田 純平, Sarthak Pathak, 藤井 浩光, 山下 淳, 淺間 一

    Journal of the Japan Society for Precision Engineering   85 (6)   568 - 576   June 2019

    Language: Japanese   Publisher: Japan Society for Precision Engineering

    Localization is an important function for a mobile robot. In this research, a new localization method using a spherical stereo camera is proposed. Spherical stereo cameras are effective for motion estimation as they can visualize and measure all directions of the environment. Instead of using conventional sparse feature points, dense information from all pixels in the images is used to estimate motion accurately. However, because of the wide field of view, the 3D measurement uncertainty is not uniform across the image. Hence, the geometric uncertainty of every 3D point is calculated and used as a weight to improve accuracy. Experiments show the effectiveness of using uncertainty information to increase the accuracy of motion estimation. (An illustrative sketch follows this entry.)

    DOI: 10.2493/jjspe.85.568

    Scopus

    researchmap
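
    A minimal sketch of uncertainty-weighted motion estimation in the spirit of the paper above: a weighted Kabsch solution to rigid registration, where each point's weight would come from its geometric measurement confidence. The names and the exact weighting scheme are assumptions.

```python
# Weighted rigid registration: find R, t minimizing
# sum_i w_i * || R @ P[i] + t - Q[i] ||^2.
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """P, Q: N x 3 corresponding 3D points; w: N confidence weights."""
    w = w / w.sum()
    mu_p = (w[:, None] * P).sum(axis=0)
    mu_q = (w[:, None] * Q).sum(axis=0)
    H = (w[:, None] * (P - mu_p)).T @ (Q - mu_q)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t
```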

  • E-CNN: Accurate Spherical Camera Rotation Estimation via Uniformization of Distorted Optical Flow Fields (peer-reviewed)

    Dabae Kim, Sarthak Pathak, Alessandro Moro, Ren Komatsu, Atsushi Yamashita, Hajime Asama

    ICASSP 2019 - IEEE International Conference on Acoustics, Speech and Signal Processing   2232 - 2236   May 2019

    Type: Research paper (international conference proceedings)

    Spherical cameras, which can acquire all-round information, are effective for estimating rotation in robotic applications. Recently, Convolutional Neural Networks have shown great robustness in solving such regression problems. However, they are designed for planar images and cannot deal with the non-uniform distortion present in spherical images when expressed in the planar equirectangular projection. This can lower the accuracy of motion estimation. In this research, we propose an Equirectangular-Convolutional Neural Network (E-CNN) to solve this issue. This novel network regresses 3D spherical camera rotation by uniformizing the distorted optical flow patterns in the equirectangular projection. We experimentally show that this results in consistently lower error than learning from the distorted optical flow.

    DOI: 10.1109/ICASSP.2019.8682203

    Scopus

    researchmap

  • A Framework for Bearing-Only Sparse Semantic Self-Localization for Visually Impaired People (peer-reviewed)

    Irem Uygur, Renato Miyagusuku, Sarthak Pathak, Alessandro Moro, Atsushi Yamashita, Hajime Asama

    Proceedings of the 2019 IEEE/SICE International Symposium on System Integration, SII 2019   319 - 324   April 2019

    Type: Research paper (international conference proceedings)

    Self-localization in indoor environments is a critical issue for visually impaired people. Most localization approaches use low-level features and metric information as input. This can result in insufficient output for visually impaired people, since humans understand their surroundings from high-level semantic cues. They need to be given their location with respect to the objects in their surroundings. Thus, in this work, we develop a novel framework that uses semantic information directly for localization, which can also be used to inform the user about their surroundings. The framework uses sparse semantic information, such as the existence of doors, windows, tables, etc., directly within the sensor model and localizes the user within a 2D semantic map. It does not use any distance information to each semantic landmark, which is usually quite difficult to obtain. Nor does it require any kind of data association: the objects need not be uniquely identified. Hence, it can be implemented with simple sensors, such as a camera with object detection software. For our framework, Unity, one of the most popular game engines, was chosen to create a realistic office environment consisting of necessary office items and an agent with a wide-angle camera representing the user. Experimentally, we show that this semantic localization method is an efficient way to use sparse semantic information for locating a person.

    DOI: 10.1109/SII.2019.8700370

    Scopus

    researchmap

  • Distortion-resistant spherical visual odometry for UAV-based bridge inspection (peer-reviewed)

    Sarthak Pathak, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama

    Proceedings of SPIE - The International Society for Optical Engineering   11049   110491O-1 - 110491O-6   January 2019

    Role: Lead author   Type: Research paper (international conference proceedings)

    In this research, we propose a novel distortion-resistant visual odometry technique using a spherical camera, in order to provide localization for a UAV-based bridge inspection support system. We take into account the distortion of the pixels during the calculation of the 2-frame essential matrix via feature-point correspondences. Then, we triangulate 3D points and use them for 3D registration of further frames in the sequence via a modified spherical error function. Via experiments conducted on a real bridge pillar, we demonstrate that the proposed approach greatly increases the accuracy of localization, resulting in an 8.6 times lower localization error.

    DOI: 10.1117/12.2520206

    Scopus

    researchmap

  • Line-Based Global Localization of a Spherical Camera in Manhattan Worlds (peer-reviewed)

    Tsubasa Goto, Sarthak Pathak, Yonghoon Ji, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama

    Proceedings - IEEE International Conference on Robotics and Automation   2296 - 2303   September 2018

    Type: Research paper (international conference proceedings)

    Localization is an important task for mobile service robots in indoor spaces. In this research, we propose a novel technique for indoor localization using a spherical camera. Spherical cameras can obtain a complete view of the surroundings, allowing the use of global environmental information. We take advantage of this in order to estimate camera position and orientation with respect to a known 3D line map of an indoor environment, using a single image. We robustly extract 2D line information from the spherical image via spherical-gradient filtering and match it to 3D line information in the line map. Our method requires no information about the 3D-2D line correspondences. In order to avoid a complicated six degrees of freedom (6 DoF) search for position and orientation, we use a Manhattan world assumption to decompose the line information in the image. The 6-DoF localization process is divided into two phases. First, we estimate the orientation by extracting the three principal directions from the image. Then, the position is estimated by robustly matching the distribution of lines between the image and the 3D model via a spherical Hough representation. This decoupled search can robustly localize a spherical camera using a single image, as we demonstrate experimentally.

    DOI: 10.1109/ICRA.2018.8460920

    Scopus

    researchmap

  • Distortion-Robust Spherical Camera Motion Estimation via Dense Optical Flow (peer-reviewed)

    Sarthak Pathak, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama

    Proceedings - International Conference on Image Processing, ICIP   3358 - 3362   August 2018

    Role: Lead author   Type: Research paper (international conference proceedings)

    Conventional techniques for frame-to-frame camera motion estimation rely on tracking a set of sparse feature points. However, images taken from spherical cameras have high distortion, which can induce mistakes in feature-point tracking, offsetting the advantage of their large fields of view. Hence, in this research, we attempt a novel approach that uses dense optical flow for distortion-robust spherical camera motion estimation. Dense optical flow incorporates smoothing terms and is free of local outliers. It encodes the camera motion as well as dense 3D information. Our approach decomposes dense optical flow into the epipolar geometry and a dense disparity map, and reprojects this disparity map to estimate the 6-DoF camera motion. The approach handles spherical image distortion in a natural way. We experimentally demonstrate its accuracy and robustness.

    DOI: 10.1109/ICIP.2018.8451406

    Scopus

    researchmap

  • Virtual reality with motion parallax by dense optical flow-based depth generation from two spherical images (peer-reviewed)

    Sarthak Pathak, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama

    SII 2017 - 2017 IEEE/SICE International Symposium on System Integration   887 - 892   February 2018

    Role: Lead author   Type: Research paper (international conference proceedings)

    Virtual reality (VR) systems using head-mounted displays (HMDs) can render immersive views of environments, allowing changes of viewpoint position and orientation. When the viewpoint position changes, different objects in the scene undergo different displacements depending on their depth. This is known as 'motion parallax' and is important for depth perception. It is easy to implement for computer-generated scenes. Spherical cameras like the Ricoh Theta S can capture an all-round view of the environment in a single image, making VR possible for real-world scenes as well. Spherical images contain information from all directions and allow all possible viewpoint orientations. However, implementing motion parallax for real-world scenes is tedious, as accurate depth information is required and difficult to obtain. In this research, we propose a novel method to easily implement motion parallax for real-world scenes by automatically estimating all-round depth from two arbitrary spherical images. The proposed method estimates dense optical flow between the two images and decomposes it into a depth map. The depth map can be used to reproject the scene accurately to any desired position and orientation, allowing motion parallax.

    DOI: 10.1109/SII.2017.8279335

    Scopus

    researchmap

  • Line-Feature-Based 3D-2D Matching for Pose Estimation of a Spherical Camera in Man-Made Environments [in Japanese] (peer-reviewed)

    後藤 翼, Sarthak Pathak, 池 勇勳, 藤井 浩光, 山下 淳, 淺間 一

    Journal of the Japan Society for Precision Engineering   83 (12)   1209 - 1215   December 2017

    Language: Japanese   Publisher: Japan Society for Precision Engineering

    In this paper, a novel method for 6 degrees of freedom (DoF) localization of a single spherical camera in a man-made environment is proposed. Taking advantage of the various line features usually present in such environments, a technique is developed to match the 2D line-feature information in a spherical image to the 3D line-segment information available in a known 3D model of the environment. There are two main challenges to be overcome. The first is the detection of line-feature information in a spherical image and its abstraction into a descriptor compatible with the 3D line-feature information in the model. The second is evaluating the similarity between the line-feature information from the 2D image and that from arbitrary 6-DoF poses in the 3D environment model in order to localize the camera. To deal with the former, a randomized Hough transform with spherical gradient-based filtering is used to accurately detect line features in the image and create a line-feature descriptor. The same descriptor is created from arbitrary 6-DoF poses in the 3D model. To deal with the latter, the Earth Mover's Distance (EMD) is used to evaluate their similarity. The proposed method was evaluated in a real environment with its 3D model. The results demonstrated that it can effectively estimate the 6-DoF pose of a spherical camera from a single image.

    DOI: 10.2493/jjspe.83.1209

    Scopus

    researchmap

  • Spatio-temporal video completion in spherical image sequences (peer-reviewed)

    Binbin Xu, Sarthak Pathak, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama

    IEEE Robotics and Automation Letters   2 (4)   2032 - 2039   October 2017

    Type: Research paper (scientific journal)

    Spherical cameras are widely used due to their full 360° fields of view. However, a common but severe problem is that anything carrying the camera is always included in the view, occluding visual information. In this letter, we propose a novel method to remove such occlusions in videos taken from a freely moving spherical camera. Our method can recover the occluded background accurately in distorted spherical videos by inpainting the color and motion information of pixels. The missing color and motion information inside the occluded region is iteratively recovered in a coarse-to-fine optimization. Spatial and temporal coherence of color and motion information is enforced, considering spherical image geometry. Initially, feature-point matching is used to remove the effect of camera rotation in order to deal with large pixel displacements. Following this, the iterative optimization process is bootstrapped using a reliable estimate of motion information obtained by interpolating it from surrounding regions. We demonstrate its effectiveness by successfully completing videos and recovering occluded regions recorded in various practical situations and by quantifying it against other state-of-the-art methods.

    DOI: 10.1109/LRA.2017.2718106

    Scopus

    researchmap

  • Optical Flow-Based Epipolar Estimation of Spherical Image Pairs for 3D Reconstruction (peer-reviewed)

    PATHAK Sarthak, MORO Alessandro, YAMASHITA Atsushi, ASAMA Hajime

    SICE Journal of Control, Measurement, and System Integration   10 (5)   476 - 485   September 2017

    Role: Lead author   Language: English   Publisher: The Society of Instrument and Control Engineers

    Stereo vision is a well-known technique for vision-based 3D reconstruction of environments. Recently developed spherical cameras can extend the concept to all 360° and provide LIDAR-like 360-degree 3D data with color information. In order to perform accurate stereo disparity estimation, the accurate relative pose between the two cameras, represented by the five-degree-of-freedom epipolar geometry, needs to be known. However, it is always tedious to mechanically align and/or calibrate such systems. We propose a technique to recover the complete five-degree-of-freedom parameters of the epipolar geometry in a single minimization, with a dense approach involving all the individual pixel displacements (optical flow) between two camera views. Taking advantage of spherical image geometry, a non-linear least-squares optimization based on the dense optical flow directly minimizes the angles between pixel displacements and epipolar curves in order to align them. This approach is particularly suitable for dense 3D reconstruction, as the pixel-to-pixel disparity between the two images can be calculated accurately and converted to a dense point cloud. Further, no assumptions are made about the direction of camera displacement. We demonstrate this method with error evaluations, examples of successfully rectified spherical stereo pairs, and the dense 3D models generated from them. (A sketch of the alignment cost, in assumed notation, follows this entry.)

    DOI: 10.9746/jcmsi.10.476

    researchmap
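
    A plausible formalization, in assumed notation, of the angular alignment described above; the paper's exact cost function may differ.

```latex
% p_i: unit-sphere direction of pixel i; p'_i: its direction after the optical
% flow displacement; e: candidate epipole. The epipolar great circle through
% p_i lies in the plane with normal e x p_i, so an aligned flow vector should
% be perpendicular to that normal. Per-pixel angular residual and total cost:
\theta_i = \frac{\pi}{2} - \angle\left( e \times p_i,\; p'_i - p_i \right),
\qquad
\hat{e} = \arg\min_{e \in S^2} \sum_i \theta_i^{\,2}
```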

  • Development of a bridge inspection support system using two-wheeled multicopter and 3D modeling technology (peer-reviewed)

    Yoshiro Hada, Manabu Nakao, Moyuru Yamada, Hiroki Kobayashi, Naoyuki Sawasaki, Katsunori Yoko Ji, Satoshi Kanai, Fumiki Tanaka, Hiroaki Date, Sarthak Pathak, Atsushi Yamashita, Manabu Yamada, Toshiya Sugawara

    Journal of Disaster Research   12 (3)   593 - 606   June 2017

    Type: Research paper (scientific journal)

    Recently, many countries have faced serious problems associated with aging civil infrastructures such as bridges, tunnels, dams, and highways. Aging infrastructures are increasing year by year, and suitable maintenance actions are necessary to maintain their safety and serviceability. In fact, infrastructure deterioration has caused serious problems in the past. In order to prevent accidents with civil infrastructures, supervisors must spend a lot of money to maintain their safe condition. Therefore, new technologies are required to reduce maintenance costs. In 2014 the Japanese government started the Cross-Ministerial Strategic Innovation Promotion Program (SIP), in which technologies for infrastructure maintenance have been studied [1]. Fujitsu Limited, Hokkaido University, The University of Tokyo, Nagoya Institute of Technology and Docon Co. Limited have been engaged in the SIP project to develop a bridge inspection support system using information technology and robotic technology. Our system is divided into two main parts: bridge inspection support robots using a two-wheeled multicopter, and an inspection data management system utilizing 3D modeling technology. In this paper, we report on the bridge inspection support system developed in our SIP project.

    DOI: 10.20965/jdr.2017.p0593

    Scopus

    researchmap

  • Spherical video stabilization by estimating rotation from dense optical flow fields (peer-reviewed)

    Sarthak Pathak, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama

    Journal of Robotics and Mechatronics   29 (3, Special Issue)   566 - 579   June 2017

    Role: Lead author   Type: Research paper (scientific journal)

    We propose a method for stabilizing spherical videos by estimating and removing the effect of camera rotation using dense optical flow fields. By derotating each frame in the video to the orientation of its previous frame in two dense approaches, we estimate the complete 3-DoF rotation of the camera and remove it to stabilize the spherical video. Following this, any chosen area on the spherical video (the equivalent of a normal camera's field of view) is unwarped, resulting in a 'rotation-less virtual camera' that can be oriented independently of the camera motion. This greatly aids perception of the environment and of the camera motion. To achieve this, we use dense optical flow, which provides important information about camera motion in a static environment and has several advantages over sparse feature-point based approaches. The spatial regularization property of dense optical flow provides more stable motion information than tracking sparse points and negates the effect of feature-point outliers. We show superior results compared to using sparse feature points alone. (An illustrative sketch follows this entry.)

    DOI: 10.20965/jrm.2017.p0566

    Scopus

    researchmap
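
    A minimal sketch of frame-to-frame derotation in the spirit of the paper above: for small rotations, a purely rotational spherical flow is linear in the rotation vector, so it can be estimated by linear least squares. This linearized version is an assumption for illustration, not the published algorithm.

```python
# Estimate a small frame-to-frame rotation from dense spherical flow.
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_small_rotation(points, flows):
    """points: N x 3 unit directions on the sphere; flows: N x 3 displacement
    vectors. A purely rotational flow satisfies f_i ~ omega x p_i,
    i.e. f_i = -skew(p_i) @ omega, so omega follows from least squares."""
    A = np.concatenate([-skew(p) for p in points], axis=0)  # (3N, 3)
    b = flows.reshape(-1)                                   # (3N,)
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega  # derotate by applying the inverse rotation (e.g., Rodrigues)
```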

  • Spherical Camera Localization in Man-made Environment Using 3D-2D Matching of Line Information (peer-reviewed)

    Tsubasa Goto, Sarthak Pathak, Yonghoon Ji, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama

    Proceedings of the International Workshop on Advanced Image Technology 2017 (IWAIT2017)   January 2017

  • Improving Gradient Histogram Based Descriptors for Pedestrian Detection in Datasets with Large Variations (peer-reviewed)

    Prashanth Balasubramanian, Sarthak Pathak, Anurag Mittal

    IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops   1177 - 1186   December 2016

    Type: Research paper (international conference proceedings)

    Gradient histogram based descriptors, constructed using gradient magnitudes as votes to orientation bins, are successfully used for pedestrian detection. However, their performance suffers on datasets with many variations in properties such as appearance, texture, scale, background, and object pose. Such variations can be reduced by smoothing the images, but this negatively affects the descriptors and their classifiers, because important gradients are lost along with the noisy ones. In this work, we show that the ranks of the gradient magnitudes stay resilient to such smoothing. We show that a combination of image smoothing and the ranks of gradient magnitudes yields good detection performance, especially when the variations in a dataset are large or the number of training samples is small. Experiments on the challenging Caltech and Daimler pedestrian datasets, and the INRIA Person dataset, illustrate these findings. (An illustrative sketch follows this entry.)

    DOI: 10.1109/CVPRW.2016.150

    Scopus

    researchmap
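
    A minimal sketch of the core idea above: smooth the image, then use the ranks of the gradient magnitudes, rather than the raw magnitudes, as histogram vote weights. The bin count, smoothing sigma, and normalization are illustrative choices, not the paper's settings.

```python
# Rank-based gradient votes for orientation histograms.
import numpy as np
from scipy.ndimage import gaussian_filter

def rank_gradient_votes(gray, n_bins=9, sigma=1.0):
    """Return per-pixel orientation bins and rank-based vote weights."""
    g = gaussian_filter(gray.astype(np.float32), sigma=sigma)  # reduce variation
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy).ravel()
    ang = np.arctan2(gy, gx).ravel() % np.pi                   # unsigned orientation
    ranks = np.argsort(np.argsort(mag)) / max(mag.size - 1, 1) # ranks in [0, 1]
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    return bins, ranks
    # histogram per cell: np.bincount(bins, weights=ranks, minlength=n_bins)
```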

  • Optical flow-based video completion in spherical image sequences (peer-reviewed)

    Binbin Xu, Sarthak Pathak, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama

    2016 IEEE International Conference on Robotics and Biomimetics, ROBIO 2016   388 - 395   December 2016

    Type: Research paper (international conference proceedings)

    Spherical cameras are widely used for robot perception because of their full 360-degree fields of view. However, the robot body is always seen in the view, causing occlusions. In this paper, we propose a video completion method that removes occlusions quickly and recovers the occluded background accurately. We first estimate motion in a 2D dense spherical optical flow field. Then we interpolate the motion in the occluded regions by solving least-squares minimization problems using polynomial models. Based on the interpolated motion, we recover the occluded regions by tracing the optical flow trajectory to find the corresponding pixels in other frames and warping them back to fill in the occlusions. We also provide a simple yet fast solution to effectively remove occlusions in all regions of the image by utilizing the continuity of the field of view of spherical images. In experiments, quantitative evaluations demonstrate the effectiveness and efficiency of our proposed method in comparison with state-of-the-art methods.

    DOI: 10.1109/ROBIO.2016.7866353

    Scopus

    researchmap

  • Dense 3D reconstruction from two spherical images via optical flow-based equirectangular epipolar rectification (peer-reviewed)

    Sarthak Pathak, Alessandro Moro, Atsushi Yamashita, Hajime Asama

    IST 2016 - 2016 IEEE International Conference on Imaging Systems and Techniques, Proceedings   140 - 145   November 2016

    Role: Lead author   Type: Research paper (international conference proceedings)

    In this research, a technique is formulated for dense 3D reconstruction from two spherical images captured at displaced positions near a large structure. The technique is based on the use of global variational information, i.e., the dense optical flow field, in a unique rectification-based refinement of the epipolar geometry between the two images in a vertically displaced orientation. A non-linear minimization is used to globally align the 2D equirectangular optical flow field (i.e., the pixel displacements). The magnitude component of the resultant optical flow field can be converted directly to a dense 3D reconstruction. Thus, the epipolar geometry as well as the 3D structure can be estimated in a single minimization. This method could be useful in the measurement and reconstruction of large structures such as bridges using a robot equipped with a spherical camera, thus helping in their inspection and maintenance.

    DOI: 10.1109/IST.2016.7738212

    Scopus

    researchmap

  • A decoupled virtual camera using spherical optical flow (peer-reviewed)

    Sarthak Pathak, Alessandro Moro, Atsushi Yamashita, Hajime Asama

    Proceedings - International Conference on Image Processing, ICIP   4488 - 4492   August 2016

    Role: Lead author   Type: Research paper (international conference proceedings)

    In camera-equipped teleoperated robots, it is often tedious for the operator to manage both the viewpoint and the shaky, unstable navigation, leading to disorientation. Our proposal is to create a virtual, freely rotatable camera that is decoupled from the robot's rotation. It is implemented using a complete spherical camera and removing its rotation in-image with a novel algorithm based on aligning the dense spherical optical flow field along the epipolar direction. Finally, any area on the rotation-less image sequence can be undistorted, resulting in the desired decoupled camera. We illustrate the concept by showing the effect on videos taken from a spherical camera under different robot motions.

    DOI: 10.1109/ICIP.2016.7533209

    Scopus

    researchmap

  • 3D reconstruction of structures using spherical cameras with small motion (peer-reviewed)

    Sarthak Pathak, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama

    International Conference on Control, Automation and Systems   117 - 122   January 2016

    Role: Lead author   Type: Research paper (international conference proceedings)

    In this research, a method for dense 3D reconstruction of structures from small motion of a spherical camera is proposed. Spherical cameras can capture information from all directions, enabling measurement of the entire surrounding structure at once. The proposed technique uses two spherical images captured at slightly displaced positions near the structure, followed by a combination of feature-point matching and dense optical flow. Feature-point matching between two images alone is usually not accurate enough to give a dense point cloud because of outliers. Moreover, calculation of the epipolar direction from feature-point matching is susceptible to noise at small displacements. However, spherical cameras have unique parallax properties allowing the use of dense, global information. Taking advantage of this, the global, dense optical flow field is used. The epipolar geometry is densely optimized based on the optical flow field for an accurate 3D reconstruction. A possible use of this research is measuring large infrastructures (bridges, tunnels, etc.) with minimal robot motion.

    DOI: 10.1109/ICCAS.2016.7832307

    Scopus

    researchmap

  • Robot Body Occlusion Removal in Omnidirectional Video Using Color and Shape Information (peer-reviewed)

    Xu Binbin, Pathak Sarthak, Fujii Hiromitsu, Yamashita Atsushi, Asama Hajime

    The Abstracts of the International Conference on Advanced Mechatronics: Toward Evolutionary Fusion of IT and Mechatronics (ICAM)   2015   49 - 50   December 2015

    Language: English   Publisher: The Japan Society of Mechanical Engineers

    Omnidirectional cameras are widely used for robot inspection because of their wide fields of view. However, the robot body is always included in the view, causing occlusions. This paper deals with one such example of occlusion and proposes an inpainting-based solution to remove it. Our method can generate a clean video automatically, without the need for a manually given mask. We propose an approximate shape-fitting method, combined with color information, to generate the mask of the robot-body occlusion, followed by video inpainting. In experiments, the effectiveness of our proposed method is demonstrated by successfully removing robot-body occlusions in omnidirectional videos.

    DOI: 10.1299/jsmeicam.2015.6.49

    CiNii Books

    researchmap

  • Rotation Removed Stabilization of Omnidirectional Videos Using Optical Flow (peer-reviewed)

    Pathak Sarthak, Moro Alessandro, Yamashita Atsushi, Asama Hajime

    The Abstracts of the International Conference on Advanced Mechatronics: Toward Evolutionary Fusion of IT and Mechatronics (ICAM)   2015   51 - 52   December 2015

    Role: Lead author   Language: English   Publisher: The Japan Society of Mechanical Engineers

    Omnidirectional cameras are often used with robots to provide an immersive view of the surroundings. However, robots often have unstable motion and undergo rotation. In this work, we formulate a method to stabilize the viewpoint of omnidirectional videos by removing the rotation using dense optical flow fields. The method works by first projecting each omnidirectional video frame onto a unit sphere, measuring the optical flow at every point on the sphere, and finding the rotation that minimizes the flow's rotational component in a frame-by-frame manner. The omnidirectional video is derotated, and a 'rotation-less, translation-only' viewpoint is generated. Such a technique is well suited to any environment, even one with sparse texture or repeating patterns where feature-correspondence-based methods may fail.

    DOI: 10.1299/jsmeicam.2015.6.51

    CiNii Books

    researchmap


Presentations

  • Presentation of a 3D Model of a Dump Truck in the Bird's-Eye View of a Hydraulic Excavator by Integrating a Laser Range Sensor and Fisheye Cameras [in Japanese]

    菅沢 佑太, 筑紫 彰太, 小松 廉, ルイ笠原 純ユネス, Sarthak Pathak, 谷島 諒丞, 濱崎 峻資, 永谷 圭司, 千葉 拓史, 茶山 和博, 山下 淳, 淺間 一

    The 38th Annual Conference of the Robotics Society of Japan (RSJ2020)   October 2020

    Date: October 2020

    researchmap

  • Promotion of i-Construction Utilizing Robotics, AI, and Sensor Information Processing Technologies [in Japanese]

    山下 淳, ルイ笠原 純ユネス, Sarthak Pathak, 小松 廉, 筑紫 彰太, 淺間 一, 谷島 諒丞, 濱崎 峻資, 永谷 圭司, 小澤 一雅

    The 2nd Symposium on the Promotion of i-Construction   July 2020

    Date: July 2020

    researchmap

  • Camera Pose Estimation by Color-Difference Minimization Using a 3D Environment Model and Spherical Camera Images [in Japanese]

    陽 東旭, 樋口 寛, Sarthak Pathak, Alessandro Moro, 山下 淳, 淺間 一

    Dynamic Image Processing for Real Application Workshop 2020 (DIA2020)   March 2020

    Date: March 2020

    researchmap

  • Accuracy Improvement of Spherical Camera Rotation Estimation by an E-CNN with Uniformized Equirectangular Optical Flow Patterns [in Japanese]

    Dabae Kim, Sarthak Pathak, Alessandro Moro, 小松 廉, 山下 淳, 淺間 一

    The 19th SICE System Integration Division Annual Conference (SI2018)   December 2018

    Date: December 2018

    researchmap

  • Motion Estimation with a Spherical Stereo Camera Considering the Reliability of Measured Points [in Japanese]

    野田 純平, Sarthak Pathak, 藤井 浩光, 山下 淳, 淺間 一

    The 18th SICE System Integration Division Annual Conference (SI2017)   December 2017

    Date: December 2017

    researchmap

  • Pose Estimation of a Spherical Camera Using Line-Feature-Based 2D-3D Matching [in Japanese]

    後藤 翼, Sarthak Pathak, 池 勇勳, 藤井 浩光, 山下 淳, 淺間 一

    The 34th Annual Conference of the Robotics Society of Japan (RSJ2016)   September 2016

    Date: September 2016

    researchmap

  • Complete Omnidirectional Rotation Estimation for Flying Robots using Lines

    The 33rd Annual Conference of the Robotics Society of Japan (RSJ2015)   September 2015

    Date: September 2015

    researchmap


Awards

  • FA Foundation Paper Award 2018

    December 2018   FA Foundation   "Line-Feature-Based 3D-2D Matching for Pose Estimation of a Spherical Camera in Man-Made Environments" (in Japanese), Journal of the Japan Society for Precision Engineering

    後藤 翼, Sarthak Pathak, 池 勇勳, 藤井 浩光, 山下 淳, 淺間 一

  • JRM Best Paper Award 2018

    December 2018   Journal of Robotics and Mechatronics   "Spherical Video Stabilization by Estimating Rotation from Dense Optical Flow Fields", Journal of Robotics and Mechatronics

    Sarthak Pathak, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama

  • Best Paper Award

    January 2017   International Workshop on Advanced Image Technology (IWAIT 2017)   "Spherical Camera Localization in Man-made Environment Using 3D-2D Matching of Line Information"

    Tsubasa Goto, Sarthak Pathak, Yonghoon Ji, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama

  • Student Best Paper Award

    October 2016   International Conference on Control, Automation, and Systems (ICCAS 2016)   "3D Reconstruction of Structures using Spherical Cameras with Small Motion"

    Sarthak Pathak

  • Honorable Mention Award

    December 2015   The 6th International Conference on Advanced Mechatronics (ICAM2015)   "Robot Body Occlusion Removal in Omnidirectional Video Using Color and Shape Information"

    Binbin Xu, Sarthak Pathak, Hiromitsu Fujii, Atsushi Yamashita, Hajime Asama


Research Projects (Joint Research and Competitive Funding)

  • 360-Degree Camera based Fast Indoor Localization using Image Gradients

    Project/Area Number: 20K22383   September 2020 - March 2022

    Japan Society for the Promotion of Science   Grants-in-Aid for Scientific Research: Grant-in-Aid for Research Activity Start-up   The University of Tokyo

    Sarthak Mahesh Pathak

    Grant amount: 2,860,000 yen (Direct: 2,200,000 yen; Indirect: 660,000 yen)

    researchmap

  • 360-Degree Sensing for Disaster Response

    Project/Area Number: 18F18109   April 2018 - March 2020

    Japan Society for the Promotion of Science   Grants-in-Aid for Scientific Research: Grant-in-Aid for JSPS Fellows   The University of Tokyo

    淺間 一, PATHAK SARTHAK MAHESH

    Grant amount: 2,300,000 yen (Direct: 2,300,000 yen)

    1. Motion estimation and 3D reconstruction with a spherical stereo camera in disaster environments
    At disaster sites, the damage must be assessed safely and quickly, so camera-based methods, which can measure the environment over whole surfaces at once, are effective. Ordinary cameras have a narrow field of view, whereas spherical cameras cover 360 degrees, which makes them well suited to this task. With a spherical camera, however, the reliability of reconstructed points varies greatly with direction; in particular, reliability tends to be very low along the epipolar directions. This research therefore develops motion estimation and 3D reconstruction methods for a spherical stereo camera that account for the reliability of reconstructed points.
    2. Pose estimation of a spherical camera using a map of the disaster environment
    When deploying a drone in a disaster environment, its initial position must be known, but GPS and similar systems cannot be used indoors. This research therefore develops an indoor positioning system that uses only visual information. A line model of the disaster environment serves as prior knowledge, and the drone's 6-DoF pose is estimated by matching 2D line information in images from an onboard spherical camera against 3D line information in the environment map.
    3. Robust spherical camera motion estimation and 3D reconstruction using deep learning
    Camera motion estimation and 3D reconstruction are generally based on feature tracking in video, and the appropriate features differ by environment: indoors, often little besides straight lines can be used, while outdoors feature points are common. If features unsuited to the environment are used, motion estimation and 3D reconstruction may fail. To solve this problem, convolutional neural networks, which can learn from data without hand-designed features, are employed; specifically, a spherical convolutional neural network that accounts for the distortion of spherical cameras is researched and developed.

    researchmap

Academic Contributions

  • 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE 2019), Special Session Organizer - "Consumer Electronics Meets Robotics"

    Role: Planning and management

    IEEE Consumer Electronics Society   October 2019

    Type: Conference/symposium

    researchmap

  • The 2017 IEEE/SICE International Symposium on System Integration (SII2017), Program Committee Member

    Role: Planning and management

    IEEE/SICE   December 2017

    Type: Conference/symposium

    researchmap