Researcher Database

Researcher Profile and Settings

Affiliation

  • Faculty of Information Science and Technology, Computer Science and Information Technology, Synergetic Information Engineering

Degree

  • Ph.D. in Systems Information Science (2008/03, Future University-Hakodate)

Profile and Settings

  • Profile

    Daisuke Sakamoto is an Associate Professor in the Human-Computer Interaction Lab at Hokkaido University. He received his B.A. in Media Architecture, M.S. in Systems Information Science, and Ph.D. in Systems Information Science from Future University-Hakodate in 2004, 2006, and 2008, respectively. He was an intern researcher at the ATR Intelligent Robotics and Communication Labs (2006-2008). He worked at The University of Tokyo as a Research Fellow of the Japan Society for the Promotion of Science (2008-2010). He joined the JST ERATO Igarashi Design Interface Project as a researcher (2010), and then returned to The University of Tokyo as an Assistant Professor (2011) and a Project Lecturer (2013-2016). His research interests include Human-Computer Interaction and Human-Robot Interaction, with a focus on user interaction with people and interaction design for computing systems.
  • Name

    Daisuke Sakamoto
  • Researcher ID

    201301071935550969

Alternate Names

Achievement

Research Interests

  • Human-Computer Interaction
  • Human-Robot Interaction
  • Human-Agent Interaction
  • User Interface
  • Interaction Design
  • Entertainment Computing

Research Areas

  • Informatics / Human interfaces and interactions / Human-computer Interaction

Research Experience

  • 2020/03 - Today Hokkaido University
  • 2019/04 - Today Hokkaido University Faculty of Information Science and Technology Associate Professor
  • 2022/04 - 2023/03 Hokkaido University Institute for the Advancement of Higher Education
  • 2017/01 - 2022/03 Japan Science and Technology Agency ERATO Hasuo Metamathematics for Systems Design Project Research Advisor
  • 2019/04 - 2019/09 Waseda University Faculty of Science and Engineering
  • 2017/03 - 2019/03 Hokkaido University Graduate School of Information Science and Technology Associate Professor
  • 2018/04 - 2018/09 Waseda University Faculty of Science and Engineering
  • 2017/04 - 2018/03 Meiji University Graduate School of Advanced Mathematical Sciences
  • 2017/04 - 2017/09 Waseda University Faculty of Science and Engineering
  • 2013/01 - 2017/02 The University of Tokyo Graduate School of Information Science and Technology Project Lecturer
  • 2016/04 - 2016/08 The University of Tokyo College of Arts and Science Part-time lecturer
  • 2015/04 - 2015/08 The University of Tokyo College of Arts and Science Part-time lecturer
  • 2014/09 - 2014/09 Hokkaido University Graduate School of Information Science and Technology Part-time lecturer
  • 2013/09 - 2013/09 Hokkaido University Graduate School of Information Science and Technology Part-time lecturer
  • 2011/04 - 2013/03 Tokyo University of the Arts Art Media Center Part-time Lecturer
  • 2011/04 - 2013/03 Japan Science and Technology Agency ERATO Igarashi Design Interface Project Research Advisor
  • 2008/04 - 2013/03 Advanced Telecommunications Research Institute International Communication Robot Dept. Cooperative Researcher
  • 2011/04 - 2013/01 The University of Tokyo Graduate School of Information Science and Technology Assistant Professor
  • 2011/08 - 2011/10 University of Manitoba Department of Computer Science Visiting Researcher
  • 2010/04 - 2011/03 Japan Science and Technology Agency ERATO Igarashi Design Interface Project Researcher
  • 2008/10 - 2010/03 Japan Science and Technology Agency ERATO Igarashi Design Interface Project Collaborator
  • 2008/04 - 2010/03 Japan Society for the Promotion of Science The University of Tokyo Postdoctoral Research Fellow
  • 2006/04 - 2008/03 Advanced Telecommunications Research Institute International Communication Robot Dept. Intern

Education

  • 2006/04 - 2008/03  Future University-Hakodate  Graduate School of Systems Information Science  Ph.D. Course
  • 2004/04 - 2006/03  Future University-Hakodate  Graduate School of Systems Information Science  Master Course
  • 2000/04 - 2004/03  Future University-Hakodate  Department of Systems Information Science

Awards

  • 2024/06 Information Processing Society of Japan (IPSJ) FY2023 Best Paper Award
     Kuiper Belt: A Gaze Input Method Using Unnatural Gaze Angles in VR
    Recipients: 崔 明根;坂本 大介;小野 哲雄
  • 2024/03 IPSJ Symposium Interaction 2024 Outstanding Paper Award
     Investigation of a Body-Worn Trackball as an AR Input Device
    Recipients: 岩井望;崔明根;坂本大介;小野哲雄
  • 2024/03 IPSJ Symposium Interaction 2024 Interactive Presentation Award (PC Recommended)
     A Threshold Study of a Sitting-Posture Improvement Method Using VR Redirection
    Recipients: 小林 広夢;崔 明根;坂本 大介;小野 哲雄
  • 2024/02 Hokkaido Governor's Commendation, FY2023 Hokkaido Science and Technology Encouragement Award

    Recipients: 坂本大介
  • 2023/12 The 31st Workshop on Interactive Systems and Software (WISS 2023) Interactive Presentation Award (General)
     OMEME: Development of a Companion Robot Using an Unworn HMD
    Recipients: 阿部 優樹;鈴木 湧登;坂本 大介;小野 哲雄
  • 2023/10 Japan Institute of Design Promotion, Good Design Award 2023
     Research on User Interfaces
    Recipients: 鈴木健司;笹間裕;坂本大介;金田悠和;町田宏司;椎崎幸世;中村塁;安藤智彦;西磨翁;坂本理砂;相原佳代子;岩本陽;田中暢;木村周;桐生しおり
  • 2022/12 The 30th Workshop on Interactive Systems and Software (WISS 2022) Best Presentation Award (General)
     Gino .Aiki: MR Software for Supporting the Learning of Body Usage in Aikido
    Recipients: 鈴木湧登;坂本大介;小野哲雄
  • 2022/12 The 30th Workshop on Interactive Systems and Software (WISS 2022) Interactive Presentation Award (General)
     Gino .Aiki: MR Software for Supporting the Learning of Body Usage in Aikido
    Recipients: 鈴木湧登;坂本大介;小野哲雄
  • 2022/02 IPSJ Interaction 2022 Best Paper Award
     Kuiper Belt: A Study of a Gaze Input Method Using Extreme Gaze Angles in Virtual Reality
    Recipients: 崔 明根;坂本 大介;小野 哲雄
  • 2021/06 IPSJ Special Interest Group on Digital Content Creation, DCON Paper Award

    Recipients: 巻口 誉宗;高田 英明;坂本 大介;小野 哲雄
  • 2021/03 Information Processing Society of Japan (IPSJ) and IEEE Computer Society, IPSJ/IEEE-CS Young Computer Researcher Award

    Recipients: Daisuke Sakamoto
  • 2021/01 Japan ACM SIGCHI Chapter Distinguished Young Researcher Award
     
    Recipients: Daisuke Sakamoto
  • 2019/10 the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '19) Best Demo Award - People's Choice
     
    Recipients: Kenji Suzuki;Daisuke Sakamoto;Sakiko Nishi;Tetsuo Ono
  • 2018/08 Japan Society for Software Science and Technology (JSSST) 22nd Research Paper Award

    Recipients: 小山裕己;坂本大介;五十嵐健夫
  • 2017/12 HAI Symposium 2017 Outstanding Research Award
     Perception of Space Emerging from Overlapping Utterances of Multiple Robots
    Recipients: 水丸 和樹;坂本 大介;小野 哲雄
  • 2017/12 HAI Symposium 2017 Impressive Poster Award
     A System for Suggesting Break Timing Using a Stuffed-Toy Robot
    Recipients: 大西 紗綾;坂本 大介;小野 哲雄
  • 2016/12 JSSST The 24th Workshop on Interactive Systems and Software (WISS 2016) Interactive Presentation Award
     An Interactive Design System for Free-Formed Bamboo-copters
    Recipients: 中村 守宏;小山 裕己;坂本 大介;五十嵐 健夫
  • 2016/03 IPSJ Interaction 2016 Interactive Presentation Award (PC Recommended)
     Dollhouse VR: A System for Collaborative Layout Design While Viewing the Space from Multiple Perspectives
    Recipients: 杉浦裕太;尉林暉;チョントビー;坂本大介;宮田なつき;多田充徳;大隈隆史;蔵田武志;新村猛;持丸正明;五十嵐健夫
  • 2015/11 ACM the 28th Annual ACM Symposium on User Interface Software & Technology (UIST '15) Best Poster Award, Honorable Mention
     Fix and Slide: Caret Navigation with Movable Background 
    Recipients: Kenji Suzuki;Kazumasa Okabe;Ryuuki Sakamoto;Daisuke Sakamoto
  • 2015/06 Information Processing Society of Japan, Society Activity Contribution Award
     Contribution to the Society through Live Streaming and Archiving of Academic Lectures
    Recipients: 坂本大介
  • 2015/04 ACM, the SIGCHI Conference on Human Factors in Computing Systems (CHI '15) Honorable Mention
     AnnoTone: Record-time Audio Watermarking for Context-aware Video Editing 
    Recipients: Ryohei Suzuki;Daisuke Sakamoto;Takeo Igarashi
  • 2015/03 IPSJ Interaction 2015 Interactive Presentation Award
     A Pointing Method That Changes the Relative Caret Position by Moving the Entire Text
    Recipients: 鈴木 健司;岡部 和昌;坂本 竜基;坂本 大介
  • 2014/11 JSSST Workshop on Interactive Systems and Software (WISS '14) Outstanding Paper Award

    Recipients: 小山 裕己;坂本 大介;五十嵐 健夫
  • 2014/10 international conference on Human-Agent Interaction (iHAI '14) Best Paper Nominee
     
    Recipients: Jun Kato;Daisuke Sakamoto;Takeo Igarashi;Masataka Goto
  • 2014/02 ACM international conference on Intelligent User Interfaces (IUI '14) Best Paper Award
     
    Recipients: Fangzhou Wang;Yang Li;Daisuke Sakamoto;Takeo Igarashi
  • 2013/12 International Conference on Artificial Reality and Telexistence (ICAT '13) Best Paper Award
     
    Recipients: Daniel Saakes;Vipul Choudhary;Daisuke Sakamoto;Masahiko Inami;Takeo Igarashi
  • 2013/10 ACM Symposium on Virtual Reality Software and Technology (VRST '13) Best Paper Award
     
    Recipients: Naoki Sasaki;Hsiang-Ting Chen;Daisuke Sakamoto;Takeo Igarashi
  • 2013/03 ACM/IEEE international conference on Human-robot interaction (HRI2013) Best Demo Honorable Mention Award
     
    Recipients: Yuta Sugiura;Yasutoshi Makino;Daisuke Sakamoto;Masahiko Inami;Takeo Igarashi
  • 2012/10 Japan Institute of Design Promotion, Good Design Award 2012

    Recipients: 杉浦裕太;筧豪太;杉本麻樹;坂本大介;稲見昌彦;五十嵐健夫
  • 2010/11 International Conference on Advances in Computer Entertainment Technology (ACE 2010) Best Paper Silver Award
     
    Recipients: Takumi Shirokura;Daisuke Sakamoto;Yuta Sugiura;Tetsuo Ono;Masahiko Inami;Takeo Igarashi
  • 2010/04 Laval Virtual 2010 Grand Prix du Jury
     
    Recipients: Thomas Seifried;Christian Rendl;Florian Perteneder;Jakob Leitner;Michael Haller;Daisuke Sakamoto;Jun Kato;Masahiko Inami;Stacey D. Scott
  • 2010/03 IPSJ Interaction 2010 Interactive Presentation Award

    Recipients: 杉浦裕太;筧豪太;Anusha I. Withana;Charith L. Fernando;坂本大介;稲見昌彦;五十嵐健夫
  • 2009/05 Information Processing Society of Japan, FY2008 Best Paper Award

    Recipients: 坂本大介;神田崇行;小野哲雄;石黒浩
  • 2008/03 Advanced Telecommunications Research Institute International (ATR) R&D Commendation, Excellent Research Award (internal award)

    Recipients: 坂本大介
  • 2007/11 Kobe Biennale 2007 Robot Media Art Competition, Grand Prize

    Recipients: 坂本大介
  • 2007/03 IPSJ Interaction 2007 Best Paper Award

    Recipients: 坂本大介;神田崇行;小野哲雄;石黒浩;萩田紀博
  • 2007/03 ACM/IEEE International Conference on Human-Robot Interaction (HRI2007) Best Paper Award
     
    Recipients: Kotaro Hayashi;Daisuke Sakamoto;Takayuki Kanda;Masahiro Shiomi;Satoshi Koizumi;Hiroshi Ishiguro;Tsukasa Ogasawara;Norihiro Hagita
  • 2006 IPSJ Kansai Branch, FY2006 Student Encouragement Award

    Recipients: 坂本大介
  • 2005/03 IEICE Hokkaido Branch, FY2005 Branch Chair Award

    Recipients: 坂本大介
  • 2004/03 Future University-Hakodate, Future University Award

    Recipients: 坂本大介
  • 2002/10 Nikkei BP WPC EXPO Theme Visual Contest, Excellence Award

    Recipients: 坂本大介;松下勇夫

Published Papers

  • Maino Shinohara, Daisuke Sakamoto, Tetsuo Ono, James Everett Young
    HAI 133 - 141 2023/12 [Refereed][Not invited]
  • Yuto Suzuki, Daisuke Sakamoto, Tetsuo Ono
    ISMAR-Adjunct 519 - 524 2023/10 [Refereed][Not invited]
  • 崔 明根, 坂本 大介, 小野 哲雄
    情報処理学会論文誌 情報処理学会 64 (2) 400 - 416 1882-7764 2023/02/15 [Refereed]
     
    The maximum physical range of horizontal human eye movement is approximately 45°. However, in a natural gaze shift, the difference in the direction of the gaze relative to the frontal direction of the head rarely exceeds 25°. We name this region of 25°-45° the “Kuiper Belt” in the eye-gaze interaction. We try to utilize this region to solve the Midas touch problem to enable a search task while reducing false input in the Virtual Reality environment. In this work, we conduct two studies to figure out the design principle of how we place menu items in the Kuiper Belt as an “out-of-natural angle” region of the eye-gaze movement and determine the effectiveness and workload of the Kuiper Belt-based method. The results indicate that the Kuiper Belt-based method facilitated the visual search task while reducing false input. Finally, we present example applications utilizing the findings of these studies.
  • 阿部 優樹, 崔 明根, 坂本 大介, 小野 哲雄
    情報処理学会論文誌 情報処理学会 64 (2) 352 - 365 1882-7764 2023/02/15 
    With the spread of live streaming services, interaction methods between streamers and viewers during live streaming have been studied. While there has been much research on interaction methods for the streamer side, there has been little design study on the viewer side, and in particular, text input methods during live streaming have not been sufficiently studied. We propose a keyboard for live chat that improves the user experience on the viewer side. First, we conducted an interview survey of viewers who watch live streaming on a daily basis to investigate their preferences and problems. As a result, we found that viewers often watch live streaming on their smartphones, and that they tend to hold their smartphones horizontally when watching live streaming. At the same time, it was found that the current keyboard on smartphones interferes significantly with the video screen, which disturbs the viewer's immersion in the video and motivation to comment on the content. Therefore, we propose a keyboard optimized for smartphones in landscape mode that facilitates live chat interaction. Key features of our keyboard are 1) a double-flick keyboard that can utilize both hands, and 2) a semi-transparent keyboard that occludes the video screen less. We conducted a study to understand the usability of the proposed keyboard. As a result, we found that a key arrangement of the double-flick keyboard designed for two-handed interaction is effective for text input in landscape mode; however, a highly transparent keyboard decreases text input performance over a video background. Finally, we discuss the effects of key arrangement in landscape mode and keyboard opacity for live chat on streaming video.
  • Kento Goto, Kazuki Mizumaru, Daisuke Sakamoto, Tetsuo Ono
    HRI 1192 - 1193 2022
  • Sho Mitarai, Nagisa Munekata, Daisuke Sakamoto, Tetsuo Ono
    Journal of The Virtual Reality Society of Japan (TVRSJ) 26 (4) 333 - 344 2021/12 [Refereed][Not invited]
  • Takehiro Abe, Daisuke Sakamoto
    MobileHCI '21: 23rd International Conference on Mobile Human-Computer Interaction(MobileHCI) 1 - 11 2021/09 [Refereed]
  • 秋葉 翔太, 崔 明根, 坂本 大介, 小野 哲雄
    情報処理学会論文誌 62 (2) 689 - 700 1882-7764 2021/02/15 [Refereed]
     
    In this study, we propose a one-handed target selection method in which selection is completed with only one tap and a small movement of the thumb. In this method, to show the selection targets explicitly, targets are rearranged in a semicircle around the finger so that no selection candidate is covered by the finger. As methods of selecting a rearranged target, we implemented and evaluated three selection gestures: One Half-Pie, which directly selects the target by tilting the finger in the direction of the target; Two Half-Pie, which selects a target arranged in two steps, switching by pressing the display; and RailDragger, which selects the target according to the amount dragged from the tapped point. In the experiment, a pointing task was performed using each of the three selection methods, and the methods were evaluated in terms of selection completion time and selection accuracy. As a result, we found that among the three proposed methods, RailDragger was the best in selection time and Two Half-Pie was the best in selection accuracy.
  • 崔 明根, 坂本 大介, 小野 哲雄
    情報処理学会論文誌 62 (2) 667 - 679 1882-7764 2021/02/15 [Refereed]
     
    In this paper, we present a method of applying the bubble lens technique, a method for easy and fast selection of small targets, to the eye-gaze interface. This method makes it easy to select a small target in an eye-gaze interface by activating a magnification lens near the target, exploiting the knowledge that a saccade consists of a ballistic movement followed by corrective movements. We performed a pointing task to validate the usefulness of the bubble gaze lens by comparing a bubble gaze cursor and the bubble gaze lens, which is our proposed technique. Results indicated that our proposed technique was always faster than the bubble gaze cursor and reduced the error rate by 54.0%. In addition, the usability and mental workload scores were also significantly better than those of the bubble gaze cursor.
  • 岡田 友哉, 坂本 大介, 小野 哲雄
    情報処理学会論文誌 62 (2) 654 - 666 1882-7764 2021/02/15 [Refereed]
     
    We present a gesture input method that uses a ski pole as an input device, focusing on skiing as an outdoor activity. First, we designed a user-defined gesture set by asking several experienced skiers to devise gestures suitable for mobile device operation. To recognize the designed gestures, we implemented a gesture recognizer with a convolutional neural network (CNN) that uses acceleration and gyroscope sensors mounted under the grip of the pole. We collected data during gesture execution and conducted recognition experiments: the accuracy was about 96.5% when training with randomly selected data, and the average accuracy was about 85.8% when one user's data was held out as test data and the other users' data was used for training. To confirm the false recognition rate during actual skiing, we collected data at a ski resort and classified it against the gesture data, and found that the recognition rate for distinguishing skiing from gesture input was about 99.3%.
  • Daichi Katsura, Naoto Nishino, Daisuke Sakamoto, Tetsuo Ono
    Proceedings of SPIE - The International Society for Optical Engineering 11766 1996-756X 2021 
    There are a variety of sizes and thicknesses of holds used in climbing, and the ease of holding them varies greatly. However, the difficulty of holding has not been considered in previous studies on route exploration. In this study, we improved the search algorithm A∗ used in previous studies and incorporated the difficulty of the hold into the fitness. We also used the improved A∗ as the evaluation function to estimate the difficulty of holds using a genetic algorithm (GA). There was also no discussion on how many divisions of the hold should be divided by difficulty, so we assumed four divisions: 2, 4, 8, and 16 divisions. After adjusting the parameters during interviews with expert climbers, we compared the algorithm of the four divisions with that of previous studies using a questionnaire online. The results showed that the route of the algorithm, which considers the difficulty of the hold, was rated higher by expert climbers and that the 8-division algorithm was the best among the proposed methods.
  • Naoki Osaka, Kazuki Mizumaru, Daisuke Sakamoto, Tetsuo Ono
    HAI '21: International Conference on Human-Agent Interaction(HAI) 267 - 271 2021
  • SAKAMOTO Daisuke, YOSHIDA Shigeo
    Computer Software 日本ソフトウェア科学会 37 (3) 3_25 - 3_30 0289-6540 2020/07/22 
    The Japan ACM SIGCHI Chapter and the CHI2020 Japan Chapter local meeting committee report on the online presentation of the Japan local meeting of ACM CHI2020, which was cancelled due to the novel coronavirus disease (COVID-19).
  • 巻口 誉宗, 高田 英明, 坂本 大介, 小野 哲雄
    情報処理学会論文誌デジタルコンテンツ(DCON) 8 (1) 1 - 10 2187-8897 2020/02/26 [Refereed][Not invited]
     
    Aerial image projection methods using a semi-transparent screen or a half mirror are widely used in the entertainment field. In these conventional methods, large-scale devices are required to cover the display area of the aerial image, and it is difficult to produce wide movement of the subject. In this paper, we propose a movable, double-sided, transmission-type, multi-layered aerial image display technology aimed at moving the aerial image off the stage and toward the audience seats. This technology is a simple optical system combining four displays and four half mirrors. The observer can see both sides of the object as aerial images from two directions, the front and back of the device, and can also see two layers of near and far background aerial images from both sides through transmission and reflection by a half mirror. Since the depth ordering of the near view and the distant view does not depend on whether the device is viewed from the front or the back, multiple people can simultaneously view multi-layered, highly realistic aerial images from both sides of the device. The two background layers are shared between the two viewing directions by the multiple half-mirror structure. Although the proposed method has only four display surfaces, there are a total of six aerial image layers, three on each side. We report on the optical configuration of the proposed method, a prototype implementation, and its application at actual events.
  • 崔 明根, 坂本 大介, 小野 哲雄
    情報処理学会論文誌 61 (2) 221 - 232 1882-7764 2020/02/15 [Refereed][Not invited]
     
    Selecting a small target with an eye-gaze interface is difficult. Redesigning the interface and/or increasing the operation time are usually required to make an eye-gaze interface easy to use. In this paper, we present a method that applies the idea of the bubble cursor, a kind of area cursor, to the eye-gaze interface in order to make it easy to select a small target while maintaining operation time and the generality of interface design. We performed an experiment to validate our concept by comparing three interfaces, the standard bubble cursor technique with a mouse, a standard eye-gaze interface with a point cursor, and the bubble cursor as an area cursor with the eye-gaze interface, in order to understand how the bubble cursor contributes to eye-gaze input. Results indicated that the bubble cursor with the eye-gaze interface was always faster than the standard point-cursor-based eye-gaze interface, and the usability score was also significantly higher than that of the standard eye-gaze interface. From these results, the bubble gaze cursor technique is an effective method to make eye-gaze pointing easier and faster.
  • Kenji Suzuki, Ryuuki Sakamoto, Daisuke Sakamoto, Tetsuo Ono
    情報処理学会論文誌 61 (2) 233 - 243 1882-7764 2020/02/15 [Refereed][Not invited]
     
    We present new alternative interfaces for zooming out on a mobile device: Bounce Back and Force Zoom. These interfaces are designed to be used with a single hand. They use a pressure-sensitive multitouch technology in which the pressure itself is used to zoom. Bounce Back senses the intensity of pressure while the user is pressing down on the display. When the user releases his or her finger, the view is bounced back to zoom out. Force Zoom also senses the intensity of pressure, and the zoom level is associated with this intensity. When the user presses down on the display, the view is scaled back according to the intensity of the pressure. We conducted a user study to investigate the efficiency and usability of our interfaces by comparing them with a previous pressure-sensitive zooming interface and the Google Maps zooming interface as baselines. Results showed that Bounce Back and Force Zoom were evaluated as significantly superior to the previous research; the number of operations was significantly lower than with the default mobile Google Maps interface and the previous research.
  • Yuri Suzuki 0003, Kaho Kato, Naomi Furui, Daisuke Sakamoto, Yuta Sugiura
    TEI '20: Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction 467 - 472 2020 [Refereed]
  • Katsutoshi Masai, Kai Kunze, Daisuke Sakamoto, Yuta Sugiura, Maki Sugimoto
    2020 IEEE International Symposium on Mixed and Augmented Reality(ISMAR) 374 - 386 2020 [Refereed]
  • Daichi Katsura, Subaru Ouchi, Daisuke Sakamoto, Tetsuo Ono
    HAI '20: 8th International Conference on Human-Agent Interaction(HAI) 254 - 256 2020 [Refereed]
  • Myungguen Choi, Daisuke Sakamoto, Tetsuo Ono
    ETRA '20: 2020 Symposium on Eye Tracking Research and Applications(ETRA) 11 - 10 2020 [Refereed]
  • Kenji Suzuki, Daisuke Sakamoto, Sakiko Nishi, Tetsuo Ono
    Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI 2019, Taipei, Taiwan, October 1-4, 2019. ACM 66:1-66:6  2019/10 [Refereed][Not invited]
  • Lei Ma, Daisuke Sakamoto, Tetsuo Ono
    Proceedings of the 7th International Conference on Human-Agent Interaction, HAI 2019, Kyoto, Japan, October 06-10, 2019 ACM 324 - 326 2019/10 [Refereed][Not invited]
  • Subaru Ouchi, Kazuki Mizumaru, Daisuke Sakamoto, Tetsuo Ono
    Proceedings of the 7th International Conference on Human-Agent Interaction, HAI 2019, Kyoto, Japan, October 06-10, 2019 ACM 232 - 233 2019/10 [Refereed][Not invited]
  • 巻口 誉宗, 高田 英明, 本田 健悟, 坂本 大介, 小野 哲雄
    マルチメディア,分散協調とモバイルシンポジウム2019論文集 (2019) 176 - 179 2019/06/26 [Not refereed][Not invited]
  • 鈴木健司, 岡部和昌, 坂本竜基, 坂本大介
    情報処理学会論文誌ジャーナル(Web) 60 (2) 354‐363 (WEB ONLY)  1882-7764 2019/02 [Refereed][Not invited]
  • 黒澤 紘生, 坂本 大介, 小野 哲雄
    情報処理学会論文誌 60 (2) 364 - 375 1882-7764 2019/02 [Refereed][Not invited]
     
    We present a target selection method for smartwatches, which employs a combination of a tilt operation and electromyography (EMG). First, a user tilts his/her arm to indicate the direction of cursor movement on the smartwatch; then s/he applies forces on the arm. EMG senses the force and moves the cursor to the direction where the user is tilting his/her arm to manipulate the cursor. In this way, the user can simply manipulate the cursor on the smartwatch with minimal effort, by tilting the arm and applying force to it. We conducted an experiment to investigate its performance and to understand its usability. Results showed that participants selected small targets with an accuracy greater than 93.89%. In addition, performance significantly improved compared to previous tilting operation methods. Likewise, its accuracy was stable as targets became smaller, indicating that the method is unaffected by the "fat finger problem".
  • Mari Hirano, Kanako Ogura, Daisuke Sakamoto, Mina Nakano, Takeru Tsuchida, Yuri Iwano, Haruhiko Shimoyama
    Gerontechnology 18 (2) 89 - 96 1569-111X 2019 
    Background: Many studies indicating that companion robots are effective for supporting the health of older people have been reported; however, there is little knowledge on supporting older people through conversation. It is thus necessary to explore conversation styles for robots that promote the psychological health of older people. Research aim: The aim of this study was to explore which style of robot utterance was more effective for promoting a relationship with older people: providing useful information, or only asking questions and providing neutral responses. Methods: We conducted a comparative study using two talking robots with the aim of promoting psychological health. One robot was programmed to converse frequently with people, giving them general advice; the other robot was programmed to listen frequently to people. Twenty-nine participants (average age 70.28 years) were randomly divided into two groups, and after having a semi-structured conversation with the robot, they responded to an impression evaluation and an interview. Results: The results showed that there was a significant difference between the two groups: the number of utterances of participants in the listening-robot group was significantly higher than that of the speaking-robot group. We analyzed the conversation content, and the results showed that participants had a more positive feeling toward the listening robot than toward the speaking robot. Conclusion: We argue that social robots for older people should listen more than speak in order to promote a better relationship.
  • Motohiro Makiguchi, Daisuke Sakamoto, Hideaki Takada, Kengo Honda, Tetsuo Ono
    Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST 2019, New Orleans, LA, USA, October 20-23, 2019 ACM 625 - 637 2019 [Refereed][Not invited]
  • Yuta Sugiura, Hikaru Ibayashi, Toby Chong, Daisuke Sakamoto, Natsuki Miyata, Mitsunori Tada, Takashi Okuma, Takeshi Kurata, Takashi Shinmura, Masaaki Mochimaru, Takeo Igarashi
    Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, VRCAI 2018, Hachioji, Japan, December 02-03, 2018 ACM 21:1-21:6  2018/12 [Refereed][Not invited]
  • 水丸 和樹, 坂本 大介, 小野 哲雄
    情報処理学会論文誌 59 (12) 2279 - 2287 1882-7764 2018/12 [Refereed][Not invited]
     
    A unique space called social space is formed in human groups. This space is strongly formed when people belonging to the group are actively communicating and will influence the behavior of a third party not belonging to the group. Moreover, overlapping speech occurs unconsciously in human everyday conversations, expressing entrainment, interest, understanding, and so on to the other party as well as becoming an important factor for producing active conversation. On the other hand, in recent years, humanoid robots have been put into practical use and demonstration experiments are actively being carried out. When assuming a future environment in which humans coexist with multiple robots, it is necessary to consider the space formed in a group of robots, but such research has not been conducted sufficiently. In this research, we implemented active communication between robots by overlapping their speech and investigated how humans perceived the space which emerged from it. As a result, it was indicated that overlapping speech improved the impression of conversation activity and that the space which emerged in the group of robots affected the behavior of the person observing the conversation.
  • 山下 峻, 藍 圭介, 坂本 大介, 小野 哲雄
    情報処理学会論文誌 59 (11) 1965 - 1977 1882-7764 2018/11 [Refereed][Not invited]
     
    We present a composition support system that shows candidate next melodies following the music and melody a user has created. Non-experts in music composition, such as beginners, have difficulty creating melodies, so a system that supports music composition would be helpful. We developed an algorithm that generates candidate melodies following the user's input, inspired by the idea of predictive text input interfaces. After generating and reviewing candidate melodies with the proposed method, we found that it had room for improvement, so we refined the method to improve the quality of the candidate melodies. We then built a system that shows these candidates in a music composition interface to support music-writing activities. We conducted evaluation studies to investigate the effectiveness of the proposed method and its improvement. In the first study, we compared three melody generation conditions and two dictionary conditions; the results confirmed the effectiveness of the proposed method, which combines a Markov process with pattern matching, and of the two dictionaries. In the second study, we compared melodies generated before and after the improvement; the score of the music generated by the improved system was higher, confirming the effectiveness of the improvement.
  • Alexey Chistyakov, María T. Soto-Sanfiel, Takeo Igarashi, Daisuke Sakamoto, Jordi Carrabina
    Profesional de la Informacion 27 (5) 1116 - 1127 1699-2407 2018/09/01 
    The objective of the present research is to observe to what extent the stereoscopic effect presents a solution for enhancement of user interactions in the Web context. This paper describes an experiment conducted to detect differences in perception between 2D and 3D graphical user interfaces of an e-Commerce web application. The results of the conducted user study among 39 participants indicate significantly higher performance of the 2D interface in terms of efficiency, satisfaction, and, consequently, overall usability. Therefore, for the studied sample, the stereoscopic effect had mostly negative impact on user interactions.
  • Hiroki Kurosawa, Daisuke Sakamoto, Tetsuo Ono
    Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI 2018, Barcelona, Spain, September 03-06, 2018 ACM 43:1-43:11  2018/09 [Refereed][Not invited]
  • 春日 遥, 坂本 大介, 棟方 渚, 小野 哲雄
    情報処理学会論文誌 59 (8) 1520 - 1531 1882-7764 2018/08 [Refereed][Not invited]
     
    Pets have been humans' best friends since ancient times. People have been living with pets since then, and relationships between people and their pets, understood as family members at home, have been well researched. Social robots have recently entered family lives, and a new research field is emerging that examines triadic relationships between people, pets, and social robots. An exploratory field experiment was conducted to investigate how a social robot affects human-animal relationships within the home. In this experiment, a small humanoid robot, NAO, was introduced into the homes of 10 families, and 22 participants (with 12 pets: 4 dogs and 8 cats), called "owners" hereafter, were asked to interact with the humanoid robot. The robot was operated under two conditions: speaking positively to the pets and speaking negatively to the pets. The contents of the utterances from robot to pet, which comprised about 30 seconds of about 2 minutes of dialogue, were different under the two conditions. The results of this study indicated that changing the attitude of NAO toward the pets affected the owners' impressions of the robot.
  • Yuki Koyama, Issei Sato, Daisuke Sakamoto, Takeo Igarashi
    ACM TRANSACTIONS ON GRAPHICS 36 (4) 48:1-48:11  0730-0301 2017/07 [Refereed][Not invited]
     
    Parameter tweaking is a common task in various design scenarios. For example, in color enhancement of photographs, designers tweak multiple parameters such as "brightness" and "contrast" to obtain the best visual impression. Adjusting one parameter is easy; however, if there are multiple correlated parameters, the task becomes much more complex, requiring many trials and a large cognitive load. To address this problem, we present a novel extension of Bayesian optimization techniques, where the system decomposes the entire parameter tweaking task into a sequence of one-dimensional line search queries that are easy for human to perform by manipulating a single slider. In addition, we present a novel concept called crowd-powered visual design optimizer, which queries crowd workers, and provide a working implementation of this concept. Our single-slider manipulation microtask design for crowdsourcing accelerates the convergence of the optimization relative to existing comparison-based microtask designs. We applied our framework to two different design domains: photo color enhancement and material BRDF design, and thereby showed its applicability to various design domains.
  • Hiroaki Mikami, Daisuke Sakamoto, Takeo Igarashi
    Conference on Human Factors in Computing Systems - Proceedings 2017- 6208 - 6219 2017/05/02 [Refereed][Not invited]
     
    Experimentation plays an essential role in exploratory programming, and programmers apply version control operations when switching the part of the source code back to the past state during experimentation. However, these operations, which we refer to as micro-versioning, are not well supported in current programming environments. We first examined previous studies to clarify the requirements for a micro-versioning tool. We then developed a micro-versioning tool that displays visual cues representing possible micro-versioning operations in a textual code editor. Our tool includes a history model that generates meaningful candidates by combining a regional undo model and tree-structured undo model. The history model uses code executions as a delimiter to segment text edit operations into meaning groups. A user study involving programmers indicated that our tool satisfies the above-mentioned requirements and that it is useful for exploratory programming. Copyright is held by the owner/author(s). Publication rights licensed to ACM.
  • Hayashi, K., Sakamoto, D., Kanda, T., Shiomi, M., Koizumi, S., Ishiguro, H., Ogasawara, T., Hagita, N.
    Human-Robot Interaction in Social Robotics 2017
  • Shiomi, M., Sakamoto, D., Kanda, T., Ishi, C.T., Ishiguro, H., Hagita, N.
    Human-Robot Interaction in Social Robotics 2017
  • Haruka Kasuga, Daisuke Sakamoto, Nagisa Munekata, Tetsuo Ono
    Proceedings of the 5th International Conference on Human Agent Interaction, HAI 2017, Bielefeld, Germany, October 17 - 20, 2017 ACM 61 - 69 2017 [Refereed][Not invited]
  • Chia-Ming Chang, Koki Toda, Daisuke Sakamoto, Takeo Igarashi
    AUTOMOTIVEUI 2017: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON AUTOMOTIVE USER INTERFACES AND INTERACTIVE VEHICULAR APPLICATIONS 65 - 73 2017 [Refereed][Not invited]
     
    Self-driving technologies have been increasingly developed and tested in recent years (e.g., Volvo's and Google's self-driving cars). However, only a limited number of investigations have so far been conducted into communication between self-driving cars and pedestrians. For example, when a pedestrian is about to cross a street, that pedestrian needs to know the intention of the approaching self-driving car. In the present study, we designed a novel interface known as "Eyes on a Car" to address this problem. We added eyes onto a car so as to establish eye contact communication between that car and pedestrians. The car looks at the pedestrian in order to indicate its intention to stop. This novel interface design was evaluated via a virtual reality (VR) simulated environment featuring a street-crossing scenario. The evaluation results show that pedestrians can make the correct street-crossing decision more quickly if the approaching car has the novel interface "eyes" than in the case of normal cars. In addition, the results show that pedestrians feel safer with regard to crossing a street if the approaching car has eyes and if the eyes look at them.
  • Kazuyo Mizuno, Daisuke Sakamoto, Takeo Igarashi
    IS and T International Symposium on Electronic Imaging Science and Technology 58 - 69 2470-1173 2017 [Refereed][Not invited]
     
    Category search is a searching activity where the user has an example image and searches for other images of the same category. This activity often requires appropriate keywords of target categories making it difficult to search images without prior knowledge of appropriate keywords. Text annotations attached to images are a valuable resource for helping users to find appropriate keywords for the target categories. We propose an image exploration system in this article for category image search without the prior knowledge of category keywords. Our system integrates content-based and keyword-based image exploration and seamlessly switches exploration types according to user interests. The system enables users to learn target categories both in image and keyword representation through exploration activities. Our user study demonstrated the effectiveness of image exploration using our system, especially for the search of images with unfamiliar category compared to the single-modality image search.
  • Mari Hirano, Kanako Ogura, Mizuho Kitahara, Daisuke Sakamoto, Haruhiko Shimoyama
    Health Psychology Open 4 (1) 2055102917707185  2055-1029 2017 [Refereed][Not invited]
     
    Most computerized cognitive behavioral therapy has targeted restoration, and few programs have targeted primary prevention. The purpose of this study is to obtain knowledge for the further development of preventive mental healthcare applications. We developed a personal mental healthcare application that aims to give users the chance to manage their mental health by self-monitoring and regulating their behavior. Through a 30-day field trial, the results showed an improvement in mood score after carrying out suggested actions, and the depressive mood of the participants was significantly decreased after the trial. The applicability of the approach and the remaining problems were confirmed.
  • 杉浦裕太, LEE Calista, 尾形正泰, WITHANA Anusha, 坂本大介, 牧野泰才, 五十嵐健夫, 稲見昌彦
    情報処理学会論文誌ジャーナル(Web) 57 (12) 2542‐2553 (WEB ONLY)  1882-7764 2016/12 [Refereed][Not invited]
  • 尉林暉, 杉浦裕太, 坂本大介, TOBY Chong, 宮田なつき, 多田充徳, 大隈隆史, 蔵田武志, 新村猛, 持丸正明, 五十嵐健夫
    情報処理学会論文誌ジャーナル(Web) 57 (12) 2610‐2616 (WEB ONLY)  1882-7764 2016/12 [Refereed][Not invited]
  • Morihiro Nakamura, Yuki Koyama, Daisuke Sakamoto, Takeo Igarashi
    COMPUTER GRAPHICS FORUM 35 (7) 323 - 332 0167-7055 2016/10 [Refereed][Not invited]
     
    We present an interactive design system for designing free-formed bamboo-copters, where novices can easily design free-formed, even asymmetric bamboo-copters that successfully fly. The designed bamboo-copters can be fabricated using digital fabrication equipment, such as a laser cutter. Our system provides two useful functions for facilitating this design activity. First, it visualizes a simulated flight trajectory of the current bamboo-copter design, which is updated in real time during the user's editing. Second, it provides an optimization function that automatically tweaks the current bamboo-copter design such that the spin quality-how stably it spins-and the flight quality-how high and long it flies-are enhanced. To enable these functions, we present non-trivial extensions over existing techniques for designing free-formed model airplanes [UKSI14], including a wing discretization method tailored to free-formed bamboo-copters and an optimization scheme for achieving stable bamboo-copters considering both spin and flight qualities.
  • Kenji Suzuki, Kazumasa Okabe, Ryuuki Sakamoto, Daisuke Sakamoto
    Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI 2016 478 - 482 2016/09/06 [Refereed][Not invited]
     
    We present a concept of using a movable background to navigate a caret on small mobile devices. The standard approach to selecting text on mobile devices is to directly touch the location on the text that a user wants to select. This is problematic because the user's finger hides the area to select. Our concept is to use a movable background to navigate the caret. Users place a caret by tapping on the screen and then move the background by touching and dragging. In this method, the caret is fixed on the screen and the user drags the background text to navigate the caret. We compared our technique with the iPhone's default UI and found that even though participants were using our technique for the first time, average task completion time was not different or even faster than Default UI in the case of the small font size and got a significantly higher usability score than Default UI.
  • Daisuke Sakamoto, Yuta Sugiura, Masahiko Inami, Takeo Igarashi
    COMPUTER 49 (7) 20 - 25 0018-9162 2016/07 [Refereed][Not invited]
  • Exploring subtle foot plantar-based gestures using sock-style pressure sensors
    Koumei Fukahori, Daisuke Sakamoto, Takeo Igarashi
    Computer Software 33 (2) 116 - 124 0289-6540 2016/05/01 
    We propose subtle foot-based gestures named foot plantar-based (FPB) gestures that are used with sockstyle pressure sensors. In this system, the user can control a computing device by changing his or her foot plantar distributions, e.g., pressing the floor with his or her toe. Because such foot movement is subtle, it is suitable for use especially in a public space such as a crowded train. In this work, we focus on a user-defined gesture that is designed by the end-users, not developers of this system. We first conduct a guessability study that asks people what is the appropriate gesture for a specific command to control the computing device. Then, we implement a gesture recognizer with a machine learning technique. To avoid unexpected gesture activations, we also collect foot plantar pressure patterns made during daily activities such as walking, as negative training data. Finally, we conclude with several applications to further illustrate the utility of FPB gestures.
  • 深堀孔明, 坂本大介, 五十嵐健夫
    コンピュータソフトウェア 33 (2) 2_116‐2_124(J‐STAGE)  0289-6540 2016/04 [Refereed][Not invited]
  • SUZUKI Ryohei, SAKAMOTO Daisuke, IGARASHI Takeo
    Computer Software 日本ソフトウェア科学会 33 (1) 1_103 - 1_110 (J-STAGE) 0289-6540 2016/04 [Refereed][Not invited]
     
    We propose a video annotation system called "AnnoTone", which supports video-editing process such as cropping and effects generation, by embedding annotations describing contextual information of a scene, such as geo-location of the video camera and quality of performance of actors, during a recording. The system converts inputted annotation data into high-frequency audio signals, which are almost inaudible to the human ear, and transmits them from a smartphone speaker placed near a video camera. After recording, embedded annotations are extracted from video files and exploited to support video-editing. The signals are not completely inaudible to the human ear, but we confirmed that they can be removed from video files without considerable quality loss, using audio filters. We also tested the reliability of signal embedding and the durability of annotation signals against audio conversions by experiments, and showed the feasibility of the proposed technique in practical situations. We present several example applications using AnnoTone, and discuss the possibility of novel video-editing techniques realized by annotation embedding.
  • Yuki Koyama, Daisuke Sakamoto, Takeo Igarashi
    34TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2016 2520 - 2532 2016 [Refereed][Not invited]
     
    Color enhancement is a very important aspect of photo editing. Even when photographers have tens of or hundreds of photographs, they must enhance each photo one by one by manually tweaking sliders in software such as brightness and contrast, because automatic color enhancement is not always satisfactory for them. To support this repetitive manual task, we present self-reinforcing color enhancement, where the system implicitly and progressively learns the user's preferences by training on their photo editing history. The more photos the user enhances, the more effectively the system supports the user. We present a working prototype system called SelPh, and then describe the algorithms used to perform the self-reinforcement. We conduct a user study to investigate how photographers would use a self-reinforcing system to enhance a collection of photos. The results indicate that the participants were satisfied with the proposed system and strongly agreed that the self-reinforcing approach is preferable to the traditional workflow.
  • Shigeo Yoshida, Takumi Shirokura, Yuta Sugiura, Daisuke Sakamoto, Tetsuo Ono, Masahiko Inami, Takeo Igarashi
    IEEE COMPUTER GRAPHICS AND APPLICATIONS 36 (1) 62 - 69 0272-1716 2016/01 [Refereed][Not invited]
  • 小山裕己, 坂本大介, 五十嵐健夫
    コンピュータソフトウェア 日本ソフトウェア科学会 33 (1) 1_63 - 1_77 (J-STAGE) 0289-6540 2016 [Refereed][Not invited]
     
    Parameter tweaking is one of the fundamental tasks in the editing of visual digital contents, such as correcting photo color. A problem with parameter tweaking is that it often requires much time and effort to explore a high-dimensional parameter space. To facilitate such exploration, we first present a new technique to analyze a parameter space to obtain a distribution of human preference. Our technique uses crowdsourced human computation to collect data for analysis. As a result of this analysis, the user obtains a goodness function that computes the goodness value of a given parameter set. This goodness function enables two user interfaces for exploration: Smart Suggestion, which provides suggestions of preferable parameter sets, and VisOpt Slider, which interactively visualizes the distribution of goodness values on sliders and gently optimizes slider values while the user is editing. We applied our technique to four applications with different design parameter spaces.
  • 三上裕明, 坂本大介, 五十嵐健夫
    情報処理学会論文誌トランザクション プログラミング(Web) 8 (4) 1-14 (WEB ONLY)  1882-7802 2015/12 [Refereed][Not invited]
  • Kenji Suzuki, Kazumasa Okabe, Ryuuki Sakamoto, Daisuke Sakamoto
    UIST 2015 - Adjunct Publication of the 28th Annual ACM Symposium on User Interface Software and Technology 79 - 80 2015/11/06 [Refereed][Not invited]
     
    We present a "Fix and Slide" technique, which is a concept to use a movable background to place a caret insertion point and to select text on a mobile device. Standard approach to select text on mobile devices is touching to the text where a user wants to select, and sometimes pop-up menu is displayed and they choose "select" mode and then start to specify an area to be selected. A big problem is that the user's finger hides the area to select this is called a "fat finger problem." We use the movable background to navigate a caret. First a user places a caret by tapping on a screen and then moves the background by touching and dragging on a screen. In this situation, the caret is fixed on the screen so that the user can move the background to navigate the caret where the user wants to move the caret. We implement the Fix and Slide technique on iOS device (iPhone) to demonstrate the impact of this text selection technique on small mobile devices.
  • Hikaru Ibayashi, Yuta Sugiura, Daisuke Sakamoto, Natsuki Miyata, Mitsunori Tada, Takashi Okuma, Takeshi Kurata, Masaaki Mochimaru, Takeo Igarashi
    SIGGRAPH Asia 2015 Posters, SA 2015 24:1  2015/11/02 [Refereed][Not invited]
     
    Architecture-scale design requires two different viewpoints: a small-scale internal view, i.e., a first-person view of the space to see local details as an occupant of the space, and a large-scale external view, i.e., a top-down view of the entire space to make global decisions when designing the space. Architects or designers need to switch between these two viewpoints, but this can be inefficient and time-consuming. We present a collaborative design system, Dollhouse, to address this problem. By using our system, users can discuss the design of the space from two viewpoints simultaneously. This system also supports a set of interaction techniques to facilitate communication between these two user groups.
  • Hikaru Ibayashi, Yuta Sugiura, Daisuke Sakamoto, Natsuki Miyata, Mitsunori Tada, Takashi Okuma, Takeshi Kurata, Masaaki Mochimaru, Takeo Igarashi
    SIGGRAPH Asia 2015 Emerging Technologies, SA 2015 8:1-8:2  2015/11/02 [Refereed][Not invited]
     
    This research addresses architecture-scale problem-solving involving the design of living or working spaces, such as architecture and floor plans. Such design systems require two different viewpoints: A small-scale internal view, i.e., a first-person view of the space to see local details as an occupant of the space, and a large-scale external view, i.e., a top-down view of the entire space to make global decisions when designing the space. Architects or designers need to switch between these two viewpoints to make various decisions, but this can be inefficient and time-consuming. We present a system to address the problem, which facilitates asymmetric collaboration between users requiring these different viewpoints. One group of users comprises the designers of the space, who observe and manipulate the space from a top-down view using a large tabletop interface. The other group of users represents occupants of the space, who observe and manipulate the space based on internal views using head-mounted displays (HMDs). The system also supports a set of interaction techniques to facilitate communication between these two user groups. Our system can be used for the design of various spaces, such as offices, restaurants, operating rooms, parks, and kindergartens.
  • Sugiura Yuta, Kakehi Gota, Whitana Anusha, Sakamoto Daisuke, Sugimoto Maki, Igarashi Takeo, Inami Masahiko
    Transactions of the Virtual Reality Society of Japan 特定非営利活動法人 日本バーチャルリアリティ学会 20 (3) 209 - 217 1344-011X 2015/09 [Refereed][Not invited]
     
    We present the FuwaFuwa sensor module, a round, hand-size, wireless device for measuring the shape deformations of soft objects such as cushions and plush toys. It can be embedded in typical soft objects in the household without complex installation procedures and without spoiling the softness of the object because it requires no physical connection. Six LEDs in the module emit IR light in six orthogonal directions, and six corresponding photosensors measure the reflected light energy. One can easily convert almost any soft object into a touch-input device that can detect both touch position and surface displacement by embedding multiple FuwaFuwa sensor modules in the object. A variety of example applications illustrate the utility of the FuwaFuwa sensor module. An evaluation of the proposed deformation measurement technique confirms its effectiveness.
  • Yuki Koyama, Daisuke Sakamoto, Takeo Igarashi
    Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH '15, Los Angeles, CA, USA, August 9-13, 2015, Posters Proceedings ACM 2:1  2015/08 [Refereed][Not invited]
  • 中嶋誠, 坂本大介, 五十嵐健夫
    情報処理学会論文誌ジャーナル(Web) 56 (4) 1317-1327 (WEB ONLY)  1882-7764 2015/04 [Refereed][Not invited]
  • Naoki Sasaki, Hsiang-Ting Chen, Daisuke Sakamoto, Takeo Igarashi
    COMPUTER ANIMATION AND VIRTUAL WORLDS 26 (2) 185 - 194 1546-4261 2015/03 [Refereed][Not invited]
     
    We present facetons, geometric modeling primitives designed for building architectural models especially effective for a virtual environment where six degrees of freedom input devices are available. A faceton is an oriented point floating in the air and defines a plane of infinite extent passing through the point. The polygonal mesh model is constructed by taking the intersection of the planes associated with the facetons. With the simple interaction of faceton, users can easily create 3D architecture models. The faceton primitive and its interaction reduce the overhead associated with standard polygonal mesh modeling, where users have to manually specify vertexes and edges which could be far away. The faceton representation is inspired by the research on boundary representations (B-rep) and constructive solid geometry, but it is driven by a novel adaptive bounding algorithm and is specifically designed for 3D modeling activities in an immersive virtual environment. We describe the modeling method and our current implementation. The implementation is still experimental but shows potential as a viable alternative to traditional modeling methods. Copyright (c) 2014 John Wiley & Sons, Ltd.
  • 坂本大介, 小松孝徳, 五十嵐健夫
    ヒューマンインタフェース学会論文誌 17 (1/4) 85 - 95 2186-828X 2015/02 [Refereed][Not invited]
  • Koumei Fukahori, Daisuke Sakamoto, Takeo Igarashi
    CHI 2015: PROCEEDINGS OF THE 33RD ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS 3019 - 3028 2015 [Refereed][Not invited]
     
    We propose subtle foot-based gestures named foot plantar-based (FPB) gestures that are used with sock-placed pressure sensors. In this system, the user can control a computing device by changing his or her foot plantar distributions, e.g., pressing the floor with his/her toe. Because such foot movement is subtle, it is suitable for use especially in a public space such as a crowded train. In this study, we first conduct a guessability study to design a user-defined gesture set for interaction with a computing device. Then, we implement a gesture recognizer with a machine learning technique. To avoid unexpected gesture activations, we also collect foot plantar pressure patterns made during daily activities such as walking, as negative training data. Additionally, we evaluate the unobservability of FPB gestures by using crowdsourcing. Finally, we conclude with several applications to further illustrate the utility of FPB gestures.
  • Ryohei Suzuki, Daisuke Sakamoto, Takeo Igarashi
    CHI 2015: PROCEEDINGS OF THE 33RD ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS 57 - 66 2015 [Refereed][Not invited]
     
    We present a video annotation system called "AnnoTone", which can embed various contextual information describing a scene, such as geographical location. Then the system allows the user to edit the video using this contextual information, enabling one to, for example, overlay with map or graphical annotations. AnnoTone converts annotation data into high-frequency audio signals (which are inaudible to the human ear), and then transmits them from a smartphone speaker placed near a video camera. This scheme makes it possible to add annotations using standard video cameras with no requirements for specific equipment other than a smartphone. We designed the audio watermarking protocol using dual-tone multi-frequency signaling, and developed a general-purpose annotation framework including an annotation generator and extractor. We conducted a series of performance tests to understand the reliability and the quality of the watermarking method. We then created several examples of video-editing applications using annotations to demonstrate the usefulness of Annotone, including an After Effects plugin.
  • Hamada Takeo, Taniguchi Shohei, Ikejima Sachiko, Shimizu Keisuke, Sakamoto Daisuke, Hasegawa Shoichi, Inami Masahiko, Igarashi Takeo
    Transactions of the Virtual Reality Society of Japan 20 (3) 229 - 238 1344-011X 2015 [Refereed][Not invited]
     
    We propose a puppet-based user interface, named Avatouch, for specifying massage position without looking at the control device. Users can indicate a massage position on their backs by touching the corresponding position on the puppet's back (Figure 1). We also developed a massage chair system with both a push-button interface and Avatouch. Experimental results confirm that almost half of the participants kept Avatouch well away from their faces; furthermore, two participants adjusted the massage position without looking at the plushie. In this paper, we first explain Avatouch. We then describe the massage chair system and a user study conducted to observe how people use each interface. At the end of the paper, we discuss the advantages and disadvantages of Avatouch.
  • Takahito Hamanaka, Daisuke Sakamoto, Takeo Igarashi
    ACM International Conference Proceeding Series 2014- 13:1-13:10  2014/11/11 [Refereed][Not invited]
     
    We present a system called Aibiki, which supports users in practicing the shamisen, a three-stringed Japanese musical instrument, via an automatic and adaptive score scroll. We chose Nagauta as an example of a type of shamisen music. Each piece typically lasts 10-40 minutes; furthermore, both hands are required to play the shamisen, so it is not desirable to turn pages manually during a performance. In addition, some characteristic issues are particular to the shamisen, including the variable tempo of the music and the unique timbre of the instrument, which makes pitch detection difficult with standard techniques. In this work, we describe an application that automatically scrolls through a musical score, initially at a predefined tempo. Because there is often a difference between the predefined tempo and the tempo at which the musician plays the piece, the application adjusts the speed of the score scroll based on input from a microphone. We evaluated the performance of the application via a user study. We found that the system was able to scroll the score in time with the actual performance, and that the system was useful for practicing and playing the shamisen.
  • Jun Kato, Daisuke Sakamoto, Takeo Igarashi, Masataka Goto
    HAI 2014 - Proceedings of the 2nd International Conference on Human-Agent Interaction 345 - 351 2014/10/29 [Refereed][Not invited]
     
    In this paper, we propose a to-do list interface for sharing tasks between humans and multiple agents, including robots and software personal assistants. While much work on software architectures aims to achieve efficient (semi-)autonomous task coordination among humans and agents, little work on user interfaces can be found for user-oriented, flexible task coordination. Instead, most existing human-agent interfaces are designed to command a single agent to handle specific kinds of tasks. Our interface, in contrast, is designed to be a platform for sharing any kind of task between users and multiple agents. When agents can handle a task, they ask for details and permission to execute it. Otherwise, they try to support the user or simply remain silent. New tasks can be registered not only by humans but also by agents when errors occur that can only be fixed by human users. We present the interaction design and implementation of the interface, Sharedo, with three example agents, followed by brief user feedback collected from a preliminary user study.
  • Yuki Koyama, Daisuke Sakamoto, Takeo Igarashi
    UIST 2014 - Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology 65 - 74 2014/10/05 [Refereed][Not invited]
     
    Parameter tweaking is one of the fundamental tasks in the editing of visual digital content, such as correcting photo color or controlling blendshape facial expressions. A problem with parameter tweaking is that it often requires much time and effort to explore a high-dimensional parameter space. We present a new technique to analyze such a high-dimensional parameter space and obtain a distribution of human preference. Our method uses crowdsourcing to gather pairwise comparisons between various parameter sets. As a result of the analysis, the user obtains a goodness function that computes the goodness value of a given parameter set. This goodness function enables two interfaces for exploration: Smart Suggestion, which provides suggestions of preferable parameter sets, and VisOpt Slider, which interactively visualizes the distribution of goodness values on sliders and gently optimizes slider values while the user is editing. We created four applications with different design parameter spaces. As a result, the system could facilitate the user's design exploration.
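    One simple way to turn crowdsourced pairwise comparisons into a goodness function, in the spirit of the entry above, is a Bradley-Terry-style fit with a linear model. This is only an illustrative sketch under that assumption; the published system's actual estimator and feature representation are not reproduced here.

```python
import numpy as np

def fit_goodness(X, pairs, lr=0.5, epochs=2000):
    """X: (n, d) array of parameter sets; pairs: list of (winner_idx, loser_idx).

    Fits w so that goodness(x) = x @ w explains the observed preferences
    under a Bradley-Terry / logistic model, by gradient ascent.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for win, lose in pairs:
            diff = X[win] - X[lose]
            p = 1.0 / (1.0 + np.exp(-diff @ w))  # P(winner judged better)
            grad += (1.0 - p) * diff             # gradient of the log-likelihood
        w += lr * grad / len(pairs)
    return lambda x: np.asarray(x) @ w

# Tiny example: a 1-D parameter where larger values were usually preferred.
X = np.array([[0.1], [0.5], [0.9]])
goodness = fit_goodness(X, pairs=[(2, 0), (2, 1), (1, 0)])
print(goodness([0.7]))  # higher than goodness([0.2])
```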
  • Fangzhou Wang, Yang Li, Daisuke Sakamoto, Takeo Igarashi
    International Conference on Intelligent User Interfaces, Proceedings IUI 169 - 178 2014 [Refereed][Not invited]
     
    One of the difficulties with standard route maps is accessing multi-scale routing information. The user needs to display the map both at a large scale to see details and at a small scale to see an overview, but this requires tedious interaction such as zooming in and out. We propose using a hierarchical structure for a route map, called a "Route Tree", to address this problem, and describe an algorithm to automatically construct such a structure. A Route Tree is a hierarchical grouping of all small route segments that allows quick access to meaningful large- and small-scale views. We propose two Route Tree applications, "RouteZoom" for interactive map browsing and "TreePrint" for route information printing, to show the applicability and usability of the structure. We conducted a preliminary user study on RouteZoom, and the results showed that RouteZoom significantly lowers the interaction cost of obtaining information from a map compared to a traditional interactive map. © 2014 ACM.
  • Koumei Fukahori, Daisuke Sakamoto, Jun Kato, Takeo Igarashi
    Conference on Human Factors in Computing Systems - Proceedings 1453 - 1458 2014 [Refereed][Not invited]
     
    Programmers write and edit their source code in a text editor. However, when they design the look-and-feel of a game application, such as the image of a game character or the arrangement of a button, it would be more intuitive to edit the application by directly interacting with these objects on the game window. Although modern game engines provide this facility, they use a highly structured framework and limit what the programmer can edit. In this paper, we present CapStudio, a development environment for visual applications with an interactive screencast. A screencast is a movie-player-like output window with code-editing functionality. The screencast works with a traditional text editor. Modifications to the source code in the text editor and to visual elements on the screencast are immediately reflected in each other. We created an example application and confirmed the feasibility of our approach.
  • Makoto Nakajima, Daisuke Sakamoto, Takeo Igarashi
    Conference on Human Factors in Computing Systems - Proceedings 321 - 330 2014 [Refereed][Not invited]
     
    We present an animation creation workflow for integrating offline physical, painted media into the digital authoring of Flash-style animations. Generally, animators create animations with standardized digital authoring software. However, the results tend to lack the individualism or atmosphere of physical media. In contrast, illustrators have skills in painting physical media but have limited experience in animation. To incorporate their skills, we present a workflow that integrates the offline painting and digital animation creation processes in a labor-saving manner. First, a user makes a rough sketch of the visual elements and defines their movements using our digital authoring software with a sketch interface. Then these images are exported to printed pages, and users can paint using offline physical media. Finally, the work is scanned and imported back into the digital content, forming a composite animation that combines digital and physical media. We present an implementation of this system to demonstrate its workflow. We also discuss the advantages of using physical media in digital animations through design evaluations.
  • James E. Young, Takeo Igarashi, Ehud Sharlin, Daisuke Sakamoto, Jeffrey Allen
    ACM Transactions on Interactive Intelligent Systems 3 (4) 23:1-23:36  2160-6463 2014 [Refereed][Not invited]
     
    We present a series of projects for end-user authoring of interactive robotic behaviors, with a particular focus on the style of those behaviors: we call this approach Style-by-Demonstration (SBD). We provide an overview of three different SBD platforms: SBD for animated-character interactive locomotion paths, SBD for interactive robot locomotion paths, and SBD for interactive robot dance. The primary contribution of this article is a detailed cross-project SBD analysis of the interaction designs and evaluation approaches employed, with the goal of providing general guidelines, stemming from our experiences, for both developing and evaluating SBD systems. In addition, we provide the first full account of our Puppet Master SBD algorithm, with an explanation of how it evolved through the projects. © 2014 ACM.
  • Daniel Saakes, Vipul Choudhary, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    23rd International Conference on Artificial Reality and Telexistence, ICAT 2013, Tokyo, Japan, December 11-13, 2013 IEEE Computer Society 13 - 19 2013/12 [Refereed][Not invited]
  • Naoki Sasaki, Hsiang-Ting Chen, Daisuke Sakamoto, Takeo Igarashi
    Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST 77 - 82 2013 [Refereed][Not invited]
     
    We present faceton, a geometric modeling primitive designed for building architectural models using a six-degrees-of-freedom (DoF) input device in a virtual environment (VE). A faceton is given as an oriented point floating in the air and defines a plane of infinite extent passing through the point. The polygonal mesh model is constructed by taking the intersection of the planes associated with the facetons. With the simple drag-and-drop and group interactions of facetons, users can easily create 3D architectural models in the VE. The faceton primitive and its interaction reduce the overhead associated with standard polygonal mesh modeling in a VE, where users have to manually specify vertices and edges that could be far apart. The faceton representation is inspired by research on boundary representations (B-rep) and constructive solid geometry (CSG), but it is driven by a novel adaptive bounding algorithm and is specifically designed for 3D modeling activities in an immersive virtual environment. Copyright © 2013 ACM.
  • Daisuke Sakamoto, Takanori Komatsu, Takeo Igarashi
    MobileHCI 2013 - Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services 69 - 78 2013 [Refereed][Not invited]
     
    We propose a technique called voice augmented manipulation (VAM) for augmenting user operations in a mobile environment. The technique augments user interactions on mobile devices, such as finger gestures and button pressing, with voice. For example, when a user makes a finger gesture on a mobile phone and voices a sound into it, the operation continues until the user stops making the sound or makes another finger gesture. The VAM interface also provides a button-based interface, and the function connected to the button is augmented by voiced sounds. Two experiments verified the effectiveness of the VAM technique and showed that the number of repeated finger gestures decreased significantly compared with current touch-input techniques, suggesting that VAM is useful in supporting user control in a mobile environment. © 2013 ACM.
  • Ko Mizoguchi, Daisuke Sakamoto, Takeo Igarashi
    HUMAN-COMPUTER INTERACTION - INTERACT 2013, PT IV 8120 603 - 610 0302-9743 2013 [Refereed][Not invited]
     
    A scrollbar is one of the most basic components of a graphical user interface. It is usually displayed on one side of an application window when the displayed document is larger than the window. However, the scrollbar is mostly presented as a simple bar without much information, and there is still plenty of room for improvement. In this paper, we propose an overview scrollbar that displays an overview of the entire document on it, and we implemented four types of overview scrollbars that use different compression methods to render the overviews. We conducted a user study to investigate how people use these scrollbars and measured their performance. Our results suggest that overview scrollbars are more usable than a traditional scrollbar when people search for targets that are recognizable in the overview.
  • Jun Kato, Daisuke Sakamoto, Takeo Igarashi
    Conference on Human Factors in Computing Systems - Proceedings 3097 - 3100 2013 [Refereed][Not invited]
     
    Current programming environments use textual or symbolic representations. While these representations are appropriate for describing logical processes, they are not appropriate for representing raw values such as human and robot posture data, which are necessary for handling gesture input and controlling robots. To address this issue, we propose Picode, a text-based development environment augmented with inline visual representations: photos of humans and robots. With Picode, the user first takes a photo to bind it to posture data. She then drags and drops the photo into the code editor, where it is displayed as an inline image. A preliminary user study revealed positive effects of taking photos on the programming experience. Copyright © 2013 ACM.
  • Yuta Sugiura, Calista Lee, Masayasu Ogata, Anusha Indrajith Withana, Yasutoshi Makino, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    CHI Conference on Human Factors in Computing Systems, CHI '12, Extended Abstracts Volume, Austin, TX, USA, May 5-10, 2012 ACM 1443 - 1444 2012/05 [Refereed][Not invited]
  • Shigeo Yoshida, Daisuke Sakamoto, Yuta Sugiura, Masahiko Inami, Takeo Igarashi
    SIGGRAPH Asia 2012 Emerging Technologies, SA 2012 2012 [Refereed][Not invited]
     
    RoboJockey is an interface for creating robot behavior and giving people a new entertainment experience with robots, in particular making robots dance, in the spirit of a "disc jockey" or "video jockey" (Figure 1, left). Users can create continuous robot dance behaviors on the interface by using a simple visual language (Figure 1, right). The system generates music with a beat and choreographs the robots in a dance using the user-created behaviors. RoboJockey has a multi-touch tabletop interface that supports multi-user collaboration; every object is designed as a circle and can be operated from any position around the tabletop. RoboJockey supports a humanoid robot, which is capable of expressing human-like dance behaviors (Figure 1, center). Copyright © 2012 ACM, Inc.
  • Amy Wibowo, Daisuke Sakamoto, Jun Mitani, Takeo Igarashi
    Proceedings of the 6th International Conference on Tangible, Embedded and Embodied Interaction, TEI 2012 99 - 102 2012 [Refereed][Not invited]
     
    This paper introduces DressUp, a computerized system for designing dresses with 3D input using the form of the human body as a guide. It consists of a body-sized physical mannequin, a screen, and tangible prop tools for drawing in 3D on and around the mannequin. As the user draws, he/she modifies or creates pieces of digital cloth, which are displayed on a model of the mannequin on the screen. We explore the capacity of our 3D input tools to create a variety of dresses. We also describe observations gained from users designing actual physical garments with the system. © 2012 ACM.
  • Genki Furumi, Daisuke Sakamoto, Takeo Igarashi
    ITS 2012 - Proceedings of the ACM Conference on Interactive Tabletops and Surfaces 193 - 196 2012 [Refereed][Not invited]
     
    The screen of a tabletop computer is often occluded by physical objects such as coffee cups. This makes it difficult to see the virtual elements under the physical objects (visibility) and manipulate them (manipulability). Here we present a user interface widget, called "SnapRail," to address these problems, especially occlusion of a manipulable collection of virtual discrete elements such as icons. SnapRail detects a physical object on the surface and the virtual elements under the object. It then snaps the virtual elements to a rail widget that appears around the object. The user can then manipulate the virtual elements along the rail widget. We conducted a preliminary user study to evaluate the potential of this interface and collect initial feedback. The SnapRail interface received positive feedback from participants of the user study. © 2012 ACM.
  • Kohei Matsumura, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    International Conference on Intelligent User Interfaces, Proceedings IUI 305 - 306 2012 [Refereed][Not invited]
     
    We present universal earphones that use both a proximity sensor and a skin conductance sensor, and we demonstrate several implicit interaction techniques they achieve by automatically detecting the context of use. The universal earphones have two main features. The first is detecting which ear each earphone is inserted into, so that the appropriate audio channel is provided to either ear; the second is detecting the shared use of the earphones, in which case mixed stereo sound is provided to both earphones. These features not only free users from having to check the left and right sides of the earphones, but also let them enjoy sharing stereo audio with other people.
  • Yuta Sugiura, Calista Lee, Masayasu Ogata, Anusha Withana, Yasutoshi Makino, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    Conference on Human Factors in Computing Systems - Proceedings 725 - 734 2012 [Refereed][Not invited]
     
    PINOKY is a wireless ring-like device that can be externally attached to any plush toy as an accessory that animates the toy by moving its limbs. A user is thus able to instantly convert any plush toy into a soft robot. The user can control the toy remotely or input the movement desired by moving the plush toy and having the data recorded and played back. Unlike other methods for animating plush toys, PINOKY is non-intrusive, so alterations to the toy are not required. In a user study, 1) the roles of plush toys in the participants' daily lives were examined, 2) how participants played with plush toys without PINOKY was observed, 3) how they played with plush toys with PINOKY was observed, and their reactions to the device were surveyed. On the basis of the results, potential applications were conceptualized to illustrate the utility of PINOKY. Copyright 2012 ACM.
  • Jeffrey Allen, James E. Young, Daisuke Sakamoto, Takeo Igarashi
    Proceedings of the Designing Interactive Systems Conference, DIS '12 592 - 601 2012 [Refereed][Not invited]
     
    As robots continue to enter people's everyday spaces, we argue that it will be increasingly important to consider the robots' movement style as an integral component of their interaction design. That is, aspects of a robot's movement that are not directly related to the task at hand (e.g., picking up a ball) can have a strong impact on how people perceive that action (e.g., as aggressive or hesitant). We call these elements the movement style. We believe that perceptions of this kind of style will be highly dependent on the culture, group, or individual, so people will need the ability to customize their robot. Therefore, in this work we use Style by Demonstration, a style-focused take on the more traditional programming-by-demonstration technique, and present the Puppet Dancer system, an interface for constructing paired and interactive robotic dances. In this paper we detail the Puppet Dancer interface and interaction design, explain our new algorithms for teaching dance by demonstration, and present the results of a formal qualitative study. © 2012 ACM.
  • Jun Kato, Daisuke Sakamoto, Takeo Igarashi
    Proceedings of the Designing Interactive Systems Conference, DIS '12 248 - 257 2012 [Refereed][Not invited]
     
    There are many toolkits for physical UIs, but most physical UI applications are not locomotive. When programmers want to make things move around in the environment, they face difficulties related to robotics. Toolkits for robot programming, unfortunately, are usually not as accessible as those for building physical UIs. To address this interdisciplinary issue, we propose Phybots, a toolkit that allows researchers and interaction designers to rapidly prototype applications with locomotive robotic things. The contributions of this research are the combination of a hardware setup, a software API, its underlying architecture, and a graphical runtime debugging tool that supports the whole prototyping activity. This paper introduces the toolkit, applications, and lessons learned from three user studies. © 2012 ACM.
  • Sharedo: Sharing Tasks between Humans and Robots via a To-do List
    Jun Kato, Daisuke Sakamoto, Takeo Igarashi
    19th Workshop on Interactive Systems and Software (WISS 2011) 2011/12 [Refereed][Not invited]
  • PINOKY: A Ring-shaped Device for Animating Plush Toys
    Yuta Sugiura, Calista Lee, Masayasu Ogata, Yasutoshi Makino, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    19th Workshop on Interactive Systems and Software (WISS 2011) 2011/12 [Refereed][Not invited]
  • FuwaFuwa: A Method for Measuring Contact Position and Pressure on Soft Objects Using Reflective Photosensors, and Its Applications
    Yuta Sugiura, Gota Kakehi, Anusha Withana, Calista Lee, Daisuke Sakamoto, Maki Sugimoto, Masahiko Inami, Takeo Igarashi
    Entertainment Computing 2011 (EC2011) 2011/10/07 [Refereed][Not invited]
  • Jun Kato, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    IPSJ Journal, Information Processing Society of Japan, 52 (4) 1425 - 1437 1882-7764 2011/04/15 
    Small mobile robots are expected to be utilized for helping with daily tasks at home, and sophisticated user interfaces are needed for them. However, prototyping robot applications is still difficult for software programmers without prior knowledge of robotics, including many researchers in the field of Human-Computer Interaction. We developed a software toolkit called "Andy", with which programmers can make robots move and push objects on a flat surface with one API call and receive their two-dimensional motion events by registering listeners. The design of the APIs is influenced by the programming style of Graphical User Interfaces. Andy provides two-dimensional absolute coordinates on the surface by detecting visual markers attached to the top surfaces of robots and objects in images captured by a ceiling-mounted camera. We report the aim of the toolkit, a summary of its APIs and implementation, the method and results of user studies, and related work.
  • Jun Kato, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    IPSJ Journal, Information Processing Society of Japan, 52 (4) 1425 - 1437 0387-5806 2011/04 [Refereed][Not invited]
     
    Small mobile robots are expected to be utilized for helping daily tasks at home. We need sophisticated user interfaces for them. However, prototyping of robot applications is still difficult for software programmers without prior knowledge of robotics, including many researchers in the field of Human-Computer Interaction. We developed a software toolkit called "Andy", with which programmers can make robots move and push objects on a flat surface with one API call and receive their two-dimensional motion events by registering listeners. Design of the APIs is influenced by the programming style of Graphical User Interfaces. Andy provides two-dimensional absolute coordinates on the surface by detecting visual markers attached to the top surfaces of robots and objects from captured images of a ceiling-mounted camera. We report the aim of the toolkit, a summary of its APIs and implementation, the method and results of user studies, and related work.
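    The GUI-like API style described in the two entries above (one call to move a robot, listeners for two-dimensional position events) can be illustrated with a short sketch. The class and method names below are hypothetical and are not the Andy toolkit's real API; the sketch only shows the interaction pattern.

```python
# Hypothetical, illustrative API in the style described above; not the real toolkit.
from typing import Callable, List, Tuple

class Robot:
    def __init__(self, marker_id: int):
        self.marker_id = marker_id
        self.position: Tuple[float, float] = (0.0, 0.0)
        self._listeners: List[Callable[[float, float], None]] = []

    def add_location_listener(self, callback: Callable[[float, float], None]) -> None:
        # callback(x, y) fires whenever the overhead camera reports a new marker position
        self._listeners.append(callback)

    def _on_camera_update(self, x: float, y: float) -> None:
        self.position = (x, y)
        for cb in self._listeners:
            cb(x, y)

    def move_to(self, x: float, y: float) -> None:
        # a single call that asks the toolkit to drive the robot toward (x, y) on the floor plane
        print(f"robot {self.marker_id}: moving toward ({x}, {y})")

robot = Robot(marker_id=3)
robot.add_location_listener(lambda x, y: print("robot now at", x, y))
robot.move_to(120.0, 80.0)
robot._on_camera_update(40.0, 25.0)  # simulate a vision update from the ceiling camera
```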
  • Yuta Sugiura, Gota Kakehi, Daisuke Sakamoto, Anusha Withana, Masahiko Inami, Takeo Igarashi
    IPSJ Journal 52 (2) 737 - 742 1882-7764 2011/02/15 
    We propose an operating method for bipedal robots that uses two-fingered gestures on a multi-touch surface. We focus on finger gestures with which people represent human movements, such as walking, running, kicking, and turning, with their fingers. These bipedal gestures are natural and intuitive enough for end-users to control humanoid robots. The system captures those finger gestures on a multi-touch display as a direct operation method. The capturing method is easy and simple, yet robust enough for entertainment applications. We show an example application of the proposed method and a demonstration at an international exhibition. We conclude with the results of our observations and future implementation of the method.
  • Yuta Sugiura, Gota Kakehi, Daisuke Sakamoto, Anusha Withana, Masahiko Inami, Takeo Igarashi
    IPSJ Journal, Information Processing Society of Japan, 52 (2) 737 - 742 0387-5806 2011/02 [Refereed][Not invited]
     
    We propose an operating method for bipedal robots that uses two-fingered gestures on a multi-touch surface. We focus on finger gestures with which people represent human movements, such as walking, running, kicking, and turning, with their fingers. These bipedal gestures are natural and intuitive enough for end-users to control humanoid robots. The system captures those finger gestures on a multi-touch display as a direct operation method. The capturing method is easy and simple, yet robust enough for entertainment applications. We show an example application of the proposed method and a demonstration at an international exhibition. We conclude with the results of our observations and future implementation of the method.
  • Shigeo Yoshida, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    The Proceedings of the JSME Annual Conference on Robotics and Mechatronics (Robomec), The Japan Society of Mechanical Engineers, 2011, 2A1-H05_1 - 2A1-H05_4 2011 [Not refereed]
     
    Robots are being introduced into various environments and places, such as the offices and homes where we live our daily lives. In particular, one of the most anticipated functions of home robots is the transportation of objects in a living environment. This paper proposes an algorithm for object transportation in environments that contain obstacles. The algorithm extends object transportation by the dipole field algorithm so that robots can flexibly avoid obstacles. We compute the midpoints of the triangles that do not contain obstacles. We then use Dijkstra's algorithm to compute a path that moves the robot to the object, and then compute a path that allows the robot to push the object to the goal using the dipole field algorithm.
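    The path-planning step described above (Dijkstra's algorithm over waypoints taken from obstacle-free triangles) can be sketched generically. The graph construction and the dipole-field pushing step are omitted; the node names and costs below are placeholders, not the paper's implementation.

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: dict mapping node -> list of (neighbor, cost).

    Nodes would be, e.g., midpoints of obstacle-free triangles;
    returns the cheapest start-to-goal sequence of waypoints.
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy waypoint graph (placeholder geometry).
graph = {"A": [("B", 1.0), ("C", 2.5)], "B": [("C", 1.0)], "C": []}
print(dijkstra(graph, "A", "C"))  # -> ['A', 'B', 'C']
```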
  • Yuta Sugiura, Anusha Withana, Teruki Shinohara, Masayasu Ogata, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    SIGGRAPH Asia 2011 Emerging Technologies, SA'11 2011 [Refereed][Not invited]
     
    We propose a cooperative cooking robot system that operates with humans in an open environment. The system can cook a meal by pouring various ingredients into a boiling pot on an induction heating cooker and adjusting the heating strength according to a recipe that is developed by the user. Our contribution is in the design of the system incorporating robotic- and human-specific elements in a shared workspace so as to achieve a cooperative rudimentary cooking capability. First, we provide a graphical user interface to display detailed cooking instructions to the user. Second, we use small mobile robots instead of built-in arms to save space, improve flexibility, and increase safety. Third, we use special cooking tools that are shared with the robot. We hope insights obtained in this study will be useful for the design of other household systems in the future. A previous version of our system has been presented [1]. This demonstration will show an extended version with a new robot and improved interaction design.
  • Yuta Sugiura, Gota Kakehi, Anusha Withana, Calista Lee, Daisuke Sakamoto, Maki Sugimoto, Masahiko Inami, Takeo Igarashi
    UIST'11 - Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology 509 - 516 2011 [Refereed][Not invited]
     
    We present the FuwaFuwa sensor module, a round, hand-size, wireless device for measuring the shape deformations of soft objects such as cushions and plush toys. It can be embedded in typical soft objects in the household without complex installation procedures and without spoiling the softness of the object because it requires no physical connection. Six LEDs in the module emit IR light in six orthogonal directions, and six corresponding photosensors measure the reflected light energy. One can easily convert almost any soft object into a touch-input device that can detect both touch position and surface displacement by embedding multiple FuwaFuwa sensor modules in the object. A variety of example applications illustrate the utility of the FuwaFuwa sensor module. An evaluation of the proposed deformation measurement technique confirms its effectiveness. © 2011 ACM.
  • Gota Kakehi, Yuta Sugiura, Anusha Withana, Calista Lee, Naohisa Nagaya, Daisuke Sakamoto, Maki Sugimoto, Masahiko Inami, Takeo Igarashi
    ACM SIGGRAPH 2011 Emerging Technologies, SIGGRAPH'11 5  2011 [Refereed][Not invited]
     
    Soft objects are widely used in our day-to-day lives, and provide both comfort and safety in contrast to hard objects. Also, soft objects are able to provide a natural and rich haptic sensation. In human-computer interaction, soft interfaces have been shown to be able to increase emotional attachment between human and machines, and increase the entertainment value of the interaction. We propose the FuwaFuwa sensor, a small, flexible and wireless module to effectively measure shape deformation in soft objects using IR-based directional photoreflectivity measurements. By embedding multiple FuwaFuwa sensors within a soft object, we can easily convert any soft object into a touch-input device able to detect both touch position and surface displacement. Furthermore, since it is battery-powered and equipped with wireless communication, it can be easily installed in any soft object. Besides that, because the FuwaFuwa sensor is small and wireless, it can be inserted into the soft object easily without affecting its soft properties.
  • Kexi Liu, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    29TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS 647 - 656 2011 [Refereed][Not invited]
     
    As various home robots come into homes, the need for efficient robot task management tools is arising. Current tools are designed for controlling individual robots independently, so they are not ideally suitable for assigning coordinated action among multiple robots. To address this problem, we developed a management tool for home robots with a graphical editing interface. The user assigns instructions by selecting a tool from a toolbox and sketching on a bird's-eye view of the environment. Layering supports the management of multiple tasks in the same room. Layered graphical representation gives a quick overview of and access to rich information tied to the physical environment. This paper describes the prototype system and reports on our evaluation of the system.
  • Masahiro Shiomi, Daisuke Sakamoto, Takayuki Kanda, Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS 3 (1) 27 - 40 1875-4791 2011/01 [Refereed][Not invited]
     
    We developed a networked robot system in which ubiquitous sensors support robot sensing and a human operator processes the robot's decisions during interaction. To achieve semi-autonomous operation for a communication robot functioning in real environments, we developed an operator-requesting mechanism that enables the robot to detect situations that it cannot handle autonomously. Therefore, a human operator helps by assuming control with minimum effort. The robot system consists of a humanoid robot, floor sensors, cameras, and a sound-level meter. For helping people in real environments, we implemented such basic communicative behaviors as greetings and route guidance in the robot and conducted a field trial at a train station to investigate the robot system's effectiveness. The results attest to the high acceptability of the robot system in a public space and also show that the operator-requesting mechanism correctly requested help in 84.7% of the necessary situations; the operator only had to control 25% of the experiment time in the semi-autonomous mode with a robot system that successfully guided 68% of the visitors.
  • Foldy: Teaching a Robot How to Fold Clothes through GUI Operations
    Yuta Sugiura, Daisuke Sakamoto, Tabare Gowon, Daiki Takahashi, Masahiko Inami, Takeo Igarashi
    18th Workshop on Interactive Systems and Software (WISS 2010) 7 - 12 2010/12 [Refereed][Not invited]
  • matereal: A Toolkit for Prototyping Interactive Robot Applications
    Jun Kato, Daisuke Sakamoto, Takeo Igarashi
    18th Workshop on Interactive Systems and Software (WISS 2010) 83 - 88 2010/12 [Refereed][Not invited]
  • RoboJockey: An Interface for Continuous Robot Performance
    Takumi Shirokura, Daisuke Sakamoto, Yuta Sugiura, Tetsuo Ono, Masahiko Inami, Takeo Igarashi
    Interaction 2010, Interactive Presentation (Premium) 2010/03 [Refereed][Not invited]
  • Walky: An Operating Method for a Bipedal Walking Robot Using Anthropomorphic Finger Gestures
    Yuta Sugiura, Gota Kakehi, Anusha I. Withana, Charith L. Fernando, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    Interaction 2010, Interactive Presentation (Premium) 2010/03 [Refereed][Not invited]
  • Takumi Shirokura, Daisuke Sakamoto, Yuta Sugiura, Tetsuo Ono, Masahiko Inami, Takeo Igarashi
    UIST 2010 - 23rd ACM Symposium on User Interface Software and Technology, Adjunct Proceedings 399 - 400 2010 [Refereed][Not invited]
     
    We developed RoboJockey (Robot Jockey), an interface for coordinating robot actions such as dancing, similar to a "disc jockey" or "video jockey". The system enables a user to choreograph a dance for a robot to perform by using a simple visual language. Users can coordinate humanoid robot actions with a combination of arm and leg movements. Every action is automatically performed to background music and its beat. RoboJockey gives end-users a new entertainment experience with robots.
  • Jun Kato, Daisuke Sakamoto, Takeo Igarashi
    UIST 2010 - 23rd ACM Symposium on User Interface Software and Technology, Adjunct Proceedings 387 - 388 2010 [Refereed][Not invited]
     
    We introduce a technique to detect simple gestures of "surfing" (moving a hand horizontally) on a standard keyboard by analyzing recorded sounds in real-time with a microphone attached close to the keyboard. This technique allows the user to maintain a focus on the screen while surfing on the keyboard. Since this technique uses a standard keyboard without any modification, the user can take full advantage of the input functionality and tactile quality of his favorite keyboard supplemented with our interface.
  • Yuta Sugiura, Daisuke Sakamoto, Anusha Withana, Masahiko Inami, Takeo Igarashi
    CHI2010: PROCEEDINGS OF THE 28TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, VOLS 1-4 2427 - + 2010 [Refereed][Not invited]
     
    We propose a cooking system that operates in an open environment. The system cooks a meal by pouring various ingredients into a boiling pot on an induction heating cooker and adjusts the heating strength according to the user's instructions. We then describe how the system incorporates robotic- and human-specific elements in a shared workspace so as to achieve a cooperative rudimentary cooking capability. First, we use small mobile robots instead of built-in arms to save space, improve flexibility and increase safety. Second, we use detachable visual markers to allow the user to easily configure the real-world environment. Third, we provide a graphical user interface to display detailed cooking instructions to the user. We hope insights obtained in this experiment will be useful for the design of other household systems in the future.
  • Daisuke Sakamoto, Takumi Shirokura, Yuta Sugiura, Tetsuo Ono, Masahiko Inami, Takeo Igarashi
    PROCEEDINGS OF THE 7TH INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTER ENTERTAINMENT TECHNOLOGY (ACE 2010) 53 - 56 2010 [Refereed][Not invited]
     
    We developed RoboJockey (Robot Jockey), an interface for coordinating robot actions such as dancing, similar to a "disc jockey" or "video jockey" who selects and plays recorded music or video for an audience (in this case, the robots' actions), giving people a new entertainment experience with robots. The system enables a user to choreograph a robot dance using a simple visual language. Every icon on the interface is circular and can be operated from any position around the tabletop interface. Users can coordinate a mobile robot's actions with a combination of backward, forward, and rotating movements, and a humanoid robot's actions with a combination of arm and leg movements. Every action is automatically performed to background music. We demonstrated RoboJockey at a Japanese domestic symposium and confirmed that people enjoyed using the system and successfully created entertaining robot dances.
  • Cooky: Development of a Cooking-order Instruction Interface and a Cooking Robot
    Yuta Sugiura, Daisuke Sakamoto, Anusha Withana, Masahiko Inami, Takeo Igarashi
    17th Workshop on Interactive Systems and Software (WISS 2009) 1341-870X 2009/12 [Refereed][Not invited]
  • Masahiro Shiomi, Daisuke Sakamoto, Takayuki Kanda, Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita
    The Transactions of the Institute of Electronics, Information and Communication Engineers A, 92 (11) 773 - 783 0913-5707 2009/11 [Refereed][Not invited]
     
    This paper describes a semi-autonomous communication robot system that we developed. To reduce the operator's workload while realizing efficient semi-autonomous operation, we developed an operator-call algorithm. The algorithm autonomously detects situations that the robot cannot resolve by itself and calls a human operator, who then teleoperates the robot as needed to handle the situation. In other words, the robot basically behaves autonomously while interacting with people, and operates semi-autonomously by calling the operator only when a problem occurs. To verify the usefulness of the developed robot system, we conducted a field trial with a semi-autonomous communication robot that provided route guidance in a train station. In the trial, the robot guided visitors correctly 68.1% of the time when operating semi-autonomously and 29.9% of the time when operating fully autonomously, while the operator partially teleoperated the robot for only 25% of the experiment time. These results suggest that a single operator could operate multiple robots simultaneously.
  • Thomas Seifried, Michael Haller, Stacey D. Scott, Florian Perteneder, Christian Rendl, Daisuke Sakamoto, Masahiko Inami
    ACM International Conference on Interactive Tabletops and Surfaces, ITS 2009, Banff / Calgary, Alberta, Canada, November 23-25, 2009 ACM 33 - 40 2009/11 [Refereed][Not invited]
  • Daisuke Sakamoto, Kotaro Hayashi, Takayuki Kanda, Masahiro Shiomi, Satoshi Koizumi, Hiroshi Ishiguro, Tsukasa Ogasawara, Norihiro Hagita
    International Journal of Social Robotics 1 (2) 157 - 169 1875-4791 2009/04 [Refereed][Not invited]
     
    This paper reports a method that uses humanoid robots as a communication medium. Even though many interactive robots are being developed, their interactivity remains much poorer than that of humans due to their limited perception abilities. In our approach, the role of interactive robots is limited to a broadcasting medium for exploring the best way to attract people's interest to information provided by robots. We propose using robots as a passive social medium, in which they behave as if they are talking together. We conducted an eight-day field experiment at a train station to investigate the effects of such a passive social medium. © Springer Science & Business Media BV 2009.
  • An Interface for Operating Multiple Robots via a Multi-touch Display
    Jun Kato, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    Interaction 2009, Interactive Presentation 2009/03 [Refereed][Not invited]
  • Thomas Seifried, Christian Rendl, Florian Perteneder, Jakob Leitner, Michael Haller, Daisuke Sakamoto, Jun Kato, Masahiko Inami, Stacey D. Scott
    ACM SIGGRAPH 2009 Emerging Technologies, SIGGRAPH '09 2009 
    The amount of digital appliances and media found in domestic environments has risen drastically over the last decade, for example, digital TVs, DVD and Blu-ray players, digital picture frames, digital gaming systems, electronically movable window blinds, and robotic vacuum cleaners. As these devices become more compatible with Internet and wireless networking (e.g., Internet-ready TVs, streaming digital picture frames, and WiFi gaming systems such as Nintendo's Wii and Sony's PlayStation) and as networked and WiFi home infrastructures become more prevalent, new opportunities arise for consolidating centralized control of these myriad devices and media into so-called "universal remote controls". However, many remote controls lack intuitive interfaces for mapping control functions to the device intended to be controlled. This often results in trial-and-error button pressing, or experimentation with graphical user interface (GUI) controls, before a user achieves their intended action.
  • Daisuke Sakamoto, Hiroshi Ishiguro
    Kansei Engineering International, Japan Society of Kansei Engineering 2009/01 [Refereed][Not invited]
  • Kohei Ogawa, Christoph Bartneck, Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro
    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2 9 - + 2009 [Refereed][Not invited]
     
    The first robotic copies of real humans have become available. They enable their users to be physically present in multiple locations at the same time. This study investigates what influence the embodiment of an agent has on its persuasiveness and its perceived personality. Is a robotic copy as persuasive as its human counterpart? Does it have the same personality? We performed an experiment in which the embodiment of the agent was the independent variable and the persuasiveness and perceived personality were the dependent measurement. The persuasive agent advertised a Bluetooth headset. The results show that an android is found to be as persuasive as a real human or a video recording of a real human. The personality of the participant had a considerable influence on the measurements. Participants that were more open to new experiences rated the persuasive agent lower on agreeableness and extroversion. They were also more willing to spend money on the advertised product.
  • Jun Kato, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    Conference on Human Factors in Computing Systems - Proceedings 3443 - 3448 2009 [Refereed][Not invited]
     
    We must give some form of command to robots in order to have them perform complex tasks; an initial instruction is required even if they carry out their tasks autonomously. We therefore need interfaces for operating and teaching robots. Natural language, joysticks, and other pointing devices are currently used for this purpose. These interfaces, however, have difficulty operating multiple robots simultaneously. We developed a multi-touch interface with a top-down view from a ceiling camera for controlling multiple mobile robots. The user specifies a vector field that is then followed by all robots on the view. This paper describes the user interface, its implementation, and future work on the project.
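    The vector-field idea in the entry above can be sketched as follows: every robot samples the user-specified field at its own position and moves a small step along it on each control cycle. The field definition and step size below are made-up illustrations, not the system's actual controller.

```python
import math

def field(x, y):
    """Hypothetical user-drawn vector field: everything flows toward (0.5, 0.5)."""
    return (0.5 - x, 0.5 - y)

def control_step(positions, dt=0.05):
    """Advance each robot one step along the field direction (unit speed)."""
    new_positions = []
    for x, y in positions:
        vx, vy = field(x, y)
        norm = math.hypot(vx, vy) or 1.0  # avoid division by zero at the sink
        new_positions.append((x + dt * vx / norm, y + dt * vy / norm))
    return new_positions

robots = [(0.1, 0.2), (0.9, 0.8), (0.5, 0.1)]
for _ in range(20):
    robots = control_step(robots)
print(robots)  # all robots have drifted toward the center of the workspace
```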
  • Daisuke Sakamoto, Koichiro Honda, Masahiko Inami, Takeo Igarashi
    CHI2009: PROCEEDINGS OF THE 27TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, VOLS 1-4 197 - 200 2009 [Refereed][Not invited]
     
    Numerous robots have been developed, and some of them are already being used in homes, institutions, and workplaces. Despite the development of useful robot functions, the focus so far has not been on the user interfaces of robots. General users of robots find it hard to understand what the robots are doing and what kind of work they can do. This paper presents an interface for commanding home robots by using stroke gestures on a computer screen. The interface allows the user to control robots and design their behaviors by sketching the robots' behaviors and actions on a top-down view from ceiling cameras. To convey a feeling of directly controlling the robots, the interface employs a live camera view. In this study, we focused on a house-cleaning task that is typical for home robots, and developed a sketch interface for designing the behaviors of vacuuming robots.
  • Masahiro Shiomi, Daisuke Sakamoto, Takayuki Kanda, Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita
    HRI 2008 - Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction: Living with Robots 303 - 310 2008 [Refereed][Not invited]
     
    This paper reports an initial field trial with a prototype of a semi-autonomous communication robot at a train station. We developed an operator-requesting mechanism to achieve semi-autonomous operation for a communication robot functioning in real environments. The operator-requesting mechanism autonomously detects situations that the robot cannot handle by itself; a human operator then helps by assuming control of the robot. This approach gives semi-autonomous robots the ability to function naturally with minimum human effort. Our system consists of a humanoid robot and ubiquitous sensors. The robot has such basic communicative behaviors as greeting and route guidance. The experimental results revealed that the operator-requesting mechanism correctly requested the operator's help in 85% of the necessary situations, and the operator only had to control the robot for 25% of the experiment time in the semi-autonomous mode, with the robot system successfully guiding 68% of the passengers. At the same time, the trial provided the opportunity to gather user data for the further development of natural behaviors for such robots operating in real environments. Copyright 2008 ACM.
  • Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro, Norihiro Hagita
    IPSJ Journal, Information Processing Society of Japan, 48 (12) 3729 - 3738 1882-7764 2007/12/15 [Not refereed][Not invited]
     
    In this research, we realize human telepresence by developing a remote-controlled android system called Geminoid HI-1. Experimental results confirmed that participants felt stronger presence of the operator when he talked through the android than when he appeared on a video monitor in a video conference system. In addition, participants talked with the robot naturally and evaluated its human-likeness as equal to a man on a video monitor. At this paper's conclusion, we will discuss a remote-control system for telepresence that uses a human-like android robot as a new telecommunication medium.
  • Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro, Norihiro Hagita
    IPSJ Journal 48 (12) 3729 - 3738 0387-5806 2007/12 [Refereed][Not invited]
  • Kotaro Hayashi, Daisuke Sakamoto, Takayuki Kanda, Masahiro Shiomi, Satoshi Koizumi, Hiroshi Ishiguro, Tsukasa Ogasawara, Norihiro Hagita
    HRI 2007 - Proceedings of the 2007 ACM/IEEE Conference on Human-Robot Interaction - Robot as Team Member 137 - 144 2007 [Refereed][Not invited]
     
    This paper reports a method that uses humanoid robots as a communication medium. There are many interactive robots under development, but due to their limited perception, their interactivity is still far poorer than that of humans. Our approach in this paper is to limit robots' purpose to a non-interactive medium and to look for a way to attract people's interest in the information that robots convey. We propose using robots as a passive-social medium, in which multiple robots converse with each other. We conducted a field experiment at a train station for eight days to investigate the effects of a passive-social medium. Copyright 2007 ACM.
  • Takayuki Kanda, Masayuki Kamasima, Michita Imai, Tetsuo Ono, Daisuke Sakamoto, Hiroshi Ishiguro, Yuichiro Anzai
    AUTONOMOUS ROBOTS 22 (1) 87 - 100 0929-5593 2007/01 [Refereed][Not invited]
     
    This paper reports the findings for a humanoid robot that expresses its listening attitude and understanding to humans by effectively using its body properties in a route guidance situation. A human teaches a route to the robot, and the developed robot behaves similar to a human listener by utilizing both temporal and spatial cooperative behaviors to demonstrate that it is indeed listening to its human counterpart. The robot's software consists of many communicative units and rules for selecting appropriate communicative units. A communicative unit realizes a particular cooperative behavior such as eye-contact and nodding, found through previous research in HRI. The rules for selecting communicative units were retrieved through our preliminary experiments with a WOZ method. An experiment was conducted to verify the effectiveness of the robot, with the results revealing that a robot displaying cooperative behavior received the highest subjective evaluation, which is rather similar to a human listener. A detailed analysis showed that this evaluation was mainly due to body movements as well as utterances. On the other hand, subjects' utterance to the robot was encouraged by the robot's utterances but not by its body movements.
  • Daisuke Sakamoto, Tetsuo Ono
    Transactions of the Human Interface Society 8 (3) 381 - 390 1344-7262 2006/08 [Refereed][Not invited]
  • Takanori Komatsu, Shoji Suzuki, Keiji Suzuki, Hitoshi Matsubara, Tetsuo Ono, Daisuke Sakamoto, Takamasa Sato, Tomohiro Uchimoto, Hajime Okada, Isamu Kitano, Nagisa Munekata, Tomonori Sato, Kazuyuki Takahashi, Masato Honma, Jun'ichi Osada, Masayuki Hata, Hideo Inui
    Transactions of the Virtual Reality Society of Japan, The Virtual Reality Society of Japan, 11 (2) 213 - 223 1344-011X 2006/06 [Refereed][Not invited]
     
    Our project aims to develop a robot authoring system for non-robotics researchers, such as cognitive psychologists, social psychologists, designers, and art performers, to provide an intuitive robot operating environment that enables them to author the robot as they want. Concretely, we have been developing a robot system with the following characteristics. 1) The robot's appearance and functions can be changed by attaching or removing Sub Modules, e.g., arms, tails, ears, and wings, on or from the Core Module. When a Sub Module is attached to the Core Module (the robot's base body), the particular information embedded in the Sub Module is sent to the robot controller, and the controller changes the robot's behaviors according to the received information. 2) The robot's behaviors can be authored (edited or tuned) using an intuitive command system that is not like a traditional programming language (e.g., move(10.0, 0.0)) but closer to natural language (e.g., "move," "run").
  • Daisuke Sakamoto, Tetsuo Ono
    Computer Software 23 (2) 101 - 107 0289-6540 2006/04 [Refereed][Not invited]
  • Daisuke Sakamoto, Tetsuo Ono
    Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, HRI 2006, Salt Lake City, Utah, USA, March 2-3, 2006 ACM 355 - 356 2006/03 [Refereed][Not invited]
  • Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Masayuki Kamashima, Michita Imai, Hiroshi Ishiguro
    Int. J. Hum.-Comput. Stud. 62 (2) 247 - 265 2005/12 [Refereed][Not invited]
  • Takayuki Kanda, Masayuki Kamashima, Michita Imai, Tetsuo Ono, Daisuke Sakamoto, Hiroshi Ishiguro, Yuichiro Anzai
    Journal of the Robotics Society of Japan, The Robotics Society of Japan, 23 (7) 898 - 909 0289-1824 2005/10 [Refereed][Not invited]
     
    This paper reports the findings for a humanoid robot that pretends to listen to humans by effectively using its body properties in a route guidance situation. A human teaches a route to the robot, and the developed robot behaves like a human listener by utilizing both temporal and spatial cooperative behaviors to demonstrate that it is indeed listening to its human counterpart. The robot consists of many communicative units and rules for selecting appropriate units. A communicative unit realizes a particular cooperative behavior, such as eye contact and nodding, found through previous research. The rules for selecting communicative units were retrieved through WOZ experiments. An experiment was conducted to verify the effectiveness of the developed robot; as a result, the robot with cooperative behavior received a higher subjective evaluation, which is rather similar to that of a human listener. The detailed analysis showed that this higher evaluation was due mainly to body movements as well as utterances. On the other hand, subjects' utterances to the robot were promoted by the robot's utterances but not by its body movements.
  • Masayuki Kamashima, Takayuki Kanda, Michita Imai, Tetsuo Ono, Daisuke Sakamoto, Hiroshi Ishiguro, Yuichiro Anzai
    2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, September 28 - October 2, 2004 IEEE 2506 - 2513 2004/09 [Refereed][Not invited]
  • D Sakamoto, T Kanda, T Ono, M Kamashima, M Imai, H Ishiguro
    RO-MAN 2004: 13TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, PROCEEDINGS 443 - 448 2004 [Refereed][Not invited]
     
    Research on humanoid robots has produced various uses for their body properties in communication. In particular, mutual relationships of body movements between a robot and a human are considered to be important for smooth and natural communication, as they are in human-human communication. We developed a semi-autonomous humanoid robot system that is capable of cooperative body movements with humans, using environment-based sensors and switching of communicative units. We conducted an experiment using this robot system and verified the importance of cooperative behaviors in a route-guidance situation where a human gives directions to the robot. The results indicate that cooperative body movements greatly enhance the emotional impressions made on humans in a route-guidance situation. We believe these results will allow us to develop interactive humanoid robots that sociably communicate with humans.

MISC

  • Daisuke Sakamoto, Shigeo Yoshida  Computer Software (Japan Society for Software Science and Technology)  37 (3) 25 - 30 2020/08
  • Chuya Kinohara, Motohiro Makiguchi, Hideaki Takada, Daisuke Sakamoto, Tetsuo Ono  JSSST SIG Technical Report Series (Web)  (91) 2020
  • Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro, Norihiro Hagita  Geminoid Studies: Science and Technologies for Humanlike Teleoperated Androids  39  -56  2018/04  [Not refereed][Not invited]
     
    © Springer Nature Singapore Pte Ltd. 2018. In this study, we realize human telepresence by developing a remote-controlled android system called Geminoid HI-1. Experimental results confirm that participants feel a stronger presence of the operator when he talks through the android than when he appears on a video monitor in a video conference system. In addition, participants talk with the robot naturally and evaluate its humanlike-ness as equal to a man on a video monitor. We also discuss a remote-controlled system for telepresence that uses a humanlike android robot as a new telecommunication medium.
  • Kohei Ogawa, Christoph Bartneck, Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro  Geminoid Studies: Science and Technologies for Humanlike Teleoperated Androids  235  -247  2018/04  [Not refereed][Not invited]
     
    © Springer Nature Singapore Pte Ltd. 2018. The first robotic copies of real humans have become available. They enable their users to be physically present in multiple locations simultaneously. This study investigates the influence that the embodiment of an agent has on its persuasiveness and its perceived personality. Is a robotic copy as persuasive as its human counterpart? Does it have the same personality? We performed an experiment in which the embodiment of the agent was the independent variable and the persuasiveness and perceived personality were the dependent measurements. The persuasive agent advertised a Bluetooth headset. The results show that an android is perceived as being as persuasive as a real human or a video recording of a real human. The personality of the participant had a considerable influence on the measurements. Participants who were more open to new experiences rated the persuasive agent lower on agreeableness and extroversion. They were also more willing to spend money on the advertised product.
  • Saki Sakaguchi, Eunice Ratna Sari, Taku Hachisu, Adi B. Tedjasaputra, Kunihiro Kato, Masitah Ghazali, Kaori Ikematsu, Ellen Yi-Luen Do, Jun Kato, Jun Nishida, Daisuke Sakamoto, Yoshifumi Kitamura, Jinwoo Kim, Anirudha Joshi, Zhengjie Liu  Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, CHI 2018, Montreal, QC, Canada, April 21-26, 2018  2018  [Refereed][Not invited]
  • 坂本大介  OHM  104-  (2)  19‐20  2017/02/05  [Not refereed][Not invited]
  • 坂本 大介  ヒューマンインタフェース学会誌 = Human interface = Journal of Human Interface Society  19-  (4)  190  -192  2017  [Not refereed][Not invited]
  • Kohei Matsumura, Masa Ogata, Saki Sakaguchi, Takashi Ijiri, Takeshi Nishida, Jun Kato, Hiromi Nakamura, Daisuke Sakamoto, Yoshifumi Kitamura  Conference on Human Factors in Computing Systems - Proceedings  07-12--  3325  -3330  2016/05/07  [Refereed][Not invited]
     
    This symposium showcases the latest work from Japan on interactive systems and user interfaces that address under-explored problems and demonstrate unique approaches. In addition to circulating ideas and sharing a vision of future research in human-computer interaction, this symposium aims to foster social networks among young researchers and students and create a fresh research community.
  • Jun Kato, Hiromi Nakamura, Yuta Sugiura, Taku Hachisu, Daisuke Sakamoto, Koji Yatani, Yoshifumi Kitamura  Conference on Human Factors in Computing Systems - Proceedings  18-  2321  -2324  2015/04/18  [Refereed][Not invited]
     
    This symposium showcases the latest work from Japan on interactive systems and user interfaces that address under-explored problems and demonstrate unique approaches. In addition to circulating ideas and sharing a vision of future research in human-computer interaction, this symposium aims to foster social networks among young researchers and students and create a fresh research community.
  • Daisuke Sakamoto  Interactions  22-  (1)  52  -55  2015/01/01  [Refereed][Not invited]
     
    Christoph Bartneck's 2009 scientometric analysis of the CHI conference proceedings revealed that the number of participating Asian scientists was small compared with those from other countries. Bartneck's analysis covered both quantity and quality, including citation counts and best paper awards. Due to limited space and resources, the present analysis focuses on quantitative aspects, including geography, organization, and author statistics, and adopts Bartneck's idea of credit, in which one paper equals one credit. A minimal sketch of this credit counting appears after this list.
  • 坂本 大介  ヒューマンインタフェース学会誌 = Human interface = Journal of Human Interface Society  15-  (4)  289  -294  2013  [Not refereed][Not invited]
  • 石井 志保子, 坂本 大介, 髙木 英典, 森 俊哉, 竹澤 悠典, 田嶋 文生  東京大学理学系研究科・理学部ニュース  44-  (3)  15  -17  2012/09  
    "Singularities" / "Crowdsourcing" / "Correlated Electronics" / "Volcanic Gas" / "Artificial DNA" / "Tajima's D"
  • Michael Haller, Thomas Seifried, Stacey D. Scott, Florian Perteneder, Christian Rendl, Daisuke Sakamoto, Masahiko Inami, Pranav Mistry, Pattie Maes, Seth E. Hunter, David Merrill, Jeevan J. Kalanithi, Susanne Seitinger, Daniel M. Taub, Alex S. Taylor  Interactions  18-  (3)  8  -9  2011  [Refereed][Not invited]
  • 近藤誠, 杉浦裕太, 筧豪太, 坂本大介, 稲見昌彦  ヒューマンインタフェース学会誌  12-  (3)  175  -182  2010/08/25  [Not refereed][Not invited]
  • 坂本 大介  日本バーチャルリアリティ学会誌 = Journal of the Virtual Reality Society of Japan  15-  (1)  44  -45  2010/03/31  [Not refereed][Not invited]
  • SAKAMOTO Daisuke  IPSJ Magazine  50-  (7)  672  -672  2009/07/15  [Not refereed][Not invited]
  • 塩見昌裕, 塩見昌裕, 坂本大介, 坂本大介, 神田崇行, 石井カルロス寿憲, 石黒浩, 石黒浩, 萩田紀博  画像ラボ  18-  (4)  23  -27  2007/04/01  [Not refereed][Not invited]
  • 坂本 大介  日本バーチャルリアリティ学会誌 = Journal of the Virtual Reality Society of Japan  10-  (2)  104  -104  2005/06/25
  • 鈴木昭二, 鈴木恵二, 松原仁, 小野哲雄, 小松孝徳, 内本友洋, 岡田孟, 北野勇, 坂本大介, 佐藤崇正, 本間正人, 畑雅之, 乾英男  日本機械学会ロボティクス・メカトロニクス講演会講演論文集(CD-ROM)  2005-  2005
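
A minimal sketch of the credit-based counting mentioned in the Interactions article above, in which one paper equals one credit. The article does not spell out the attribution rule here, so the equal split of each paper's credit across the countries of its authors, and the input data, are assumptions for illustration only.

    # Python sketch: one credit per paper, split equally across the authors' countries (assumed rule).
    from collections import defaultdict

    papers = [
        ["Japan", "Japan", "USA"],   # hypothetical: one affiliation country per author
        ["Germany", "USA"],
    ]

    credits = defaultdict(float)
    for countries in papers:
        share = 1.0 / len(countries)          # each paper contributes exactly one credit in total
        for country in countries:
            credits[country] += share

    for country, credit in sorted(credits.items(), key=lambda kv: -kv[1]):
        print(f"{country}: {credit:.2f}")     # e.g. USA: 0.83, Japan: 0.67, Germany: 0.50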

Presentations

  • 岩井望、崔明根、坂本大介、小野哲雄.
    情報処理学会シンポジウム「インタラクション2024」  2024/03  情報処理学会
     
    Input techniques based on hand gestures and wearable devices have been studied as interfaces for interaction in augmented reality (AR). Because AR requires interaction that involves the body, its interfaces must take into account not only speed and accuracy but also social acceptability. This study focuses on a small trackball worn on the middle finger as a compact, body-mounted device. Since a body-mounted trackball supports not only two-axis (x, y) input but also three-dimensional operations such as rotation, we considered that it may be well suited to AR. To investigate the usefulness and practicality of a body-mounted trackball, we conducted a comparative experiment with three input techniques, including the body-mounted trackball. The results suggest that the body-mounted trackball provides input complementary to existing techniques, and that characteristics of the device, such as the tactile feedback of rolling the ball and its independence from the user's finger size, may make it a useful operation method for interaction in AR.
  • 崔明根、坂本大介、小野哲雄
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2024/03  情報処理学会
     
    People behave toward MR objects displayed in a Mixed Reality (MR) environment much as they would toward real objects. This paper proposes MR Nudge, a nudging technique that guides users in a better direction by exploiting their collision-avoidance behavior toward MR objects. The technique places MR objects at locations the user should not move to, steering the user toward behavior that avoids interfering with those objects. We conducted an experiment to examine whether MR Nudge can guide users and whether its guidance is coercive. The results showed that MR Nudge guided 54% of the participants and that the guidance was not coercive.
  • 田邊早也佳,坂本大介,小野哲雄
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2024/01  情報処理学会
     
    This study examined which attributes, such as an animal's appearance and movements, people emphasize when judging whether they are a "dog person" or a "cat person." Many people identify themselves as either a dog person or a cat person based on their pet preferences, and prior work has found personality differences between these two types. We focus in particular on findings that people high in social dominance orientation tend to prefer obedient pets such as dogs and that people prefer pets that complement their own personality, and we design our experiment and discussion around them. As a preliminary experiment, we used crowdsourcing to investigate which animal's movements the behaviors implemented on the dog-shaped robot "aibo" and the cat-shaped robot "Marscat" were perceived to be. Based on this, the main experiment compared respondents' social dominance orientation scores with their evaluation scores for the animal robots, and we discuss the results.
  • 崔明根,坂本大介,小野哲雄
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2023/05  情報処理学会
     
    There are techniques that estimate the depth of a user's gaze point (gaze depth) from the directions of both eyes. When the gaze depth is known, the user's three-dimensional gaze point can be obtained, enabling gaze interface designs that cannot be realized in two-dimensional settings. However, gaze-depth estimation is inaccurate, and it is difficult to fixate at a specific depth without a target, so accurate interaction based on gaze depth is hard to achieve. We propose a technique that fixes the gaze cursor, a pointer indicating the user's gaze position, in front of the interaction area and uses it as a fixation target, making it easy to fixate at a specific depth. This paper reports an experiment on a gaze-cursor selection task aimed at investigating the parameters of the proposed technique. The results clarify the depth at which the gaze cursor should be placed, the depth buffer, and an appropriate dwell time.
  • 崔明根,坂本大介,小野哲雄
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2023/05  情報処理学会
     
    In three-dimensional VR environments, a target can be partially occluded by other objects, narrowing its visible region and making gaze-based target selection difficult. We propose a selection technique for small, occluded objects in VR that exploits the region where the angle between the gaze and head directions (gaze angle) is 25°-45° (the Kuiper Belt). The technique selects a target by first determining selection candidates and then selecting the menu item corresponding to a candidate. The menu items are placed in the Kuiper Belt, and because the region where candidates are determined (gaze angles of 25° or less) is clearly separated from the menu-selection region, selection can be completed with gaze alone without the Midas Touch problem. To investigate usability, we ran selection tasks with targets partially hidden by other objects in environments with up to 256 densely packed objects. The results confirmed that, even when objects are dense and the target is partially hidden, the proposed technique makes selecting small objects easier than a standard gaze-input technique.
  • Gino .Aiki: MR Software that Supports Learning How to Use the Body in Aikido
    鈴木湧登,坂本大介,小野哲雄
    第30回インタラクティブシステムとソフトウェアに関するワークショップ(WISS 2022)  2022/12  日本ソフトウェア科学会
  • Design and Implementation of a Smartphone-Ring Device that Enables Gesture Input on the Back of a Smartphone
    日下部完,坂本大介,小野哲雄
    第30回インタラクティブシステムとソフトウェアに関するワークショップ(WISS 2022)  2022/12  日本ソフトウェア科学会
  • 若杉直生,崔明根,坂本大介,小野哲雄
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2022/11  情報処理学会
     
    Many environments in daily life have low illumination, and sufficient light is often unavailable in situations such as disasters. Because visual cognition deteriorates in such environments, work efficiency can decrease and accidents such as collisions with obstacles can occur. To support tasks in low-light environments, this study proposes a depth-information presentation technique for MR aimed at low-illuminance settings. Since currently available MR devices are not designed to operate in low light, we reproduced a low-illuminance environment in a virtual space using VR and investigated whether the proposed technique improves the efficiency of search tasks in low light.
  • 小柳元志郎,崔明根,坂本大介,小野哲雄
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2022/11  情報処理学会
     
    Rays are widely used in Virtual Reality (VR) and are a highly extensible technique. In handheld Augmented Reality (AR), however, it is difficult to implement a ray with the same operability as in VR: because the handheld device must be held up in front of the eyes, it cannot be rotated beyond a certain range, and the three rotational degrees of freedom of a VR controller are hard to express during AR operation. In this study, we therefore compared a technique that supplements the rotational degrees of freedom of the ray with a virtual joystick against other object-selection techniques in handheld AR and conducted a usability study.
  • 青木美春,崔明根,坂本大介,小野哲雄
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2022/11  情報処理学会
     
    Systems that use virtual agents are increasingly permeating social life. User trust is essential for such systems, but recent research has revealed a phenomenon called "algorithm aversion," in which people find it hard to trust algorithms because of an aversion to the algorithms themselves, even when they recognize that the algorithms outperform humans. However, little is known about the relationship between algorithm aversion and a virtual agent's appearance and voice. In this paper, we experimentally examine how the sense of trust that affects algorithm aversion changes when the appearance and voice of a virtual agent that provides information to humans are varied.
  • 後藤健斗,水丸和樹,坂本大介,小野哲雄
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2022/11  情報処理学会
     
    "An angel and a devil" is a common way of expressing a moral dilemma. In this study, we reproduce the angel-devil relationship with robots and investigate how human self-control, decision-making, and behavior change in a dilemma situation. The robot conditions were Neutral, Angel, and Devil; participants were divided into a Neutral-Neutral group and an Angel-Devil group and asked to keep writing letters of the alphabet on paper, and we compared the self-control of the two groups by task duration. The Angel-Devil group continued the task for a significantly longer time. An impression evaluation of the robots with the Godspeed questionnaire showed significant differences between robot conditions in anthropomorphism and likeability. In the post-experiment questionnaire, responses suggested that the robots were able to play the roles of Angel and Devil.
  • 小柳元志郎,崔明根,坂本大介,小野哲雄
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2022/03  情報処理学会
     
    Smartphones can be operated with three degrees of freedom of tilt using the built-in gyro sensor. Although tilt-based operation has been studied, parameter settings such as the operation method and sensitivity have not been examined sufficiently. In this study, using a task of operating a slider displayed on the smartphone screen, we compared parameter settings for a technique that maps the device tilt to the amount of slider change (Position method) and a technique that maps the tilt to the rate of change (Rate method). The smartphone was held in landscape orientation, and rotation about the axis perpendicular to the screen (yaw rotation) was used as the tilt, so that the screen always faces the user. Participants tilted the smartphone left and right to move the slider to a specified position, and we evaluated completion time, accuracy, and usability. The Position method with a sensitivity allowing a maximum tilt of 30 degrees to either side tended to be superior in completion time and accuracy, but no significant difference was confirmed. A minimal sketch of the two mappings appears after this list.
  • 日下部完,崔明根,坂本大介,小野哲雄
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2022/03  情報処理学会
     
    Previous research on hand-gesture input has focused only on binary input, such as entering operation commands, and has not examined the handling of continuous values. This paper therefore explores hand-gesture operation of interfaces that handle continuous values, using video from an RGB camera as input. We prepared four hand gestures and conducted two evaluation experiments using a one-degree-of-freedom numeric adjustment task, evaluating performance while varying the scale, input direction, and number of inputs of each gesture. Among the compared gestures, moving the hand horizontally had the best usability. We also observed a trade-off between gesture input time and error.
  • 髙松大悟,崔明根,坂本大介,小野哲雄
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2022/03  情報処理学会
     
    Rays are widely used in Virtual Reality (VR) and are a highly extensible technique. In handheld Augmented Reality (AR), however, it is difficult to implement a ray with the same operability as in VR: because the handheld device must be held up in front of the eyes, it cannot be rotated beyond a certain range, and the three rotational degrees of freedom of a VR controller are hard to express during AR operation. We therefore propose a technique that supplements the rotational degrees of freedom of the ray with a virtual joystick. We conducted an experiment to find an appropriate value of the control-display (CD) ratio, a parameter of the proposed technique, and found that an appropriate CD ratio is 3.
  • Experimental Evaluation of the Input Performance of a Translucent Double-Flick Keyboard for Live-Stream Chat Input
    阿部 優樹, 崔 明根, 坂本 大介, 小野 哲雄
    情報処理学会シンポジウム「インタラクション2022」  2022/03  情報処理学会
  • Kuiper Belt: An Investigation of a Gaze-Input Technique Using Extreme Gaze Angles in Virtual Reality
    崔 明根, 坂本 大介, 小野 哲雄
    情報処理学会シンポジウム「インタラクション2022」  2022/02  情報処理学会
  • 崔明根,坂本大介,小野哲雄
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2019/12  情報処理学会
     
    This paper proposes Bubble Gaze Lens, which extends the bubble lens technique, a method for quickly selecting small, densely packed targets, to gaze-based interfaces. Exploiting the fact that a saccade consists of a ballistic movement followed by corrective movements, the technique magnifies a lens near the target, making small targets easier to select with gaze input. We compared the existing Bubble Gaze Cursor with the proposed Bubble Gaze Lens in a pointing task. The proposed technique was faster than the existing one and reduced the error rate by 54.0%. It was also significantly better than the existing technique in usability and mental workload.
  • 桂 大地, 坂本 大介, 小野 哲雄
    エンタテインメントコンピューティングシンポジウム2019論文集  2019/09
  • 巻口誉宗,高田英明,本田健悟,坂本大介,小野哲雄
    マルチメディア,分散協調とモバイルシンポジウム2019論文集  2019/06  情報処理学会
     
    Display technology that shows a subject on a tabletop and allows stereoscopic viewing from all directions has wide potential applications in entertainment and industry. We previously proposed a screen system that lets multiple users simultaneously view 3D images corresponding to their viewing angle from any direction around a 360-degree tabletop display. While that system can present smooth horizontal motion parallax, it has difficulty accommodating differences in user height and presenting vertical motion parallax such as up-and-down viewpoint movement. This paper proposes a method that detects the viewpoint positions of users around the table by image recognition from a 360-degree camera placed at the center of the screen, and reproduces vertical parallax by moving the virtual camera of the viewpoint image presented to each user up and down. With this method, accurate 3D images can be presented to multiple users of different heights and when viewpoints move vertically. We implemented the proposed method, confirmed that the corresponding viewpoint images can be updated in real time according to the viewpoint positions of users around the table, and demonstrated the effectiveness of the method for reproducing vertical parallax.
  • 松村耕平, 尾形正泰, 小野哲雄, 加藤淳, 阪口紗季, 坂本大介, 杉本雅則, 角康之, 中村裕美, 西田健志, 樋口啓太, 安尾萌, 渡邉拓貴
    情報処理学会研究報告(Web)  2017/08
  • 藍 圭介, 坂本 大介, 小野 哲雄
    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報  2017/07  電子情報通信学会
  • 藍 圭介, 坂本 大介, 小野 哲雄
    聴覚研究会資料 = Proceedings of the auditory research meeting  2017/07  日本音響学会
  • 渡部 敏之, 坂本 大介, 小野 哲雄
    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報  2017/06
  • 鈴木健司,岡部和昌,坂本竜基,坂本大介
    情報処理学会 研究報告グループウェアとネットワークサービス(GN)  2017/01  情報処理学会
     
    This paper addresses the so-called Fat Finger Problem, in which the pointer or the object to be indicated is hidden by the finger when pointing on smartphones and touch displays, making accurate operation impossible. Instead of specifying the pointer position directly, the proposed technique manipulates the screen on which the pointer exists, so the pointer position is specified relatively. Although the technique should be useful in many situations where the Fat Finger Problem arises, this work specifically targets the difficulty of selecting text on smartphones. For evaluation, we built a prototype in which part of a text passage on iOS can be selected with both the proposed technique and the standard technique provided by the OS, and conducted a user study. The results confirmed that even first-time users of the proposed technique completed the task as fast as, and in some cases faster than, the conventional technique, and suggested that the proposed technique is easier to understand than the standard one.
  • 平野 真理, 小倉 加奈子, 坂本 大介, 岩野 裕利, 山下 靖典, 土田 剛生, 下山 晴彦
    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報  2016/09  電子情報通信学会
  • 濱中敬人,坂本大介,五十嵐健夫
    情報処理学会 研究報告ヒューマンコンピュータインタラクション(HCI)  2014/03  情報処理学会
     
    For the shamisen, a traditional Japanese instrument, performing from memory is customary in concerts, and features such as frequently changing tempo and a distinctive score format make practicing for memorization difficult. This study proposes a system that automatically scrolls the score according to tempo information given in advance, so that the performer can play just by watching the screen. In addition, the system searches for the current playing position in the score based on the performance captured by a microphone and adaptively adjusts the scrolling speed so that the playing position stays on screen. A performance evaluation of each function after implementation showed that, although pitch-estimation accuracy still has room for improvement, the score search based on it was largely successful, enabling score scrolling that follows the user's performance. In a user test, although the number of participants was small, the system was rated as helpful for performance practice.
  • CHALLA Akki REDDY, SAKAMOTO Daisuke, INAMI Masahiko, IGARASHI Takeo
    電子情報通信学会技術研究報告 = IEICE technical report : 信学技報  2013/06 
    Controlling appliances, such as television sets and air-conditioning units, in foreign countries is difficult because the labels on the control devices (i.e., remote controls) are written in unfamiliar languages. We present a multi-language user interface for home appliances on NFC-enabled smartphones to address this problem. The user first taps the remote of a home appliance with the smartphone. The smartphone then reads the ID from the NFC tag embedded in the remote and displays a visual copy of the remote. The text labels on the remote are automatically translated into the user's language by reading the default configuration of the smartphone. The user can directly control home appliances with the smartphone when network control is enabled.
  • 坂本大介, 坂本大介
    日本ロボット学会学術講演会予稿集(CD-ROM)  2012/09
  • 濱田健夫, 濱田健夫, 谷口祥平, 池島紗知子, 清水敬輔, 坂本大介, 坂本大介, 長谷川晶一, 稲見昌彦, 稲見昌彦, 五十嵐健夫, 五十嵐健夫
    日本バーチャルリアリティ学会大会論文集(CD-ROM)  2012/09
  • 2P1-O01 FuwaFuwa : Detecting Touch Position and Pressure Changes on Soft Objects Using Photoreflective Sensor(VR and Interface)
    SUGIURA Yuta, KAKEHI Gota, LEE Calista, SUGIMOTO Maki, SAKAMOTO Daisuke, INAMI Masahiko, IGARASHI Takeo
    ロボティクス・メカトロニクス講演会講演概要集  2012/05  The Japan Society of Mechanical Engineers
     
    We present the FuwaFuwa sensor module, a round, hand-size, wireless device for measuring the shape deformations of soft objects such as cushions and plush toys. It can be embedded in typical soft objects in the household without complex installation procedures and without spoiling the softness of the object because it requires no physical connection. Six LEDs in the module emit IR light in six orthogonal directions, and six corresponding photosensors measure the reflected light energy. One can easily convert almost any soft object into a touch-input device that can detect both touch position and surface displacement by embedding multiple FuwaFuwa sensor modules in the object.
  • 2P1-O02 Operation Method with Self-projectable Finger Gestures for Bipedal Robots(VR and Interface)
    SUGIURA Yuta, SAKAMOTO Daisuke, INAMI Masahiko, IGARASHI Takeo
    ロボティクス・メカトロニクス講演会講演概要集  2012/05  The Japan Society of Mechanical Engineers
     
    We propose an operation method for bipedal robots using two-fingered gestures on a multi-touch surface. We focus on finger gestures in which people represent a human with moving fingers, such as walking, running, kicking, and turning. These bipedal gestures are natural and intuitive enough for end users to control humanoid robots. The system captures these finger gestures on a multi-touch display as a direct operation method. The capturing method is easy and simple, yet robust enough for entertainment applications. We show an example application of the proposed method and a demonstration at an international exhibition, and conclude with the results of our observations and plans for future implementation of the method.
  • 杉浦裕太, 杉浦裕太, 筧豪太, LEE Calista, LEE Calista, 杉本麻樹, 杉本麻樹, 坂本大介, 坂本大介, 稲見昌彦, 稲見昌彦, 五十嵐健夫, 五十嵐健夫
    日本機械学会ロボティクス・メカトロニクス講演会講演論文集(CD-ROM)  2012/05
  • 杉浦裕太, 杉浦裕太, 坂本大介, 坂本大介, 稲見昌彦, 稲見昌彦, 五十嵐健夫, 五十嵐健夫
    日本機械学会ロボティクス・メカトロニクス講演会講演論文集(CD-ROM)  2012/05
  • 濱田健夫, 濱田健夫, 坂本大介, 坂本大介, 稲見昌彦, 稲見昌彦, 五十嵐健夫, 五十嵐健夫
    日本バーチャルリアリティ学会大会論文集(CD-ROM)  2011/09
  • 2A1-H05 A Dipole Field Object delivery algorithm with Object avoidance mechanism(Robots for Home/Office Application)
    YOSHIDA Shigeo, SAKAMOTO Daisuke, INAMI Masahiko, IGARASHI Takeo
    ロボティクス・メカトロニクス講演会講演概要集  2011/05  The Japan Society of Mechanical Engineers
     
    Robots are being introduced into various environments, such as the offices and homes where we live our daily lives. One of the most anticipated functions of a home robot is transporting objects in the living environment. This paper proposes an object-transportation algorithm for environments that contain obstacles. The algorithm extends object transportation based on the dipole field algorithm so that robots can flexibly avoid obstacles. We compute the midpoints of the triangles that do not contain obstacles, use Dijkstra's algorithm to compute a path that moves the robot to the object, and then compute a path that allows the robot to push the object to the goal using the dipole field algorithm. A minimal sketch of this path computation appears after this list.
  • 坂本 大介
    研究報告エンタテインメントコンピューティング(EC)  2010/12 
    Even though entertainment computing research is still in its early days, dedicated international conferences and journals already exist, and developing researchers who can be active on the international stage has become an urgent task. In this report, the author, who has only recently finished being a student, discusses how entertainment computing research can be learned and taught.
  • FUKUCHI Kentaro, SAKAMOTO Daisuke
    研究報告エンタテインメントコンピューティング(EC)  2010/12 
    This paper discusses the importance of studying the history of Entertainment Computing (EC). To advance EC research, historical study is needed to set the direction of the field and to evaluate research properly. Educational materials based on such surveys should also be created, both to develop human resources in EC and to communicate the necessity and usefulness of EC research to society. Finally, we discuss the importance of the working preservation of past products and research when studying the history of EC.
  • 杉浦裕太, 杉浦裕太, 筧豪太, 筧豪太, WITHANA Anusha Indrajith, WITHANA Anusha Indrajith, FERNANDO Charith Lasantha, FERNANDO Charith Lasantha, 坂本大介, 坂本大介, 稲見昌彦, 稲見昌彦, 五十嵐健夫, 五十嵐健夫
    情報処理学会シンポジウム論文集  2010/02
  • 代蔵巧, 代蔵巧, 坂本大介, 坂本大介, 杉浦裕太, 杉浦裕太, 小野哲雄, 稲見昌彦, 稲見昌彦, 五十嵐健夫, 五十嵐健夫
    情報処理学会シンポジウム論文集  2010/02
  • 加藤淳, 加藤淳, 坂本大介, 坂本大介, 五十嵐健夫, 五十嵐健夫
    プログラミング・シンポジウム予稿集  2010/01
  • FUKUCHI KENTARO, SAKAMOTO DAISUKE, SAKAMOTO DAISUKE, SAKAMOTO DAISUKE
    情報処理学会研究報告(CD-ROM)  2009/10
  • Kentaro Fukuchi, Daisuke Sakamoto
    IPSJ SIG technical reports  2009/08 
    Entertainment computing (EC) research is one of the hot areas in HCI. Many research groups have presented interactive entertainment applications; however, it is hard to say what entertainment computing actually is. In this report, we briefly review the annual EC symposium in Japan and try to identify what could become the central dogma of entertainment computing.
  • 1P1-F13 An Interface for Home Robots by Sketching Behaviors on a Top-down View of a Real World
    HONDA Koichiro, SAKAMOTO Daisuke, INAMI Masahiko, IGARASHI Takeo
    ロボティクス・メカトロニクス講演会講演概要集  2009/05  The Japan Society of Mechanical Engineers
     
    Recently, various kinds of home robots have been developed and are becoming common in our lives. However, most users of home robots have no knowledge of hardware or robotics, so they should not be forced to perform troublesome procedures or manipulation. In this paper, we introduce an intuitive interface for simple, direct manipulation of home robots, focusing on how users interact with robots when using them. In particular, our system adopts pen-stroke gestures on a computer screen as input commands. We designed and implemented several types of gestural commands representing corresponding tasks and then conducted usability experiments.
  • HONDA KOICHIRO, HONDA KOICHIRO, SAKAMOTO DAISUKE, SAKAMOTO DAISUKE, INAMI MASAHIKO, INAMI MASAHIKO, IGARASHI TAKEO, IGARASHI TAKEO
    日本機械学会ロボティクス・メカトロニクス講演会講演論文集(CD-ROM)  2009/05
  • 加藤淳, 加藤淳, 坂本大介, 坂本大介, 坂本大介, 稲見昌彦, 稲見昌彦, 五十嵐健夫, 五十嵐健夫
    情報処理学会シンポジウム論文集  2009/02
  • Sakamoto Daisuke
    IPSJ SIG technical reports  2008/06 
    In this paper, I briefly introduce three of my projects on entertainment computing that employ information technology and robotics technology. I then discuss applications of these technologies to entertainment computing.
  • ONO TETSUO, SAKAMOTO DAISUKE, OGAWA KOHEI, KOMAGOME DAISUKE
    情報処理学会研究報告  2008/01
  • SAKAMOTO DAISUKE, SAKAMOTO DAISUKE, KANDA TAKAYUKI, ONO TETSUO, ONO TETSUO, ISHIGURO HIROSHI, ISHIGURO HIROSHI, HAGITA NORIHIRO
    情報処理学会シンポジウム論文集  2007/03
  • SAKAMOTO DAISUKE, SAKAMOTO DAISUKE, KANDA TAKAYUKI, ONO TETSUO, ONO TETSUO, ISHIGURO HIROSHI, ISHIGURO HIROSHI, HAGITA NORIHIRO
    情報処理学会研究報告  2006/12
  • SAKAMOTO DAISUKE, SAKAMOTO DAISUKE, KANDA TAKAYUKI, ONO TETSUO, ONO TETSUO, ISHIGURO HIROSHI, ISHIGURO HIROSHI, HAGITA NORIHIRO
    人工知能学会知識ベースシステム研究会資料  2006/12
  • SAKAMOTO DAISUKE, SAKAMOTO DAISUKE, KANDA TAKAYUKI, ONO TETSUO, ONO TETSUO, ISHIGURO HIROSHI, ISHIGURO HIROSHI, HAGITA NORIHIRO
    電子情報通信学会技術研究報告  2006/12
  • SAKAMOTO DAISUKE, ONO TETSUO
    情報処理学会研究報告  2005/11
  • SAKAMOTO DAISUKE, ONO TETSUO
    人工知能学会知識ベースシステム研究会資料  2005/11
  • SAKAMOTO DAISUKE, OSADA JUN'ICHI, ZENJIRO, MIYAUCHI MITSURU, SATO TAKAMASA, UCHIMOTO TOMOHIRO, KITANO ISAMU, OKADA HAJIME, HONMA MASAHITO, KOMATSU TAKANORI, SUZUKI SHOJI, SUZUKI KEIJI, ONO TETSUO, MATSUBARA HITOSHI, HATA MASAYUKI, INUI HIDEO
    情報処理学会シンポジウム論文集  2005/09
  • SATO TAKAMASA, SAKAMOTO DAISUKE, UCHIMOTO TOMOHIRO, KITANO ISAMU, OKADA HAJIME, HONMA MASATO, KOMATSU TAKANORI, SUZUKI SHOJI, SUZUKI KEIJI, ONO TETSUO, MATSUBARA HITOSHI, HATA MASAYUKI, INUI HIDEO
    日本ロボット学会学術講演会予稿集(CD-ROM)  2005/09
  • SUZUKI SHOJI, SUZUKI KEIJI, MATSUBARA HITOSHI, ONO TETSUO, KOMATSU TAKANORI, UCHIMOTO TOMOHIRO, OKADA HAJIME, KITANO ISAMU, SAKAMOTO DAISUKE, SATO TAKAMASA, HONMA MASATO, HATA MASAYUKI, INUI HIDEO
    日本機械学会ロボティクス・メカトロニクス講演会講演論文集(CD-ROM)  2005/06
  • 坂本大介, 小野哲雄
    プログラミング・シンポジウム報告書  2005/01
  • Suzuki Sho'ji, Sato Takamasa, Honma Masato, Hata Masayuki, Inui Hideo, Suzuki Keiji, Matsubara Hitoshi, Ono Tetsuo, Komatsu Takanori, Uchimoto Tomohiro, Okada Hajime, Kitano Isamu, Sakamoto Daisuke
    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)  2005  The Japan Society of Mechanical Engineers
  • Robot Musical: A Design Method for Implementing Robot Behaviors
    坂本大介
    エンタテインメントコンピューティング2005  2005  情報処理学会
  • 坂本大介, 小野哲雄
    日本ソフトウエア科学会大会講演論文集(CD-ROM)  2005
  • SAKAMOTO DAISUKE, KANDA TAKAYUKI, ONO TETSUO, IMAI MICHITA, KAMASHIMA MASAYUKI, ISHIGURO HIROSHI
    情報処理学会シンポジウム論文集  2004/03
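
The following sketches illustrate two of the techniques summarized in abstracts earlier in this list. They are minimal illustrations written for this listing, not the authors' implementations.

First, a sketch of the two tilt-to-slider mappings compared in the smartphone tilt-operation study above (Position: the yaw tilt maps directly to the slider value; Rate: the tilt sets the slider's rate of change). The 30-degree maximum tilt follows the abstract; the rate gain and the clamping are assumptions for illustration.

    # Python sketch: Position vs. Rate mapping of smartphone yaw tilt to a slider in [0, 1].
    MAX_TILT_DEG = 30.0  # maximum tilt to either side, as in the abstract

    def position_mapping(yaw_deg):
        """Position method: the current tilt angle maps directly to a slider value."""
        yaw = max(-MAX_TILT_DEG, min(MAX_TILT_DEG, yaw_deg))
        return (yaw + MAX_TILT_DEG) / (2 * MAX_TILT_DEG)

    def rate_mapping(slider, yaw_deg, dt, gain=0.02):
        """Rate method: the tilt angle sets the slider's speed (the gain is an assumed constant)."""
        yaw = max(-MAX_TILT_DEG, min(MAX_TILT_DEG, yaw_deg))
        return max(0.0, min(1.0, slider + gain * yaw * dt))

    print(position_mapping(15.0))           # 0.75: the slider jumps to the mapped position
    print(rate_mapping(0.5, 15.0, dt=1.0))  # 0.8: the slider drifts right while tilted

Second, a sketch of the path computation described in the dipole-field object delivery abstract above: the midpoints of obstacle-free triangles form a graph, and Dijkstra's algorithm finds a path that moves the robot toward the object before the dipole field is used to push the object to the goal. The node coordinates and adjacency below are hypothetical; only the graph search step is shown.

    # Python sketch: Dijkstra's algorithm over midpoints of obstacle-free triangles.
    import heapq
    import math

    def dijkstra(nodes, edges, start, goal):
        """nodes: {id: (x, y)}; edges: {id: [neighbor ids]}; returns the shortest node path."""
        dist = {n: math.inf for n in nodes}
        prev = {}
        dist[start] = 0.0
        queue = [(0.0, start)]
        while queue:
            d, u = heapq.heappop(queue)
            if u == goal:
                break
            if d > dist[u]:
                continue
            for v in edges[u]:
                w = math.dist(nodes[u], nodes[v])  # Euclidean edge length
                if d + w < dist[v]:
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(queue, (d + w, v))
        path, n = [goal], goal
        while n != start:                          # walk predecessors back to the start
            n = prev[n]
            path.append(n)
        return list(reversed(path))

    # Hypothetical triangle midpoints and their adjacency.
    nodes = {0: (0.0, 0.0), 1: (1.0, 0.5), 2: (2.0, 0.0), 3: (2.0, 1.5)}
    edges = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
    print(dijkstra(nodes, edges, start=0, goal=3))  # [0, 1, 3]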

Association Memberships

  • IEEE   ACM   INFORMATION PROCESSING SOCIETY OF JAPAN   

Research Projects

  • Japan Science and Technology Agency (JST): FOREST Program (Fusion Oriented Research for disruptive Science and Technology)
    Date (from‐to) : 2023/04 -2026/03 
    Author : 坂本 大介
  • Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (B)
    Date (from‐to) : 2021/04 -2025/03 
    Author : 坂本 大介
  • Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (B)
    Date (from‐to) : 2020/04 -2023/03 
    Author : 山崎 晶子, 坂本 大介, 大澤 博隆, 小林 貴訓, 中西 英之, 山崎 敬一
  • Research on techniques for strengthening the engagement of remote audiences in online streaming
    NTT Human Informatics Laboratories:
    Date (from‐to) : 2022/08 -2023/03 
    Author : 坂本大介、小野哲雄
  • Noastec Foundation (Northern Advancement Center for Science & Technology, Hokkaido): Research and Development Grant Program
    Date (from‐to) : 2022/08 -2023/03 
    Author : 坂本大介, 福屋伸朗
  • Research and development of a tool to support active epidemiological investigations by public health centers
    Japan Science and Technology Agency (JST): Research Results Development Program
    Date (from‐to) : 2021/08 -2022/03 
    Author : 坂本 大介
  • The Okawa Foundation for Information and Telecommunications: FY2020 Research Grant
    Date (from‐to) : 2021/01 -2021/12 
    Author : 坂本 大介
  • Development of fundamental technologies for utilizing location information in infectious disease crisis management
    Japan Agency for Medical Research and Development (AMED): Technology development program for countermeasures against viral and other infectious diseases (basic research support)
    Date (from‐to) : 2020/10 -2021/06 
    Author : 奥村 貴史, 髙橋 邦彦, 坂本 大介, 大向 一輝, 山本 泰智, 河口 信夫, 升井 洋志, 関本 義秀, 江上 周作
  • Japan Society for the Promotion of Science:Grants-in-Aid for Scientific Research
    Date (from‐to) : 2016/04 -2021/03 
    Author : Shimoyama Haruhiko
     
    Cognitive behavioral therapy (CBT) has become a major means of treating mental health issues. Given the high prevalence of such issues in Japan, providing patients with CBT properly is an urgent task. Meanwhile, there has long been a problem of patients not visiting a clinician even when they have a mental health issue, and research into the service gap, that is, the difference between the need for and uptake of mental health services, has been ongoing. To fill the gap, we first developed internet-based CBT (ICBT) and a website to treat these issues. Next, we made a portal site that leads to the website and the ICBT apps. An RCT indicated that it significantly increased mental health literacy and decreased self-stigma. We then added an online guide by psychologists and developed a detailed manual. Finally, we conducted another RCT, which showed that the empathetic communication facilitated by the guide developed the working alliance needed to complete the ICBT.
  • NTT Service Evolution Laboratories:
    Date (from‐to) : 2019/08 -2020/03 
    Author : 小野哲雄, 坂本大介
  • Estimating the difficulty of bouldering routes in consideration of physical characteristics and supporting route creation
    Noastec Foundation (Northern Advancement Center for Science & Technology, Hokkaido): Research and Development Grant Program
    Date (from‐to) : 2019/08 -2020/03 
    Author : 坂本大介, 船戸大輔
  • Development of a robust gaze-input interface using the area cursor technique
    Noastec Foundation (Northern Advancement Center for Science & Technology, Hokkaido): Research and Development Grant Program
    Date (from‐to) : 2019/07 -2020/03 
    Author : 坂本大介
  • NTT Service Evolution Laboratories:
    Date (from‐to) : 2018/08 -2019/03 
    Author : 小野哲雄, 坂本大介
  • Barrier-free mental care through ICT-based cognitive behavioral therapy and the construction of support networks
    Japan Society for the Promotion of Science: Topic-Setting Program to Advance Cutting-Edge Humanities and Social Sciences Research
    Date (from‐to) : 2013/10 -2015/09 
    Author : 下山 晴彦, 高橋 美保, 平野 真理, 國吉 康夫, 坂本 大介, Edward Watkins, William Yule, 山本 奨, 石丸 径一郎, 中嶋 義文, 原田 誠一, 西田 文比古, 村瀬 嘉代子, 松丸 未来, 野田 香織
  • Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research, Grant-in-Aid for Young Scientists (B)
    Date (from‐to) : 2012/04 -2014/03 
    Author : SAKAMOTO Daisuke
     
    We created a system that supports domestic robots in performing household tasks by utilizing crowdsourcing. We consider that this makes it easy to create maps of a house, detect the location of an object, and perform complex tasks through crowdsourcing. On the other hand, there are some concerns about using crowdsourcing services, such as 1) real-time operation, 2) privacy issues, and 3) designing an appropriate user interface. In this research, we created a proof-of-concept prototype system and conducted an experiment to investigate the appropriateness of the system.
  • Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research, Grant-in-Aid for JSPS Fellows
    Date (from‐to) : 2008 -2009 
    Author : 坂本 大介
     
    In line with the research topic, this project developed an OS for communication robots, in particular an environment in which even people unfamiliar with robots can easily develop robot applications. For robots to spread widely in society, it is important that applications be developed by many users, and this research aimed to provide an environment that makes this possible. A prototype of the environment had been completed in the previous year, so this year we developed actual robot applications in order to evaluate it. Specifically, we 1) developed a flexible tablet-PC interface for instructing home robots; this technology was applied to, and is actually used in, a joint research project with the Media Interaction lab at the Upper Austria University of Applied Sciences; 2) developed a robot system that can actually cook through collaboration with a home robot; and 3) developed intuitive interfaces that allow users unfamiliar with robots to easily experience robot-based entertainment, carried out mainly in two projects, both of which were demonstrated at domestic and international conferences. We are currently researching dialogue technologies that enable more natural interaction with robots, and the environment developed in the previous year plays an important role here as well. We believe that demonstrating the role and significance of robots in the real world through such application development and demonstrations is very important for the development of the field. The results obtained through this development have also been fed back into the development of the current OS for communication robots, providing effective feedback.

Industrial Property Rights


