Researcher Database

Daisuke Sakamoto
Faculty of Information Science and Technology, Computer Science and Information Technology, Synergetic Information Engineering
Associate Professor

Researcher Profile and Settings

Affiliation

  • Faculty of Information Science and Technology, Computer Science and Information Technology, Synergetic Information Engineering

Job Title

  • Associate Professor

Degree

  • Ph.D. in Systems Information Science (Future University-Hakodate)

Profile

  • Daisuke Sakamoto is an Associate Professor in the Human-Computer Interaction Lab at Hokkaido University. He received his B.A. in Media Architecture, M.S. in Systems Information Science, and Ph.D. in Systems Information Science from Future University-Hakodate in 2004, 2006, and 2008, respectively. He was an intern researcher at ATR Intelligent Robotics and Communication Labs (2006-2008). He worked at The University of Tokyo as a Research Fellow of the Japan Society for the Promotion of Science (2008-2010). He joined the JST ERATO Igarashi Design Interface Project as a researcher (2010). He then returned to The University of Tokyo as an Assistant Professor (2011) and a Project Lecturer (2013-2016). His research interests include Human-Computer Interaction and Human-Robot Interaction, with a focus on how people interact with computing systems and on interaction design for those systems.

Research Interests

  • Entertainment Computing, Interaction Design, User Interface, Human-Robot Interaction, Human-Computer Interaction

Research Areas

  • Informatics / Human interfaces and interactions / Human-computer Interaction

Academic & Professional Experience

  • 2019/04 - Present Hokkaido University Faculty of Information Science and Technology Associate Professor
  • 2017/01 - Present Japan Science and Technology Agency ERATO Hasuo Metamathematics for Systems Design Project Research Advisor
  • 2019/04 - 2019/09 Waseda University Faculty of Science and Engineering
  • 2017/03 - 2019/03 Hokkaido University Graduate School of Information Science and Technology Associate Professor
  • 2018/04 - 2018/09 Waseda University Faculty of Science and Engineering
  • 2017/04 - 2018/03 Meiji University Graduate School of Advanced Mathematical Sciences
  • 2017/04 - 2017/09 Waseda University Faculty of Science and Engineering
  • 2013/01 - 2017/02 The University of Tokyo Graduate School of Information Science and Technology Project Lecturer
  • 2016/04 - 2016/08 The University of Tokyo College of Arts and Science Part-time lecturer
  • 2015/04 - 2015/08 The University of Tokyo College of Arts and Science Part-time lecturer
  • 2014/09 - 2014/09 Hokkaido University Graduate School of Information Science and Technology Part-time lecturer
  • 2013/09 - 2013/09 Hokkaido University Graduate School of Information Science and Technology Part-time lecturer
  • 2011/04 - 2013/03 Tokyo University of the Arts Art Media Center Part-time Lecturer
  • 2011/04 - 2013/03 Japan Science and Technology Agency ERATO Igarashi Design Interface Project Research Advisor
  • 2008/04 - 2013/03 Advanced Telecommunications Research Institute International Communication Robot Dept. Cooperative Researcher
  • 2011/04 - 2013/01 The University of Tokyo Graduate School of Information Science and Technology Assistant Professor
  • 2011/08 - 2011/10 University of Manitoba Department of Computer Science Visiting Researcher
  • 2010/04 - 2011/03 Japan Science and Technology Agency ERATO Igarashi Design Interface Project Researcher
  • 2008/10 - 2010/03 Japan Science and Technology Agency ERATO Igarashi Design Interface Project Collaborator
  • 2008/04 - 2010/03 Japan Society for the Promotion of Science The University of Tokyo Postdoctoral Research Fellow
  • 2006/04 - 2008/03 Advanced Telecommunications Research Institute International Communication Robot Dept. Intern

Education

  • 2006/04 - 2008/03  Future University-Hakodate  Graduate School of Systems Information Science  Ph.D. Course
  • 2004/04 - 2006/03  Future University-Hakodate  Graduate School of Systems Information Science  Master's Course
  • 2000/04 - 2004/03  Future University-Hakodate  Department of Systems Information Science

Association Memberships

  • ACM, Information Processing Society of Japan (IPSJ)

Research Activities

Published Papers

  • 巻口 誉宗, 高田 英明, 坂本 大介, 小野 哲雄
    IPSJ Transactions on Digital Content (DCON) 8 (1) 1 - 10 2187-8897 2020/02/26 [Refereed][Not invited]
     
    Aerial image projection methods using a semi-transparent screen or a half mirror are widely used in the entertainment field. These conventional methods require large-scale devices for the display area of the aerial image, and it is difficult to make the subject move widely. In this paper, we propose a movable, double-sided, transmission-type multi-layered aerial image display technology aimed at moving the aerial image off the stage and toward the audience seats. The technology is a simple optical system combining four displays and four half mirrors. The observer can view both sides of the object as aerial images from two directions, the front and back of the device, and can also observe two layers of near and far background aerial images from both sides through transmission and reflection by a half mirror. Since the depth ordering of the near and distant views does not depend on the viewing direction, multiple people can simultaneously view multi-layered, highly realistic aerial images from both sides of the device. The two background layers are shared between the two viewing directions by the multiple half-mirror structure. Although the proposed method has only four display surfaces, it produces a total of six aerial image layers, three on each side. We report on the optical configuration of the proposed method, a prototype implementation, and its application at actual events.
  • 崔 明根, 坂本 大介, 小野 哲雄
    IPSJ Journal 61 (2) 221 - 232 1882-7764 2020/02/15 [Not refereed][Not invited]
     
    Selecting a small target with an eye-gaze interface is difficult. Making an eye-gaze interface easy to use ordinarily requires redesigning the interface and/or increasing operation time. In this paper, we present a method that applies the idea of the bubble cursor, a kind of area cursor, to the eye-gaze interface in order to make small targets easy to select while maintaining operation time and the generality of the interface design. We performed an experiment to validate our concept by comparing three interfaces: the standard bubble cursor technique with a mouse, a standard eye-gaze interface with a point cursor, and the bubble cursor as an area cursor with the eye-gaze interface, in order to understand how the bubble cursor contributes to eye-gaze input. Results indicated that the bubble cursor with the eye-gaze interface was always faster than the standard point-cursor-based eye-gaze interface, and its usability score was also significantly higher than that of the standard eye-gaze interface. From these results, the bubble gaze cursor technique is an effective method for making eye-gaze pointing easier and faster.
  • Kenji Suzuki, Ryuuki Sakamoto, Daisuke Sakamoto, Tetsuo Ono
    Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI 2018, Barcelona, Spain, September 03-06, 2018 ACM 2018/09 [Not refereed][Not invited]
     
    We present new alternative interfaces for zooming out on a mobile device: Bounce Back and Force Zoom. These interfaces are designed to be used with a single hand. They use a pressure-sensitive multitouch technology in which the pressure itself is used to zoom. Bounce Back senses the intensity of pressure while the user is pressing down on the display. When the user releases his or her finger, the view bounces back to zoom out. Force Zoom also senses the intensity of pressure, and the zoom level is associated with this intensity. When the user presses down on the display, the view is scaled back according to the intensity of the pressure. We conducted a user study to investigate the efficiency and usability of our interfaces by comparing them with a previous pressure-sensitive zooming interface and the Google Maps zooming interface as a baseline. Results showed that Bounce Back and Force Zoom were evaluated as significantly superior to the previous research; the number of operations was significantly lower than with the default mobile Google Maps interface and the previous research.
  • Kenji Suzuki, Daisuke Sakamoto, Sakiko Nishi, Tetsuo Ono
    Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI 2019, Taipei, Taiwan, October 1-4, 2019. ACM 66:1-66:6  2019/10 [Refereed][Not invited]
  • Lei Ma, Daisuke Sakamoto, Tetsuo Ono
    Proceedings of the 7th International Conference on Human-Agent Interaction, HAI 2019, Kyoto, Japan, October 06-10, 2019 ACM 324 - 326 2019/10 [Refereed][Not invited]
  • Subaru Ouchi, Kazuki Mizumaru, Daisuke Sakamoto, Tetsuo Ono
    Proceedings of the 7th International Conference on Human-Agent Interaction, HAI 2019, Kyoto, Japan, October 06-10, 2019 ACM 232 - 233 2019/10 [Refereed][Not invited]
  • 巻口 誉宗, 高田 英明, 本田 健悟, 坂本 大介, 小野 哲雄
    Proceedings of the Multimedia, Distributed, Cooperative, and Mobile Symposium (DICOMO 2019) 176 - 179 2019/06/26 [Not refereed][Not invited]
  • 鈴木健司, 岡部和昌, 坂本竜基, 坂本大介
    IPSJ Journal (Web) 60 (2) 354 - 363 (web only) 1882-7764 2019/02 [Refereed][Not invited]
  • 黒澤 紘生, 坂本 大介, 小野 哲雄
    IPSJ Journal 60 (2) 364 - 375 1882-7764 2019/02 [Refereed][Not invited]
     
    We present a target selection method for smartwatches that employs a combination of a tilt operation and electromyography (EMG). First, a user tilts his/her arm to indicate the direction of cursor movement on the smartwatch; then he/she applies force to the arm. EMG senses the force and moves the cursor in the direction in which the user is tilting the arm. In this way, the user can manipulate the cursor on the smartwatch with minimal effort, simply by tilting the arm and applying force to it. We conducted an experiment to investigate the method's performance and usability. Results showed that participants selected small targets with an accuracy greater than 93.89%. In addition, performance significantly improved compared to previous tilting operation methods. Likewise, accuracy remained stable as targets became smaller, indicating that the method is unaffected by the "fat finger problem".
  • Motohiro Makiguchi, Daisuke Sakamoto, Hideaki Takada, Kengo Honda, Tetsuo Ono
    Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST 2019, New Orleans, LA, USA, October 20-23, 2019 ACM 625 - 637 2019 [Refereed][Not invited]
  • Yuta Sugiura, Hikaru Ibayashi, Toby Chong, Daisuke Sakamoto, Natsuki Miyata, Mitsunori Tada, Takashi Okuma, Takeshi Kurata, Takashi Shinmura, Masaaki Mochimaru, Takeo Igarashi
    Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, VRCAI 2018, Hachioji, Japan, December 02-03, 2018 ACM 21:1-21:6  2018/12 [Refereed][Not invited]
  • 水丸 和樹, 坂本 大介, 小野 哲雄
    IPSJ Journal 59 (12) 2279 - 2287 1882-7764 2018/12 [Refereed][Not invited]
     
    A unique space called social space is formed in human groups. This space is strongly formed when the people belonging to the group are actively communicating, and it influences the behavior of third parties who do not belong to the group. Moreover, overlapping speech occurs unconsciously in everyday human conversation, expressing entrainment, interest, understanding, and so on to the other party, as well as being an important factor in producing active conversation. Meanwhile, humanoid robots have recently been put into practical use, and demonstration experiments are actively being carried out. When assuming a future environment in which humans coexist with multiple robots, it is necessary to consider the space formed within a group of robots, but such research has not been conducted sufficiently. In this research, we implemented active communication between robots by overlapping their speech and investigated how humans perceived the space which emerged from it. The results indicated that overlapping speech improved the impression of conversational activity and that the space which emerged in the group of robots affected the behavior of a person observing the conversation.
  • 山下 峻, 藍 圭介, 坂本 大介, 小野 哲雄
    IPSJ Journal 59 (11) 1965 - 1977 1882-7764 2018/11 [Refereed][Not invited]
     
    We present a composition support system that shows candidate melodies to follow user-created music. Non-experts in music composition, such as beginners, have difficulty creating melodies, so a system that supports composition would be effective. We developed an algorithm that generates candidate melodies following the user's input, inspired by the idea of predictive text input interfaces. After generating and inspecting the candidate melodies produced by the proposed method, we found that it had room for improvement, so we refined the method to improve the quality of the candidates. We then built the candidate display into a composition interface to support music-writing activities. We conducted evaluation studies to investigate the effectiveness of the proposed method and its improvement. In the first study, we compared three melody generation methods and two dictionaries; the results confirmed the effectiveness of the proposed method, which combines a Markov process with pattern matching over the two dictionaries. In the second study, we compared melodies generated before and after the improvement. The scores of the music generated by the improved system were higher, confirming the effectiveness of the improvement.
  • Hiroki Kurosawa, Daisuke Sakamoto, Tetsuo Ono
    Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI 2018, Barcelona, Spain, September 03-06, 2018 ACM 43:1-43:11  2018/09 [Refereed][Not invited]
  • 春日 遥, 坂本 大介, 棟方 渚, 小野 哲雄
    IPSJ Journal 59 (8) 1520 - 1531 1882-7764 2018/08 [Refereed][Not invited]
     
    Pets have been humans' best friends since ancient times. People have been living with pets since then, and relationships between people and their pets, understood as family members at home, have been well researched. Social robots have recently entered family lives, and a new research field is emerging that examines triadic relationships between people, pets, and social robots. An exploratory field experiment was conducted to investigate how a social robot affects human-animal relationships within the home. In this experiment, a small humanoid robot, NAO, was introduced into the homes of 10 families, and 22 participants (with 12 pets: 4 dogs and 8 cats), called "owners" hereafter, were asked to interact with the humanoid robot. The robot was operated under two conditions: speaking positively to the pets and speaking negatively to the pets. The contents of the utterances from robot to pet, which comprised about 30 seconds of about 2 minutes of dialogue, were different under the two conditions. The results of this study indicated that changing the attitude of NAO toward the pets affected the owners' impressions of the robot.
  • Yuki Koyama, Issei Sato, Daisuke Sakamoto, Takeo Igarashi
    ACM TRANSACTIONS ON GRAPHICS 36 (4) 48:1-48:11  0730-0301 2017/07 [Refereed][Not invited]
     
    Parameter tweaking is a common task in various design scenarios. For example, in color enhancement of photographs, designers tweak multiple parameters such as "brightness" and "contrast" to obtain the best visual impression. Adjusting one parameter is easy; however, if there are multiple correlated parameters, the task becomes much more complex, requiring many trials and a large cognitive load. To address this problem, we present a novel extension of Bayesian optimization techniques, in which the system decomposes the entire parameter-tweaking task into a sequence of one-dimensional line search queries that are easy for humans to perform by manipulating a single slider. In addition, we present a novel concept called the crowd-powered visual design optimizer, which queries crowd workers, and we provide a working implementation of this concept. Our single-slider manipulation microtask design for crowdsourcing accelerates the convergence of the optimization relative to existing comparison-based microtask designs. We applied our framework to two different design domains, photo color enhancement and material BRDF design, and thereby showed its applicability to various design domains.
  • Hiroaki Mikami, Daisuke Sakamoto, Takeo Igarashi
    Conference on Human Factors in Computing Systems - Proceedings 2017- 6208 - 6219 2017/05/02 [Refereed][Not invited]
     
    Experimentation plays an essential role in exploratory programming, and programmers apply version control operations when switching part of the source code back to a past state during experimentation. However, these operations, which we refer to as micro-versioning, are not well supported in current programming environments. We first examined previous studies to clarify the requirements for a micro-versioning tool. We then developed a micro-versioning tool that displays visual cues representing possible micro-versioning operations in a textual code editor. Our tool includes a history model that generates meaningful candidates by combining a regional undo model and a tree-structured undo model. The history model uses code executions as delimiters to segment text edit operations into meaningful groups. A user study involving programmers indicated that our tool satisfies the above-mentioned requirements and that it is useful for exploratory programming. Copyright is held by the owner/author(s). Publication rights licensed to ACM.
  • Haruka Kasuga, Daisuke Sakamoto, Nagisa Munekata, Tetsuo Ono
    Proceedings of the 5th International Conference on Human Agent Interaction, HAI 2017, Bielefeld, Germany, October 17 - 20, 2017 ACM 61 - 69 2017 [Refereed][Not invited]
  • Chia-Ming Chang, Koki Toda, Daisuke Sakamoto, Takeo Igarashi
    AUTOMOTIVEUI 2017: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON AUTOMOTIVE USER INTERFACES AND INTERACTIVE VEHICULAR APPLICATIONS 65 - 73 2017 [Refereed][Not invited]
     
    Self-driving technologies have been increasingly developed and tested in recent years (e.g., Volvo's and Google's self-driving cars). However, only a limited number of investigations have so far been conducted into communication between self-driving cars and pedestrians. For example, when a pedestrian is about to cross a street, that pedestrian needs to know the intention of the approaching self-driving car. In the present study, we designed a novel interface known as "Eyes on a Car" to address this problem. We added eyes onto a car so as to establish eye-contact communication between that car and pedestrians. The car looks at the pedestrian in order to indicate its intention to stop. This novel interface design was evaluated via a virtual reality (VR) simulated environment featuring a street-crossing scenario. The evaluation results show that pedestrians can make the correct street-crossing decision more quickly if the approaching car has the novel "eyes" interface than in the case of normal cars. In addition, the results show that pedestrians feel safer about crossing a street if the approaching car has eyes and if the eyes look at them.
  • Kazuyo Mizuno, Daisuke Sakamoto, Takeo Igarashi
    IS and T International Symposium on Electronic Imaging Science and Technology 58 - 69 2470-1173 2017 [Refereed][Not invited]
     
    Category search is a searching activity in which the user has an example image and searches for other images of the same category. This activity often requires appropriate keywords for the target categories, making it difficult to search for images without prior knowledge of those keywords. Text annotations attached to images are a valuable resource for helping users find appropriate keywords for the target categories. In this article, we propose an image exploration system for category image search that requires no prior knowledge of category keywords. Our system integrates content-based and keyword-based image exploration and seamlessly switches exploration types according to user interests. The system enables users to learn target categories in both image and keyword representations through exploration activities. Our user study demonstrated the effectiveness of image exploration using our system, especially for searching images of an unfamiliar category, compared to single-modality image search.
  • Mari Hirano, Kanako Ogura, Mizuho Kitahara, Daisuke Sakamoto, Haruhiko Shimoyama
    Health Psychology Open 4 (1) 2055102917707185  2055-1029 2017 [Refereed][Not invited]
     
    Most computerized cognitive behavioral therapy has targeted restoration, and few programs have targeted primary prevention. The purpose of this study was to obtain knowledge for the further development of preventive mental healthcare applications. We developed a personal mental healthcare application that aimed to give users the chance to manage their mental health through self-monitoring and regulating their behavior. In a 30-day field trial, the results showed an improvement in mood score through carrying out suggested actions, and the depressive mood of the participants significantly decreased after the trial. The feasibility of the application and the remaining problems were confirmed.
  • 杉浦裕太, LEE Calista, 尾形正泰, WITHANA Anusha, 坂本大介, 牧野泰才, 五十嵐健夫, 稲見昌彦
    IPSJ Journal (Web) 57 (12) 2542 - 2553 (web only) 1882-7764 2016/12 [Refereed][Not invited]
  • 尉林暉, 杉浦裕太, 坂本大介, TOBY Chong, 宮田なつき, 多田充徳, 大隈隆史, 蔵田武志, 新村猛, 持丸正明, 五十嵐健夫
    IPSJ Journal (Web) 57 (12) 2610 - 2616 (web only) 1882-7764 2016/12 [Refereed][Not invited]
  • Morihiro Nakamura, Yuki Koyama, Daisuke Sakamoto, Takeo Igarashi
    COMPUTER GRAPHICS FORUM 35 (7) 323 - 332 0167-7055 2016/10 [Refereed][Not invited]
     
    We present an interactive design system for designing free-formed bamboo-copters, where novices can easily design free-formed, even asymmetric bamboo-copters that successfully fly. The designed bamboo-copters can be fabricated using digital fabrication equipment, such as a laser cutter. Our system provides two useful functions for facilitating this design activity. First, it visualizes a simulated flight trajectory of the current bamboo-copter design, which is updated in real time during the user's editing. Second, it provides an optimization function that automatically tweaks the current bamboo-copter design such that the spin quality-how stably it spins-and the flight quality-how high and long it flies-are enhanced. To enable these functions, we present non-trivial extensions over existing techniques for designing free-formed model airplanes [UKSI14], including a wing discretization method tailored to free-formed bamboo-copters and an optimization scheme for achieving stable bamboo-copters considering both spin and flight qualities.
  • Kenji Suzuki, Kazumasa Okabe, Ryuuki Sakamoto, Daisuke Sakamoto
    Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI 2016 478 - 482 2016/09/06 [Refereed][Not invited]
     
    We present a concept of using a movable background to navigate a caret on small mobile devices. The standard approach to selecting text on mobile devices is to directly touch the location in the text that the user wants to select. This is problematic because the user's finger hides the area to be selected. Our concept is to use a movable background to navigate the caret: users place a caret by tapping on the screen and then move the background by touching and dragging. In this method, the caret is fixed on the screen, and the user drags the background text to navigate the caret. We compared our technique with the iPhone's default UI and found that, even though participants were using our technique for the first time, the average task completion time was no different from, or even faster than, the default UI in the case of the small font size, and our technique received a significantly higher usability score than the default UI.
  • Daisuke Sakamoto, Yuta Sugiura, Masahiko Inami, Takeo Igarashi
    COMPUTER 49 (7) 20 - 25 0018-9162 2016/07 [Refereed][Not invited]
  • 深堀孔明, 坂本大介, 五十嵐健夫
    Computer Software (JSSST) 33 (2) 2_116 - 2_124 (J-STAGE) 0289-6540 2016/04 [Refereed][Not invited]
  • 鈴木良平, 坂本大介, 五十嵐健夫
    Computer Software (JSSST) 33 (1) 1_103 - 1_110 (J-STAGE) 0289-6540 2016/04 [Refereed][Not invited]
  • Yuki Koyama, Daisuke Sakamoto, Takeo Igarashi
    34TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2016 2520 - 2532 2016 [Refereed][Not invited]
     
    Color enhancement is a very important aspect of photo editing. Even when photographers have tens or hundreds of photographs, they must enhance each photo one by one by manually tweaking sliders in software, such as brightness and contrast, because automatic color enhancement is not always satisfactory for them. To support this repetitive manual task, we present self-reinforcing color enhancement, in which the system implicitly and progressively learns the user's preferences by training on their photo-editing history. The more photos the user enhances, the more effectively the system supports the user. We present a working prototype system called SelPh and describe the algorithms used to perform the self-reinforcement. We conducted a user study to investigate how photographers would use a self-reinforcing system to enhance a collection of photos. The results indicate that the participants were satisfied with the proposed system and strongly agreed that the self-reinforcing approach is preferable to the traditional workflow.
  • Shigeo Yoshida, Takumi Shirokura, Yuta Sugiura, Daisuke Sakamoto, Tetsuo Ono, Masahiko Inami, Takeo Igarashi
    IEEE COMPUTER GRAPHICS AND APPLICATIONS 36 (1) 62 - 69 0272-1716 2016/01 [Refereed][Not invited]
  • 小山裕己, 坂本大介, 五十嵐健夫
    Computer Software (JSSST) 33 (1) 63 - 77 (J-STAGE) 0289-6540 2016 [Refereed][Not invited]
  • 三上裕明, 坂本大介, 五十嵐健夫
    IPSJ Transactions on Programming (Web) 8 (4) 1 - 14 (web only) 1882-7802 2015/12 [Refereed][Not invited]
  • Kenji Suzuki, Kazumasa Okabe, Ryuuki Sakamoto, Daisuke Sakamoto
    UIST 2015 - Adjunct Publication of the 28th Annual ACM Symposium on User Interface Software and Technology 79 - 80 2015/11/06 [Refereed][Not invited]
     
    We present "Fix and Slide", a technique that uses a movable background to place a caret insertion point and to select text on a mobile device. The standard approach to selecting text on mobile devices is to touch the text where the user wants to select; sometimes a pop-up menu is displayed, from which the user chooses a "select" mode and then starts to specify the area to be selected. A big problem is that the user's finger hides the area to be selected; this is called the "fat finger problem." We use the movable background to navigate the caret. First, the user places a caret by tapping on the screen and then moves the background by touching and dragging. In this situation, the caret is fixed on the screen, so the user can move the background to bring the caret to the desired position. We implemented the Fix and Slide technique on an iOS device (iPhone) to demonstrate the impact of this text selection technique on small mobile devices.
  • Hikaru Ibayashi, Yuta Sugiura, Daisuke Sakamoto, Natsuki Miyata, Mitsunori Tada, Takashi Okuma, Takeshi Kurata, Masaaki Mochimaru, Takeo Igarashi
    SIGGRAPH Asia 2015 Posters, SA 2015 24:1  2015/11/02 [Refereed][Not invited]
     
    Architecture-scale design requires two different viewpoints: a small-scale internal view, i.e., a first-person view of the space to see local details as an occupant of the space, and a large-scale external view, i.e., a top-down view of the entire space to make global decisions when designing the space. Architects or designers need to switch between these two viewpoints, but this can be inefficient and time-consuming. We present a collaborative design system, Dollhouse, to address this problem. By using our system, users can discuss the design of the space from two viewpoints simultaneously. This system also supports a set of interaction techniques to facilitate communication between these two user groups.
  • Hikaru Ibayashi, Yuta Sugiura, Daisuke Sakamoto, Natsuki Miyata, Mitsunori Tada, Takashi Okuma, Takeshi Kurata, Masaaki Mochimaru, Takeo Igarashi
    SIGGRAPH Asia 2015 Emerging Technologies, SA 2015 8:1-8:2  2015/11/02 [Refereed][Not invited]
     
    This research addresses architecture-scale problem-solving involving the design of living or working spaces, such as architecture and floor plans. Such design systems require two different viewpoints: A small-scale internal view, i.e., a first-person view of the space to see local details as an occupant of the space, and a large-scale external view, i.e., a top-down view of the entire space to make global decisions when designing the space. Architects or designers need to switch between these two viewpoints to make various decisions, but this can be inefficient and time-consuming. We present a system to address the problem, which facilitates asymmetric collaboration between users requiring these different viewpoints. One group of users comprises the designers of the space, who observe and manipulate the space from a top-down view using a large tabletop interface. The other group of users represents occupants of the space, who observe and manipulate the space based on internal views using head-mounted displays (HMDs). The system also supports a set of interaction techniques to facilitate communication between these two user groups. Our system can be used for the design of various spaces, such as offices, restaurants, operating rooms, parks, and kindergartens.
  • 杉浦裕太, 筧豪太, WHITANA Anusha, 坂本大介, 杉本麻樹, 五十嵐健夫, 稲見昌彦
    Transactions of the Virtual Reality Society of Japan 20 (3) 209 - 217 1344-011X 2015/09 [Refereed][Not invited]
  • Yuki Koyama, Daisuke Sakamoto, Takeo Igarashi
    Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH '15, Los Angeles, CA, USA, August 9-13, 2015, Posters Proceedings ACM 2:1  2015/08 [Refereed][Not invited]
  • 中嶋誠, 坂本大介, 五十嵐健夫
    IPSJ Journal (Web) 56 (4) 1317 - 1327 (web only) 1882-7764 2015/04 [Refereed][Not invited]
  • Naoki Sasaki, Hsiang-Ting Chen, Daisuke Sakamoto, Takeo Igarashi
    COMPUTER ANIMATION AND VIRTUAL WORLDS 26 (2) 185 - 194 1546-4261 2015/03 [Refereed][Not invited]
     
    We present facetons, geometric modeling primitives designed for building architectural models, which are especially effective in a virtual environment where six-degrees-of-freedom input devices are available. A faceton is an oriented point floating in the air that defines a plane of infinite extent passing through the point. The polygonal mesh model is constructed by taking the intersection of the planes associated with the facetons. With the simple interaction of facetons, users can easily create 3D architectural models. The faceton primitive and its interaction reduce the overhead associated with standard polygonal mesh modeling, where users have to manually specify vertices and edges, which could be far away. The faceton representation is inspired by research on boundary representations (B-rep) and constructive solid geometry, but it is driven by a novel adaptive bounding algorithm and is specifically designed for 3D modeling activities in an immersive virtual environment. We describe the modeling method and our current implementation. The implementation is still experimental but shows potential as a viable alternative to traditional modeling methods. Copyright (c) 2014 John Wiley & Sons, Ltd.
  • Daisuke Sakamoto, Takanori Komatsu, Takeo Igarashi
    Transactions of the Human Interface Society 17 (1/4) 85 - 95 2186-828X 2015/02 [Refereed][Not invited]
  • Koumei Fukahori, Daisuke Sakamoto, Takeo Igarashi
    CHI 2015: PROCEEDINGS OF THE 33RD ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS 3019 - 3028 2015 [Refereed][Not invited]
     
    We propose subtle foot-based gestures named foot plantar-based (FPB) gestures that are used with sock-placed pressure sensors. In this system, the user can control a computing device by changing his or her foot plantar distributions, e.g., pressing the floor with his/her toe. Because such foot movement is subtle, it is suitable for use especially in a public space such as a crowded train. In this study, we first conduct a guessability study to design a user-defined gesture set for interaction with a computing device. Then, we implement a gesture recognizer with a machine learning technique. To avoid unexpected gesture activations, we also collect foot plantar pressure patterns made during daily activities such as walking, as negative training data. Additionally, we evaluate the unobservability of FPB gestures by using crowdsourcing. Finally, we conclude with several applications to further illustrate the utility of FPB gestures.
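    As a rough illustration of the recognition step, a minimal nearest-centroid classifier with a negative "walking" class might look as follows. The sensor layout, gesture names, and data are invented, and the paper's actual recognizer is a trained machine-learning model; this sketch only shows how negative training data can suppress unintended activations.

    ```python
    # Hypothetical sketch: classify a foot-plantar pressure vector by its
    # nearest gesture centroid, rejecting anything closest to the "walking"
    # class collected as negative data (all values here are invented).
    import math

    def centroid(samples):
        n = len(samples)
        return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def train(labelled):
        """labelled: dict gesture_name -> list of pressure vectors."""
        return {name: centroid(vs) for name, vs in labelled.items()}

    def recognize(model, sample):
        name = min(model, key=lambda k: dist(model[k], sample))
        return None if name == "walking" else name  # reject the negative class

    model = train({
        "toe_press":  [[0.9, 0.1, 0.1], [0.8, 0.2, 0.1]],
        "heel_press": [[0.1, 0.1, 0.9], [0.2, 0.1, 0.8]],
        "walking":    [[0.5, 0.5, 0.5], [0.4, 0.6, 0.5]],  # negative data
    })
    ```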
  • Ryohei Suzuki, Daisuke Sakamoto, Takeo Igarashi
    CHI 2015: PROCEEDINGS OF THE 33RD ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS 57 - 66 2015 [Refereed][Not invited]
     
    We present a video annotation system called "AnnoTone", which can embed various contextual information describing a scene, such as geographical location. The system then allows the user to edit the video using this contextual information, enabling one to, for example, overlay the video with maps or graphical annotations. AnnoTone converts annotation data into high-frequency audio signals (which are inaudible to the human ear) and then transmits them from a smartphone speaker placed near a video camera. This scheme makes it possible to add annotations using standard video cameras, with no equipment required other than a smartphone. We designed the audio watermarking protocol using dual-tone multi-frequency signaling, and developed a general-purpose annotation framework including an annotation generator and extractor. We conducted a series of performance tests to understand the reliability and quality of the watermarking method. We then created several examples of video-editing applications that use annotations to demonstrate the usefulness of AnnoTone, including an After Effects plugin.
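    The dual-tone multi-frequency idea can be sketched as follows. The sampling rate, near-ultrasonic frequency table, and two-bit symbol framing are assumptions for illustration, not AnnoTone's actual protocol: one tone from a "low" group plus one from a "high" group encodes a symbol, and a Goertzel detector recovers it.

    ```python
    # Hedged sketch of DTMF-style signaling in a near-ultrasonic band.
    # Frequencies, symbol table, and frame length are invented.
    import math

    RATE = 44100                 # assumed sampling rate (Hz)
    LOW = [18000, 18500]         # hypothetical "row" frequencies (Hz)
    HIGH = [19000, 19500]        # hypothetical "column" frequencies (Hz)

    def encode(symbol, n=2048):
        """Synthesize one 2-bit symbol as the sum of a LOW and a HIGH tone."""
        f1, f2 = LOW[symbol // 2], HIGH[symbol % 2]
        return [math.sin(2 * math.pi * f1 * t / RATE) +
                math.sin(2 * math.pi * f2 * t / RATE) for t in range(n)]

    def goertzel(samples, freq):
        """Signal power near `freq`, via the Goertzel algorithm."""
        coeff = 2 * math.cos(2 * math.pi * freq / RATE)
        s1 = s2 = 0.0
        for x in samples:
            s1, s2 = x + coeff * s1 - s2, s1
        return s1 * s1 + s2 * s2 - coeff * s1 * s2

    def decode(samples):
        """Recover the symbol by finding the strongest LOW and HIGH tones."""
        row = max(range(len(LOW)), key=lambda i: goertzel(samples, LOW[i]))
        col = max(range(len(HIGH)), key=lambda i: goertzel(samples, HIGH[i]))
        return row * 2 + col
    ```

    A real deployment would also need framing, error detection, and robustness to speaker/microphone roll-off near 20 kHz, which this sketch omits.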
  • Hamada Takeo, Taniguchi Shohei, Ikejima Sachiko, Shimizu Keisuke, Sakamoto Daisuke, Hasegawa Shoichi, Inami Masahiko, Igarashi Takeo
    Transactions of the Virtual Reality Society of Japan, The Virtual Reality Society of Japan, 20 (3) 229 - 238 1344-011X 2015 [Refereed][Not invited]
     
    We propose a puppet-based user interface, named Avatouch, for specifying a massage position without looking at the control device. Users can indicate a massage position on their backs by touching the corresponding position on the puppet's back (Figure 1). We also developed a massage chair system with both a push-button interface and Avatouch. Experimental results confirm that almost half of the subjects kept Avatouch well away from their faces. Furthermore, two participants adjusted the massage position without looking at the plushie. In this paper, we first explain Avatouch. We then describe the massage chair system and a user study conducted to observe how people use each interface. At the end of the paper, we discuss the advantages and disadvantages of Avatouch.
  • Takahito Hamanaka, Daisuke Sakamoto, Takeo Igarashi
    ACM International Conference Proceeding Series 2014- 13:1-13:10  2014/11/11 [Refereed][Not invited]
     
    We present a system called Aibiki, which supports users in practicing the shamisen, a three-stringed Japanese musical instrument, via an automatic and adaptive score scroll. We chose Nagauta as an example of a type of shamisen music. Each piece typically lasts 10-40 min; furthermore, both hands are required to play the shamisen, so it is not desirable to turn pages manually during a performance. In addition, some characteristic issues are particular to the shamisen, including the variable tempo of the music and the unique timbre of the instrument, which makes pitch detection difficult using standard techniques. In this work, we describe an application that automatically scrolls through a musical score, initially at a predefined tempo. Because there is often a difference between the predefined tempo and the tempo at which the musician plays the piece, the application adjusts the speed of the score scroll based on input from a microphone. We evaluated the performance of the application via a user study. We found that the system was able to scroll the score in time with the actual performance, and that the system was useful for practicing and playing the shamisen.
  • Jun Kato, Daisuke Sakamoto, Takeo Igarashi, Masataka Goto
    HAI 2014 - Proceedings of the 2nd International Conference on Human-Agent Interaction 345 - 351 2014/10/29 [Refereed][Not invited]
     
    In this paper, we propose a to-do list interface for sharing tasks between humans and multiple agents, including robots and software personal assistants. While much work on software architectures aims to achieve efficient (semi-)autonomous task coordination among humans and agents, little work can be found on user interfaces for user-oriented, flexible task coordination. Instead, most existing human-agent interfaces are designed to command a single agent to handle specific kinds of tasks. Our interface, in contrast, is designed as a platform for sharing any kind of task between users and multiple agents. When agents can handle a task, they ask for details and permission to execute it. Otherwise, they try to support users or simply keep silent. New tasks can be registered not only by humans but also by agents, when errors occur that can only be fixed by human users. We present the interaction design and implementation of the interface, Sharedo, with three example agents, followed by brief user feedback collected from a preliminary user study.
  • Yuki Koyama, Daisuke Sakamoto, Takeo Igarashi
    UIST 2014 - Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology 65 - 74 2014/10/05 [Refereed][Not invited]
     
    Parameter tweaking is one of the fundamental tasks in the editing of visual digital contents, such as correcting photo color or executing blendshape facial expression control. A problem with parameter tweaking is that it often requires much time and effort to explore a high-dimensional parameter space. We present a new technique to analyze such high-dimensional parameter spaces to obtain a distribution of human preference. Our method uses crowdsourcing to gather pairwise comparisons between various parameter sets. As a result of the analysis, the user obtains a goodness function that computes the goodness value of a given parameter set. This goodness function enables two interfaces for exploration: Smart Suggestion, which provides suggestions of preferable parameter sets, and VisOpt Slider, which interactively visualizes the distribution of goodness values on sliders and gently optimizes slider values while the user is editing. We created four applications with different design parameter spaces. As a result, the system could facilitate the user's design exploration.
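    One plausible way to turn crowdsourced pairwise comparisons into a goodness score is a Bradley-Terry-style fit, sketched below; the paper's actual estimation method may differ, and the item names and votes here are toy data standing in for parameter sets.

    ```python
    # Bradley-Terry sketch: gradient ascent on the log-likelihood of
    # observed (winner, loser) pairs yields one score per item.
    import math

    def fit_goodness(items, comparisons, lr=0.1, epochs=500):
        """comparisons: list of (winner, loser) pairs over `items`."""
        score = {i: 0.0 for i in items}
        for _ in range(epochs):
            for win, lose in comparisons:
                # P(win beats lose) is logistic in the score gap
                p = 1.0 / (1.0 + math.exp(score[lose] - score[win]))
                step = lr * (1.0 - p)  # gradient of the log-likelihood
                score[win] += step
                score[lose] -= step
        return score

    # Toy crowd data: set "a" is consistently preferred to "b", and "b" to "c".
    votes = [("a", "b")] * 5 + [("b", "c")] * 5 + [("a", "c")] * 5
    goodness = fit_goodness(["a", "b", "c"], votes)
    ```

    In a real high-dimensional setting the scores would be interpolated over the continuous parameter space rather than fitted per discrete item.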
  • Fangzhou Wang, Yang Li, Daisuke Sakamoto, Takeo Igarashi
    International Conference on Intelligent User Interfaces, Proceedings IUI 169 - 178 2014 [Refereed][Not invited]
     
    One of the difficulties with standard route maps is accessing multi-scale routing information. The user needs to display maps at both a large scale to see details and a small scale to see an overview, but this requires tedious interaction such as zooming in and out. We propose using a hierarchical structure for a route map, called a "Route Tree", to address this problem, and describe an algorithm to automatically construct such a structure. A Route Tree is a hierarchical grouping of all small route segments that allows quick access to meaningful large- and small-scale views. We propose two Route Tree applications, "RouteZoom" for interactive map browsing and "TreePrint" for route information printing, to show the applicability and usability of the structure. We conducted a preliminary user study of RouteZoom, and the results showed that RouteZoom significantly lowers the interaction cost of obtaining information from a map compared to a traditional interactive map. © 2014 ACM.
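    The hierarchical grouping idea can be illustrated with a toy bottom-up merge. The merge criterion used here (smallest combined length of adjacent segments) is an assumption for illustration, not the paper's algorithm; the point is only that each tree level yields a coarser, still-contiguous view of the route.

    ```python
    # Invented sketch: repeatedly merge the cheapest pair of ADJACENT route
    # segments, so zooming out walks up the resulting binary tree.

    def build_route_tree(segments):
        """segments: list of (name, length_km). Returns a nested merge tuple."""
        nodes = [(name, length) for name, length in segments]
        while len(nodes) > 1:
            # pick the adjacent pair with the smallest combined length
            i = min(range(len(nodes) - 1),
                    key=lambda j: nodes[j][1] + nodes[j + 1][1])
            left, right = nodes[i], nodes[i + 1]
            merged = ((left[0], right[0]), left[1] + right[1])
            nodes[i:i + 2] = [merged]  # replace the pair with its parent
        return nodes[0]
    ```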
  • Koumei Fukahori, Daisuke Sakamoto, Jun Kato, Takeo Igarashi
    Conference on Human Factors in Computing Systems - Proceedings 1453 - 1458 2014 [Refereed][Not invited]
     
    Programmers write and edit their source code in a text editor. However, when they design the look-and-feel of a game application, such as the image of a game character or the arrangement of a button, it would be more intuitive to edit the application by directly interacting with these objects in the game window. Although modern game engines provide this facility, they use a highly structured framework and limit what the programmer can edit. In this paper, we present CapStudio, a development environment for visual applications with an interactive screencast. A screencast is a movie-player-like output window with code-editing functionality. The screencast works with a traditional text editor. Modifications to source code in the text editor and to visual elements on the screencast are immediately reflected in each other. We created an example application and confirmed the feasibility of our approach.
  • Makoto Nakajima, Daisuke Sakamoto, Takeo Igarashi
    Conference on Human Factors in Computing Systems - Proceedings 321 - 330 2014 [Refereed][Not invited]
     
    We present an animation creation workflow for integrating offline physical, painted media into the digital authoring of Flash-style animations. Generally, animators create animations with standardized digital authoring software. However, the results tend to lack the individualism or atmosphere of physical media. In contrast, illustrators have skills in painting physical media but have limited experience in animation. To incorporate their skills, we present a workflow that integrates the offline painting and digital animation creation processes in a labor-saving manner. First, a user makes a rough sketch of the visual elements and defines their movements using our digital authoring software with a sketch interface. Then these images are exported to printed pages, and users can paint using offline physical media. Finally, the work is scanned and imported back into the digital content, forming a composite animation that combines digital and physical media. We present an implementation of this system to demonstrate its workflow. We also discuss the advantages of using physical media in digital animations through design evaluations.
  • James E. Young, Takeo Igarashi, Ehud Sharlin, Daisuke Sakamoto, Jeffrey Allen
    ACM Transactions on Interactive Intelligent Systems 3 (4) 23:1-23:36  2160-6463 2014 [Refereed][Not invited]
     
    We present a series of projects for end-user authoring of interactive robotic behaviors, with a particular focus on the style of those behaviors; we call this approach Style-by-Demonstration (SBD). We provide an overview of three different SBD platforms: SBD for animated-character interactive locomotion paths, SBD for interactive robot locomotion paths, and SBD for interactive robot dance. The primary contribution of this article is a detailed cross-project SBD analysis of the interaction designs and evaluation approaches employed, with the goal of providing general guidelines, stemming from our experiences, for both developing and evaluating SBD systems. In addition, we provide the first full account of our Puppet Master SBD algorithm, with an explanation of how it evolved through the projects. © 2014 ACM.
  • Daniel Saakes, Vipul Choudhary, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    23rd International Conference on Artificial Reality and Telexistence, ICAT 2013, Tokyo, Japan, December 11-13, 2013 IEEE Computer Society 13 - 19 2013/12 [Refereed][Not invited]
  • Naoki Sasaki, Hsiang-Ting Chen, Daisuke Sakamoto, Takeo Igarashi
    Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST 77 - 82 2013 [Refereed][Not invited]
     
    We present faceton, a geometric modeling primitive designed for building architectural models using a six-degrees-of-freedom (DoF) input device in a virtual environment (VE). A faceton is given as an oriented point floating in the air and defines a plane of infinite extent passing through the point. The polygonal mesh model is constructed by taking the intersection of the planes associated with the facetons. With simple drag-and-drop and group interactions on facetons, users can easily create 3D architectural models in the VE. The faceton primitive and its interaction reduce the overhead associated with standard polygonal mesh modeling in a VE, where users have to manually specify vertices and edges that could be far apart. The faceton representation is inspired by research on boundary representations (B-rep) and constructive solid geometry (CSG), but it is driven by a novel adaptive bounding algorithm and is specifically designed for 3D modeling activities in an immersive virtual environment. Copyright © 2013 ACM.
  • Daisuke Sakamoto, Takanori Komatsu, Takeo Igarashi
    MobileHCI 2013 - Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services 69 - 78 2013 [Refereed][Not invited]
     
    We propose a technique called voice augmented manipulation (VAM) for augmenting user operations in a mobile environment. This technique augments user interactions on mobile devices, such as finger gestures and button presses, with the voice. For example, when a user makes a finger gesture on a mobile phone and voices a sound into it, the operation continues until the user stops making the sound or makes another finger gesture. The VAM interface also provides a button-based interface, in which the function connected to the button is augmented by voiced sounds. Two experiments verified the effectiveness of the VAM technique and showed that repeated finger gestures decreased significantly compared to current touch-input techniques, suggesting that VAM is useful in supporting user control in a mobile environment. © 2013 ACM.
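    A minimal sketch of the voice-continuation behavior, assuming a simple per-frame RMS energy threshold stands in for "voicing" detection (the threshold, frame format, and function names are invented, not the paper's implementation):

    ```python
    # Invented sketch: an operation triggered by a gesture keeps running
    # for as long as the user sustains a voiced sound, gauged per frame.
    import math

    THRESHOLD = 0.1  # assumed energy floor for "voicing"

    def rms(frame):
        """Root-mean-square energy of one audio frame."""
        return math.sqrt(sum(x * x for x in frame) / len(frame))

    def operation_active(frames):
        """Yield True for each voiced frame; stop at the first pause."""
        for frame in frames:
            if rms(frame) < THRESHOLD:
                break
            yield True

    # e.g. scrolling continues for three voiced frames, then stops at silence
    frames = [[0.5, -0.5]] * 3 + [[0.01, -0.01]]
    ```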
  • Ko Mizoguchi, Daisuke Sakamoto, Takeo Igarashi
    HUMAN-COMPUTER INTERACTION - INTERACT 2013, PT IV 8120 603 - 610 0302-9743 2013 [Refereed][Not invited]
     
    A scrollbar is the most basic component of a graphical user interface. It is usually displayed on one side of an application window when the displayed document is larger than the window. However, the scrollbar is mostly presented as a simple bar without much information, and there is still plenty of room for improvement. In this paper, we propose an overview scrollbar that displays an overview of the entire document on the scrollbar itself, and we implemented four types of overview scrollbars that use different compression methods to render the overviews. We conducted a user study to investigate how people use these scrollbars and measured their performance. Our results suggest that overview scrollbars are more usable than a traditional scrollbar when people search for targets that are recognizable in the overview.
  • Jun Kato, Daisuke Sakamoto, Takeo Igarashi
    Conference on Human Factors in Computing Systems - Proceedings 3097 - 3100 2013 [Refereed][Not invited]
     
    Current programming environments use textual or symbolic representations. While these representations are appropriate for describing logical processes, they are not appropriate for representing raw values such as human and robot posture data, which are necessary for handling gesture input and controlling robots. To address this issue, we propose Picode, a text-based development environment augmented with inline visual representations: photos of humans and robots. With Picode, the user first takes a photo to bind it to posture data. She then drags and drops the photo into the code editor, where it is displayed as an inline image. A preliminary user study revealed positive effects of taking photos on the programming experience. Copyright © 2013 ACM.
  • Yuta Sugiura, Calista Lee, Masayasu Ogata, Anusha Indrajith Withana, Yasutoshi Makino, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    CHI Conference on Human Factors in Computing Systems, CHI '12, Extended Abstracts Volume, Austin, TX, USA, May 5-10, 2012 ACM 1443 - 1444 2012/05 [Refereed][Not invited]
  • Shigeo Yoshida, Daisuke Sakamoto, Yuta Sugiura, Masahiko Inami, Takeo Igarashi
    SIGGRAPH Asia 2012 Emerging Technologies, SA 2012 2012 [Refereed][Not invited]
     
    RoboJockey is an interface for creating robot behavior and giving people a new entertainment experience with robots, in particular, making robots dance, as in "disc jockey" and "video jockey" (Figure 1, left). Users can create continuous robot dance behaviors on the interface by using a simple visual language (Figure 1, right). The system generates music with a beat and choreographs the robots using the user-created behaviors. RoboJockey has a multi-touch tabletop interface that supports multi-user collaboration: every object is designed as a circle and can be operated from any position around the tabletop. RoboJockey supports a humanoid robot, which is capable of expressing human-like dance behaviors (Figure 1, center). Copyright © 2012 ACM, Inc.
  • Amy Wibowo, Daisuke Sakamoto, Jun Mitani, Takeo Igarashi
    Proceedings of the 6th International Conference on Tangible, Embedded and Embodied Interaction, TEI 2012 99 - 102 2012 [Refereed][Not invited]
     
    This paper introduces DressUp, a computerized system for designing dresses with 3D input using the form of the human body as a guide. It consists of a body-sized physical mannequin, a screen, and tangible prop tools for drawing in 3D on and around the mannequin. As the user draws, he/she modifies or creates pieces of digital cloth, which are displayed on a model of the mannequin on the screen. We explore the capacity of our 3D input tools to create a variety of dresses. We also describe observations gained from users designing actual physical garments with the system. © 2012 ACM.
  • Genki Furumi, Daisuke Sakamoto, Takeo Igarashi
    ITS 2012 - Proceedings of the ACM Conference on Interactive Tabletops and Surfaces 193 - 196 2012 [Refereed][Not invited]
     
    The screen of a tabletop computer is often occluded by physical objects such as coffee cups. This makes it difficult to see the virtual elements under the physical objects (visibility) and manipulate them (manipulability). Here we present a user interface widget, called "SnapRail," to address these problems, especially occlusion of a manipulable collection of virtual discrete elements such as icons. SnapRail detects a physical object on the surface and the virtual elements under the object. It then snaps the virtual elements to a rail widget that appears around the object. The user can then manipulate the virtual elements along the rail widget. We conducted a preliminary user study to evaluate the potential of this interface and collect initial feedback. The SnapRail interface received positive feedback from participants of the user study. © 2012 ACM.
  • Kohei Matsumura, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    International Conference on Intelligent User Interfaces, Proceedings IUI 305 - 306 2012 [Refereed][Not invited]
     
    We present universal earphones that use both a proximity sensor and a skin conductance sensor, and we demonstrate several implicit interaction techniques they achieve by automatically detecting the context of use. The universal earphones have two main features. The first is detecting the left and right ears, which routes audio to the correct ear; the second is detecting shared use of the earphones, which provides mixed stereo sound to both earphones. These features not only free users from having to check the left and right sides of the earphones, but also enable them to enjoy sharing stereo audio with other people.
  • Yuta Sugiura, Calista Lee, Masayasu Ogata, Anusha Withana, Yasutoshi Makino, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    Conference on Human Factors in Computing Systems - Proceedings 725 - 734 2012 [Refereed][Not invited]
     
    PINOKY is a wireless ring-like device that can be externally attached to any plush toy as an accessory that animates the toy by moving its limbs. A user is thus able to instantly convert any plush toy into a soft robot. The user can control the toy remotely or input the movement desired by moving the plush toy and having the data recorded and played back. Unlike other methods for animating plush toys, PINOKY is non-intrusive, so alterations to the toy are not required. In a user study, 1) the roles of plush toys in the participants' daily lives were examined, 2) how participants played with plush toys without PINOKY was observed, 3) how they played with plush toys with PINOKY was observed, and their reactions to the device were surveyed. On the basis of the results, potential applications were conceptualized to illustrate the utility of PINOKY. Copyright 2012 ACM.
  • Jeffrey Allen, James E. Young, Daisuke Sakamoto, Takeo Igarashi
    Proceedings of the Designing Interactive Systems Conference, DIS '12 592 - 601 2012 [Refereed][Not invited]
     
    As robots continue to enter people's everyday spaces, we argue that it will be increasingly important to consider the robots' movement style as an integral component of their interaction design. That is, aspects of the robot's movement that are not directly related to the task at hand (e.g., picking up a ball) can have a strong impact on how people perceive that action (e.g., as aggressive or hesitant). We call these elements the movement style. We believe that perceptions of this kind of style will be highly dependent on the culture, group, or individual, so people will need the ability to customize their robot. Therefore, in this work we use Style by Demonstration, a style-focused variant of the more traditional programming-by-demonstration technique, and present the Puppet Dancer system, an interface for constructing paired and interactive robotic dances. In this paper we detail the Puppet Dancer interface and interaction design, explain our new algorithms for teaching dance by demonstration, and present the results of a formal qualitative study. © 2012 ACM.
  • Jun Kato, Daisuke Sakamoto, Takeo Igarashi
    Proceedings of the Designing Interactive Systems Conference, DIS '12 248 - 257 2012 [Refereed][Not invited]
     
    There are many toolkits for physical UIs, but most physical UI applications are not locomotive. When programmers want to make things move around in the environment, they face difficulties related to robotics. Toolkits for robot programming, unfortunately, are usually not as accessible as those for building physical UIs. To address this interdisciplinary issue, we propose Phybots, a toolkit that allows researchers and interaction designers to rapidly prototype applications with locomotive robotic things. The contributions of this research are the combination of a hardware setup, a software API, its underlying architecture, and a graphical runtime debugging tool that supports the whole prototyping activity. This paper introduces the toolkit, applications, and lessons learned from three user studies. © 2012 ACM.
  • Sharedo: Task Sharing between Humans and Robots via a To-Do List
    Jun Kato, Daisuke Sakamoto, Takeo Igarashi
    19th Workshop on Interactive Systems and Software (WISS 2011) 2011/12 [Refereed][Not invited]
  • PINOKY: A Ring-Shaped Device for Animating Plush Toys
    Yuta Sugiura, Calista Lee, Masayasu Ogata, Yasutoshi Makino, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    19th Workshop on Interactive Systems and Software (WISS 2011) 2011/12 [Refereed][Not invited]
  • FuwaFuwa: A Method for Measuring Touch Position and Pressure on Soft Objects Using Reflective Photosensors, and Its Applications
    Yuta Sugiura, Gota Kakehi, Anusha Withana, Calista Lee, Daisuke Sakamoto, Maki Sugimoto, Masahiko Inami, Takeo Igarashi
    Entertainment Computing 2011 (EC2011) 2011/10/07 [Refereed][Not invited]
  • Jun Kato, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    IPSJ Journal, Information Processing Society of Japan, 52 (4) 1425 - 1437 0387-5806 2011/04 [Refereed][Not invited]
     
    Small mobile robots are expected to be used to help with daily tasks at home. We need sophisticated user interfaces for them. However, prototyping robot applications is still difficult for software programmers without prior knowledge of robotics, including many researchers in the field of Human-Computer Interaction. We developed a software toolkit called "Andy", with which programmers can make robots move and push objects on a flat surface with one API call and receive their two-dimensional motion events by registering listeners. The design of the APIs is influenced by the programming style of...
  • Yuta Sugiura, Gota Kakehi, Daisuke Sakamoto, Anusha Withana, Masahiko Inami, Takeo Igarashi
    IPSJ Journal, Information Processing Society of Japan, 52 (2) 737 - 742 0387-5806 2011/02 [Refereed][Not invited]
     
    We propose an operating method for bipedal robots that uses two-fingered gestures on a multi-touch surface. We focus on finger gestures in which people represent human motion with moving fingers, such as walking, running, kicking, and turning. These bipedal gestures are natural and intuitive enough for end-users to control humanoid robots. The system captures these finger gestures on a multi-touch display as a direct operation method. The capturing method is easy and simple, but robust enough for entertainment applications. We show an example application of our proposed method, and d...
  • Yuta Sugiura, Anusha Withana, Teruki Shinohara, Masayasu Ogata, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    SIGGRAPH Asia 2011 Emerging Technologies, SA'11 2011 [Refereed][Not invited]
     
    We propose a cooperative cooking robot system that operates with humans in an open environment. The system can cook a meal by pouring various ingredients into a boiling pot on an induction heating cooker and adjusting the heating strength according to a recipe that is developed by the user. Our contribution is in the design of the system incorporating robotic- and human-specific elements in a shared workspace so as to achieve a cooperative rudimentary cooking capability. First, we provide a graphical user interface to display detailed cooking instructions to the user. Second, we use small mobile robots instead of built-in arms to save space, improve flexibility, and increase safety. Third, we use special cooking tools that are shared with the robot. We hope insights obtained in this study will be useful for the design of other household systems in the future. A previous version of our system has been presented [1]. This demonstration will show an extended version with a new robot and improved interaction design.
  • Yuta Sugiura, Gota Kakehi, Anusha Withana, Calista Lee, Daisuke Sakamoto, Maki Sugimoto, Masahiko Inami, Takeo Igarashi
    UIST'11 - Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology 509 - 516 2011 [Refereed][Not invited]
     
    We present the FuwaFuwa sensor module, a round, hand-size, wireless device for measuring the shape deformations of soft objects such as cushions and plush toys. It can be embedded in typical soft objects in the household without complex installation procedures and without spoiling the softness of the object because it requires no physical connection. Six LEDs in the module emit IR light in six orthogonal directions, and six corresponding photosensors measure the reflected light energy. One can easily convert almost any soft object into a touch-input device that can detect both touch position and surface displacement by embedding multiple FuwaFuwa sensor modules in the object. A variety of example applications illustrate the utility of the FuwaFuwa sensor module. An evaluation of the proposed deformation measurement technique confirms its effectiveness. © 2011 ACM.
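    As a hypothetical illustration of how readings from several embedded modules could be combined into a touch position, an intensity-weighted centroid is one simple option. The module coordinates, readings, and function name below are invented for illustration; the paper's actual localization method is not specified here.

    ```python
    # Invented sketch: weight each module's position by its reflected-light
    # intensity change and take the centroid as the touch estimate.

    def locate_touch(modules):
        """modules: list of ((x, y), intensity_delta). Returns (x, y)."""
        total = sum(d for _, d in modules)
        x = sum(px * d for (px, _), d in modules) / total
        y = sum(py * d for (_, py), d in modules) / total
        return (x, y)

    # A touch nearer the second of two modules pulls the estimate toward it:
    readings = [((0, 0), 1.0), ((4, 0), 3.0)]
    ```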
  • Gota Kakehi, Yuta Sugiura, Anusha Withana, Calista Lee, Naohisa Nagaya, Daisuke Sakamoto, Maki Sugimoto, Masahiko Inami, Takeo Igarashi
    ACM SIGGRAPH 2011 Emerging Technologies, SIGGRAPH'11 5  2011 [Refereed][Not invited]
     
    Soft objects are widely used in our day-to-day lives, and provide both comfort and safety in contrast to hard objects. Also, soft objects are able to provide a natural and rich haptic sensation. In human-computer interaction, soft interfaces have been shown to be able to increase emotional attachment between human and machines, and increase the entertainment value of the interaction. We propose the FuwaFuwa sensor, a small, flexible and wireless module to effectively measure shape deformation in soft objects using IR-based directional photoreflectivity measurements. By embedding multiple FuwaFuwa sensors within a soft object, we can easily convert any soft object into a touch-input device able to detect both touch position and surface displacement. Furthermore, since it is battery-powered and equipped with wireless communication, it can be easily installed in any soft object. Besides that, because the FuwaFuwa sensor is small and wireless, it can be inserted into the soft object easily without affecting its soft properties.
  • Kexi Liu, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    29TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS 647 - 656 2011 [Refereed][Not invited]
     
    As various robots come into our homes, the need for efficient robot task management tools is arising. Current tools are designed for controlling individual robots independently, so they are not ideally suited to assigning coordinated actions among multiple robots. To address this problem, we developed a management tool for home robots with a graphical editing interface. The user assigns instructions by selecting a tool from a toolbox and sketching on a bird's-eye view of the environment. Layering supports the management of multiple tasks in the same room. The layered graphical representation gives a quick overview of, and access to, rich information tied to the physical environment. This paper describes the prototype system and reports on our evaluation of the system.
  • Masahiro Shiomi, Daisuke Sakamoto, Takayuki Kanda, Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS 3 (1) 27 - 40 1875-4791 2011/01 [Refereed][Not invited]
     
    We developed a networked robot system in which ubiquitous sensors support robot sensing and a human operator processes the robot's decisions during interaction. To achieve semi-autonomous operation for a communication robot functioning in real environments, we developed an operator-requesting mechanism that enables the robot to detect situations that it cannot handle autonomously. Therefore, a human operator helps by assuming control with minimum effort. The robot system consists of a humanoid robot, floor sensors, cameras, and a sound-level meter. For helping people in real environments, we implemented such basic communicative behaviors as greetings and route guidance in the robot and conducted a field trial at a train station to investigate the robot system's effectiveness. The results attest to the high acceptability of the robot system in a public space and also show that the operator-requesting mechanism correctly requested help in 84.7% of the necessary situations; the operator only had to control 25% of the experiment time in the semi-autonomous mode with a robot system that successfully guided 68% of the visitors.
  • Foldy: Teaching a Robot How to Fold Clothes through GUI Operations
    Yuta Sugiura, Daisuke Sakamoto, Tabare Gowon, Daiki Takahashi, Masahiko Inami, Takeo Igarashi
    18th Workshop on Interactive Systems and Software (WISS 2010) 7 - 12 2010/12 [Refereed][Not invited]
  • matereal: A Toolkit for Prototyping Interactive Robot Applications
    加藤淳, 坂本大介, 五十嵐健夫
    18th Workshop on Interactive Systems and Software (WISS 2010) 83 - 88 2010/12 [Refereed][Not invited]
  • RoboJockey: An Interface for Continuous Robot Performance
    代蔵巧, 坂本大介, 杉浦裕太, 小野哲雄, 稲見昌彦, 五十嵐健夫
    Interaction 2010, Interactive Presentation (Premium) 2010/03 [Refereed][Not invited]
  • Walky: An Operating Method for Walking Robots Using Anthropomorphic Finger Motions
    杉浦裕太, 筧豪太, Anusha I. Withana, Charith L. Fernando, 坂本大介, 稲見昌彦, 五十嵐健夫
    Interaction 2010, Interactive Presentation (Premium) 2010/03 [Refereed][Not invited]
  • Takumi Shirokura, Daisuke Sakamoto, Yuta Sugiura, Tetsuo Ono, Masahiko Inami, Takeo Igarashi
    UIST 2010 - 23rd ACM Symposium on User Interface Software and Technology, Adjunct Proceedings 399 - 400 2010 [Refereed][Not invited]
     
    We developed the RoboJockey (Robot Jockey) interface for coordinating robot actions, such as dancing, similar to a "disc jockey" or "video jockey". The system enables a user to choreograph a dance for a robot to perform using a simple visual language. Users can coordinate humanoid robot actions with combinations of arm and leg movements. Every action is automatically performed to the background music and beat. RoboJockey gives end users a new entertainment experience with robots.
  • Jun Kato, Daisuke Sakamoto, Takeo Igarashi
    UIST 2010 - 23rd ACM Symposium on User Interface Software and Technology, Adjunct Proceedings 387 - 388 2010 [Refereed][Not invited]
     
    We introduce a technique to detect simple "surfing" gestures (moving a hand horizontally) on a standard keyboard by analyzing sounds recorded in real time with a microphone attached close to the keyboard. This technique allows the user to maintain focus on the screen while surfing on the keyboard. Since the technique uses a standard keyboard without any modification, the user can take full advantage of the input functionality and tactile quality of a favorite keyboard supplemented with our interface.
  • Yuta Sugiura, Daisuke Sakamoto, Anusha Withana, Masahiko Inami, Takeo Igarashi
    CHI2010: PROCEEDINGS OF THE 28TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, VOLS 1-4 2427 - + 2010 [Refereed][Not invited]
     
    We propose a cooking system that operates in an open environment. The system cooks a meal by pouring various ingredients into a boiling pot on an induction heating cooker and adjusts the heating strength according to the user's instructions. We then describe how the system incorporates robotic- and human-specific elements in a shared workspace so as to achieve a cooperative rudimentary cooking capability. First, we use small mobile robots instead of built-in arms to save space, improve flexibility and increase safety. Second, we use detachable visual markers to allow the user to easily configure the real-world environment. Third, we provide a graphical user interface to display detailed cooking instructions to the user. We hope insights obtained in this experiment will be useful for the design of other household systems in the future.
  • Sakamoto Daisuke, Takumi Shirokura, Yuta Sugiura, Tetsuo Ono, Masahiko Inami, Takeo Igarashi
    PROCEEDINGS OF THE 7TH INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTER ENTERTAINMENT TECHNOLOGY (ACE 2010) 53 - 56 2010 [Refereed][Not invited]
     
    We developed RoboJockey (Robot Jockey), an interface for coordinating robot actions such as dancing. Like a "disc jockey" or "video jockey" who selects and plays recorded music or video for an audience, RoboJockey lets a user select and play robot actions, giving people a new entertainment experience with robots. The system enables a user to choreograph a robot dance using a simple visual language. Every icon on the interface is circular and can be operated from any position around the tabletop interface. Users can coordinate the mobile robot's actions with combinations of backward, forward, and rotating movements, and the humanoid robot's actions with combinations of arm and leg movements. Every action is automatically performed to background music. We demonstrated RoboJockey at a Japanese domestic symposium and confirmed that people enjoyed using the system and successfully created entertaining robot dances.
  • Cooky: A Cooking-Order Instruction Interface and a Cooking Robot
    杉浦裕太, 坂本大介, Withana Anusha, 稲見昌彦, 五十嵐健夫
    17th Workshop on Interactive Systems and Software (WISS 2009) 1341-870X 2009/12 [Refereed][Not invited]
  • SHIOMI Masahiro, SAKAMOTO Daisuke, KANDA Takayuki, ISHI Carlos Toshinori, ISHIGURO Hiroshi, HAGITA Norihiro
    The Transactions of the Institute of Electronics, Information and Communication Engineers A (IEICE) 92 (11) 773 - 783 0913-5707 2009/11 [Refereed][Not invited]
     
    This paper describes a semi-autonomous communication robot system we developed. To achieve efficient semi-autonomous operation while reducing the operator's workload, we developed an operator-call algorithm that autonomously detects situations the robot cannot resolve on its own and calls a human operator. The called operator teleoperates the robot as needed to handle the situation; that is, the robot basically operates autonomously while interacting with people, and calls the operator only when a problem occurs. To verify the usefulness of the developed system, we conducted a field trial using a semi-autonomous communication robot providing route guidance in a train station. In the trial, the robot guided visitors correctly 68.1% of the time when operating semi-autonomously and 29.9% of the time when operating fully autonomously, while the operator teleoperated the robot for only 25% of the experiment time. These results suggest that a single operator could operate multiple robots simultaneously.
  • Thomas Seifried, Michael Haller, Stacey D. Scott, Florian Perteneder, Christian Rendl, Daisuke Sakamoto, Masahiko Inami
    ACM International Conference on Interactive Tabletops and Surfaces, ITS 2009, Banff / Calgary, Alberta, Canada, November 23-25, 2009 ACM 33 - 40 2009/11 [Refereed][Not invited]
  • Daisuke Sakamoto, Kotaro Hayashi, Takayuki Kanda, Masahiro Shiomi, Satoshi Koizumi, Hiroshi Ishiguro, Tsukasa Ogasawara, Norihiro Hagita
    International Journal of Social Robotics 1 (2) 157 - 169 1875-4791 2009/04 [Refereed][Not invited]
     
    This paper reports a method that uses humanoid robots as a communication medium. Even though many interactive robots are being developed, their interactivity remains much poorer than that of humans due to their limited perception abilities. In our approach, the role of interactive robots is limited to that of a broadcasting medium, used to explore the best way to attract people's interest in the information the robots provide. We propose using robots as a passive social medium, in which they behave as if they are talking with each other. We conducted an eight-day field experiment at a train station to investigate the effects of such a passive social medium.
  • A Multi-touch Display Interface for Operating Multiple Robots
    加藤淳, 坂本大介, 稲見昌彦, 五十嵐健夫
    Interaction 2009, Interactive Presentation 2009/03 [Refereed][Not invited]
  • Daisuke Sakamoto, Hiroshi Ishiguro
    Kansei Engineering International, Japan Society of Kansei Engineering 2009/01 [Refereed][Not invited]
  • Kohei Ogawa, Christoph Bartneck, Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro
    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2 9 - + 2009 [Refereed][Not invited]
     
    The first robotic copies of real humans have become available. They enable their users to be physically present in multiple locations at the same time. This study investigates what influence the embodiment of an agent has on its persuasiveness and its perceived personality. Is a robotic copy as persuasive as its human counterpart? Does it have the same personality? We performed an experiment in which the embodiment of the agent was the independent variable and the persuasiveness and perceived personality were the dependent measurements. The persuasive agent advertised a Bluetooth headset. The results show that the android was as persuasive as a real human or a video recording of a real human. The personality of the participant had a considerable influence on the measurements. Participants who were more open to new experiences rated the persuasive agent lower on agreeableness and extroversion. They were also more willing to spend money on the advertised product.
  • Jun Kato, Daisuke Sakamoto, Masahiko Inami, Takeo Igarashi
    Conference on Human Factors in Computing Systems - Proceedings 3443 - 3448 2009 [Refereed][Not invited]
     
    Robots must be given some form of command to perform a complex task; even robots that carry out their tasks autonomously require an initial instruction. We therefore need interfaces for operating and teaching robots. Natural language, joysticks, and other pointing devices are currently used for this purpose, but these interfaces make it difficult to operate multiple robots simultaneously. We developed a multi-touch interface with a top-down view from a ceiling camera for controlling multiple mobile robots. The user specifies a vector field on the view, which all robots then follow. This paper describes the user interface, its implementation, and future work on the project.
  • Daisuke Sakamoto, Koichiro Honda, Masahiko Inami, Takeo Igarashi
    CHI2009: PROCEEDINGS OF THE 27TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, VOLS 1-4 197 - 200 2009 [Refereed][Not invited]
     
    Numerous robots have been developed, and some are already being used in homes, institutions, and workplaces. Despite the development of useful robot functions, little attention has so far been paid to robot user interfaces. General users find it hard to understand what robots are doing and what kind of work they can do. This paper presents an interface for commanding home robots using stroke gestures on a computer screen. The interface allows the user to control robots and design their behaviors by sketching robot behaviors and actions on a top-down view from ceiling cameras. To convey a feeling of directly controlling the robots, our interface employs the live camera view. In this study, we focused on a house-cleaning task typical of home robots and developed a sketch interface for designing the behaviors of vacuuming robots.
  • Masahiro Shiomi, Daisuke Sakamoto, Takayuki Kanda, Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita
    HRI 2008 - Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction: Living with Robots 303 - 310 2008 [Refereed][Not invited]
     
    This paper reports an initial field trial with a prototype of a semi-autonomous communication robot at a train station. We developed an operator-requesting mechanism to achieve semi-autonomous operation for a communication robot functioning in real environments. The mechanism autonomously detects situations that the robot cannot handle by itself, and a human operator then helps by assuming control of the robot. This approach gives semi-autonomous robots the ability to function naturally with minimum human effort. Our system consists of a humanoid robot and ubiquitous sensors. The robot has basic communicative behaviors such as greeting and route guidance. The experimental results revealed that the operator-requesting mechanism correctly requested the operator's help in 85% of the necessary situations; the operator had to control the robot for only 25% of the experiment time in the semi-autonomous mode, and the system successfully guided 68% of the passengers. At the same time, this trial provided the opportunity to gather user data for the further development of natural behaviors for such robots operating in real environments.
  • SAKAMOTO DAISUKE, KANDA TAKAYUKI, ONO TETSUO, ISHIGURO HIROSHI, HAGITA NORIHIRO
    IPSJ journal 一般社団法人情報処理学会 48 (12) 3729 - 3738 1882-7764 2007/12/15 [Not refereed][Not invited]
     
    In this research, we realize human telepresence by developing a remote-controlled android system called Geminoid HI-1. Experimental results confirmed that participants felt a stronger presence of the operator when he talked through the android than when he appeared on a video monitor in a video-conference system. In addition, participants talked with the robot naturally and evaluated its human-likeness as equal to that of a man on a video monitor. We conclude by discussing a remote-control system for telepresence that uses a human-like android robot as a new telecommunication medium.
  • Kotaro Hayashi, Daisuke Sakamoto, Takayuki Kanda, Masahiro Shiomi, Satoshi Koizumi, Hiroshi Ishiguro, Tsukasa Ogasawara, Norihiro Hagita
    HRI 2007 - Proceedings of the 2007 ACM/IEEE Conference on Human-Robot Interaction - Robot as Team Member 137 - 144 2007 [Refereed][Not invited]
     
    This paper reports a method that uses humanoid robots as a communication medium. Many interactive robots are under development, but due to their limited perception, their interactivity is still far poorer than that of humans. Our approach in this paper is to limit the robots' purpose to a non-interactive medium and to look for a way to attract people's interest in the information the robots convey. We propose using robots as a passive-social medium, in which multiple robots converse with each other. We conducted a field experiment at a train station for eight days to investigate the effects of a passive-social medium.
  • Takayuki Kanda, Masayuki Kamasima, Michita Imai, Tetsuo Ono, Daisuke Sakamoto, Hiroshi Ishiguro, Yuichiro Anzai
    AUTONOMOUS ROBOTS 22 (1) 87 - 100 0929-5593 2007/01 [Refereed][Not invited]
     
    This paper reports the findings for a humanoid robot that expresses its listening attitude and understanding to humans by effectively using its body properties in a route-guidance situation. A human teaches a route to the robot, and the developed robot behaves like a human listener, utilizing both temporal and spatial cooperative behaviors to demonstrate that it is indeed listening to its human counterpart. The robot's software consists of many communicative units and rules for selecting appropriate communicative units. A communicative unit realizes a particular cooperative behavior, such as eye contact or nodding, identified in previous HRI research. The rules for selecting communicative units were derived from our preliminary experiments using a WOZ method. An experiment was conducted to verify the effectiveness of the robot, with the results revealing that a robot displaying cooperative behavior received the highest subjective evaluation, rather similar to a human listener. A detailed analysis showed that this evaluation was due mainly to body movements as well as utterances. On the other hand, subjects' utterances to the robot were encouraged by the robot's utterances but not by its body movements.
  • 坂本大介, 小野哲雄
    Transactions of the Human Interface Society 8 (3) 381 - 390 1344-7262 2006/08 [Refereed][Not invited]
  • KOMATSU TAKANORI, SUZUKI SHOJI, SUZUKI KEIJI, MATSUBARA HITOSHI, ONO TETSUO, SAKAMOTO DAISUKE, SATO TAKAMASA, UCHIMOTO TOMOHIRO, OKADA HAJIME, KITANO ISAMU, MUNEKATA NAGISA, SATO TOMONORI, TAKAHASHI KAZUYUKI, HONMA MASATO, OSADA JUN'ICHI, HATA MASAYUKI, INUI HIDEO
    Transactions of the Virtual Reality Society of Japan 11 (2) 213 - 223 1344-011X 2006/06 [Refereed][Not invited]
  • SAKAMOTO DAISUKE, ONO TETSUO
    Computer Software (JSSST) 23 (2) 101 - 107 0289-6540 2006/04 [Refereed][Not invited]
  • Daisuke Sakamoto, Tetsuo Ono
    Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, HRI 2006, Salt Lake City, Utah, USA, March 2-3, 2006 ACM 355 - 356 2006/03 [Refereed][Not invited]
  • Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Masayuki Kamashima, Michita Imai, Hiroshi Ishiguro
    Int. J. Hum.-Comput. Stud. 62 (2) 247 - 265 2005/12 [Refereed][Not invited]
  • KANDA TAKAYUKI, KAMASHIMA MASAYUKI, IMAI MICHITA, ONO TETSUO, SAKAMOTO DAISUKE, ISHIGURO HIROSHI, ANZAI YUICHIRO
    Journal of the Robotics Society of Japan 23 (7) 898 - 909 0289-1824 2005/10 [Refereed][Not invited]
  • Masayuki Kamashima, Takayuki Kanda, Michita Imai, Tetsuo Ono, Daisuke Sakamoto, Hiroshi Ishiguro, Yuichiro Anzai
    2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, September 28 - October 2, 2004 IEEE 2506 - 2513 2004/09 [Refereed][Not invited]
  • D Sakamoto, T Kanda, T Ono, M Kamashima, M Imai, H Ishiguro
    RO-MAN 2004: 13TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, PROCEEDINGS 443 - 448 2004 [Refereed][Not invited]
     
    Research on humanoid robots has produced various uses of their body properties in communication. In particular, mutual relationships of body movements between a robot and a human are considered important for smooth and natural communication, as they are in human-human communication. We developed a semi-autonomous humanoid robot system capable of cooperative body movements with humans, using environment-based sensors and switching communicative units. We then conducted an experiment with this robot system and verified the importance of cooperative behaviors in a route-guidance situation where a human gives directions to the robot. The results indicate that cooperative body movements greatly enhance humans' emotional impressions in a route-guidance situation. We believe these results will allow us to develop interactive humanoid robots that communicate sociably with humans.

Conference Activities & Talks

MISC

  • Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro, Norihiro Hagita  Geminoid Studies: Science and Technologies for Humanlike Teleoperated Androids  39  -56  2018/04  [Not refereed][Not invited]
     
    In this study, we realize human telepresence by developing a remote-controlled android system called Geminoid HI-1. Experimental results confirm that participants feel a stronger presence of the operator when he talks through the android than when he appears on a video monitor in a video-conference system. In addition, participants talk with the robot naturally and evaluate its humanlike-ness as equal to that of a man on a video monitor. We also discuss a remote-controlled system for telepresence that uses a humanlike android robot as a new telecommunication medium.
  • Kohei Ogawa, Christoph Bartneck, Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro  Geminoid Studies: Science and Technologies for Humanlike Teleoperated Androids  235  -247  2018/04  [Not refereed][Not invited]
     
    The first robotic copies of real humans have become available. They enable their users to be physically present in multiple locations simultaneously. This study investigates the influence that the embodiment of an agent has on its persuasiveness and its perceived personality. Is a robotic copy as persuasive as its human counterpart? Does it have the same personality? We performed an experiment in which the embodiment of the agent was the independent variable and the persuasiveness and perceived personality were the dependent measurements. The persuasive agent advertised a Bluetooth headset. The results show that an android is perceived as being as persuasive as a real human or a video recording of a real human. The personality of the participant had a considerable influence on the measurements. Participants who were more open to new experiences rated the persuasive agent lower on agreeableness and extroversion. They were also more willing to spend money on the advertised product.
  • Saki Sakaguchi, Eunice Ratna Sari, Taku Hachisu, Adi B. Tedjasaputra, Kunihiro Kato, Masitah Ghazali, Kaori Ikematsu, Ellen Yi-Luen Do, Jun Kato, Jun Nishida, Daisuke Sakamoto, Yoshifumi Kitamura, Jinwoo Kim, Anirudha Joshi, Zhengjie Liu  Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, CHI 2018, Montreal, QC, Canada, April 21-26, 2018  2018  [Refereed][Not invited]
  • 松村耕平, 尾形正泰, 小野哲雄, 加藤淳, 阪口紗季, 坂本大介, 杉本雅則, 角康之, 中村裕美, 西田健志, 樋口啓太, 安尾萌, 渡邉拓貴  IPSJ SIG Technical Reports (Web)  Vol. 2017-HCI-174, No. 13, 1-8 (web only)  2017/08/16  [Not refereed][Not invited]
  • 坂本大介  OHM  104 (2)  19-20  2017/02/05  [Not refereed][Not invited]
  • 坂本大介  Journal of the Human Interface Society  19 (4)  190-192  2017  [Not refereed][Not invited]
  • Kohei Matsumura, Masa Ogata, Saki Sakaguchi, Takashi Ijiri, Takeshi Nishida, Jun Kato, Hiromi Nakamura, Daisuke Sakamoto, Yoshifumi Kitamura  Conference on Human Factors in Computing Systems - Proceedings  3325-3330  2016/05/07  [Refereed][Not invited]
     
    This symposium showcases the latest work from Japan on interactive systems and user interfaces that address under-explored problems and demonstrate unique approaches. In addition to circulating ideas and sharing a vision of future research in human-computer interaction, this symposium aims to foster social networks among young researchers and students and create a fresh research community.
  • Jun Kato, Hiromi Nakamura, Yuta Sugiura, Taku Hachisu, Daisuke Sakamoto, Koji Yatani, Yoshifumi Kitamura  Conference on Human Factors in Computing Systems - Proceedings  2321-2324  2015/04/18  [Refereed][Not invited]
     
    This symposium showcases the latest work from Japan on interactive systems and user interfaces that address under-explored problems and demonstrate unique approaches. In addition to circulating ideas and sharing a vision of future research in human-computer interaction, this symposium aims to foster social networks among young researchers and students and create a fresh research community.
  • Daisuke Sakamoto  Interactions  22 (1)  52-55  2015/01/01  [Refereed][Not invited]
     
    Christoph Bartneck's 2009 scientometric analysis of CHI conference proceedings revealed that the number of Asian scientists participating in the event was small compared with the number from other regions. Bartneck's analysis was broadly focused on both quantity and quality, including numbers of citations and best paper awards. Due to limited space and resources, the present analysis focused on quantitative measures, including geography, organization, and author statistics, and employed Bartneck's notion of credit, in which one paper equals one credit.
  • 坂本大介  Journal of the Human Interface Society  15 (4)  289-294  2013  [Not refereed][Not invited]
  • 濱田健夫, 坂本大介, 稲見昌彦, 五十嵐健夫  Proceedings of the 16th Annual Conference of the Virtual Reality Society of Japan (CD-ROM)  21C-1  2011/09/20  [Not refereed][Not invited]
  • Michael Haller, Thomas Seifried, Stacey D. Scott, Florian Perteneder, Christian Rendl, Daisuke Sakamoto, Masahiko Inami, Pranav Mistry, Pattie Maes, Seth E. Hunter, David Merrill, Jeevan J. Kalanithi, Susanne Seitinger, Daniel M. Taub, Alex S. Taylor  Interactions  18 (3)  8-9  2011  [Refereed][Not invited]
  • 近藤誠, 杉浦裕太, 筧豪太, 坂本大介, 稲見昌彦  Journal of the Human Interface Society  12 (3)  175-182  2010/08/25  [Not refereed][Not invited]
  • 坂本大介  Journal of the Virtual Reality Society of Japan  15 (1)  44-45  2010/03/31  [Not refereed][Not invited]
  • SAKAMOTO Daisuke  IPSJ Magazine  50 (7)  2009/07/15  [Not refereed][Not invited]
  • ONO Tetsuo, SAKAMOTO Daisuke, OGAWA Kohei, KOMAGOME Daisuke  IPSJ SIG Notes. ICS  2008 (5)  1-7  2008/01/22  [Not refereed][Not invited]
     
    In this paper, we discuss a methodology of interaction design, investigating factors in the relation between humans and artifacts. Concretely, we introduce the results of our research: the ITACO system using an agent-migration mechanism, embodied communication emerging from cooperative gestures, sociality of robots based on balance theory, and the Robot Meme project applying mutual adaptation between human and robot. Through this survey of our research, we bring the problems of interaction design into relief.
  • 塩見昌裕, 坂本大介, 神田崇行, 石井カルロス寿憲, 石黒浩, 萩田紀博  Image Lab (画像ラボ)  18 (4)  23-27  2007/04/01  [Not refereed][Not invited]
  • Sakamoto Daisuke, Kanda Takayuki, Ono Tetsuo, Ishiguro Hiroshi, Hagita Norihiro  IPSJ SIG Notes. ICS  2006 (131)  37-42  2006/12/13  [Not refereed][Not invited]
     
    We developed a remote vision system for robot teleoperators. The system processes the images from 6 cameras and combines them into one big image which is then displayed on a screen. This makes understanding the context of the communication easier. In this paper, we report the test trial of this system in which an operator controls a communication robot from a remote place by using our system in public.
  • Sakamoto Daisuke, Kanda Takayuki, Ono Tetsuo, Ishiguro Hiroshi, Hagita Norihiro  Technical report of IEICE. HCS  106 (412)  37-42  2006/12/06  [Not refereed][Not invited]
     
    We developed a remote vision system for robot teleoperators. The system processes the images from 6 cameras and combines them into one big image which is then displayed on a screen. This makes understanding the context of the communication easier. In this paper, we report the test trial of this system in which an operator controls a communication robot from a remote place by using our system in public.
  • 佐藤 崇正, 坂本 大介, 内本 友洋, 北野 勇, 岡田 孟, 本間 正人, 小松 孝徳, 鈴木 昭二, 鈴木 恵二, 小野 哲雄, 松原 仁, 畑 雅之, 乾 英男  2005/09  [Not refereed][Not invited]
     
    Proceedings of the 23rd Annual Conference of the Robotics Society of Japan (CD-ROM), 1I24
  • Suzuki Sho'ji, Sato Takamasa, Honma Masato, Hata Masayuki, Inui Hideo, Suzuki Keiji, Matsubara Hitoshi, Ono Tetsuo, Komatsu Takanori, Uchimoto Tomohiro, Okada Hajime, Kitano Isamu, Sakamoto Daisuke  The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)  2005-  (0)  2005  [Not refereed][Not invited]

Industrial Property Rights

Awards & Honors

  • 2019/10 the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '19) Best Demo Award - People's Choice
     
    Awardees: Kenji Suzuki; Daisuke Sakamoto; Sakiko Nishi; Tetsuo Ono
  • 2018/08 Japan Society for Software Science and Technology (JSSST), 22nd Research Paper Award
     
    Awardees: 小山裕己; 坂本大介; 五十嵐健夫
  • 2017/12 HAI Symposium 2017, Outstanding Research Award
     Spatial Perception Emerging from Overlapping Utterances of Multiple Robots 
    Awardees: 水丸和樹; 坂本大介; 小野哲雄
  • 2017/12 HAI Symposium 2017, Impressive Poster Award
     A System That Uses a Stuffed-Animal Robot to Suggest Break Timing 
    Awardees: 大西紗綾; 坂本大介; 小野哲雄
  • 2016/12 JSSST, 24th Workshop on Interactive Systems and Software (WISS 2016), Interactive Presentation Award
     An Interactive Design System for Free-Form Bamboo-Copters 
    Awardees: 中村守宏; 小山裕己; 坂本大介; 五十嵐健夫
  • 2016/03 IPSJ Interaction 2016, Interactive Presentation Award (PC Recommended)
     Dollhouse VR: A System That Lets Multiple Users Examine a Layout Collaboratively While Viewing the Space from Multiple Angles 
    Awardees: 杉浦裕太; 尉林暉; チョントビー; 坂本大介; 宮田なつき; 多田充徳; 大隈隆史; 蔵田武志; 新村猛; 持丸正明; 五十嵐健夫
  • 2015/11 ACM the 28th Annual ACM Symposium on User Interface Software & Technology (UIST '15) Best Poster Award, Honorable Mention
     Fix and Slide: Caret Navigation with Movable Background 
    Awardees: Kenji Suzuki; Kazumasa Okabe; Ryuuki Sakamoto; Daisuke Sakamoto
  • 2015/06 IPSJ Activity Contribution Award
     For contributions to the society through live streaming and archiving of academic talks 
    Awardee: 坂本大介
  • 2015/04 ACM, the SIGCHI Conference on Human Factors in Computing Systems (CHI '15) Honorable Mention
     AnnoTone: Record-time Audio Watermarking for Context-aware Video Editing 
    Awardees: Ryohei Suzuki; Daisuke Sakamoto; Takeo Igarashi
  • 2015/03 IPSJ Interaction 2015, Interactive Presentation Award
     A Pointing Method That Changes the Caret's Relative Position by Moving the Entire Text 
    Awardees: 鈴木健司; 岡部和昌; 坂本竜基; 坂本大介
  • 2014/11 JSSST Workshop on Interactive Systems and Software (WISS '14), Best Paper Award
     
    Awardees: 小山裕己; 坂本大介; 五十嵐健夫
  • 2014/10 international conference on Human-Agent Interaction (iHAI '14) Best Paper Nominee
     
    Awardees: Jun Kato; Daisuke Sakamoto; Takeo Igarashi; Masataka Goto
  • 2014/02 ACM international conference on Intelligent User Interfaces (IUI '14) Best Paper Award
     
    Awardees: Fangzhou Wang; Yang Li; Daisuke Sakamoto; Takeo Igarashi
  • 2013/12 International Conference on Artificial Reality and Telexistence (ICAT '13) Best Paper Award
     
    Awardees: Daniel Saakes; Vipul Choudhary; Daisuke Sakamoto; Masahiko Inami; Takeo Igarashi
  • 2013/10 ACM Symposium on Virtual Reality Software and Technology (VRST '13) Best Paper Award
     
    Awardees: Naoki Sasaki; Hsiang-Ting Chen; Daisuke Sakamoto; Takeo Igarashi
  • 2013/03 ACM/IEEE international conference on Human-robot interaction (HRI2013) Best Demo Honorable Mention Award
     
    Awardees: Yuta Sugiura; Yasutoshi Makino; Daisuke Sakamoto; Masahiko Inami; Takeo Igarashi
  • 2012/10 Japan Institute of Design Promotion, Good Design Award 2012
     
    Awardees: 杉浦裕太; 筧豪太; 杉本麻樹; 坂本大介; 稲見昌彦; 五十嵐健夫
  • 2010/11 International Conference on Advances in Computer Entertainment Technology (ACE 2010) Best Paper Silver Award
     
    Awardees: Takumi Shirokura; Daisuke Sakamoto; Yuta Sugiura; Tetsuo Ono; Masahiko Inami; Takeo Igarashi
  • 2010/04 Laval Virtual 2010 Grand Prix du Jury
     
    Awardees: Thomas Seifried; Christian Rendl; Florian Perteneder; Jakob Leitner; Michael Haller; Daisuke Sakamoto; Jun Kato; Masahiko Inami; Stacey D. Scott
  • 2010/03 IPSJ Interaction 2010, Interactive Presentation Award
     
    Awardees: 杉浦裕太; 筧豪太; Anusha I. Withana; Charith L. Fernando; 坂本大介; 稲見昌彦; 五十嵐健夫
  • 2009/05 IPSJ Outstanding Paper Award (FY2008)
     
    Awardees: 坂本大介; 神田崇行; 小野哲雄; 石黒浩
  • 2008/03 Advanced Telecommunications Research Institute International (ATR), R&D Award, Excellent Research Award (internal award)
     
    Awardee: 坂本大介
  • 2007/11 Kobe Biennale 2007, Robot Media Art Competition, Grand Prize
     
    Awardee: 坂本大介
  • 2007/03 IPSJ Interaction 2007, Best Paper Award
     
    Awardees: 坂本大介; 神田崇行; 小野哲雄; 石黒浩; 萩田紀博
  • 2007/03 ACM/IEEE International Conference on Human-Robot Interaction (HRI2007) Best Paper Award
     
    Awardees: Kotaro Hayashi; Daisuke Sakamoto; Takayuki Kanda; Masahiro Shiomi; Satoshi Koizumi; Hiroshi Ishiguro; Tsukasa Ogasawara; Norihiro Hagita
  • 2006 IPSJ Kansai Branch, Student Encouragement Award (FY2006)
     
    Awardee: 坂本大介
  • 2005/03 IEICE Hokkaido Branch, Branch Chief Award (FY2005)
     
    Awardee: 坂本大介
  • 2004/03 Future University Hakodate, Future University Award
     
    Awardee: 坂本大介
  • 2002/10 Nikkei BP, WPC EXPO Theme Visual Contest, Excellence Award
     
    Awardees: 坂本大介; 松下勇夫

Research Grants & Projects

  • A Fast, Low-Workload Gaze Interface for Hands-Free Interaction
    Japan Society for the Promotion of Science (JSPS): Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (B)
    Date (from-to) : 2021/04 - 2025/03 
    Author : 坂本大介
  • Sociological Robotics for Human-Robot Symbiosis
    JSPS: Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (B)
    Date (from-to) : 2020/04 - 2023/03 
    Author : 山崎晶子, 坂本大介, 大澤博隆, 小林貴訓, 中西英之, 山崎敬一
  • Development of Fundamental Technologies for Using Location Information in Infectious-Disease Crisis Management
    Japan Agency for Medical Research and Development (AMED): Technology Development Program for Countermeasures against Viral and Other Infectious Diseases (basic research support)
    Date (from-to) : 2020/10 - 2021/06 
    Author : 奥村貴史, 髙橋邦彦, 坂本大介, 大向一輝, 山本泰智, 河口信夫, 升井洋志, 関本義秀, 江上周作
  • Development and Program Evaluation of Internet-Based Cognitive Behavioral Therapy to Expand the Activities of Psychological Professionals
    JSPS: Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (A)
    Date (from-to) : 2016/04 - 2021/03 
    Author : 下山晴彦, 菅沼慎一郎, 坂本大介, 星野崇宏
     
    In FY2017, we continued improving "こころの手帖" (Kokoro no Techo), a portal site hosting the internet-based cognitive behavioral therapy (iCBT) programs "うつ・いっぽ・いっぽ", "いっぷく堂", and resilience training. In parallel, we developed an education system for the psychological professionals who support portal users over the internet when users request it. Specifically, we produced video teaching materials for learning the clinical psychology knowledge and skills needed to practice cognitive behavioral therapy: an introductory clinical psychology text video, step-by-step text videos covering CBT from the basics to applications, and text videos covering knowledge of developmental disorders, which are always represented among users. Building on these materials, we began producing a guide manual for professionals providing psychological support to "こころの手帖" users and completed the first edition by December. At the same time, we started a pilot education and training program for the professionals who act as guides for "こころの手帖". Trainees first received reception training at a clinical site to build assessment skills for "こころの手帖"; this internship was conducted with the cooperation of the Tokyo Cognitive Behavioral Therapy Center operated by the association 臨床心理iネット. The trainees then served as guides in the pilot, and we revised the guide manual based on feedback from users and guides. Full-scale training of guide professionals, planned for February and March 2018, had to be postponed by two months because preparatory courses for the certified public psychologist examination began in that period. In parallel, we pursued AI versions of the iCBT programs and, with the cooperation of 株式会社マインドアイル, developed and released an AI version of "いっぷく堂".
  • Joint Research on Interaction Techniques for Per-User Augmented Information Presentation and Object Manipulation in 3D Image Display
    NTT Service Evolution Laboratories:
    Date (from-to) : 2019/08 - 2020/03 
    Author : 小野哲雄, 坂本大介
  • Bouldering-Course Difficulty Estimation and Course-Creation Support Considering Physical Characteristics
    Noastec Foundation (Northern Advancement Center for Science and Technology): R&D grant program
    Date (from-to) : 2019/08 - 2020/03 
    Author : 坂本大介, 船戸大輔
  • Development of a Robust Gaze-Input Interface Based on an Area-Cursor Method
    Noastec Foundation (Northern Advancement Center for Science and Technology): R&D grant program
    Date (from-to) : 2019/07 - 2020/03 
    Author : 坂本大介
  • Joint Research on 3D Interaction Techniques for Augmented Information Presentation in 3D Image Display
    NTT Service Evolution Laboratories:
    Date (from-to) : 2018/08 - 2019/03 
    Author : 小野哲雄, 坂本大介
  • Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research, Grant-in-Aid for Young Scientists (B)
    Date (from‐to) : 2012/04 -2014/03 
    Author : SAKAMOTO Daisuke
     
    We created a system that supports domestic robots in performing household tasks by utilizing crowdsourcing. We consider that crowdsourcing makes it easy to create maps of a house, detect the location of an object, and perform complex tasks. On the other hand, there are some concerns with using crowdsourcing services, such as 1) real-time operation, 2) privacy, and 3) designing an appropriate user interface. In this research, we created a proof-of-concept prototype system and conducted an experiment to investigate its appropriateness.
  • Development and Evaluation of an OS for Conversational Robots for Human-Robot Interaction Research
    JSPS: Grants-in-Aid for Scientific Research, Grant-in-Aid for JSPS Fellows
    Date (from-to) : 2008 - 2009 
    Author : 坂本大介
     
    In line with the research topic, we developed an OS for conversational robots — in particular, an environment in which even people unfamiliar with robots can easily develop robot applications. For robots to spread widely in society, it is important that applications be developed by many users, and this research aimed to provide an environment that makes this possible. A prototype of the environment had been completed the previous year, and this year we evaluated it by actually developing robot applications. Specifically, (1) we developed a flexible tablet-PC instruction interface for home robots; the resulting technology was applied in, and is actually used by, a joint research project with the Media Interaction lab at Upper Austria University of Applied Sciences. (2) We developed a robot system that can actually cook through collaboration with a home robot. (3) We developed intuitive interfaces that let users unfamiliar with robots easily experience robot entertainment; this work was carried out in two projects, both of which were demonstrated at domestic and international conferences. We are now researching techniques for more natural conversational interaction with robots, in which the environment developed the previous year again plays an important role. We believe that demonstrating the role and significance of robots in the real world through such application development and demonstrations is very important for the development of the field. The results obtained through this development have also been fed back into the development of the current conversational-robot OS, providing effective feedback.

Educational Activities

Teaching Experience

  • Robot Informatics (ロボット情報学)
    Academic year : 2019
    Program : Bachelor's program
    Faculty : School of Engineering
  • Introduction to Systems Engineering (システム工学概論)
    Academic year : 2019
    Program : Bachelor's program
    Faculty : School of Engineering

