Researcher Database

Takahiro Ogawa (小川 貴弘)
Faculty of Information Science and Technology, Division of Media and Network Technologies, Information Media Science
Professor

Basic Information

Alternative Names

    小川 貴弘

Affiliation

  • Faculty of Information Science and Technology, Division of Media and Network Technologies, Information Media Science

Position

  • Professor

Degree

  • Ph.D. (Information Science), Hokkaido University

Profile

  • March 2003: Graduated from the Department of Information Engineering, Faculty of Engineering, Hokkaido University
    March 2005: Completed the Master's course, Division of Electronics and Information Engineering, Graduate School of Engineering, Hokkaido University
    September 2007: Completed the Doctoral course, Division of Media and Network Technologies, Graduate School of Information Science and Technology, Hokkaido University
    September 2007 - March 2008: JSPS Research Fellow
    March 2008 - June 2008: Postdoctoral Researcher, Graduate School of Information Science and Technology, Hokkaido University
    July 2008 - September 2016: Assistant Professor, Graduate School of Information Science and Technology, Hokkaido University
    October 2016 - March 2019: Associate Professor, Graduate School of Information Science and Technology, Hokkaido University
    April 2019 - December 2022: Associate Professor, Faculty of Information Science and Technology, Hokkaido University
    January 2023 - present: Professor, Faculty of Information Science and Technology, Hokkaido University
    Engaged in research on image restoration and video processing. Ph.D. (Information Science).
    Member of IEEE, IEICE, ACM, and ITE.

    Publication list: https://www-lmd.ist.hokudai.ac.jp/member/takahiro-ogawa/

Research Keywords

  • Image generation   Behavior analysis   Multispectral analysis   Machine learning   Deep learning   CT   PET   X-ray imaging   Social media (SNS)   Electron microscopy   Signal processing   Image processing   Visualization   NIRS   MRI   Multimedia processing   Sports video   Social infrastructure   Satellite imagery   EEG   Medical imaging   Multivariate analysis   Image coding   Super-resolution   Web mining   Big data   IoT   Artificial intelligence   Image quality assessment   Image restoration   Image reconstruction   Text processing   Information retrieval   Music   Image retrieval   Semantic understanding

Research Fields

  • Informatics / Human interface and interaction
  • Informatics / Database

Career

  • January 2023 - Present   Professor, Faculty of Information Science and Technology, Hokkaido University
  • April 2019 - January 2023   Associate Professor, Faculty of Information Science and Technology, Hokkaido University
  • October 2016 - March 2019   Associate Professor, Graduate School of Information Science and Technology, Hokkaido University
  • July 2008 - September 2016   Assistant Professor, Graduate School of Information Science and Technology, Hokkaido University
  • April 2008 - June 2008   Postdoctoral Researcher, Graduate School of Information Science and Technology, Hokkaido University
  • April 2005 - March 2008   JSPS Research Fellow, Graduate School of Information Science and Technology, Hokkaido University

Education

  • April 2005 - September 2007   Hokkaido University   Doctoral course, Division of Media and Network Technologies, Graduate School of Information Science and Technology
  • April 2003 - March 2005   Hokkaido University   Master's course, Division of Electronics and Information Engineering, Graduate School of Engineering
  • April 1999 - March 2003   Hokkaido University   Faculty of Engineering

Academic Societies

  • Association for Computing Machinery (ACM)   IEEE   Institute of Image Information and Television Engineers (ITE)   Institute of Electronics, Information and Communication Engineers (IEICE)

Research Activities

Papers

  • Yaozong Gan, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    Sensors 23 23 9607 - 9607 2023年12月04日 
    Traffic sign recognition is a complex and challenging yet popular problem that can assist drivers on the road and reduce traffic accidents. Most existing methods for traffic sign recognition use convolutional neural networks (CNNs) and can achieve high recognition accuracy. However, these methods first require a large number of carefully crafted traffic sign datasets for the training process. Moreover, since traffic signs differ in each country and there is a variety of traffic signs, these methods need to be fine-tuned when recognizing new traffic sign categories. To address these issues, we propose a traffic sign matching method for zero-shot recognition. Our proposed method can perform traffic sign recognition without training data by directly matching the similarity of target and template traffic sign images. Our method uses the midlevel features of CNNs to obtain robust feature representations of traffic signs without additional training or fine-tuning. We discovered that midlevel features improve the accuracy of zero-shot traffic sign recognition. The proposed method achieves promising recognition results on the German Traffic Sign Recognition Benchmark open dataset and a real-world dataset taken from Sapporo City, Japan.
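
    A minimal sketch of the matching idea described in this abstract, assuming a torchvision ResNet-18 backbone whose layer3 output serves as the "midlevel" feature; the backbone, layer choice, and helper names are illustrative assumptions, not the paper's exact configuration.

    # Illustrative sketch: zero-shot traffic-sign matching via midlevel CNN features.
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    features = {}
    def hook(_module, _inp, out):
        features["mid"] = out
    backbone.layer3.register_forward_hook(hook)  # capture a midlevel feature map

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def midlevel_feature(path: str) -> torch.Tensor:
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        backbone(x)
        # Global average pooling turns the midlevel map into a single descriptor.
        return features["mid"].mean(dim=(2, 3)).squeeze(0)

    def match(target_path: str, template_paths: list[str]) -> int:
        """Return the index of the template most similar to the target image."""
        target = midlevel_feature(target_path)
        sims = [F.cosine_similarity(target, midlevel_feature(p), dim=0) for p in template_paths]
        return int(torch.stack(sims).argmax())
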
  • Naoki Saito 0006, Keisuke Maeda, Takahiro Ogawa 0001, Satoshi Asamizu, Miki Haseyama
    Journal of Robotics and Mechatronics 35 5 1321 - 1330 2023年10月
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    International Journal of Computer Assisted Radiology and Surgery 18 10 1841 - 1848 2023年10月
  • Masaki Yoshida, Ren Togo, Takahiro Ogawa, Miki Haseyama
    SENSORS 23 9 4540 - 4540 2023年05月 
    This study proposes a novel off-screen sound separation method based on audio-visual pre-training. In the field of audio-visual analysis, researchers have leveraged visual information for audio manipulation tasks, such as sound source separation. Although such audio manipulation tasks are based on correspondences between audio and video, these correspondences are not always established. Specifically, sounds coming from outside a screen have no audio-visual correspondences and thus interfere with conventional audio-visual learning. The proposed method separates such off-screen sounds based on their arrival directions using binaural audio, which provides us with three-dimensional sensation. Furthermore, we propose a new pre-training method that can consider the off-screen space and use the obtained representation to improve off-screen sound separation. Consequently, the proposed method can separate off-screen sounds irrespective of the direction from which they arrive. We conducted our evaluation using generated video data to circumvent the problem of difficulty in collecting ground truth for off-screen sounds. We confirmed the effectiveness of our methods through off-screen sound detection and separation tasks.
  • He Zhu, Ren Togo, Takahiro Ogawa, Miki Haseyama
    ELECTRONICS 12 10 2023年05月 
    As deep learning research continues to advance, interpretability is becoming as important as model performance. Conducting interpretability studies to understand the decision-making processes of deep learning models can improve performance and provide valuable insights for humans. The interpretability of visual question answering (VQA), a crucial task for human-computer interaction, has garnered the attention of researchers due to its wide range of applications. The generation of natural language explanations for VQA that humans can better understand has gradually supplanted heatmap representations as the mainstream focus in the field. Humans typically answer questions by first identifying the primary objects in an image and then referring to various information sources, both within and beyond the image, including prior knowledge. However, previous studies have only considered input images, resulting in insufficient information that can lead to incorrect answers and implausible explanations. To address this issue, we introduce multiple references in addition to the input image. Specifically, we propose a multimodal model that generates natural language explanations for VQA. We introduce outside knowledge using the input image and question and incorporate object information into the model through an object detection module. By increasing the information available during the model generation process, we significantly improve VQA accuracy and the reliability of the generated explanations. Moreover, we employ a simple and effective feature fusion joint vector to combine information from multiple modalities while maximizing information preservation. Qualitative and quantitative evaluation experiments demonstrate that the proposed method can generate more reliable explanations than state-of-the-art methods while maintaining answering accuracy.
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    Comput. Biol. Medicine 158 106877 - 106877 2023年05月
  • Naoki Saito, Keisuke Maeda, Takahiro Ogawa, Satoshi Asamizu, Miki Haseyama
    IEICE Transactions on Information and Systems (Japanese Edition) J106-D 5 337 - 348 May 1, 2023 
    This paper proposes Supervised Multi-view Canonical Correlation Analysis via Cyclic Label Dequantization (sMVCCA-CLD), a canonical correlation analysis that introduces label dequantization for image sentiment estimation. Because the dimensionality of the features computed from labels (label features) is small compared with that of other features, conventional CCA has difficulty representing correlations between features owing to the reduced dimensionality of the constructed space. sMVCCA-CLD maximizes the correlations between features while increasing the dimensionality of the label features through label dequantization, which enables the construction of a common latent space that is not constrained by the label dimensionality. Furthermore, by performing label dequantization with consideration of the circular arrangement of emotions, a common latent space suitable for sentiment estimation can be constructed. Using new features projected into this space, highly accurate sentiment estimation becomes possible.
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    Int. J. Comput. Assist. Radiol. Surg. 18 4 715 - 722 2023年04月
  • Takaaki Higashi, Naoki Ogawa, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    Sensors 23 3 1657 - 1657 2023年02月
  • He Zhu, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    Sensors 23 3 1057 - 1057 2023年02月
  • Takaaki Higashi, Naoki Ogawa, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    AI・データサイエンス論文集(Web) 4 2 2023年
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2307.02799 2023年
  • Yuya Moroto, Rintaro Yanagi, Naoki Ogawa, Kyohei Kamikawa, Keigo Sakurai, Ren Togo, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ACM Multimedia 9399 - 9401 2023年
  • Nao Nakagawa, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ICLR 2023年
  • Tatsuki Seino, Naoki Saito 0006, Takahiro Ogawa 0001, Satoshi Asamizu, Miki Haseyama
    ICCE-Taiwan 813 - 814 2023年
  • Huaying Zhang, Rintaro Yanagi, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ICCE-Taiwan 811 - 812 2023年
  • Masaki Yoshida, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ICCE-Taiwan 795 - 796 2023年
  • Ryota Goka, Yuya Moroto, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICCE-Taiwan 793 - 794 2023年
  • Ryota Goka, Yuya Moroto, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICCE-Taiwan 449 - 450 2023年
  • Tsubasa Kunieda, Ren Togo, Noriko Nishioka, Yukie Shimizu, Shiro Watanabe, Kenji Hirata, Keisuke Maeda, Takahiro Ogawa 0001, Kohsuke Kudo, Miki Haseyama
    ICCE-Taiwan 165 - 166 2023年
  • He Zhu, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ICCE-Taiwan 163 - 164 2023年
  • Jiahuan Zhang, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 1 - 5 2023年
  • Masaki Yoshida, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 1 - 5 2023年
  • Koshi Watanabe, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 1 - 5 2023年
  • Ryo Shichida, Ren Togo, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 1 - 5 2023年
  • Ryosuke Sawata, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 1 - 5 2023年
  • Hiroki Okamura, Keisuke Maeda, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 1 - 5 2023年
  • Ziwen Lan, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    Sensors 23 10 4798 - 4798 2023年
  • Ryota Goka, Yuya Moroto, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    Sensors 23 9 4506 - 4506 2023年
  • Rintaro Yanagi, Ren Togo, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    IEEE Access 11 88258 - 88264 2023年
  • Ziwen Lan, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    IEEE Access 11 35447 - 35456 2023年
  • Koshi Watanabe, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    IEEE ACCESS 11 31530 - 31540 2023年 
    Dimensionality reduction is widely used to visualize complex high-dimensional data. This study presents a novel method for effective data visualization. Previous methods depend on local distance measurements for data manifold approximation. This leads to unreliable results when a data manifold locally oscillates because of some undesirable effects, such as noise effects. In this study, we overcome this limitation by introducing a dual approximation of a data manifold. We roughly approximate a data manifold with a neighborhood graph and prune it with a global filter. This dual scheme results in local oscillation robustness and yields effective visualization with explicit global preservation. We consider a global filter based on principal component analysis frameworks and derive it with the spectral information of the original high-dimensional data. Finally, we experiment with multiple datasets to verify our method, compare its performance to that of state-of-the-art methods, and confirm the effectiveness of our novelty and results.
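
    A minimal sketch of the dual-approximation idea described in this abstract, assuming a kNN graph as the rough local approximation and distances in a PCA subspace as the global filter; the percentile-based edge-pruning rule is an illustrative assumption, not the paper's exact derivation.

    # Illustrative sketch: prune a kNN neighborhood graph with a global PCA-based filter.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import kneighbors_graph

    def pruned_neighborhood_graph(X: np.ndarray, k: int = 15, n_components: int = 10,
                                  percentile: float = 90.0):
        # Rough local approximation: symmetric kNN graph built on the raw data.
        A = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
        A = A.maximum(A.T).tocoo()

        # Global filter: edge lengths measured in a PCA subspace of the same data.
        Z = PCA(n_components=n_components).fit_transform(X)
        edge_len = np.linalg.norm(Z[A.row] - Z[A.col], axis=1)

        # Keep only edges that remain short from the global point of view.
        keep = edge_len <= np.percentile(edge_len, percentile)
        return A.row[keep], A.col[keep]
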
  • Koshi Watanabe, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT V 13717 157 - 173 2023年 
    Latent variable models summarize high-dimensional data while preserving its many complex properties. This paper proposes a locality-aware and low-rank approximated Gaussian process latent variable model (LolaGP) that can preserve the global relationship and local geometry in the derivation of the latent variables. We realize the global relationship by imitating the sample similarity non-linearly and the local geometry based on our newly constructed neighborhood graph. Formally, we derive LolaGP from GP-LVM and implement a locality-aware regularization to reflect its adjacency relationship. The neighborhood graph is constructed based on the latent variables, making the local preservation more resistant to noise disruption and the curse of dimensionality than the previous methods that directly construct it from the high-dimensional data. Furthermore, we introduce a new lower bound of a log-posterior distribution based on low-rank matrix approximation, which allows LolaGP to handle larger datasets than the conventional GP-LVM extensions. Our contribution is to preserve both the global and local structures in the derivation of the latent variables using the robust neighborhood graph and introduce the scalable lower bound of the log-posterior distribution. We conducted an experimental analysis using synthetic as well as images with and without highly noise disrupted datasets. From both qualitative and quantitative standpoint, our method produced successful results in all experimental settings.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE OPEN JOURNAL OF SIGNAL PROCESSING 4 1 - 11 2023年 
    Question answering (QA)-based re-ranking methods for cross-modal retrieval have been recently proposed to further narrow down similar candidate images. The conventional QA-based re-ranking methods provide questions to users by analyzing candidate images, and the initial retrieval results are re-ranked based on the user's feedback. Contrary to these developments, only focusing on performance improvement makes it difficult to efficiently elicit the user's retrieval intention. To realize more useful QA-based re-ranking, considering the user interaction for eliciting the user's retrieval intention is required. In this paper, we propose a QA-based re-ranking method with considering two important factors for eliciting the user's retrieval intention: query-image relevance and recallability. Considering the query-image relevance enables to only focus on the candidate images related to the provided query text, while, focusing on the recallability enables users to easily answer the provided question. With these procedures, our method can efficiently and effectively elicit the user's retrieval intention. Experimental results using Microsoft Common Objects in Context and computationally constructed dataset including similar candidate images show that our method can improve the performance of the cross-modal retrieval methods and the QA-based re-ranking methods.
  • He Zhu, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2303.04388 2023年
  • Yuto Watanabe, Ren Togo, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    IEEE Access 11 42534 - 42545 2023年
  • Huaying Zhang, Rintaro Yanagi, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    IEEE Access 11 10675 - 10686 2023年
  • Ren Togo, Yuki Honma, Maiku Abe, Takahiro Ogawa, Miki Haseyama
    International Journal of Multimedia Information Retrieval 11 4 731 - 740 2022年08月26日
  • Takaaki Higashi, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    SENSORS 22 16 6148 - 6148 2022年08月 
    Brain decoding is a process of decoding human cognitive contents from brain activities. However, improving the accuracy of brain decoding remains difficult due to the unique characteristics of the brain, such as the small sample size and high dimensionality of brain activities. Therefore, this paper proposes a method that effectively uses multi-subject brain activities to improve brain decoding accuracy. Specifically, we distinguish between the shared information common to multi-subject brain activities and the individual information based on each subject's brain activities, and both types of information are used to decode human visual cognition. Both types of information are extracted as features belonging to a latent space using a probabilistic generative model. In the experiment, an publicly available dataset and five subjects were used, and the estimation accuracy was validated on the basis of a confidence score ranging from 0 to 1, and a large value indicates superiority. The proposed method achieved a confidence score of 0.867 for the best subject and an average of 0.813 for the five subjects, which was the best compared to other methods. The experimental results show that the proposed method can accurately decode visual cognition compared with other existing methods in which the shared information is not distinguished from the individual information.
  • An Wang, Ren Togo, Takahiro Ogawa, Miki Haseyama
    SENSORS 22 6 2330 - 2330 2022年03月 
    In this paper, we present a novel defect detection model based on an improved U-Net architecture. As a semantic segmentation task, the defect detection task has the problems of background-foreground imbalance, multi-scale targets, and feature similarity between the background and defects in the real-world data. Conventionally, general convolutional neural network (CNN)-based networks mainly focus on natural image tasks, which are insensitive to the problems in our task. The proposed method has a network design for multi-scale segmentation based on the U-Net architecture including an atrous spatial pyramid pooling (ASPP) module and an inception module, and can detect various types of defects compared to conventional simple CNN-based methods. Through the experiments using a real-world subway tunnel image dataset, the proposed method showed a better performance than that of general semantic segmentation including state-of-the-art methods. Additionally, we showed that our method can achieve excellent detection balance among multi-scale defects.
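
    A minimal sketch of an atrous spatial pyramid pooling (ASPP) block of the kind this abstract mentions for multi-scale segmentation; the dilation rates and channel sizes are assumptions, and this is not the paper's exact network definition.

    # Illustrative sketch: ASPP block that could sit in a U-Net-style bottleneck.
    import torch
    import torch.nn as nn

    class ASPP(nn.Module):
        def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, kernel_size=3 if r > 1 else 1,
                              padding=r if r > 1 else 0, dilation=r, bias=False),
                    nn.BatchNorm2d(out_ch),
                    nn.ReLU(inplace=True),
                )
                for r in rates
            ])
            # Fuse the multi-scale branches back into a single feature map.
            self.project = nn.Sequential(
                nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.project(torch.cat([b(x) for b in self.branches], dim=1))

    # Usage example on a dummy bottleneck feature map.
    feat = torch.randn(1, 256, 32, 32)
    print(ASPP(256, 128)(feat).shape)  # torch.Size([1, 128, 32, 32])
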
  • Takahiko Hariyama, Yasuharu Takaku, Hideya Kawasaki, Masatsugu Shimomura, Chiyo Senoh, Yumi Yamahama, Atsushi Hozumi, Satoru Ito, Naoto Matsuda, Satoshi Yamada, Toshiya Itoh, Miki Haseyama, Takahiro Ogawa, Naoki Mori, Shuhei So, Hidefumi Mitsuno, Masahiro Ohara, Shuhei Nomura, Masao Hirasaka
    Microscopy 71 1 1 - 12 2022年01月29日 
    Abstract This review aims to clarify a suitable method towards achieving next-generation sustainability. As represented by the term ‘Anthropocene’, the Earth, including humans, is entering a critical era; therefore, science has a great responsibility to solve it. Biomimetics, the emulation of the models, systems and elements of nature, especially biological science, is a powerful tool to approach sustainability problems. Microscopy has made great progress with the technology of observing biological and artificial materials and its techniques have been continuously improved, most recently through the NanoSuit® method. As one of the most important tools across many facets of research and development, microscopy has produced a large amount of accumulated digital data. However, it is difficult to extract useful data for making things as biomimetic ideas despite a large amount of biological data. Here, we would like to find a way to organically connect the indispensable microscopic data with the new biomimetics to solve complex human problems.
  • Naoki Ogawa, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    AI・データサイエンス論文集(Web) 3 J2 2022年
  • Yuya Moroto, Keisuke Maeda, Ren Togo, Takahiro Ogawa, Miki Haseyama
    AI・データサイエンス論文集(Web) 3 J2 2022年
  • Keigo Sakurai, Keisuke Maeda, Ren Togo, Takahiro Ogawa, Miki Haseyama
    AI・データサイエンス論文集(Web) 3 J2 2022年
  • Kyohei Kamikawa, Keisuke Maeda, Ren Togo, Takahiro Ogawa, Miki Haseyama
    AI・データサイエンス論文集(Web) 3 J2 2022年
  • Nozomu Onodera, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    Proceedings of the 4th ACM International Conference on Multimedia in Asia(MMAsia) 30 - 5 2022年
  • Yingrui Ye, Yuya Moroto, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    Proceedings of the 4th ACM International Conference on Multimedia in Asia(MMAsia) 6 - 7 2022年
  • Yingrui Ye, Yuya Moroto, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    2022 IEEE International Conference on Image Processing(ICIP) 3838 - 3842 2022年
  • Yuhu Feng, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    2022 IEEE International Conference on Image Processing(ICIP) 3828 - 3832 2022年
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    2022 IEEE International Conference on Image Processing(ICIP) 3823 - 3827 2022年
  • Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    2022 IEEE International Conference on Image Processing(ICIP) 3798 - 3802 2022年
  • Ziwen Lan, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    2022 IEEE International Conference on Image Processing(ICIP) 2021 - 2025 2022年
  • Yutaka Yamada, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    11th IEEE Global Conference on Consumer Electronics(GCCE) 891 - 892 2022年
  • Ryota Goka, Yuya Moroto, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    11th IEEE Global Conference on Consumer Electronics(GCCE) 406 - 407 2022年
  • Yuhu Feng, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    11th IEEE Global Conference on Consumer Electronics(GCCE) 272 - 273 2022年
  • Takaaki Higashi, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    Sensors 22 16 6148 - 6148 2022年
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2212.09281 2022年
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2212.09276 2022年
  • Zongyao Li, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2212.02785 2022年
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2211.00313 2022年
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    MMAsia 44 - 3 2022年
  • Shunya Ohaga, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    MMAsia 25 - 7 2022年
  • Yuto Watanabe, Ren Togo, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICIP 1046 - 1050 2022年
  • He Zhu, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 777 - 778 2022年
  • Huaying Zhang, Rintaro Yanagi, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 775 - 776 2022年
  • Masato Kawai, Rintaro Yanagi, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 408 - 409 2022年
  • Yuki Era, Ren Togo, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 404 - 405 2022年
  • Ryo Shichida, Ren Togo, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 402 - 403 2022年
  • Hiroki Okamura, Keisuke Maeda, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 278 - 279 2022年
  • Tsubasa Kunieda, Ren Togo, Noriko Nishioka, Yukie Shimizu, Shiro Watanabe, Kenji Hirata, Keisuke Maeda, Takahiro Ogawa 0001, Kohsuke Kudo, Miki Haseyama
    GCCE 137 - 138 2022年
  • Kazuki Yamamoto, Keisuke Maeda, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 135 - 136 2022年
  • Zongyao Li, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ECCV (29) 579 - 595 2022年
  • Keisuke Maeda, Ren Togo, Takahiro Ogawa 0001, Shin-ichi Adachi, Fumiaki Yoshizawa, Miki Haseyama
    Sensors 22 23 9496 - 9496 2022年
  • Keisuke Maeda, Saya Takada, Tomoki Haruyama, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    Sensors 22 22 8932 - 8932 2022年
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    Comput. Methods Programs Biomed. 227 107189 - 107189 2022年
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2209.14743 2022年
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2209.14635 2022年
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2209.14609 2022年
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2209.14603 2022年
  • Nao Nakagawa, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2209.07007 2022年
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2206.03012 2022年
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    CoRR abs/2206.03009 2022年
  • Saya Takada, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    4th IEEE Global Conference on Life Sciences and Technologies(LifeTech) 614 - 615 2022年
  • Keigo Sakurai, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    4th IEEE Global Conference on Life Sciences and Technologies(LifeTech) 187 - 188 2022年
  • Yaozong Gan, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ICME Workshops 1 - 6 2022年
  • Yaozong Gan, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ICCE-TW 453 - 454 2022年
  • An Wang, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ICCE-TW 305 - 306 2022年
  • Tsuyoshi Masuda, Keisuke Maeda, Ren Togo, Takahiro Ogawa, Miki Haseyama
    ITE Technical Report 46 6 (MMS2022 1-37/ME2022 26-62/AIT2022 1-37) 303 - 304 2022
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 3458 - 3462 2022年
  • Zongyao Li, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 2240 - 2244 2022年
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 1371 - 1375 2022年
  • Jiahuan Zhang, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    Sensors 22 14 5431 - 5431 2022年
  • Keigo Sakurai, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    Sensors 22 10 3722 - 3722 2022年
  • Zongyao Li, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    Pattern Recognit. 132 108911 - 108911 2022年
  • Guang Li 0008, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    Multimedia Tools and Applications 81 22 32287 - 32303 2022年
  • Kazuma Ohtomo, Ryosuke Harakawa, Takahiro Ogawa 0001, Miki Haseyama, Masahiro Iwahashi
    Multimedia Tools and Applications 81 2 2979 - 3003 2022年
  • Yun Liang 0014, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    Neurocomputing 495 118 - 128 2022年
  • Yuto Watanabe, Ren Togo, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 4818 - 4822 2022年
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 4683 - 4687 2022年
  • Koshi Watanabe, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 4643 - 4647 2022年
  • Nozomu Onodera, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICASSP 3908 - 3912 2022年
  • Kaito Hirasawa, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    Sensors 22 7 2465 - 2465 2022年
  • Taisei Hirakawa, Keisuke Maeda, Takahiro Ogawa 0001, Satoshi Asamizu, Miki Haseyama
    IEEE Access 10 12503 - 12509 2022年
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    ACM Trans. Multim. Comput. Commun. Appl. 18 3 68 - 17 2022年
  • Naoki Ogawa, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    Sensors 22 1 382 - 382 2022年
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E104A 6 866 - 875 2021年06月 
    Various cross-modal retrieval methods that can retrieve images related to a query sentence without text annotations have been proposed. Although a high level of retrieval performance is achieved by these methods, they have been developed for a single domain retrieval setting. When retrieval candidate images come from various domains, the retrieval performance of these methods might be decreased. To deal with this problem, we propose a new domain adaptive cross-modal retrieval method. By translating a modality and domains of a query and candidate images, our method can retrieve desired images accurately in a different domain retrieval setting. Experimental results for clipart and painting datasets showed that the proposed method has better retrieval performance than that of other conventional and state-of-the-art methods.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    MM '21: ACM Multimedia Conference 3816 - 3825 2021年
  • Nao Nakagawa, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    2021 IEEE International Conference on Image Processing(ICIP) 2473 - 2477 2021年
  • Taisei Hirakawa, Keisuke Maeda, Takahiro Ogawa 0001, Satoshi Asamizu, Miki Haseyama
    ICIP 2688 - 2692 2021年
  • Kaito Hirasawa, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICIP 2678 - 2682 2021年
  • Tomoki Haruyama, Ren Togo, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICIP 2433 - 2437 2021年
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICIP 1469 - 1473 2021年
  • Kyohei Kamikawa, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICIP 1209 - 1213 2021年
  • Yun Liang 0014, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICIP 1039 - 1043 2021年
  • Naoki Ogawa, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    ICIP 1014 - 1018 2021年
  • Keigo Sakurai, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    IEEE International Conference on Consumer Electronics-Taiwan(ICCE-TW) 1 - 2 2021年
  • Naoki Ogawa, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    IEEE International Conference on Consumer Electronics-Taiwan(ICCE-TW) 1 - 2 2021年
  • Guang Li, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    IEEE International Conference on Consumer Electronics-Taiwan(ICCE-TW) 1 - 2 2021年
  • Guang Li, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 787 - 788 2021年
  • Jiahuan Zhang, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 785 - 786 2021年
  • Yuto Watanabe, Ren Togo, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 661 - 662 2021年
  • Ziwen Lan, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    ITE Technical Report 46 6 (MMS2022 1-37/ME2022 26-62/AIT2022 1-37) 273 - 274 2021
  • Kaito Hirasawa, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 204 - 205 2021年
  • Keigo Sakurai, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 202 - 203 2021年
  • Koshi Watanabe, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 195 - 196 2021年
  • Masaki Yoshida, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 193 - 194 2021年
  • Yingrui Ye, Yuya Moroto, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 191 - 192 2021年
  • Tsuyoshi Masuda, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 54 - 55 2021年
  • Taisei Hirakawa, Keisuke Maeda, Takahiro Ogawa 0001, Satoshi Asamizu, Miki Haseyama
    GCCE 43 - 44 2021年
  • Saya Takada, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 35 - 36 2021年
  • Shunya Ohaga, Ren Togo, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 9 - 10 2021年
  • Nozomu Onodera, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    GCCE 5 - 6 2021年
  • Keisuke Maeda, Naoki Ogawa, Takahiro Ogawa 0001, Miki Haseyama
    Journal of Imaging 7 12 273 - 273 2021年
  • Kyohei Kamikawa, Keisuke Maeda, Takahiro Ogawa 0001, Miki Haseyama
    IEEE Access 9 163843 - 163850 2021年
  • Zongyao Li, Kazuhiro Kitajima, Kenji Hirata, Ren Togo, Junki Takenaka, Yasuo Miyoshi, Kohsuke Kudo, Takahiro Ogawa, Miki Haseyama
    EJNMMI Research 11 1 2021年 
    Background: To improve the diagnostic accuracy of axillary lymph node (LN) metastasis in breast cancer patients using 2-[18F]FDG-PET/CT, we constructed an artificial intelligence (AI)-assisted diagnosis system that uses deep-learning technologies. Materials and methods: Two clinicians and the new AI system retrospectively analyzed and diagnosed 414 axillae of 407 patients with biopsy-proven breast cancer who had undergone 2-[18F]FDG-PET/CT before a mastectomy or breast-conserving surgery with a sentinel lymph node (LN) biopsy and/or axillary LN dissection. We designed and trained a deep 3D convolutional neural network (CNN) as the AI model. The diagnoses from the clinicians were blended with the diagnoses from the AI model to improve the diagnostic accuracy. Results: Although the AI model did not outperform the clinicians, the diagnostic accuracies of the clinicians were considerably improved by collaborating with the AI model: the two clinicians' sensitivities of 59.8% and 57.4% increased to 68.6% and 64.2%, respectively, whereas the clinicians' specificities of 99.0% and 99.5% remained unchanged. Conclusions: It is expected that AI using deep-learning technologies will be useful in diagnosing axillary LN metastasis using 2-[18F]FDG-PET/CT. Even if the diagnostic performance of AI is not better than that of clinicians, taking AI diagnoses into consideration may positively impact the overall diagnostic accuracy.
  • Kazuma Ohtomo, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama, Masahiro Iwahashi
    ITE Transactions on Media Technology and Applications 9 1 54 - 61 2021年 
    Tumblr is a popular micro-blogging service on which users can share posts comprising text and images. This paper presents a method for personalizing post recommendations for each user from a large number of posts. Specifically, we develop a supervised multi-variational auto encoder considering user preference (SMVAE-UP). SMVAE-UP can extract relationships between text and image features by considering class information representing a user’s preference for each post; thus, preference-aware multimodal features can be calculated. Furthermore, for each target user, a network that enables comparison between a user and posts in the same feature space is constructed using the preference-aware multimodal features and metadata on posts. By applying graph convolutional networks (GCNs) to the network constructed for each target user, an accurate recommendation matching each user’s preferred posts becomes feasible. Experimental results for real-world datasets including six users and 99,844 posts show the effectiveness of our method.
  • Tomoki Haruyama, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    ITE Transactions on Media Technology and Applications 9 1 42 - 53 2021年 
    A new method that generates user-selectable event summaries from unedited raw soccer videos is presented in this paper. Since there are more unedited raw soccer videos than broadcasted/distributed soccer videos and unedited videos have various viewers, it is necessary to analyze these videos for meeting the demands of various viewers. The proposed method introduces a multimodal CNN-BiLSTM architecture for analyzing unedited raw soccer videos. This architecture extracts candidate scenes for event summarization from unedited soccer videos and classifies these scenes into typical events. Finally, our method generates user-selectable event summaries by simultaneously considering the importance of candidate scenes and the event classification results. Experimental results using real unedited raw soccer videos show the effectiveness of our method.
  • Ren Togo, Takahiro Ogawa, Miki Haseyama
    Proceedings of SPIE - The International Society for Optical Engineering 11766 2021年 
    This paper presents a new interior coordination image retrieval method using object-detection-based and color features. Interior coordination requires consideration of objects' positional information and the overall atmosphere of the room simultaneously. However, similar image retrieval methods considering the coordination characteristics have not been proposed. In the proposed method, we extract different types of features from interior coordination images and realize the similar interior coordination image retrieval based on our newly derived features.
  • Yuki Honma, Ren Togo, Maiku Abe, Takahiro Ogawa, Miki Haseyama
    Proceedings of SPIE - The International Society for Optical Engineering 11766 2021年 
    This paper proposes a customer interest estimation method using security camera to meet the demand of the retail industry. In the field of retail industry, it is considered that the understanding of customers' interests in the real store can be used for various marketing activities such as the product development and the layout of the store. Then, it is important to pay attention to customers' behavior in the real store. Their behavior is often recorded by the cameras installed in the store for security purposes. A method for estimating their interests from the videos of the security camera is presented in this paper. The novelty of our method is three-fold. Firstly, the experimental data of subjects in our group were taken by using the security camera already installed in the real store. Secondly, we used a pre-trained posture estimation model and treated the results as the features to be trained by a two-layer neural network model. Finally, a professional have annotated the subjects' interests. The effectiveness of our method was confirmed by comparing with benchmark supervised machine learning models.
  • Taisei Hirakawa, Keisuke Maeda, Takahiro Ogawa, Satoshi Asamizu, Miki Haseyama
    Proceedings of SPIE - The International Society for Optical Engineering 11766 2021年 
    This paper presents cross-domain recommendation based on multilayer graph analysis using subgraph representation. The proposed method constructs two graphs in source and target domains utilizing user-item embedding and trains link relationships between the users' embedding features on each above graph via graph convolutional networks considering subgraph representation. Thus, the proposed method can obtain features with high representation ability, and this is the main contribution of this paper. Then the proposed method can estimate the user's embedding features in the target domain from those in the source domain and recommend items to users by using the estimated features. Experiments on real-world e-commerce datasets verify the effectiveness of the proposed method.
  • Tsuyoshi Masuda, Ren Togo, Takahiro Ogawa, Miki Haseyama
    Proceedings of SPIE - The International Society for Optical Engineering 11766 2021年 
    This paper presents a method for action detection based on Temporal Cycle Consistency(TCC) Learning. The proposed method realizes the action detection of flexible length segments based on a frame-level action prediction technique. We enable calculation of similarities for spatio-temporal features based on TCC to detect target actions from input videos. Finally, our method determines temporal segments by smoothing the frame-level action detection result. Experimental results show the validity of the proposed method.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    ICMR 2021 - Proceedings of the 2021 International Conference on Multimedia Retrieval 611 - 614 2021年 
    Image retrieval from a given text query (text-to-image retrieval) is one of the most essential systems, and it is effectively utilized for databases (DBs) on the Web. To make them more versatile and familiar, a retrieval system that is adaptive even for personal DBs such as images in smartphones and lifelogging devices should be considered. In this paper, we present a novel text-to-image retrieval system that is specialized for personal DBs. With the cross-modal scheme and the question-answering scheme, the developed system enables users to obtain the desired image effectively even from personal DBs. Our demo is available at https://sites.google.com/view/ir-questioner/.
  • Yun Liang 0014, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2021-June 4150 - 4154 2021年 
    This paper presents a novel method on image sentiment analysis called cross-domain semi-supervised deep metric learning (CDSS-DML). The proposed method has two contributions. Firstly, since previous researches on image sentiment analysis suffer from the limit of a small amount of well-labeled data, which occurs a decrease in accuracy of classification, CDSS-DML breaks through the limit by training with unlabeled data based on a teacher-student model. Secondly, the proposed method overcomes the difficulty of distribution shift between well-labeled and unlabeled data by jointing three losses. Especially, the proposed method constructs an effective latent space with the joint loss considering the inter-class and the intra-class correlations for image sentiments. From experimental results, the performance improvement with CDSS-DML is confirmed.
  • Kyohei Kamikawa, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2021-June 4130 - 4134 2021年 
    This paper presents a method of feature integration via semi-supervised ordinally multi-modal Gaussian process latent variable model (Semi-OMGP). The proposed method transforms multimodal features into common latent variables suitable for users’ interest level estimation. For dealing with the multi-modal features, the proposed method newly derives Semi-OMGP. Semi-OMGP has two contributions. First, Semi-OMGP is suitable for integration between heterogeneous modalities with different distributions by assuming that the similarity matrices of these modalities as observations are generated from latent variables. Second, Semi-OMGP can efficiently use label information by introducing an operator considering the ordinal grade into the prior distribution of latent variables when obtained label information is partially given. Semi-OMGP can simultaneously realize the above contributions, and successful multi-modal feature integration becomes feasible. Experimental results show the effectiveness of the proposed method.
  • Masanao Matsumoto, Keisuke Maeda, Naoki Saito 0006, Takahiro Ogawa, Miki Haseyama
    ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2021-June 3985 - 3989 2021年 
    This paper presents multi-modal label dequantized Gaussian process latent variable model (mLDGP) for ordinal label estimation. mLDGP is constructed based on a probabilistic generative model via Gaussian process and realizes accurate calculation of common latent space from multi-view features including low-dimensional ordinal label features. Conventional methods have a problem that the dimension of the common latent space was limited to that of the label feature, and an enough expressive latent space cannot be obtained. mLDGP, which is constructed by introducing our novel label dequantization mechanism into the objective function of multi-modal Gaussian process latent variable model (GPLVM), can increase the dimension of label features. Then mLDGP can calculate the effective latent space. Furthermore, mLDGP can estimate projection transforming unknown features of test samples into the common latent space, which was a problem of the conventional GPLVMs. From experimental results obtained by applying our method to the product rating estimation on the online shopping website, it is confirmed that accuracy improvement using mLDGP becomes feasible compared to various methods.
  • Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2021-June 2150 - 2154 2021年 
    Unpaired image-to-image (I2I) translation methods have been developed for several years. Present methods do not take into consideration semantic information of the original image, which may perform well on simple datasets of uncomplicated scenes, however, fail in complex datasets of scenes involving abundant objects, such as urban scenes. To tackle this problem, in this paper, we reasonably modify the previous problem setting and present a novel semantic-aware method. Specifically, in training, we use additional semantic label maps of training images, while in the test, no labels are required. We originally adopt a semantic knowledge distillation strategy to acquire semantic information from the labels and construct a particular normalization layer to introduce semantic information. Being aware of the pixel-level semantic information, our method can realize better I2I translation than the previous methods. Experiments are conducted on benchmark datasets of urban scenes to validate the effectiveness of our method.
  • Yusuke Akamatsu, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2021-June 1360 - 1364 2021年 
    Sensor data from wearable devices have been utilized to analyze differences between experts and novices. Previous studies attempted to classify the expert-novice level from sensor data based on supervised learning methods. However, these approaches need to collect enough training data covering various novices' sensor patterns. In this paper, we propose a semi-supervised anomaly detection approach that requires only sensor data of experts for training and identifies those of novices as anomalies. Our proposed anomaly detection model named conditional multimodal variational autoencoder (CMVAE) has the following two technical contributions: (i) considering action information of persons and (ii) utilizing multimodal sensor data, i.e., eye tracking data and motion data in this case. The proposed method is evaluated on sensor data measured when expert and novice soccer players were shooting, dribbling, and doing soccer ball juggling. Experimental results show that CMVAE can more accurately classify the expert-novice level than previous supervised learning methods and anomaly detection methods using other VAEs.
  • Takaaki Higashi, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2021-June 1335 - 1339 2021年 
    This paper presents a method for estimation of visual features based on brain responses measured when subjects view images. The proposed method estimates visual features of viewed images by using both individual and shared brain information from functional magnetic resonance imaging (fMRI) data when subjects view images. To extract an effective latent space shared by multiple subjects from high dimensional fMRI data, a probabilistic generative model that can provide a prior distribution to the space is introduced into the proposed method. Also, the extraction of a robust feature space with respect to noise for the individual information becomes feasible via the proposed probabilistic generative model. This is the first contribution of our method. Furthermore, the proposed method constructs a decoder transforming brain information into visual features based on collaborative use of both estimated spaces for individual and shared brain information. This is the second contribution of our method. Experimental results show that the proposed method improves the estimation accuracy of the visual features of viewed images.
  • Ryosuke Sawata, Takahiro Ogawa, Miki Haseyama
    ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2021-June 1320 - 1324 2021年 
    A method to classify a user's like or dislike musical pieces based on the extraction of his or her music preference is proposed in this paper. New scheme of Canonical Correlation Analysis (CCA), called Deep Time-series CCA (DTCCA), which can consider the correlation between two sets of input features with considering the time-series relation lurked in each input data is exploited to realize the aforementioned classification. One of the most difference between DTCCA and existing other CCAs is enabling to consider the above time-series relation, and thus DTCCA make the individual electroencephalogram (EEG)-based favorite music classification more effective than the methods using one of other CCAs instead of DTCCA since EEG and audio signals are respectively time-series data. Experimental results show that DTCCA-based favorite music classification outperformed not only method using original features without CCA but also methods using other existing CCAs including even state-of-the-art CCA.
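
    As a point of reference for the CCA-based scheme described in this abstract, a minimal sketch using standard linear CCA from scikit-learn on synthetic EEG and audio feature matrices; all array names, shapes, and the downstream classifier are illustrative assumptions, and the paper's DTCCA additionally models time-series structure with a deep network, which this sketch does not do.

    # Illustrative sketch: plain CCA between EEG and audio features, then preference classification.
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_samples = 200
    eeg_feats = rng.normal(size=(n_samples, 64))    # e.g., band-power features per trial
    audio_feats = rng.normal(size=(n_samples, 40))  # e.g., mel-spectrogram statistics
    likes = rng.integers(0, 2, size=n_samples)      # like / dislike labels

    # Project both views into a shared correlated subspace ...
    cca = CCA(n_components=8)
    eeg_c, audio_c = cca.fit_transform(eeg_feats, audio_feats)

    # ... and classify preference from the fused representation.
    clf = LogisticRegression(max_iter=1000).fit(np.hstack([eeg_c, audio_c]), likes)
    print("training accuracy:", clf.score(np.hstack([eeg_c, audio_c]), likes))
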
  • Keisuke Maeda, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    Multim. Tools Appl. 80 15 23091 - 23112 2021年 
    A deterioration level estimation method via neural network maximizing category-based ordinally supervised multi-view canonical correlation is presented in this paper. This paper focuses on real world data such as industrial applications and has two contributions. First, a novel neural network handling multi-modal features transforms original features into features effectively representing deterioration levels in transmission towers, which are one of the infrastructures, with consideration of only correlation maximization. It can be realized by setting projection matrices maximizing correlations between multiple features into weights of hidden layers. That is, since the proposed network has only a few hidden layers, it can be trained from a small amount of training data. Second, since there exist diverse characteristics and an ordinal scale in deterioration levels, the proposed method newly derives category-based ordinally supervised multi-view canonical correlation analysis (Co-sMVCCA). Co-sMVCCA enables estimation of effective projection considering both within-class divergence and the ordinal scale between classes. Experimental results showed that the proposed method realizes accurate deterioration level estimation.
  • Nao Nakagawa, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Access 9 110880 - 110888 2021年 
    We propose a novel method that can learn easy-to-interpret latent representations in real-world image datasets using a VAE-based model by splitting an image into several disjoint regions. Our method performs object-wise disentanglement by exploiting image segmentation and alpha compositing. With remarkable results obtained by unsupervised disentanglement methods for toy datasets, recent studies have tackled challenging disentanglement for real-world image datasets. However, these methods involve deviations from the standard VAE architecture, which has favorable disentanglement properties. Thus, for disentanglement in images of real-world image datasets with preservation of the VAE backbone, we designed an encoder and a decoder that embed an image into disjoint sets of latent variables corresponding to objects. The encoder includes a pre-trained image segmentation network, which allows our model to focus only on representation learning while adopting image segmentation as an inductive bias. Evaluations using real-world image datasets, CelebA and Stanford Cars, showed that our method achieves improved disentanglement and transferability.
  • Kaito Hirasawa, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    IEEE Access 9 84971 - 84981 2021年 
    A novel method for detection of important scenes in baseball videos based on correlation maximization between heterogeneous modalities via bidirectional time lag aware deep multiset canonical correlation analysis (BiTl-dMCCA) is presented in this paper. The proposed method enables detection of important scenes by collaboratively using baseball videos and their corresponding tweets. The technical contributions of this paper are twofold. First, since there are time lags between not only 'tweets and corresponding multiple previous events' but also 'events and corresponding multiple following posted tweets', the proposed method considers these bidirectional time lags. Specifically, the representation of such bidirectional time lags into the derivation of their covariance matrices is newly introduced. Second, the proposed method adopts textual, visual and audio features calculated from tweets and videos as multi-modal time series features. Important scenes are detected as abnormal scenes via anomaly detection based on a generative adversarial network using multi-modal features projected by BiTl-dMCCA. The proposed method does not need any training data with annotation. Experimental results obtained by applying the proposed method to actual baseball matches show the effectiveness of the proposed method.
  • Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    CoRR abs/2104.02864 2021年
  • Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    CoRR abs/2104.02857 2021年
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    LifeTech 2021 - 2021 IEEE 3rd Global Conference on Life Sciences and Technologies 67 - 68 2021年 
    A human emotion estimation method via feature integration using multi-modal variational autoencoder (MVAE) with time changes is presented in this paper. To utilize multimodal information such as gaze and brain activity data including some noises, the proposed method newly introduces MVAE into the human emotion estimation. Furthermore, the proposed MVAE can consider the changes in bio-signals with time and reduce the effect of noises caused in bio-signals by using the probabilistic variation. Experimental results with that of some state-of-the-art methods indicate that the proposed method is effective.
  • Keigo Sakurai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    LifeTech 2021 - 2021 IEEE 3rd Global Conference on Life Sciences and Technologies 53 - 54 2021年 
    Spreading of music streaming platforms that use playlists to make recommendations, automatic playlist generation has been actively researched. Recently, it has been reported that playlists that have high diversity and smooth track transitions increase user satisfaction. Our previous method that used a two-dimensional space as a reinforcement learning environment has achieved these demands, but there remains the problem that the content of multi-dimensional acoustic features cannot be retained accurately. To solve this problem, in this paper, we present a new method of music playlist generation based on reinforcement learning using a graph structure constructed from multi-dimensional acoustic features directly. The new playlist generation provides greater diversity and smoother track transitions than the previous method. Experimental results are shown for verifying the effectiveness of the proposal method.
  • Saya Takada, Ren Togo, Takahiro Ogawa, Miki Haseyama
    LifeTech 2021 - 2021 IEEE 3rd Global Conference on Life Sciences and Technologies 51 - 52 2021年 
    We build a model that can estimate what subjects recognize from functional magnetic resonance imaging (fMRI) data via a visual question answering (VQA) model. The VQA model can generate an answer to a question about an image. We convert fMRI signals into image features via an fMRI decoder based on the relationship between the fMRI signals and the image features extracted from the gazed image. Then this allows the VQA model to answer a visual question from the fMRI signals measured while the subject is gazing at the image. Though brain decoding, which interprets what humans recognize, has become overwhelmingly popular in neuroscience, they often suffer from the small datasets of brain activity data. To overcome the small size of datasets of fMRI signals, we introduce an fMRI decoder based on neural networks that have a high expressive ability. Even when we do not have enough fMRI signals, the proposed method derives the answer to what a person is looking at from fMRI signals. Experimental results on several datasets show that our method allows us to answer a question about gazed images from fMRI signals.
  • Naoki Ogawa, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    IEEE Access 9 65234 - 65245 2021年 
    Distress image retrieval for infrastructure maintenance via self-trained deep metric learning using experts' knowledge is proposed in this paper. Since engineers take multiple images of a single distress part for inspection of road structures, it is necessary to construct a similar distress image retrieval method considering the input of multiple images to support determination of the level of deterioration. Thus, the construction of an image retrieval method while selecting an effective input from multiple images is described in this paper. The proposed method performs deep metric learning by using a small number of effective images labeled by experts' knowledge with information about their effectiveness and a large number of unlabeled images via a self-training approach. Specifically, an end-to-end learning approach that performs retraining of the model by assigning pseudo-labels to these unlabeled images according to the output confidence of the model is achieved. Thus, the proposed method can select an effective image from multiple images that are input at the retrieval as a query image. This is the main contribution of this paper. As a result, the proposed method realizes highly accurate retrieval of similar distress images considering the actual situation of inspection in which multiple images of a distress part are input.
  • Ren Togo, Megumi Kotera, Takahiro Ogawa, Miki Haseyama
    IEEE Access 9 64860 - 64870 2021年 
    A new style transfer-based image manipulation framework combining generative networks and style transfer networks is presented in this paper. Unlike conventional style transfer tasks, we tackle a new task, text-guided image manipulation. We realize style transfer-based image manipulation that does not require any reference style images and generate a style image from the user's input sentence. In our method, since an initial reference input sentence for a content image can automatically be given by an image-to-text model, the user only needs to update the reference sentence. This scheme can help users when they do not have any images representing the desired style. Although this text-guided image manipulation is a new challenging task, quantitative and qualitative comparisons showed the superiority of our method.
  • Ren Togo, Naoki Saito 0006, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    Sensors 21 6 2088 - 2088 2021年 
    A method for prediction of properties of rubber materials utilizing electron microscope images of internal structures taken under multiple conditions is presented in this paper. Electron microscope images of rubber materials are taken under several conditions, and effective conditions for the prediction of properties are different for each rubber material. Novel approaches for the selection and integration of reliable prediction results are used in the proposed method. The proposed method enables selection of reliable results based on prediction intervals that can be derived by the predictors that are each constructed from electron microscope images taken under each condition. By monitoring the relationship between prediction results and prediction intervals derived from the corresponding predictors, it can be determined whether the target prediction results are reliable. Furthermore, the proposed method integrates the selected reliable results based on Dempster–Shafer (DS) evidence theory, and this integration result is regarded as a final prediction result. The DS evidence theory enables integration of multiple prediction results, even if the results are obtained from different imaging conditions. This means that integration can even be realized if electron microscope images of each material are taken under different conditions and even if these conditions are different for target materials. This nonconventional approach is suitable for our application, i.e., property prediction. Experiments on rubber material data showed that the evaluation index mean absolute percent error (MAPE) was under 10% by the proposed method. The performance of the proposed method outperformed conventional comparative property estimation methods. Consequently, the proposed method can realize accurate prediction of the properties with consideration of the characteristic of electron microscope images described above.
  • Kaito Hirasawa, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    Sensors 21 6 2045 - 2045 2021年 
    A new method for the detection of important scenes in baseball videos via a time-lag-aware multimodal variational autoencoder (Tl-MVAE) is presented in this paper. Tl-MVAE estimates latent features calculated from tweet, video, and audio features extracted from tweets and videos. Then, important scenes are detected by estimating the probability of a scene being important from the estimated latent features. It should be noted that there exist time-lags between tweets posted by users and the videos. To consider the time-lags between tweet features and the other features calculated from the corresponding multiple previous events, a feature transformation based on feature correlation that accounts for such time-lags is newly introduced into the encoder of the MVAE in the proposed method. This is the biggest contribution of the Tl-MVAE. Experimental results obtained from actual baseball videos and their corresponding tweets show the effectiveness of the proposed method.
  • Yusuke Akamatsu, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    IEEE Access 9 26593 - 26606 2021年 
    Decoding a person's cognitive contents from evoked brain activity is becoming important in the field of brain-computer interaction. Previous studies have decoded a perceived image from functional magnetic resonance imaging (fMRI) activity by constructing brain decoding models that were trained with a single subject's fMRI data. However, accurate decoding is still challenging since fMRI data acquired from only a single subject have several disadvantageous characteristics such as small sample size, noisy nature, and high dimensionality. In this article, we propose a method to decode categories of perceived images from fMRI activity using shared information of multi-subject fMRI data. Specifically, by aggregating fMRI data of multiple subjects that contain a large number of samples, we extract a low-dimensional latent representation shared by multi-subject fMRI data. Then the latent representation is nonlinearly transformed into visual features and semantic features of the perceived images to identify categories from various candidate categories. Our approach leverages rich information obtained from multi-subject fMRI data and improves the decoding performance. Experimental results obtained by using two public fMRI datasets showed that the proposed method can more accurately decode categories of perceived images from fMRI activity than previous approaches using a single subject's fMRI data.
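    A minimal sketch of the multi-subject idea, assuming toy data: fMRI samples from several subjects are aggregated, a shared low-dimensional latent representation is extracted, and the latent codes are regressed to visual features for category identification. PCA and ridge regression stand in for the paper's actual latent-variable model.

```python
# Sketch: aggregate multi-subject fMRI, extract a shared latent space, map the
# latent codes to visual features, then identify the closest category prototype.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
subjects = [rng.normal(size=(80, 300)) for _ in range(3)]   # toy fMRI per subject
fmri_all = np.vstack(subjects)                               # aggregated samples
visual_feats = rng.normal(size=(240, 50))                    # features of viewed images

latent = PCA(n_components=20).fit(fmri_all)                  # shared latent space
z = latent.transform(fmri_all)
reg = Ridge(alpha=1.0).fit(z, visual_feats)                  # latent -> visual features

# Identify the category whose prototype feature is closest to the prediction.
prototypes = rng.normal(size=(10, 50))                       # one per candidate category
pred = reg.predict(latent.transform(rng.normal(size=(1, 300))))
print(int(np.argmax(prototypes @ pred.T)))
```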
  • Masanao Matsumoto, Naoki Saito 0006, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    IEEE Access 9 21810 - 21822 2021年 
    Supervised fractional-order embedding multiview canonical correlation analysis via ordinal label dequantization (SFEMCCA-OLD) for image interest estimation is presented in this paper. SFEMCCA-OLD is a CCA method that realizes accurate integration of features including low-dimensional ordinal label features. In general, since information is lost owing to the limited number of classes, i.e., the dimension of the ordinal label information is smaller than those of the other features, highly accurate integration of features is difficult to derive. In SFEMCCA-OLD, the dimension of the ordinal label information can be increased by estimation of the canonical correlation between multiview features. We call this approach ordinal label dequantization. In addition, by introducing a fractional-order technique, our method can calculate optimal projections for noisy data such as real data. Experimental results show that the accuracy of SFEMCCA-OLD for image interest estimation is better than that of recent CCA-based methods.
  • 山本健太郎, 藤後廉, 小川貴弘, 長谷山美紀
    土木学会論文集 F3(土木情報学)(Web) 77 1 794 - 795 2021年 [査読有り][通常論文]
  • Rio Doya, Shouta M.M. Nakayama, Hokuto Nakata, Haruya Toyomaki, John Yabe, Kaampwe Muzandu, Yared B. Yohannes, Andrew Kataba, Golden Zyambo, Takahiro Ogawa, Yoshitaka Uchida, Yoshinori Ikenaka, Mayumi Ishizuka
    Environmental Science and Technology 54 22 14474 - 14481 2020年11月17日 
    We investigated the potential effects of different land use and other environmental factors on animals living in a contaminated environment. The study site in Kabwe, Zambia, is currently undergoing urban expansion, while lead contamination from former mining activities is still prevalent. We focused on a habitat-generalist lizard (Trachylepis wahlbergii). The livers, lungs, blood, and stomach contents of 224 lizards were analyzed for their lead, zinc, cadmium, copper, nickel, and arsenic concentrations. Habitat types were categorized based on vegetation data obtained from satellite images. Multiple regression analysis revealed that the land use categories of habitats and three other factors significantly affected lead concentrations in the lizards. Further investigation suggested that the lead concentrations in lizards living in bare fields were higher than expected based on the distance from the contaminant source, while those in lizards living in green fields were lower than expected. In addition, the lead concentration of the lungs was higher than that of the liver in 19% of the lizards, implying direct exposure to lead via dust inhalation in addition to digestive exposure. Since vegetation reduces the production of dust from surface soil, it is plausible that dust from the mine is one of the contamination sources and that vegetation can reduce exposure to it.
  • Kaito Hirasawa, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2020 IEEE International Conference on Consumer Electronics - Taiwan, ICCE-Taiwan 2020 2020年09月28日 
    This paper presents an important scene detection method based on anomaly detection using a Long Short-Term Memory (LSTM) for baseball highlight generation. In order to deal with multi-view time series features calculated from tweets and videos, we adopt an anomaly detection method using LSTM. LSTM, which can maintain a long-term memory, is effective for training on such features. The introduction of LSTM into important scene detection for baseball videos is the biggest contribution of this paper. Experimental results show the high detection performance of our method.
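    A minimal sketch of LSTM-based anomaly scoring over multi-view time-series features; the one-step-ahead forecasting formulation and all dimensions below are illustrative assumptions rather than the paper's exact architecture.

```python
# Sketch: train an LSTM to forecast the next feature frame and treat frames
# with large prediction error as anomalous, i.e., candidate "important" scenes.
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    def __init__(self, dim=16, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)

    def forward(self, x):                        # x: (batch, time, dim)
        out, _ = self.lstm(x)
        return self.head(out)                    # one-step-ahead prediction

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
series = torch.randn(8, 50, 16)                  # toy multi-view feature sequences
for _ in range(100):                             # train to predict the next frame
    pred = model(series[:, :-1])
    loss = nn.functional.mse_loss(pred, series[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()

# Frames with large prediction error are scored as anomalous.
with torch.no_grad():
    err = ((model(series[:, :-1]) - series[:, 1:]) ** 2).mean(dim=-1)
```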
  • Genki Suzuki, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    2020 IEEE International Conference on Consumer Electronics - Taiwan, ICCE-Taiwan 2020 2020年09月28日 
    A novel method for estimating candidate regions for superimposing information in soccer videos based on gaze tracking data is presented in this paper. The proposed method generates a likelihood map from visual attention regions derived from the gaze tracking data and from detection results of objects such as players and soccer goals in soccer videos. Candidate regions for superimposing information are estimated by using the likelihood map. Experimental results show that the proposed method realizes effective candidate region estimation.
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2020 IEEE International Conference on Consumer Electronics - Taiwan, ICCE-Taiwan 2020 2020年09月28日 
    This paper presents a method for estimation of person-specific visual attention based on the estimated visual attention of similar persons. To improve the estimation performance of person-specific visual attention, the proposed method uses a dataset including a large number of images and the corresponding gaze data of many persons other than the target person and trains an estimation model based on deep learning. By using the estimated visual attention of similar persons for the target image, the proposed method estimates the visual attention of the target person with only a small amount of gaze data. Experimental results show that the proposed method is effective for estimation of person-specific visual attention.
  • Misaki Kanai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    World Journal of Gastroenterology 26 25 3650 - 3659 2020年07月07日 
    BACKGROUND The risk of gastric cancer increases in patients with Helicobacter pylori-associated chronic atrophic gastritis (CAG). X-ray examination can evaluate the condition of the stomach, and it can be used for gastric cancer mass screening. However, the number of doctors skilled in interpreting X-ray examinations is decreasing owing to the diversification of inspections. AIM To evaluate the effectiveness of stomach regions that are automatically estimated by a deep learning-based model for CAG detection. METHODS We used 815 gastric X-ray images (GXIs) obtained from 815 subjects. The ground truth of this study was the diagnostic results in X-ray and endoscopic examinations. For a part of the GXIs for training, the stomach regions are manually annotated. A model for automatic estimation of the stomach regions is trained with these GXIs. For the rest of them, the stomach regions are automatically estimated. Finally, a model for automatic CAG detection is trained with all GXIs for training. RESULTS In the cases where the stomach regions were manually annotated for only 10 GXIs and 30 GXIs, the harmonic means of sensitivity and specificity of CAG detection were 0.955 ± 0.002 and 0.963 ± 0.004, respectively. CONCLUSION By estimating stomach regions automatically, our method contributes to the reduction of the workload of manual annotation and the accurate detection of CAG.
  • 胃X線画像を用いたAIによるH.pylori感染識別と今後の展望
    藤後 廉, 小川 貴弘, 間部 克裕, 加藤 元嗣, 長谷山 美紀
    日本消化器がん検診学会雑誌 58 2 127 - 127 (一社)日本消化器がん検診学会 2020年03月
  • Keisuke Maeda, Susumu Genma, Takahiro Ogawa, Miki Haseyama
    ITE Transactions on Media Technology and Applications 8 3 140 - 150 2020年 
    A method for image retrieval based on supervised local regression and global alignment (sLRGA) with relevance feedback for insect identification is presented in this paper. Based on the novel sLRGA, which is an extended version of LRGA, the proposed method estimates ranking scores for image retrieval in such a way that the neighborhood structure of the feature space of the database can be optimally preserved with consideration of class information. This is the main contribution of this paper. By measuring the relevance between the query image and all of the images in the database, sLRGA realizes accurate image retrieval. Furthermore, when positive/negative labels for retrieved images are given by users, the proposed method can improve image retrieval performance considering the query relevance information via use of both relevance feedback and sLRGA. This is the second contribution of this paper. Experimental results show the effectiveness of the proposed method.
  • Genki Suzuki, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    ITE Transactions on Media Technology and Applications 8 3 151 - 160 2020年 
    A novel method for player importance prediction from a player network using gaze positions estimated by Long Short-Term Memory (LSTM) in soccer videos is presented in this paper. By newly using an estimation model of gaze positions trained by gaze tracking data of experienced persons, it is expected that the importance of each player can be predicted. First, we generate a player network by utilizing the estimated gaze positions and first-arrival regions representing players' connections, e.g., passes between players. The gaze positions are estimated by LSTM that is newly trained from the gaze tracking data of experienced persons. Second, the proposed method predicts the importance of each player by applying the Hypertext Induced Topic Selection (HITS) algorithm to the constructed network. Consequently, prediction of the importance of each player based on soccer tactic knowledge of experienced persons can be realized without constantly obtaining gaze tracking data.
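    The HITS step can be sketched directly with networkx; the toy pass connections below stand in for the player network that the paper builds from LSTM-estimated gaze positions and first-arrival regions, so the node names and edges are illustrative assumptions.

```python
# Sketch: rank player importance with the HITS algorithm on a directed
# player network whose edges represent pass connections.
import networkx as nx

passes = [("GK", "DF1"), ("DF1", "MF1"), ("MF1", "FW1"),
          ("MF1", "FW2"), ("DF1", "MF2"), ("MF2", "FW1")]
G = nx.DiGraph()
G.add_edges_from(passes)

hubs, authorities = nx.hits(G, max_iter=1000)    # HITS importance scores
ranking = sorted(authorities, key=authorities.get, reverse=True)
print(ranking)                                    # players ranked by authority score
```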
  • Kazaha Horii, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    ITE Transactions on Media Technology and Applications 8 2 111 - 124 2020年 
    An interpretable convolutional neural network (CNN) including attribute estimation for image classification is presented in this paper. Although CNNs perform highly accurate image classification, the reason for the classification results obtained by the neural networks is not clear. In order to provide interpretation of CNNs, the proposed method estimates attributes, which explain elements of objects, in an intermediate layer of the network. This enables improvement of the interpretability of CNNs, and it is the main contribution of this paper. Furthermore, the proposed method uses the estimated attributes for image classification in order to enhance its accuracy. Consequently, the proposed method not only provides interpretation of CNNs but also realizes improvement in the performance of image classification.
  • Tomoki Haruyama, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    ITE Transactions on Media Technology and Applications 8 2 89 - 99 2020年 
    The details of soccer matches can be estimated from visual and audio sequences, and they correspond to the occurrence of important scenes. Therefore, the use of these sequences is suitable for important scene detection. In this paper, a new multimodal method for important scene detection from visual and audio sequences in far-view soccer videos based on a single deep neural architecture is presented. A unique point of our method is that multiple classifiers can be realized by a single deep neural architecture that includes a Convolutional Neural Network-based feature extractor and a Support Vector Machine-based classifier. This approach provides a solution to the problem of not being able to simultaneously optimize multiple different deep neural architectures from a small amount of training data. We then monitor confidence measures output from this architecture for the multimodal data and enable their integration to obtain the final classification result.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2020 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS - TAIWAN (ICCE-TAIWAN) 2020年 
    Text-based image retrieval is a fundamental study in the field of information retrieval. Recent text-based image retrieval methods employ deep neural networks (hereinafter referred to as deep neural TBIR) to retrieve a desired image from a sentence query and achieve state-of-the-art performance in TBIR. To further improve the retrieval performance of the deep neural TBIR method, it is essential to prepare diverse sentence labels in the training data. However, preparing such diverse sentence labels takes a lot of effort. To address this problem, we propose a novel deep neural TBIR method with data augmentation of the sentence labels in training data. Experimental results show the effectiveness of the proposed method.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    Proceedings of the 2nd ACM International Conference on Multimedia in Asia, MMAsia 2020 37 - 7 2020年 
    Cross-modal retrieval methods retrieve desired images from a query text by learning relationships between texts and images. This retrieval approach is one of the most effective in terms of the ease of query preparation. Recent cross-modal retrieval is convenient and accurate when users input a query text that can uniquely identify the desired image. Meanwhile, users frequently input ambiguous query texts, and these ambiguous queries make it difficult to obtain the desired images. To alleviate these difficulties, in this paper, we propose a novel interactive cross-modal retrieval method based on question answering (QA) with users. The proposed method analyzes candidate images and asks users about information that can narrow down the retrieval candidates effectively. By only answering the questions generated by the proposed method, users can reach their desired images even from an ambiguous query text. Experimental results show the effectiveness of the proposed method.
  • Tomoki Haruyama, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    Proceedings of the 2nd ACM International Conference on Multimedia in Asia, MMAsia 2020 27 - 8 2020年 
    This paper presents a novel method to retrieve similar scenes in soccer videos with weak annotations via multimodal use of bidirectional long short-term memory (BiLSTM). The significant increase in the number of different types of soccer videos with the development of technology brings valid assets for effective coaching, but it also increases the work of players and training staff. We tackle this problem with a nontraditional combination of pre-trained models for feature extraction and BiLSTMs for feature transformation. By using the pre-trained models, no training data are required for feature extraction. Then effective feature transformation for similarity calculation is performed by applying a BiLSTM trained with weak annotations. This transformation allows for highly accurate capture of soccer video context with less annotation work. In this paper, we achieve accurate retrieval of similar scenes by multimodal use of this BiLSTM-based transformer trainable with less human effort. The effectiveness of our method was verified by comparative experiments with state-of-the-art methods using an actual soccer video dataset.
  • 藤後廉, 小川貴弘, 間部克裕, 加藤元嗣, 長谷山美紀
    日本消化器がん検診学会雑誌(Web) 58 2 2020年
  • 前田圭介, 斉藤僚汰, 高橋翔, 小川貴弘, 長谷山美紀
    土木学会論文集 F3(土木情報学)(Web) 76 1 2020年
  • Genki Suzuki, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    LifeTech 2020 - 2020 IEEE 2nd Global Conference on Life Sciences and Technologies 111 - 112 2020年 
    This paper presents a quantitative analysis of engineers' skills using wearable sensor data obtained while inspecting a highway bridge. This paper analyzes differences in behavior between experienced and inexperienced engineers by using bio-signals obtained from wearable sensor data during inspections. Specifically, by analyzing each engineer's movement distance and number of steps during inspections, the differences in behavior patterns between experienced and inexperienced engineers can be revealed.
  • Saya Takada, Ren Togo, Takahiro Ogawa, Miki Haseyama
    LifeTech 2020 - 2020 IEEE 2nd Global Conference on Life Sciences and Technologies 99 - 100 2020年 
    We propose a model for free-form visual question answering (VQA) from human brain activity. The task of VQA is to derive an answer given an image and a question about the image. Given brain activity data measured by functional magnetic resonance imaging (fMRI) and a natural language question about the viewed image, the proposed method can provide an accurate natural language answer with the VQA algorithm. Visual questions selectively target various areas of an image such as objects and backgrounds. As a result, a more detailed understanding of the image and more complex reasoning are typically needed than in general image captioning models. In this paper, we propose a method of answering a given question about a viewed image from fMRI data based on the VQA algorithm. We estimate the relationship between fMRI data and visual features extracted from viewed images. Based on this relationship, we convert fMRI data into visual features. Finally, the proposed method can answer a visual question from fMRI data measured while subjects are viewing images. Experimental results show that the proposed method enables accurate answering of questions about viewed images.
  • Naoki Ogawa, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    LifeTech 2020 - 2020 IEEE 2nd Global Conference on Life Sciences and Technologies 97 - 98 2020年 
    This paper proposes distress level classification of road infrastructures via convolutional neural networks (CNNs) generating an attention map. In the proposed method, performance of distress level classification is improved by utilizing an attention map to emphasize locations contributing to classification results. Furthermore, the attention map is also utilized to explain the reasons for the classification results. Thus, the proposed method realizes both performance improvement of distress level classification and provision of the interpretability for the classification results. Finally, experimental results verify the effectiveness of the proposed method.
  • Masanao Matsumoto, Naoki Saito 0006, Takahiro Ogawa, Miki Haseyama
    LifeTech 2020 - 2020 IEEE 2nd Global Conference on Life Sciences and Technologies 3 - 4 2020年 
    This paper presents a new method to realize estimation performance improvement of user-specific interests for images. The proposed method computes projections which transform visual and text features to eye gaze-based features that reflect user's interests by utilizing discriminative locality preserving canonical correlation analysis (DLPCCA). DLPCCA can calculate projections suitable for interest estimation by considering the class information and locality of data structure. Experimental results are shown for verifying the effectiveness of our method.
  • Kaito Hirasawa, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2020 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2020 1 - 6 2020年 
    This paper presents a multi-view unsupervised generative adversarial network maximizing time-lag aware canonical correlation (MvGAN) for baseball highlight generation. MvGAN has the following two contributions. First, MvGAN utilizes textual, visual and audio features calculated from tweets and videos as multi-view features. MvGAN, which adopts these multi-view features, is an effective approach for highlight generation from baseball videos. Second, since there is a temporal difference between posted tweets and the corresponding events, MvGAN introduces a novel feature embedding scheme considering the time-lag between textual features and the other features. Specifically, the proposed method newly derives the time-lag aware canonical correlation maximization of these multi-view features. This is the biggest contribution of this paper. Furthermore, since MvGAN is an unsupervised method for highlight generation, a large amount of annotated training data is not needed. Thus, the proposed method has high applicability to the real world.
  • Saya Takada, Ren Togo, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2020-October 2521 - 2525 2020年 
    Generation of human cognitive contents based on the analysis of functional magnetic resonance imaging (fMRI) data has been actively researched. Cognitive contents such as viewed images can be estimated by analyzing the relationship between fMRI data and the semantic information of viewed images. In this paper, we propose a new method for generating captions for viewed images from human brain activity via a novel robust regression scheme. Unlike conventional generation methods using image feature representations, the proposed method makes use of more semantic text feature representations, which are more suitable for caption generation. We construct a text latent space with unlabeled images not used for the training, and the fMRI data are regressed to the text latent space. In this way, we newly make use of unlabeled images not used in the training phase to improve caption generation performance. Finally, the proposed method can generate captions from fMRI data measured while subjects are viewing images. Experimental results show that the proposed method enables accurate caption generation for viewed images.
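    A minimal sketch of regressing fMRI features into a text latent space and retrieving the closest caption; the paper uses a novel robust regression scheme and a caption generator, so plain ridge regression and nearest-neighbor lookup here, along with all dimensions, are simplifying assumptions.

```python
# Sketch: map fMRI data into a text latent space, then pick the closest caption.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_voxels, latent_dim = 500, 64
fmri_train = rng.normal(size=(100, n_voxels))             # toy fMRI samples
text_latent_train = rng.normal(size=(100, latent_dim))    # sentence embeddings

reg = Ridge(alpha=10.0).fit(fmri_train, text_latent_train)

candidate_captions = ["a dog on grass", "a city street at night", "a bowl of fruit"]
candidate_latents = rng.normal(size=(3, latent_dim))       # embeddings of the captions

fmri_test = rng.normal(size=(1, n_voxels))
pred = reg.predict(fmri_test)                               # predicted text latent
idx = np.argmax(candidate_latents @ pred.T)                 # most similar caption
print(candidate_captions[int(idx)])
```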
  • Ren Togo, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2020-October 2466 - 2470 2020年 
    We present a new multimodal image-to-image translation model for the generation of gastritis images using X-ray and blood inspection results. In clinical situations, clinicians estimate the prognosis of the target disease by considering multiple inspection results. Similarly, we take a multimodal approach in the task of gastric cancer risk prediction. Visual characteristics of the gastric X-ray image and blood index values are highly related in the evaluation of gastric cancer risk. If we can generate a prediction image from blood index values, it contributes to the clinicians' sophisticated and integrated diagnosis. Hence, we learn a model that can map non-gastritis images to gastritis images based on the blood index values. Although this is a challenging multimodal task in medical image analysis, experimental results showed the effectiveness of our model.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) 2020-October 2431 - 2435 2020年 
    A new approach that improves text-based image retrieval (hereinafter referred to as TBIR) performance is proposed in this paper. TBIR methods aim to retrieve a desired image related to a query text. Especially, recent TBIR methods allow us to retrieve images considering word relationships by using a sentence as a query. In these TBIR methods, it is necessary to uniquely identify a desired image from similar images using a single query sentence. However, the diverse expressive styles for a query sentence make it difficult to uniquely identify a desired image. In this paper, we propose a novel TBIR method with paraphrasing on multiple representation spaces. Specifically, by paraphrasing a query sentence on lingual and visual representation spaces, the proposed method can retrieve a desired image from various perspectives and then it can uniquely identify a desired image from similar images. Comprehensive experimental results show the effectiveness of the proposed method.
  • Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2020-October 2426 - 2430 2020年 
    Unsupervised domain adaptation, which transfers supervised knowledge from a labeled domain to an unlabeled domain, remains a tough problem in the field of computer vision, especially for semantic segmentation. Some methods inspired by adversarial learning and semi-supervised learning have been developed for unsupervised domain adaptation in semantic segmentation and achieved outstanding performances. In this paper, we propose a novel method for this task. Like adversarial learning-based methods using a discriminator to align the feature distributions from different domains, we employ a variational autoencoder to reach the same destination but in a non-adversarial manner. Since the two approaches are compatible, we also integrate an adversarial loss into our method. By further introducing pseudo labels, our method can achieve state-of-the-art performances on two benchmark adaptation scenarios, GTA5-to-CITYSCAPES and SYNTHIA-to-CITYSCAPES.
  • Kaito Hirasawa, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2020-October 1236 - 1240 2020年 
    This paper presents a new important scene detection method for baseball videos based on correlation maximization between heterogeneous modalities via time-lag aware deep multiset canonical correlation analysis (Tl-dMCCA). The technical contributions of this paper are twofold. First, textual, visual and audio features calculated from tweets and videos are adopted as multi-view time series features. Since Tl-dMCCA, which utilizes these features, includes an unsupervised embedding scheme via deep networks, the proposed method can flexibly express the relationship between heterogeneous features. Second, since there is a time-lag between posted tweets and the corresponding multiple previous events, Tl-dMCCA considers the time-lag relationships between them. Specifically, we newly introduce the representation of such time-lags into the derivation of their covariance matrices. By considering time-lags via Tl-dMCCA, the proposed method correctly detects important scenes.
  • Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2020-October 305 - 309 2020年 
    This paper presents a soft-label anonymous gastric X-ray image distillation method based on a gradient descent approach. The sharing of medical data is demanded to construct high-accuracy computer-aided diagnosis (CAD) systems. However, the large size of medical datasets and privacy protection are remaining problems in medical data sharing, which hinder research on CAD systems. The idea of our distillation method is to extract the valid information of the medical dataset and generate a tiny distilled dataset that has a different data distribution. Different from model distillation, our method aims to find the optimal distilled images, distilled labels and the optimized learning rate. Experimental results show that the proposed method can not only effectively compress the medical dataset but also anonymize medical images to protect the patients' private information. The proposed approach can improve the efficiency and security of medical data sharing.
  • Saya Takada, Ren Togo, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2020-October 61 - 65 2020年 
    We propose an estimation method for free-form Visual Question Answering (VQA) from human brain activity, brain decoding VQA. The task of VQA in the field of computer vision is generating an answer given an image and a question about its contents. The proposed method can answer arbitrary visual questions about images from brain activity measured by functional Magnetic Resonance Imaging (fMRI) while subjects view the same images. We enable estimation of various kinds of information from brain activity via a unique VQA model, which can realize a more detailed understanding of images and complex reasoning. In addition, we newly make use of unlabeled images not used in the training phase to improve the performance of the transformation, since fMRI datasets are generally small. The proposed method can answer a visual question from a small amount of fMRI data measured while subjects are viewing images.
  • Keisuke Maeda, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2020-October 46 - 50 2020年 
    This paper presents feature integration via geometrical supervised multi-view multi-label canonical correlation analysis (GSM2CCA) for incomplete label assignment. The problem of incomplete labels is frequently encountered in multi-label classification where the training labels are obtained via crowd-sourcing. In such a situation, consideration of only the label correlation, which is the basic approach, is not sufficient to improve the representation ability of features. To deal with the incomplete label assignment, GSM2CCA constructs an effective feature embedding space that provides discriminant ability by introducing both the multi-label correlation and the feature similarity of the original feature space into its objective function. Since novel integrated features with high discriminant ability can be calculated by GSM2CCA, performance improvement of multi-label classification with incomplete label assignment is realized. The main contribution of this paper is the realization of effective feature integration via the combined use of label similarity and locality-preserving projection of heterogeneous features to solve the problem of incomplete label assignment. The effectiveness of GSM2CCA is verified via experimental results in which GSM2CCA-based feature integration is applied to heterogeneous features calculated from various convolutional neural network models.
  • Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2020-May 2263 - 2267 2020年 
    Unsupervised domain adaptation, which leverages label information from other domains to solve tasks on a domain without any labels, can alleviate the problem of the scarcity of labels and expensive labeling costs faced by supervised semantic segmentation. In this paper, we utilize adversarial learning and semi-supervised learning simultaneously to solve the task of unsupervised domain adaptation in semantic segmentation. We propose a new approach that trains two segmentation models with the adversarial learning symmetrically and further introduces the consistency between the outputs of the two models into the semi-supervised learning to improve the accuracy of pseudo labels which significantly affect the final adaptation performance. We achieve state-of-the-art semantic segmentation performance on the GTA5-to-Cityscapes scenario, a widely used benchmark setting in unsupervised domain adaptation.
  • Yusuke Akamatsu, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2020-May 1215 - 1219 2020年 
    Brain decoding studies have demonstrated that viewed image categories can be estimated from human functional magnetic resonance imaging (fMRI) activity. However, there are still limitations with the estimation performance because of the characteristics of fMRI data and the employment of only one modality extracted from viewed images. In this paper, we propose a multi-view Bayesian generative model for multi-subject fMRI data to estimate viewed image categories from fMRI activity. The proposed method derives effective representations of fMRI activity by utilizing multi-subject fMRI data. In addition, we associate fMRI activity with multiple modalities, i.e., visual features and semantic features extracted from viewed images. Experimental results show that the proposed method outperforms existing state-of-the-art methods of brain decoding.
  • Kyohei Kamikawa, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2020 IEEE 9th Global Conference on Consumer Electronics, GCCE 2020 944 - 945 2020年 
    This paper presents a method for interest level estimation based on feature integration considering the distribution of partially paired user behavior, videos and posters. The proposed method collaboratively uses videos, their corresponding poster images, and user behavior to estimate interest levels. To deal with the multi-view data, the proposed method newly derives semi-supervised divergence-aware multi-set canonical correlation analysis (SDMCCA). SDMCCA has two contributions. First, since the number of viewable videos is limited, consideration of unviewed videos is necessary, and SDMCCA adopts a semi-supervised approach. Second, the information that each data sample has differs depending on its similarity to other data. SDMCCA simultaneously addresses these two points and realizes successful interest level estimation. Experimental results show the effectiveness of the proposed method.
  • Keigo Sakurai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2020 IEEE 9th Global Conference on Consumer Electronics, GCCE 2020 942 - 943 2020年 
    With the spread of streaming services that make playlist-based recommendations, automatic music playlist generation has been actively researched. Traditionally, most music playlist generation methods have focused on the sounds and the genres of songs that a user has listened to. However, users become bored and unsatisfied with playlists that fit their preferences too closely. In this paper, we propose a new music playlist generation method based on reinforcement learning using an acoustic feature map. Our reinforcement learning-based music playlist generation can take the whole set of songs into view since an agent explores all of the features on the map. This new playlist generation can achieve high diversity and smooth track transitions. Experimental results show the effectiveness of the proposed method.
  • Yun Liang 0014, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2020 IEEE 9th Global Conference on Consumer Electronics, GCCE 2020 940 - 941 2020年 
    This paper presents a method that can match images with audio-induced brain activity via modified deep generalized canonical correlation analysis (modified DGCCA). Modified DGCCA, which contains the Pearson correlation coefficient in its loss function, is newly derived for embedding not only images and audio but also audio-induced brain activity calculated from functional near-infrared spectroscopy (fNIRS) into the same feature space. By inputting audio and its corresponding brain activity into modified DGCCA, the estimation of their best matched images becomes feasible. Experimental results show the effectiveness of the proposed method. The main contribution of this paper is the introduction of a new method for matching images with audio-induced brain activity.
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2020 IEEE 9th Global Conference on Consumer Electronics, GCCE 2020 745 - 746 2020年 
    An estimation method of user-specific visual attention considering individual tendencies toward gazed objects is presented in this paper. To realize user-specific visual attention estimation for images, it can be effective to use the past gaze tendency of a target user and gaze data obtained from other users. However, the collaborative use of this information is difficult since there may be no gaze data obtained from other users. The proposed method therefore focuses on the saliency map and enables the collaborative use of this information for estimation of user-specific visual attention. Experiments confirmed that the estimation accuracy of the proposed method is improved by approximately 20% compared to a state-of-the-art saliency estimation method.
  • Takaaki Higashi, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2020 IEEE 9th Global Conference on Consumer Electronics, GCCE 2020 716 - 717 2020年 
    This paper presents a method to estimate viewed images by using brain responses measured when persons view images. The proposed method predicts visual features of viewed images from not only individual brain responses but also shared brain responses of multiple subjects. Specifically, the proposed method simultaneously estimates the following two projection matrices: a matrix that converts individual brain responses into visual features and a matrix that converts shared brain responses into visual features. The collaborative use of two types of brain responses is the biggest contribution of this paper. Experimental results show that the proposed method improves the estimation accuracy in comparison with methods considering only individual or shared brain responses.
  • Taisei Hirakawa, Keisuke Maeda, Takahiro Ogawa, Satoshi Asamizu, Miki Haseyama
    2020 IEEE 9th Global Conference on Consumer Electronics, GCCE 2020 714 - 715 2020年 
    This paper presents cross-domain recommendation via multi-layer graph analysis using user-item embedding. The proposed method constructs two graphs, in the source and target domains respectively, utilizing user-item embedding. By training the relationship between the users' embedding features in these two graphs, the proposed method realizes cross-domain recommendation via the multi-layer graphs. This is the main contribution of this paper. The proposed method can then estimate a user's embedding in the target domain from that in the source domain. Finally, our method can recommend items to users via the estimated user embedding. Experiments on real-world datasets verify the effectiveness of the proposed method.
  • Saya Takada, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2020 IEEE 9th Global Conference on Consumer Electronics, GCCE 2020 712 - 713 2020年 
    Estimating seen image contents based on the analysis of functional magnetic resonance imaging (fMRI) data has been actively researched. So far, it has been necessary to train a model with individual fMRI data and to construct models for each task. In this paper, we propose an estimation method via Visual Question Answering (VQA) from multi-subject fMRI data. The task of VQA in the field of computer vision is generating answers when given an image and questions about its contents. The proposed method enables generating accurate answers via the VQA model when given fMRI data and questions about the seen images. Besides, we newly introduce training with multi-subject fMRI data since fMRI datasets are generally small owing to the burden on subjects. We can then realize the estimation of the contents of seen images from the fMRI data of a subject whose data are not used in the training phase.
  • Nao Nakagawa, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2020 IEEE 9th Global Conference on Consumer Electronics, GCCE 2020 692 - 693 2020年 
    In this paper, we propose a novel face synthesis method whose process can be conditioned not only on labels but also on latent variables as 'unknown' labels corresponding to unrevealed factors. We extend the Variational Autoencoder (VAE) without any additional networks to introduce conditional generation, disentangled representation, and adversarial learning into one autoencoder. Since previous conditional generative models require the annotation of labels to condition on, disentanglement, i.e., the unsupervised discovery of generative factors, enables users to generate face images more flexibly and more efficiently. Moreover, although generative adversarial networks (GANs) have problems of mode collapse and instability of the learning process, adversarial learning on the VAE in an introspective way achieves both variation of results and stability of generation. Evaluations on the CelebFaces Attributes Dataset (CelebA) show that our method can generate face images following users' conditioning on both the known and the 'unknown' labels.
  • Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2020 IEEE 9th Global Conference on Consumer Electronics, GCCE 2020 667 - 669 2020年 
    Deep convolutional neural networks (DCNNs) have become popular for medical image classification problems in recent years. However, training a DCNN model on a sizeable medical dataset requires repeated manipulation to achieve the desired results and hence is time-consuming. Since there is an inevitable link between DCNN training results and the complexity of the medical dataset, it is essential to accurately evaluate the medical dataset's complexity before training the DCNN models. In this paper, we propose an efficient method to assess the medical dataset's complexity based on spectral clustering. The experimental results show that the medical dataset complexity calculated with our approach is not time-consuming to compute and has a high correlation with DCNN test accuracy.
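    A minimal sketch of a spectral-clustering-based complexity proxy, assuming toy features: cluster the data without labels and check how well the clusters agree with the class labels. The exact complexity score used in the paper is not reproduced here; normalized mutual information is an illustrative stand-in.

```python
# Sketch: low agreement between spectral clusters and class labels suggests
# overlapping classes, i.e., a "complex" dataset.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0, 1, size=(50, 32)),
                      rng.normal(3, 1, size=(50, 32))])     # toy 2-class features
labels = np.array([0] * 50 + [1] * 50)

clusters = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                              random_state=0).fit_predict(features)
agreement = normalized_mutual_info_score(labels, clusters)
complexity = 1.0 - agreement
print(round(complexity, 3))
```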
  • Kaito Hirasawa, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2020 IEEE 9th Global Conference on Consumer Electronics, GCCE 2020 636 - 637 2020年 
    This paper presents a novel important scene prediction method using baseball videos and tweets on Twitter based on a Long Short-Term Memory (LSTM). To consider both baseball videos and tweets, the proposed method utilizes textual, visual and audio features. This is the first work to introduce these multi-modal features into important scene prediction for baseball videos. In order to deal with multi-modal time-series features constructed from textual, visual and audio features, the proposed method adopts an LSTM, which is effective for training on such multimodal time-series features by maintaining a long-term memory. The effectiveness of the proposed method is confirmed by experimental results.
  • Yusuke Akamatsu, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    IEEE Trans. Signal Process. 68 5769 - 5781 2020年 
    Brain decoding has shown that viewed image categories can be estimated from evoked functional magnetic resonance imaging (fMRI) activity. Recent studies attempted to estimate viewed image categories that were not used for training previously. Nevertheless, the estimation performance is limited since it is difficult to collect a large amount of fMRI data for training. This paper presents a method to accurately estimate viewed image categories not used for training via a semi-supervised multi-view Bayesian generative model. Our model focuses on the relationship between fMRI activity and multiple modalities, i.e., visual features extracted from viewed images and semantic features obtained from viewed image categories. Furthermore, in order to accurately estimate image categories not used for training, our semi-supervised framework incorporates visual and semantic features obtained from additional image categories in addition to image categories of training data. The estimation performance of the proposed model outperforms existing state-of-the-art models in the brain decoding field and achieves more than 95% identification accuracy. The results also have shown that the incorporation of additional image category information is remarkably effective when the number of training samples is small. Our semi-supervised framework is significant for the brain decoding field where brain activity patterns are insufficient but visual stimuli are sufficient.
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    Sensors 20 8 2170 - 2170 2020年 
    A few-shot personalized saliency prediction based on adaptive image selection considering object and visual attention is presented in this paper. Since general methods for predicting personalized saliency maps (PSMs) need a large number of training images, the establishment of a theory using a small number of training images is needed. To tackle this problem, it is effective to find persons whose visual attention is similar to that of a target person; however, all persons then have to gaze at many common images, which becomes difficult and unrealistic when considering their burden. On the other hand, this paper introduces a novel adaptive image selection (AIS) scheme that focuses on the relationship between human visual attention and objects in images. AIS focuses on both the diversity of objects in images and the variance of PSMs for the objects. Specifically, AIS selects images so that the selected images contain various kinds of objects to maintain their diversity. Moreover, AIS guarantees a high variance of PSMs for persons since it represents the regions that many persons commonly gaze at or do not gaze at. The proposed method enables selecting similar users from a small number of images by selecting images that have high diversity and variance. This is the technical contribution of this paper. Experimental results show the effectiveness of our personalized saliency prediction including the new image selection scheme.
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    Sensors 20 7 2146 - 2146 2020年 
    This paper proposes a method of visual attention-based emotion classification through eye gaze analysis. Concretely, tensor-based emotional category classification via visual attention-based heterogeneous convolutional neural network (CNN) feature fusion is proposed. Based on the relationship between human emotions and changes in visual attention with time, the proposed method performs a new gaze-based image representation that is suitable for reflecting the characteristics of the changes in visual attention with time. Furthermore, since emotions evoked in humans are closely related to objects in images, our method uses a CNN model to obtain CNN features that can represent their characteristics. To improve the representation ability for the emotional categories, we extract multiple CNN features from our novel gaze-based image representation and enable their fusion by constructing a novel tensor consisting of these CNN features. Thus, this tensor construction realizes visual attention-based heterogeneous CNN feature fusion. This is the main contribution of this paper. Finally, by applying logistic tensor regression with general tensor discriminant analysis to the newly constructed tensor, emotional category classification becomes feasible. Experimental results show that the proposed method enables emotional category classification with an F1-measure of approximately 0.6, an improvement of about 10% over comparative methods including state-of-the-art methods, which verifies the effectiveness of the proposed method.
  • Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    Medical Biol. Eng. Comput. 58 6 1239 - 1250 2020年 
    High-quality annotations for medical images are always costly and scarce. Many applications of deep learning in the field of medical image analysis face the problem of insufficient annotated data. In this paper, we present a semi-supervised learning method for chronic gastritis classification using gastric X-ray images. The proposed semi-supervised learning method based on tri-training can leverage unannotated data to boost the performance that is achieved with a small amount of annotated data. We utilize a novel learning method named Between-Class learning (BC learning) that can considerably enhance the performance of our semi-supervised learning method. As a result, our method can effectively learn from unannotated data and achieve high diagnostic accuracy for chronic gastritis.
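    A minimal sketch of the tri-training idea on toy feature vectors: three classifiers are trained on bootstrap resamples, and an unlabeled sample is pseudo-labeled for one classifier when the other two agree on it. The paper combines this with deep CNNs and BC learning on gastric X-ray images, so the logistic-regression models and the single refinement round below are assumptions.

```python
# Sketch: tri-training with agreement-based pseudo-labeling.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x_l, y_l = rng.normal(size=(30, 10)), rng.integers(0, 2, 30)   # small labeled set
x_u = rng.normal(size=(200, 10))                                # unlabeled set

# Initial classifiers on bootstrap resamples of the labeled set.
models = []
for _ in range(3):
    idx = rng.integers(0, len(x_l), len(x_l))
    models.append(LogisticRegression(max_iter=1000).fit(x_l[idx], y_l[idx]))

preds = np.stack([m.predict(x_u) for m in models])              # (3, n_unlabeled)
for i in range(3):
    j, k = [a for a in range(3) if a != i]
    agree = preds[j] == preds[k]                                 # the other two agree
    if agree.any():
        x_aug = np.vstack([x_l, x_u[agree]])
        y_aug = np.concatenate([y_l, preds[j][agree]])
        models[i] = LogisticRegression(max_iter=1000).fit(x_aug, y_aug)
```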
  • Keisuke Maeda, Kazaha Horii, Takahiro Ogawa, Miki Haseyama
    IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 103-A 12 1609 - 1612 2020年 
    A multi-task convolutional neural network leading to high performance and interpretability via attribute estimation is presented in this letter. Our method can provide interpretation of the classification results of CNNs by outputting attributes that explain elements of objects as a judgement reason of CNNs in the middle layer. Furthermore, the proposed network uses the estimated attributes for the following prediction of classes. Consequently, construction of a novel multi-task CNN with improvements in both of the interpretability and classification performance is realized.
  • Takahiro Ogawa, Keisuke Maeda, Miki Haseyama
    IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 103-A 12 1541 - 1551 2020年 
    An inpainting method via sparse representation based on a new phaseless quality metric is presented in this paper. Since power spectra, which are phaseless features, of local regions within images enable more successful representation of their texture characteristics compared to their pixel values, a new quality metric based on these phaseless features is derived for image representation. Specifically, the proposed method enables sparse representation of target signals, i.e., target patches including missing intensities, by monitoring errors converged by phase retrieval as the novel phaseless quality metric. This is the main contribution of our study. In this approach, the phase retrieval algorithm used in our method has the following two important roles: (1) derivation of the new quality metric that can be derived even for images including missing intensities and (2) conversion of phaseless features, i.e., power spectra, to pixel values, i.e., intensities. Therefore, the above novel approach solves the existing problem of not being able to use better features or better quality metrics for inpainting. Results of experiments showed that the proposed method using sparse representation based on the new phaseless quality metric outperforms previously reported methods that directly use pixel values for inpainting.
  • Soh Yoshida, Mitsuji Muneyasu, Takahiro Ogawa, Miki Haseyama
    IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 103-A 12 1529 - 1540 2020年 
    In this paper, we address the problem of analyzing topics, included in a social video group, to improve the retrieval performance of videos. Unlike previous methods that focused on an individual visual aspect of videos, the proposed method aims to leverage the "mutual reinforcement" of heterogeneous modalities such as tags and users associated with video on the Internet. To represent multiple types of relationships between each heterogeneous modality, the proposed method constructs three subgraphs: user-tag, video-video, and video-tag graphs. We combine the three types of graphs to obtain a heterogeneous graph. Then the extraction of latent features, i.e., topics, becomes feasible by applying graph-based soft clustering to the heterogeneous graph. By estimating the membership of each grouped cluster for each video, the proposed method defines a new video similarity measure. Since the understanding of video content is enhanced by exploiting latent features obtained from different types of data that complement each other, the performance of visual reranking is improved by the proposed method. Results of experiments on a video dataset that consists of YouTube-8M videos show the effectiveness of the proposed method, which achieves a 24.3% improvement in terms of the mean normalized discounted cumulative gain in a search ranking task compared with the baseline method.
  • Ren Togo, Haruna Watanabe, Takahiro Ogawa, Miki Haseyama
    Comput. Biol. Medicine 123 103903 - 103903 2020年 
    Aim: The aim of this study was to determine whether our deep convolutional neural network-based anomaly detection model can distinguish differences in esophagus images and stomach images obtained from gastric X-ray examinations. Methods: A total of 6012 subjects were analyzed as our study subjects. Since the number of esophagus X-ray images is much smaller than the number of gastric X-ray images taken in X-ray examinations, we took an anomaly detection approach to realize the task of organ classification. We constructed a deep autoencoding gaussian mixture model (DAGMM) with a convolutional autoencoder architecture. The trained model can produce an anomaly score for a given test X-ray image. For comparison, the original DAGMM, AnoGAN, and a One-Class Support Vector Machine (OCSVM) that were trained with features obtained by a pre-trained Inception-v3 network were used. Results: Sensitivity, specificity, and the calculated harmonic mean of the proposed method were 0.956, 0.980, and 0.968, respectively. Those of the original DAGMM were 0.932, 0.883, and 0.907, respectively. Those of AnoGAN were 0.835, 0.833, and 0.834, respectively, and those of OCSVM were 0.932, 0.935, and 0.934, respectively. Experimental results showed the effectiveness of the proposed method for an organ classification task. Conclusion: Our deep convolutional neural network-based anomaly detection model has shown the potential for clinical use in organ classification.
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    IEEE Access 8 203358 - 203368 2020年 
    A human-centric emotion estimation method based on correlation maximization with consideration of changes with time in visual attention and brain activity when viewing images is proposed in this paper. Owing to the recent developments of many kinds of biological sensors, many researchers have focused on multimodal emotion estimation using both eye gaze data and brain activity data for improving the quality of emotion estimation. In this paper, a novel method that focuses on the following two points is introduced. First, in order to reduce the burden on users, we obtain brain activity data from users only in the training phase by using a projection matrix calculated by canonical correlation analysis (CCA) between gaze-based visual features and brain activity-based features. Next, for considering the changes with time in both visual attention and brain activity, we obtain novel features based on CCA-based projection in each time unit. In order to include these two points, the proposed method analyzes a fourth-order gaze and image tensor for which modes are pixel location, color channel and the changes with time in visual attention. Moreover, in each time unit, the proposed method performs CCA between gaze-based visual features and brain activity-based features to realize human-centric emotion estimation with a high level of accuracy. Experimental results show that accurate human emotion estimation is achieved by using our new human-centric image representation.
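    A minimal sketch of the training-time CCA link between gaze-based visual features and brain-activity features, so that only gaze features are needed at test time; the per-time-unit processing and the emotion classifier of the paper are omitted, and all feature dimensions are illustrative assumptions.

```python
# Sketch: learn a CCA projection with brain data during training,
# then project gaze-only features at test time.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
gaze_train = rng.normal(size=(120, 40))      # gaze-based visual features
brain_train = rng.normal(size=(120, 20))     # brain-activity features (training only)

cca = CCA(n_components=5).fit(gaze_train, brain_train)

# At test time brain data are no longer required: the learned projection maps
# gaze features into the shared space where an emotion classifier would operate.
gaze_test = rng.normal(size=(10, 40))
shared = cca.transform(gaze_test)
print(shared.shape)                           # (10, 5)
```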
  • Keisuke Maeda, Tetsuya Kushima, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    IEEE Access 8 126109 - 126118 2020年 
    A method for estimating interest levels from behavior features via tensor completion including adaptive similar user selection is presented in this paper. The proposed method focuses on a tensor that is suitable for data containing multiple contexts and constructs a third-order tensor whose three modes are 'products', 'users' and 'user behaviors and interest levels' for these products. By completing this tensor, unknown interest level estimation of a product for a target user becomes feasible. For further improving the estimation performance, the proposed method adaptively selects similar users for the target user by focusing on converged estimation errors between estimated interest levels and known interest levels in the tensor completion. Furthermore, the proposed method can adaptively estimate the unknown interest from the similar users. This is the main contribution of this paper. Therefore, the influence of users having different interests is reduced, and accurate interest level estimation can be realized. In order to verify the effectiveness of the proposed method, we show experimental results obtained by estimating interest levels of users holding books.
  • Keisuke Maeda, Yoshiki Ito, Takahiro Ogawa, Miki Haseyama
    IEEE Access 8 114340 - 114353 2020年 
    Techniques for integrating different types of multiple features effectively have been actively studied in recent years. Multiset canonical correlation analysis (MCCA), which maximizes the sum of pairwise inter-view correlations (i.e., between different features), is one of the powerful methods for integrating different types of multiple features, and various MCCA-based methods have been proposed. This work focuses on a supervised MCCA variant in order to construct a novel effective feature integration framework. In this paper, we newly propose supervised fractional-order embedding geometrical multi-view CCA (SFGMCCA). This method constructs not only the correlation structure but also two types of geometrical structures of intra-view (i.e., within each feature) and inter-view simultaneously, thereby realizing more precise feature integration. This method also supports the integration of small-sample and high-dimensional data by using the fractional-order technique. We conducted experiments using four types of image datasets, i.e., MNIST, COIL-20, ETH-80 and CIFAR-10. Furthermore, we also performed experiments on an fMRI dataset containing brain signals to verify the robustness. As a result, it was confirmed that accuracy improvements using SFGMCCA were statistically significant at the significance level of 0.05 compared to those using conventional representative MCCA-based methods.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Access 8 96777 - 96786 2020年 
    A new approach that drastically improves cross-modal retrieval performance in vision and language (hereinafter referred to as “vision and language retrieval”) is proposed in this paper. Vision and language retrieval takes data of one modality as a query to retrieve relevant data of another modality, and it enables flexible retrieval across different modalities. Most of the existing methods learn optimal embeddings of visual and lingual information to a single common representation space. However, we argue that the forced embedding optimization results in loss of key information for sentences and images. In this paper, we propose an effective utilization of representation spaces in a simple but robust vision and language retrieval method. The proposed method makes use of multiple individual representation spaces through text-to-image and image-to-text models. Experimental results showed that the proposed approach enhances the performance of existing methods that embed visual and lingual information to a single common representation space.
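    A minimal sketch of retrieval over multiple individual representation spaces is given below: similarities computed in an image space and a text space are fused by simple averaging. The embeddings are random placeholders; the paper's text-to-image and image-to-text models are not reproduced.

```python
# Minimal sketch of late fusion of similarities from two representation spaces,
# assuming precomputed placeholder embeddings.
import numpy as np

def cosine_sim(a, B):
    a = a / np.linalg.norm(a)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return B @ a

rng = np.random.default_rng(0)
n_items = 1000
img_space_db = rng.normal(size=(n_items, 512))   # database items embedded in an image space
txt_space_db = rng.normal(size=(n_items, 256))   # the same items embedded in a text space

query_in_img_space = rng.normal(size=512)        # e.g., an image generated from the query text
query_in_txt_space = rng.normal(size=256)        # e.g., the query sentence itself

# Late fusion: average the similarities obtained in each space.
sim = 0.5 * cosine_sim(query_in_img_space, img_space_db) \
    + 0.5 * cosine_sim(query_in_txt_space, txt_space_db)
top10 = np.argsort(-sim)[:10]
print(top10)
```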
  • Yui Matsumoto, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    IEEE Access 8 48673 - 48685 2020年 [査読有り][通常論文]
     
    A novel trial for estimating popularity of artists in music streaming services (MSS) is presented in this paper. The main contribution of this paper is to improve extensibility for using multi-modal features to accurately analyze latent relationships between artists. In the proposed method, a novel framework to construct a network is derived by collaboratively using social metadata and multi-modal features via canonical correlation analysis. Different from conventional methods that do not use multi-modal features, the proposed method can construct a network that can capture social metadata and multi-modal features, i.e., a context-aware network. For effectively analyzing the context-aware network, a novel framework to realize popularity estimation of artists is developed based on network analysis. The proposed method enables effective utilization of the network structure by extracting node features via a node embedding algorithm. By constructing an estimator that can distinguish differences between the node features, the proposed method can achieve accurate popularity estimation of artists. Experimental results using multiple real-world datasets that contain artists in various genres in Spotify, one of the largest MSS, are presented. Quantitative and qualitative evaluations show that our method is effective for both classifying and regressing the popularity.
  • Tomoki Haruyama, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    MMSports 2019 - Proceedings of the 2nd International Workshop on Multimedia Content Analysis in Sports, co-located with MM 2019 10 - 15 2019年10月15日 [査読有り][通常論文]
     
    © 2019 Association for Computing Machinery. This paper presents a new method for retrieval of similar scenes based on multimodal distance metric learning in far-view soccer videos that broadly capture soccer fields and are not edited. We extract visual features and audio features from soccer video clips, and we extract text features from text data corresponding to these soccer video clips. In addition, distance metric learning based on Laplacian Regularized Metric Learning is performed to calculate the distances for each kind of features. Finally, by determining the final rank by integrating these distances, we realize successful multimodal retrieval of similar scenes from query scenes of soccer video clips. Experimental results show the effectiveness of our retrieval method.
  • K. Hirasawa, K. Maeda, T. Ogawa, M. Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE) 663 - 664 2019年10月 [査読有り][通常論文]
     
    This paper presents a method of semantic shot classification in baseball videos based on similarities of visual features. Since it is difficult to prepare a large amount of training data with annotation, accurate event detection methods constructed from a small amount of training data are needed. In broadcast baseball video, since view angles of cameras are different for each event, shot change and event change have a close relationship. When visual features from shots are similar, events corresponding to shots are also similar, and a simple distance-based approach only focusing on training data is effective. Therefore, semantic shot classification based on visual features from a small amount of training data can be realized.
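    The distance-based idea above can be sketched with a plain nearest-neighbor classifier over shot-level visual features; the feature vectors and event labels below are synthetic stand-ins for broadcast baseball data.

```python
# Minimal sketch: classify a shot by the labels of its nearest neighbors in
# visual-feature space, assuming a small synthetic training set.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 128))            # features of annotated shots (small set)
y_train = rng.integers(0, 4, size=60)           # hypothetical event labels (e.g., pitch, hit, ...)

clf = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
clf.fit(X_train, y_train)

X_test = rng.normal(size=(5, 128))              # features of new shots
print(clf.predict(X_test))
```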
  • N. Ogawa, K. Maeda, T. Ogawa, M. Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE) 764 - 765 2019年10月 [査読有り][通常論文]
     
    This paper presents region-based distress classification of road infrastructures via convolutional neural networks (CNN) without region annotation. Although CNNs have often been used for classification tasks in recent years, CNNs trained from images which contain unnecessary regions cannot perform precise classification. Distress images of road infrastructures contain various unnecessary objects other than the target distress. Although target regions should be provided in order to achieve high performance, this is a time-consuming task for engineers. This paper focuses on removing unnecessary objects in the images without region annotation via an object detection method. In particular, by using a pre-trained object detection model with distress images of road infrastructures, distress regions in the images are detected automatically. Our proposed CNN trained from the obtained distress regions realizes precise distress classification.
  • Megumi Kotera, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE) 492 - 493 2019年10月 [査読有り][通常論文]
     
    This paper presents a style transfer method combining generative adversarial networks and style transfer networks. Previous style transfer methods have performed transformation from one image to another. In contrast, our method enables style transfer from a text to an image. This will be helpful when there are no images that represent the desired style. Experimental results show the effectiveness of our method.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE) 943 - 944 2019年10月 [査読有り][通常論文]
     
    In this paper, we develop an integrated multimedia information retrieval system. By utilizing text-to-image Generative Adversarial Network and image-to-text model, the developed system enables users to retrieve an objective content utilizing voice as an input with high accuracy. Experimental results show the effectiveness of the developed system.
  • An Wang, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE) 766 - 767 2019年10月 [査読有り][通常論文]
     
    The maintenance and management of aging infrastructures has become an urgent issue. As an important part of transportation infrastructure, subway tunnels face the same problem as other infrastructures. In this paper, we present a distress detection method in subway tunnels using U-net. As one of the semantic segmentation neural networks, U-net shows promising performance in remote sensing image segmentation and medical image segmentation. We apply this network to a subway tunnel dataset and compare it with three other semantic segmentation methods. Our experiments show that the proposed approach achieves promising results in our task.
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2019-September 4105 - 4109 2019年09月 [査読有り][通常論文]
     
    © 2019 IEEE. This paper presents emotion label estimation via tensor-based spatiotemporal visual attention analysis. It has been reported in the fields of psychology and neuroscience that human emotions are related to two elements, their visual attention change and objects included in a target image. Therefore, the proposed method focuses on the spatiotemporal change of visual attention of humans gazing at objects in the target image and constructs two neural networks which enable the emotion label estimation considering both of the above two elements. Specifically, the proposed method newly constructs a fourth-order tensor, gaze and image tensor (GIT), whose modes correspond to the width, the height and the color channel of the target image and the time axis of visual attention which is used for representing the time change. Then the first network, which consists of general tensor discriminant analysis (GTDA) and extreme learning machine (ELM), estimates the emotion label from the fourth-order GIT considering their visual attention change. Furthermore, the second network, which consists of pre-trained convolutional neural network-based feature extraction, GTDA and ELM, enables the estimation from the second-order GIT including visual features obtained from objects focused on at each time. Finally, the proposed method estimates emotion labels based on decision fusion of the outputs from the two networks. Experimental results show the effectiveness of the proposed method.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2019-September 1825 - 1829 2019年09月 [査読有り][通常論文]
     
    © 2019 IEEE. We present a new scene retrieval method based on text-to-image Generative Adversarial Network (GAN) and its application to query-based video summarization. Text-to-image GAN is a deep learning method that can generate images from their corresponding sentences. In this paper, we reveal a characteristic that deep learning-based visual features extracted from images generated by text-to-image GAN sufficiently include semantic information. By utilizing the generated images as queries, the proposed method achieves higher scene retrieval performance than that of the state-of-the-art methods. In addition, we introduce a novel architecture that can consider the order relationship of the input sentences to our method for realizing a target video summarization. Specifically, the proposed method generates multiple images through text-to-image GAN from multiple sentences summarizing target videos. Their summarized video can be obtained by performing the retrieval of corresponding scenes from the target videos according to the generated images while considering the order relationship. Experimental results show the effectiveness of the proposed method in the retrieval and summarization performance.
  • Misaki Kanai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2019-September 1371 - 1375 2019年09月 [査読有り][通常論文]
     
    © 2019 IEEE. This paper presents a method for gastritis detection from gastric X-ray images via fine-tuning techniques using a deep convolutional neural network (DCNN). DCNNs can learn parameters to capture high-dimensional features which express semantic contents of images by training on a large number of labeled images. However, lack of gastric X-ray images for training often occurs. To realize accurate detection with a small number of gastric X-ray images, the proposed method adopts fine-tuning techniques and newly introduces simple annotation of stomach regions to gastric X-ray images used for training. The proposed method fine-tunes a pre-trained DCNN with patches and three kinds of patch-level class labels considering not only the image-level ground truth ('gastritis'/'non-gastritis') but also the regions of a stomach since the outside of the stomach is not related to the image-level ground truth. In the test phase, by estimating the patch-level class labels with the fine-tuned DCNN, the proposed method enables the image-level class label estimation which excludes the effect of the unnecessary regions. Experimental results show the effectiveness of the proposed method.
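    One way to realize the patch-to-image decision described above is sketched below, assuming hypothetical patch-level probabilities over three classes (gastritis, non-gastritis, outside the stomach); patches estimated as outside the stomach are excluded before the image-level decision. The aggregation rule here is a simple majority vote and may differ from the paper's.

```python
# Minimal sketch of aggregating patch-level predictions into an image-level label,
# assuming hypothetical softmax outputs from a fine-tuned network.
import numpy as np

rng = np.random.default_rng(0)
patch_probs = rng.dirichlet(alpha=[1, 1, 1], size=200)   # (n_patches, 3) softmax outputs
# Class indices (assumed): 0 = gastritis, 1 = non-gastritis, 2 = outside the stomach.

patch_labels = patch_probs.argmax(axis=1)
inside = patch_labels != 2                                # exclude "outside stomach" patches

# Image-level decision: majority vote over stomach patches only.
n_gastritis = np.sum(patch_labels[inside] == 0)
n_normal = np.sum(patch_labels[inside] == 1)
image_label = "gastritis" if n_gastritis > n_normal else "non-gastritis"
print(image_label)
```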
  • Keisuke Maeda, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2019-September 919 - 923 2019年09月 [査読有り][通常論文]
     
    © 2019 IEEE. This paper presents a neural network maximizing ordinally supervised multi-view canonical correlation for deterioration level estimation. The contributions of this paper are twofold. First, in order to calculate features representing deterioration levels on transmission towers, which is one of the infrastructures, a novel neural network handling multi-modal features is constructed from a small amount of training data. Specifically, in our method, effective transformation to features with high discriminant ability without using many hidden layers is realized by setting projection matrices maximizing correlation between multiple features into hidden layer's weights. Second, since there exists ordinal scale in deterioration levels, the proposed method newly derives ordinally supervised multi-view canonical correlation analysis (OsMVCCA). OsMVCCA enables estimation of the effective projection considering not only label information but also their ordinal scales. Experimental results show that the proposed method realizes accurate deterioration level estimation.
  • Yui Matsumoto, Shota Hamano, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    2019 IEEE International Conference on Consumer Electronics - Taiwan, ICCE-TW 2019 2019年05月 
    A novel method to realize bilingual lexicon learning (BLL) using tagged images is presented in this paper. Different from existing methods that require parallel corpora, the proposed method enables extraction of semantically similar words by utilizing not such corpora but tagged images on image sharing services. The main contribution of this paper is derivation of a novel framework to refine visual features of tagged images based on graph trilateral filter-based smoothing. This enables reduction of the influence of noisy tags that are irrelevant to contents of images. As a result, accurate BLL becomes feasible by nearest neighbor search using the refined visual features.
  • Masanao Matsumoto, Naoki Saito, Takahiro Ogawa, Miki Haseyama
    2019 IEEE International Conference on Consumer Electronics - Taiwan, ICCE-TW 2019 2019年05月 
    This paper presents a Convolutional Sparse Coding (CSC)-based anomalous event detection method in surveillance videos. The proposed method derives new features from reconstruction errors and sparse coefficient maps obtained by CSC, and the anomalous events are detected by a multi-layer network whose inputs are the above new features. Since such events, i.e., anomalous objects, have different characteristics in the sparse coefficient maps and their corresponding reconstruction errors, successful detection can be expected. Experimental results show high detection performance of our method.
  • Y. Moroto, K. Maeda, T. Ogawa, M. Haseyama
    IEEE International Conference on Consumer Electronics – Taiwan (ICCE-TW) 479 - 480 2019年05月 [査読有り][通常論文]
     
    This paper presents a method for user-specific visual attention estimation based on visual similarities and spatial information in images. In order to estimate the user-specific visual attention, the proposed method calculates two kinds of saliency maps. One is constructed as a visual similarity-based saliency map, and the other is constructed by considering spatial information of objects in images. The proposed method performs a fusion of these two maps for considering visual similarities and spatial information. This is the biggest contribution of this paper. Therefore, improvement of the estimation performance of the user-specific visual attention is realized.
  • Yusuke Akamatsu, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2019, Brighton, United Kingdom, May 12-17, 2019 2019-May 1105 - 1109 IEEE 2019年05月 [査読有り][通常論文]
     
    © 2019 IEEE. This paper presents a method to estimate viewed image categories from human brain activity via newly derived semi-supervised fuzzy discriminative canonical correlation analysis (Semi-FDCCA). The proposed method can estimate image categories from functional magnetic resonance imaging (fMRI) activity measured while subjects view images by making fMRI activity and visual features obtained from images comparable through Semi-FDCCA. To realize Semi-FDCCA, we first derive a new supervised CCA called FDCCA that can consider fuzzy class information based on image category similarities obtained from WordNet ontology. Second, we adopt SemiCCA that can utilize additional unpaired visual features in addition to pairs of fMRI activity and visual features in order to prevent overfitting to the limited pairs. Furthermore, Semi-FDCCA can be derived by combining FDCCA with SemiCCA. Experimental results show that Semi-FDCCA enables accurate estimation of viewed image categories.
  • Ren Togo, Nobutake Yamamichi, Katsuhiro Mabe, Yu Takahashi, Chihiro Takeuchi, Mototsugu Kato, Naoya Sakamoto, Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    Journal of Gastroenterology 54 4 321 - 329 2019年04月 [査読有り][通常論文]
     
    © 2018, Japanese Society of Gastroenterology. Background: Deep learning has become a new trend of image recognition tasks in the field of medicine. We developed an automated gastritis detection system using double-contrast upper gastrointestinal barium X-ray radiography. Methods: A total of 6520 gastric X-ray images obtained from 815 subjects were analyzed. We designed a deep convolutional neural network (DCNN)-based gastritis detection scheme and evaluated the effectiveness of our method. The detection performance of our method was compared with that of ABC (D) stratification. Results: Sensitivity, specificity, and harmonic mean of sensitivity and specificity of our method were 0.962, 0.983, and 0.972, respectively, and those of ABC (D) stratification were 0.925, 0.998, and 0.960, respectively. Although there were 18 false negative cases in ABC (D) stratification, 14 of those 18 cases were correctly classified into the positive group by our method. Conclusions: Deep learning techniques may be effective for evaluation of gastritis/non-gastritis. Collaborative use of DCNN-based gastritis detection systems and ABC (D) stratification will provide more reliable gastric cancer risk information.
  • Yuji Hirai, Naoto Okuda, Naoki Saito, Takahiro Ogawa, Ryuichiro Machida, Shûhei Nomura, Masahiro Ôhara, Miki Haseyama, Masatsugu Shimomura
    Biomimetics 4 1 2019年03月01日 
    Friction is an important subject for sustainability due to problems that are associated with energy loss. In recent years, micro- and nanostructured surfaces have attracted much attention to reduce friction; however, suitable structures are still under consideration. Many functional surfaces are present in nature, such as the friction reduction surfaces of snake skins. In this study, we focused on firebrats, Thermobia domestica, which temporarily live in narrow spaces, such as piled papers, so their body surface (integument) is frequently in contact with surrounding substrates. We speculate that, in addition to optical and cleaning effects and protection against desiccation and enemies, their body surface may also be adapted to reduce friction. To investigate the functional effects of the firebrat scales, firebrat surfaces were observed using a field-emission scanning electron microscope (FE-SEM) and a colloidal probe atomic force microscope (AFM). Results of surface observations by FE-SEM revealed that adult firebrats are entirely covered with scales, whose surfaces have microgroove structures. Scale groove wavelengths around the firebrat's head are almost uniform within a scale but they vary between scales. At the level of single scales, AFM friction force measurements revealed that the firebrat scale reduces friction by decreasing the contact area between scales and a colloidal probe. The heterogeneity of the scales' groove wavelengths suggests that it is difficult to fix the whole body on critical rough surfaces and may result in a "fail-safe" mechanism.
  • Ren Togo, Takahiro Ogawa, Osamu Manabe, Kenji Hirata, Tohru Shiga, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies, LifeTech 2019 237 - 238 2019年03月 [査読有り][通常論文]
     
    © 2019 IEEE. This paper presents a method for extracting important regions for deep learning models in the identification of cardiac sarcoidosis using polar map images. Although deep learning-based detection methods have been widely studied, they are still often called black boxes. Since high reliability of the results provided by computer-aided diagnosis systems is important for clinical applications, this problem should be solved. In this paper, we try to visualize important regions for deep learning-based models to improve clinicians' understanding. We monitor the variance of confidence of a model constructed with a deep learning-based feature and define it as a contribution value toward the estimated label. We visualize important regions for models based on the contribution value.
  • Taiga Matsui, Naoki Saito, Takahiro Ogawa, Satoshi Asamizu, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies, LifeTech 2019 194 - 195 2019年03月 [査読有り][通常論文]
     
    © 2019 IEEE. This paper presents a method for estimating emotions evoked by watching images based on multiple visual features considering relationship with gaze information. The proposed method obtains multiple visual features from multiple middle layers of a Convolutional Neural Network. Then the proposed method newly derives their gaze-based visual features maximizing correlation with gaze information by using Discriminative Locality Preserving Canonical Correlation Analysis. The final estimation result is calculated by integrating multiple estimation results obtained from these gaze-based visual features. Consequently, successful emotion estimation becomes feasible by using such multiple estimation results which correspond to different semantic levels of target images.
  • Tetsuya Kushima, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies, LifeTech 2019 239 - 240 2019年03月 [査読有り][通常論文]
     
    © 2019 IEEE. This paper presents a new method for estimation of users' interest levels using tensor completion with SemiCCA. The proposed method extracts new features maximizing correlation between features calculated from partially paired users' behavior and contents with semi-supervised canonical correlation analysis (SemiCCA). By this approach, we can successfully use the contents that users have not viewed for the interest level estimation. Moreover, our method utilizes the tensor completion to estimate unknown interest levels. Consequently, in the proposed method, accurate estimation of interest levels using SemiCCA and the tensor completion is realized. Experimental results are shown to verify the effectiveness of the proposed method by using actual data.
  • Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies, LifeTech 2019 273 - 274 2019年03月 [査読有り][通常論文]
     
    © 2019 IEEE. In this paper, we present a deep learning method for classifying subcellular protein patterns in human cells. Our method is mainly based on transfer learning and utilizes a newly proposed loss function named focal loss to deal with the problem of severe class imbalance existing in the task. The performance of our method is evaluated by a MacroF1 score of total 28 classes, and the final MacroF1 score of our method is 0.706.
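    The focal loss mentioned above down-weights well-classified examples so that training concentrates on hard, minority-class samples. A NumPy sketch with illustrative parameter values (gamma, alpha) follows.

```python
# Minimal sketch of the focal loss for class-imbalanced classification,
# written in plain NumPy; parameter values are illustrative only.
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25, eps=1e-7):
    """probs: (n, n_classes) softmax outputs; targets: (n,) integer labels."""
    p_t = np.clip(probs[np.arange(len(targets)), targets], eps, 1.0)
    # Down-weight easy examples (p_t close to 1) by the factor (1 - p_t)^gamma.
    return np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t))

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 28))                      # 28 subcellular pattern classes
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
targets = rng.integers(0, 28, size=8)
print(focal_loss(probs, targets))
```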
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies, LifeTech 2019 229 - 230 2019年03月 [査読有り][通常論文]
     
    © 2019 IEEE. This paper presents a method for estimating visual attention via canonical correlation between visual and gaze-based features. The proposed method estimates user-specific visual attention by comparing a test image with training images including their corresponding individual eye gaze data in a common space. Specifically, canonical correlation analysis can derive projections which enable comparison between visual and gaze-based features in the common space. Therefore, given the new test image, our method projects its visual features to the common space and can estimate visual attention. Experimental results show the effectiveness of the proposed method.
  • Masanao Matsumoto, Naoki Saito, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies, LifeTech 2019 231 - 232 2019年03月 [査読有り][通常論文]
     
    © 2019 IEEE. This paper presents a detection method of chronic gastritis from gastric X-ray images. The conventional method cannot detect chronic gastritis accurately since the number of non-gastritis images is overwhelmingly larger than the number of gastritis images. To deal with this problem, the proposed method performs the detection of chronic gastritis by using Deep Autoencoding Gaussian Mixture Models (DAGMM) which is an anomaly detection approach. DAGMM enables construction of chronic gastritis detection model using only non-gastritis images. In addition, DAGMM is superior to conventional anomaly detection methods since the models of dimensionality reduction and density estimation can be learned simultaneously. Therefore, the proposed method realizes accurate detection of chronic gastritis by utilizing DAGMM.
  • Misaki Kanai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies, LifeTech 2019 196 - 197 2019年03月 [査読有り][通常論文]
     
    © 2019 IEEE. This paper presents a detection method of gastritis from gastric X-ray images using fine-tuning techniques. With the development of deep convolutional neural networks (DCNNs), DCNN-based methods have achieved more accurate performance than conventional machine learning methods using hand-crafted features in the field of medical image analysis. However, lack of training images often occurs in clinical situations even though DCNNs require a large amount of training images to avoid overfitting. Therefore, the proposed method aims to consider the clinical situations that a limited amount of the training images are available. By fine-tuning a DCNN pre-trained with a large amount of annotated natural images, we avoid overfitting and realize accurate detection of the gastritis with a small amount of the training images.
  • Haruna Watanabe, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies, LifeTech 2019 235 - 236 2019年03月 [査読有り][通常論文]
     
    © 2019 IEEE. In this paper, we propose a method to detect bone metastatic tumors using computed tomography (CT) images. Bone metastatic tumors spread from primary cancer to other organs, and they can cause severe pain. Therefore, it is important to detect metastatic tumors earlier in addition to primary cancer. However, since metastatic tumors are very small, and they emerge from unpredictable regions in the body, collecting metastatic tumor images is difficult compared to primary cancer. In such a case, it can be considered that the idea of anomaly detection is suitable. The proposed method based on a generative adversarial network model trains with only non-metastatic bone tumor images and detects bone metastatic tumor in an unsupervised manner. Then the anomaly score is defined for each test CT image. Experimental results show the anomaly scores between non-metastatic bone tumor images and metastatic bone tumor images are clearly different. The anomaly detection approach may be effective for the detection of bone metastatic tumors in CT images.
  • Yusuke Akamatsu, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies, LifeTech 2019 233 - 234 2019年03月 [査読有り][通常論文]
     
    © 2019 IEEE. This paper presents a method that estimates viewed image categories from functional magnetic resonance imaging (fMRI) data via semi-supervised discriminative canonical correlation analysis (Semi-DCCA). We newly derive Semi-DCCA that enables direct comparison of fMRI data and visual features extracted from viewed images while taking into account the class information and additional visual features to avoid overfitting. The proposed method enables estimation of image categories from fMRI data measured when subjects view images by comparing fMRI data with visual features through Semi-DCCA. Experimental results show that Semi-DCCA can improve estimation performance of the viewed image categories.
  • Akira Toyoda, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies, LifeTech 2019 198 - 199 2019年03月 [査読有り][通常論文]
     
    © 2019 IEEE. This paper presents a method to classify videos based on user preferences with soft-bag multiple instance learning (MIL). Our method classifies videos that a user has watched into two classes (preferred and not-preferred) with two-modal features extracted from the videos and brain signals measured while the user is watching the videos. Our method splits videos and brain signals into fixed-length segments and computes features used for classification from only a fixed-number of segments selected based on the idea of soft-bag MIL. By using the features computed from the selected segments, our method makes it possible to classify videos in the case that some videos that a user prefers contain some scenes the user does not prefer, and vice versa. Our main contribution allows methods classifying videos based on user preferences to treat such a case unlike conventional methods.
  • Kentaro Yamamoto, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 8th Global Conference on Consumer Electronics, GCCE 2019 768 - 769 2019年 
    To realize effective tunnel construction, it is important to grasp the characteristics of ground conditions. This paper presents an estimation method of drilling energy based on online learning from tunnel cutting face images. The proposed method realizes the estimation from a small amount of data by learning from the image taken immediately before the target image with online learning, since consecutive tunnel cutting faces are related to each other. The experimental results show the effectiveness of the proposed method.
  • Tomoki Haruyama, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 8th Global Conference on Consumer Electronics, GCCE 2019 665 - 666 2019年 
    This paper presents a new multimodal method for retrieval of similar soccer videos based on optimal combination of multiple distance measures. Our method first extracts three types of Convolutional Neural Network-based features focusing on the players' actions, the audience's cheers and prompt reports. Then, by applying the optimal distance measure to each feature, we calculate the similarities between a query video and videos in a database. Finally, we realize accurate retrieval of similar soccer videos by integrating these similarities. Experiments on actual soccer videos demonstrate encouraging results.
  • Masanao Matsumoto, Naoki Saito 0006, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 8th Global Conference on Consumer Electronics, GCCE 2019 481 - 482 2019年 
    This paper presents an estimation method of user-specific interests for images. The proposed method computes a projection which maximizes the correlation between 'eye gaze data which are collected while watching images' and 'visual and text features' by utilizing Canonical Correlation Analysis (CCA). Since eye gaze data reflect user's interests, new visual and text features calculated by using obtained projections can be also expected to reflect user's interests. Then accurate estimation of user-specific interests for images via Support Vector Machine (SVM) becomes feasible from these features. Experimental results show the effectiveness of our method.
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 8th Global Conference on Consumer Electronics, GCCE 2019 477 - 478 2019年 [査読有り][通常論文]
     
    This paper proposes an estimation method of user-specific visual attention based on gaze information of similar users. The proposed method estimates the user-specific visual attention by using the eye gaze data of other similar users. The similar users are selected based on the past eye gaze data of the target user. Although introducing the eye gaze data of the similar users into the estimation of user-specific visual attention is a simple approach, it can overcome the limitation of estimation performance. This approach is the main contribution of this paper. Experimental results show the effectiveness of the proposed method.
  • Yutaka Yamada, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 8th Global Conference on Consumer Electronics, GCCE 2019 229 - 230 2019年 [査読有り][通常論文]
     
    This paper presents a novel performance prediction method of examinees based on matrix completion. The proposed method newly introduces matrix completion for predicting the performance in the entrance examination from the results of mock exams. This approach using matrix completion provides a solution to the problem that there is unknown information on the explanatory variable side. Specifically, we adopt singular value decomposition for solving the problem. Therefore, accurate prediction can be expected by the proposed method. This work is the first trial to realize performance prediction of examinees, and this is the main contribution of this paper. Experimental results are shown for verifying the effectiveness of the proposed method.
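    A minimal sketch of SVD-based matrix completion in the spirit described above (iteratively truncating the singular value decomposition and restoring the observed entries) is shown below on a synthetic low-rank examinee-by-exam score matrix; the paper's exact formulation may differ.

```python
# Minimal sketch of iterative SVD-based matrix completion on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
true = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 20))   # examinees x exams, low rank
mask = rng.random(true.shape) < 0.7                           # observed entries
X = np.where(mask, true, 0.0)

rank = 4
completed = X.copy()
for _ in range(100):
    U, s, Vt = np.linalg.svd(completed, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]           # rank-truncated reconstruction
    completed = np.where(mask, X, low_rank)                   # keep observed values fixed

print(np.abs((completed - true)[~mask]).mean())               # error on missing entries
```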
  • Yusuke Akamatsu, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 8th Global Conference on Consumer Electronics, GCCE 2019 127 - 128 2019年 [査読有り][通常論文]
     
    Research on estimating what people view from their brain activity has attracted wide attention. Many existing methods focus only on the relationship between brain activity and visual features extracted from viewed images. In this paper, we propose a multi-view Bayesian generative model (MVBGM), which adopts a new view, i.e., category features obtained from viewed images. MVBGM, based on automatic feature selection under the Bayesian approach, can also avoid overfitting caused by high-dimensional features. Experimental results show that MVBGM can estimate viewed image categories from brain activity more accurately than existing methods.
  • Ryosuke Sawata, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 8th Global Conference on Consumer Electronics, GCCE 2019 15 - 16 2019年 [査読有り][通常論文]
     
    This paper presents a novel method to extract individual music preference. A novel CCA, named Deep Time-series Canonical Correlation Analysis (DTCCA), is proposed to realize this extraction. In contrast with standard CCA, DTCCA can deal with not only the correlation between the input features but also the time-series relations within each input simultaneously. The DTCCA-based latent space of projected features reflects the relationship between a user's EEG features and the corresponding audio features more effectively than that of standard CCA, since both the audio and EEG signals are time series.
  • Saya Takada, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE) 479 - 480 2019年 [査読有り][通常論文]
     
    Reconstruction of human cognitive contents based on analysis of functional Magnetic Resonance Imaging (fMRI) signals has been actively researched. Cognitive contents such as seen images can be reconstructed by estimating the relation between fMRI signals and deep neural network (DNN) features extracted from seen images. In order to reconstruct seen images with high accuracy, translating fMRI signals into meaningful features is an important task. In this paper, we validate the reconstruction accuracy of seen images by using visual features from several DNN feature extraction models. Recent works on image reconstruction used VGG19 to extract visual features. However, newer models such as Inception-v3 and ResNet50 have been proposed, and these models perform general object recognition with higher accuracy. Thus, it is expected that the accuracy of image reconstruction improves when using features extracted by these newer models. Experimental results for images of five categories show the effectiveness of the use of visual features from newer DNN models.
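    A common way to connect fMRI signals to DNN visual features, assumed here only for illustration, is multi-output ridge regression from voxel responses to the feature vectors of seen images; the data below are synthetic and the paper's own decoding pipeline is not reproduced.

```python
# Minimal sketch: decode DNN visual features from fMRI signals with ridge
# regression, using synthetic placeholder data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_voxels, d_feat = 300, 1000, 2048
fmri_train = rng.normal(size=(n_train, n_voxels))
feat_train = rng.normal(size=(n_train, d_feat))     # e.g., ResNet50 / Inception-v3 features

decoder = Ridge(alpha=100.0)
decoder.fit(fmri_train, feat_train)

fmri_test = rng.normal(size=(5, n_voxels))
feat_pred = decoder.predict(fmri_test)              # decoded features for a reconstruction model
print(feat_pred.shape)                              # (5, 2048)
```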
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE) 13 - 14 2019年 [査読有り][通常論文]
     
    Scene retrieval from a video database is a fundamental study in computer vision. Traditionally, content based retrieval methods can retrieve objective scenes with high accuracy by utilizing visual features. However, users cannot utilize content based retrieval methods when they cannot prepare query contents. To solve this problem, in this paper, we propose a novel content based scene retrieval method focusing on text-to-image Generative Adversarial Network and image-to-text model. By utilizing the proposed method, we can retrieve objective scenes in visual feature space with high accuracy even though it only utilizes a sentence as an input. Experimental results show the effectiveness of the proposed method.
  • Takahiro Ogawa, Kento Sugata, Ren Togo, Miki Haseyama
    ITE Transactions on Media Technology and Applications 7 1 36 - 44 2019年 [査読有り][通常論文]
     
    A novel method that integrates brain activity-based classifications obtained from multiple users is presented in this paper. The proposed method performs decision-level fusion (DLF) of the classifications using a kernelized version of extended supervised learning from multiple experts (KESLME), which is newly derived in this paper. In this approach, feature-level fusion of multiuser electroencephalogram (EEG) features is performed by multiset supervised locality preserving canonical correlation analysis (MSLPCCA). In the proposed method, the multiple classification results are obtained by classifiers separately constructed for the multiuser EEG features. Then DLF of these classification results becomes feasible based on KESLME, which can provide the final decision with consideration of the relationship between the MSLPCCA-based integrated EEG features and each classifier’s performance. In this way, a new multi-classifier decision technique, which depends only on users’ brain activities, is realized, and the performance in an image classification task becomes comparable to that of Inception-v3, one of the state-of-the-art deep convolutional neural networks.
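    As a generic stand-in for the decision-level fusion described above (not the KESLME formulation itself), the sketch below trains one classifier per "user" on a separate feature subset and fuses their class probabilities with simple reliability weights.

```python
# Minimal sketch of decision-level fusion by weighted soft voting over
# per-user classifiers; feature subsets stand in for per-user EEG features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=40, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

user_slices = [slice(0, 15), slice(15, 30), slice(25, 40)]   # hypothetical per-user features
clfs, weights = [], []
for sl in user_slices:
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, sl], y_tr)
    clfs.append(clf)
    weights.append(clf.score(X_tr[:, sl], y_tr))             # training accuracy as a crude weight

weights = np.array(weights) / np.sum(weights)
proba = sum(w * clf.predict_proba(X_te[:, sl])
            for w, clf, sl in zip(weights, clfs, user_slices))
print("fused accuracy:", np.mean(proba.argmax(axis=1) == y_te))
```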
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Access 7 169920 - 169930 2019年 [査読有り][通常論文]
     
    In this paper, we propose a novel scene retrieval and re-ranking method based on a text-to-image Generative Adversarial Network (GAN). The proposed method generates an image from an input query sentence based on the text-to-image GAN and then retrieves a scene that is the most similar to the generated image. By utilizing the image generated from the input query sentence as a query, we can control semantic information of the query image at the text level. Furthermore, we introduce a novel interactive re-ranking scheme to our retrieval method. Specifically, users can consider the importance of each word within the first input query sentence. Then the proposed method re-generates the query image that reflects the word importance provided by users. By updating the generated query image based on the word importance, it becomes feasible for users to revise retrieval results through this re-ranking process. In experiments, we showed that our retrieval method including the re-ranking scheme outperforms recently proposed retrieval methods.
  • Ren Togo, Naoki Saito 0006, Takahiro Ogawa, Miki Haseyama
    IEEE Access 7 162395 - 162404 2019年 [査読有り][通常論文]
     
    A method for estimating regions of deterioration in electron microscope images of rubber materials is presented in this paper. Deterioration of rubber materials is caused by molecular cleavage, external force, and heat. An understanding of these characteristics is essential in the field of material science for the development of durable rubber materials. Rubber material deterioration can be observed by using an electron microscope, but it requires much effort and specialized knowledge to find regions of deterioration. In this paper, we propose an automated deterioration region estimation method based on deep learning and anomaly detection techniques to support such material development. Our anomaly detection model, called Transfer Learning-based Deep Autoencoding Gaussian Mixture Model (TL-DAGMM), uses only normal regions for training since obtaining training data for regions of deterioration is difficult. TL-DAGMM makes use of high-representation features extracted from a pre-trained deep learning model and can automatically learn the characteristics of normal rubber material regions. Regions of deterioration are estimated at the pixel level from the calculated anomaly scores. Experiments on real rubber material electron microscope images demonstrated the effectiveness of our model.
  • Genki Suzuki, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    IEEE Access 7 153238 - 153248 2019年 [査読有り][通常論文]
     
    A novel method for estimating team tactics in soccer videos based on a Deep Extreme Learning Machine (DELM) and unique characteristics of tactics is presented in this paper. The proposed method estimates the tactics of each team from players' formations and enables successful training from a limited amount of training data. Specifically, the estimation of tactics consists of two stages. First, by utilizing two DELMs corresponding to the two teams, the proposed method estimates the provisional tactics of each team. Second, the proposed method updates the team tactics based on unique characteristics of soccer tactics, the relationship between tactics of the two teams and information on ball possession. Consequently, since the proposed method estimates the team tactics that satisfy these characteristics, accurate estimation results can be obtained. In an experiment, the proposed method is applied to actual soccer videos to verify its effectiveness.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Access 7 153183 - 153193 2019年 [査読有り][通常論文]
     
    Scene retrieval from input descriptions has been one of the most important applications with the increasing number of videos on the Web. However, this is still a challenging task since semantic gaps between features of texts and videos exist. In this paper, we try to solve this problem by utilizing a text-to-image Generative Adversarial Network (GAN), which has become one of the most attractive research topics in recent years. The text-to-image GAN is a deep learning model that can generate images from their corresponding descriptions. We propose a new retrieval framework, Query is GAN, based on the text-to-image GAN that drastically improves scene retrieval performance by simple procedures. Our novel idea makes use of images generated by the text-to-image GAN as queries for the scene retrieval task. In addition, unlike many studies on text-to-image GANs that mainly focused on the generation of high-quality images, we reveal that the generated images have reasonable visual features suitable for the queries even though they are not visually pleasant. We show the effectiveness of the proposed framework through experimental evaluation in which scene retrieval is performed from real video datasets.
  • Tetsuya Kushima, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    IEEE Access 7 148576 - 148585 2019年 [査読有り][通常論文]
     
    A novel method for interest level estimation based on tensor completion via feature integration for partially paired users' behavior and videos is presented in this paper. The proposed method defines a novel canonical correlation analysis (CCA) framework that is suitable for interest level estimation, which is a hybrid version of semi-supervised CCA (SemiCCA) and supervised locality preserving CCA (SLPCCA) called semi-supervised locality preserving CCA (S2LPCCA). For partially paired users' behavior and videos in actual shops and on the Internet, new integrated features that maximize the correlation between partially paired samples by the principal component analysis (PCA)-mixed CCA framework are calculated. Then videos that users have not watched can be used for the estimation of users' interest levels. Furthermore, local structures of partially paired samples in the same class are preserved for accurate estimation of interest levels. Tensor completion, which can be applied to three contexts, videos, users and 'canonical features and interest levels,' is used for estimation of interest levels. Consequently, the proposed method realizes accurate estimation of users' interest levels based on S2LPCCA and the tensor completion from partially paired training features of users' behavior and videos. Experimental results obtained by applying the proposed method to actual data show the effectiveness of the proposed method.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS) 2019-May 1 - 5 2019年 [査読有り][通常論文]
     
    Text-to-image Generative Adversarial Network (GAN) is a deep learning model that generates an image from an input sentence. It has been attracting particular attention because of the applicability of the generated images. However, many existing studies have still focused on generation of high-quality images, and there are few studies focusing on application of the generated images since text-to-image GANs still cannot produce visually pleasing images in complicated tasks. In this paper, we apply a text-to-image GAN as a generator of query images for a scene retrieval task to show the usefulness of the visually non-pleasing images. The proposed method utilizes a low-resolution generated image that focuses on a sentence and a high-resolution generated image that focuses on each word of the sentence to retrieve a desired scene. With this mechanism, the proposed method realizes high-accuracy scene retrieval from a sentence input. Experimental results show the effectiveness of our method.
  • Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE International Symposium on Circuits and Systems, ISCAS 2019, Sapporo, Japan, May 26-29, 2019 2019-May 1 - 5 IEEE 2019年 [査読有り][通常論文]
     
    This paper presents a method of semi-supervised learning based on tri-training for gastritis classification using gastric X-ray images. The proposed method is constructed based on the tri-training architecture, and the strategies of label smoothing regularization and random erasing augmentation are utilized in the method to enhance the performance. Although the task of gastritis classification is challenging, we report that the proposed semi-supervised learning method using only a small number of labeled data achieves 0.888 harmonic mean of sensitivity and specificity on test data composed of 615 patients.
  • Misaki Kanai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE International Symposium on Circuits and Systems, ISCAS 2019, Sapporo, Japan, May 26-29, 2019 2019-May 1 - 5 IEEE 2019年 [査読有り][通常論文]
     
    With the development of convolutional neural networks (CNNs), CNN-based methods for medical image analysis have achieved more accurate performance than conventional machine learning methods using hand-crafted features. Although these methods utilize a large number of training images and realize high performance, lack of the training images often occurs in medical image analysis due to several reasons. This paper presents a novel image generation method to construct a dataset for gastritis detection from gastric X-ray images. The proposed method effectively utilizes two kinds of training images (gastritis and non-gastritis images) to generate images of each domain by introducing label conditioning into a generative model. Experimental results using real-world gastric X-ray images show the effectiveness of the proposed method.
  • Keisuke Maeda, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2019, Brighton, United Kingdom, May 12-17, 2019 2019-May 3936 - 3940 IEEE 2019年 [査読有り][通常論文]
     
    This paper presents multi-feature fusion based on supervised multi-view multi-label canonical correlation projection (sM2CP). The proposed method applies sM2CP-based feature fusion to multiple features obtained from various convolutional neural networks (CNNs) whose characteristics are different. Since new fused features with high representation ability can be obtained, performance improvement of multi-label classification is realized. Specifically, in order to tackle the multi-label problem, sM2CP introduces a label similarity information of label vectors into the objective function of supervised multi-view canonical correlation analysis. Thus, sM2CP can deal with complex label information such as multi-label annotation. The main contribution of this paper is the realization of feature fusion of multiple CNN features for the multi-label problem by introducing multi-label similarity information into the canonical correlation analysis-based feature fusion approach. Experimental results show the effectiveness of sM2CP, which enables effective fusion of multiple CNN features.
  • Ryosuke Sawata, Takahiro Ogawa, Miki Haseyama
    IEEE Trans. Affect. Comput. 10 3 430 - 444 2019年 [査読有り][通常論文]
     
    A novel audio feature projection using Kernel Discriminative Locality Preserving Canonical Correlation Analysis (KDLPCCA)-based correlation with electroencephalogram (EEG) features for favorite music classification is presented in this paper. The projected audio features reflect individual music preference adaptively since they are calculated by considering correlations with the user’s EEG signals during listening to musical pieces that the user likes/dislikes via a novel CCA proposed in this paper. The novel CCA, called KDLPCCA, can consider not only a non-linear correlation but also local properties and discriminative information of each class sample, namely, music likes/dislikes. Specifically, local properties reflect intrinsic data structures of the original audio features, and discriminative information enhances the power of the final classification. Hence, the projected audio features have an optimal correlation with individual music preference reflected in the user’s EEG signals, adaptively. If the KDLPCCA-based projection that can transform original audio features into novel audio features is calculated once, our method can extract projected audio features from a new musical piece without newly observing individual EEG signals. Our method therefore has a high level of practicability. Consequently, effective classification of user’s favorite musical pieces via a Support Vector Machine (SVM) classifier using the new projected audio features becomes feasible. Experimental results show that our method for favorite music classification using projected audio features via the novel CCA outperforms methods using original audio features, EEG features and even audio features projected by other state-of-the-art CCAs.
  • Takahiro Ogawa, Shingo Yamaguchi
    IEEE Consumer Electron. Mag. 8 3 6 - 7 2019年 [査読有り][通常論文]
  • Ren Togo, Kenji Hirata, Osamu Manabe, Hiroshi Ohira, Ichizo Tsujino, Keiichi Magota, Takahiro Ogawa, Miki Haseyama, Tohru Shiga
    Comput. Biol. Medicine 104 81 - 86 2019年 [査読有り][通常論文]
     
    Aims: The aim of this study was to determine whether deep convolutional neural network (DCNN)-based features can represent the difference between cardiac sarcoidosis (CS) and non-CS using polar maps. Methods: A total of 85 patients (33 CS patients and 52 non-CS patients) were analyzed as our study subjects. One radiologist reviewed PET/CT images and defined the left ventricle region for the construction of polar maps. We extracted high-level features from the polar maps through the Inception-v3 network and evaluated their effectiveness by applying them to a CS classification task. Then we introduced the ReliefF algorithm in our method. The standardized uptake value (SUV)-based classification method and the coefficient of variance (CoV)-based classification method were used as comparative methods. Results: Sensitivity, specificity and the harmonic mean of sensitivity and specificity of our method with the ReliefF algorithm were 0.839, 0.870 and 0.854, respectively. Those of the SUVmax-based classification method were 0.468, 0.710 and 0.564, respectively, and those of the CoV-based classification method were 0.655, 0.750 and 0.699, respectively. Conclusion: The DCNN-based high-level features may be more effective than low-level features used in conventional quantitative analysis methods for CS classification.
  • Keisuke Maeda, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    Comput. Aided Civ. Infrastructure Eng. 34 8 654 - 676 2019年 [査読有り][通常論文]
     
    © 2019 Computer-Aided Civil and Infrastructure Engineering This paper presents a convolutional sparse coding (CSC)-based deep random vector functional link network (CSDRN) for distress classification of road structures. The main contribution of this paper is the introduction of CSC into a feature extraction scheme in the distress classification. CSC can extract visual features representing characteristics of target images because it can successfully estimate optimal convolutional dictionary filters and sparse features as visual features by training from a small number of distress images. The optimal dictionaries trained from distress images have basic components of visual characteristics such as edge and line information of distress images. Furthermore, sparse feature maps estimated on the basis of the dictionaries represent both strength of the basic components and location information of regions having their components, and these maps can represent distress images. That is, sparse feature maps can extract key components from distress images that have diverse visual characteristics. Therefore, CSC-based feature extraction is effective for training from a limited number of distress images that have diverse visual characteristics. The construction of a novel neural network, CSDRN, by the use of a combination of CSC-based feature extraction and the DRN classifier, which can also be trained from a small dataset, is shown in this paper. Accurate distress classification is realized via the CSDRN.
  • Ryosuke Harakawa, Shoji Takimura, Takahiro Ogawa, Miki Haseyama, Masahiro Iwahashi
    IEEE Access 7 116207 - 116217 2019年 [査読有り][通常論文]
     
    Although Twitter has become an important source of information, the number of accessible tweets is too large for users to easily find their desired information. To overcome this difficulty, a method for tweet clustering is proposed in this paper. Inspired by reports that network representation is useful for multimedia content analysis including clustering, a network-based approach is employed. Specifically, a consensus clustering method for tweet networks that represent relationships among the tweets' semantics and sentiment is newly derived. The proposed method integrates multiple clustering results obtained by applying successful clustering methods to the tweet networks. By integrating complementary clustering results obtained based on semantic and sentiment features, the accurate clustering of tweets becomes feasible. The contribution of this work can be found in the utilization of the features, which differs from existing network-based consensus clustering methods that target only the network structure. Experimental results for a real-world Twitter dataset, which includes 65,553 tweets across 25 datasets, verify the effectiveness of the proposed method.
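    A minimal sketch of the consensus idea, assuming generic base clusterings in place of the semantic- and sentiment-based ones: multiple clustering results are combined into a co-association matrix, which is then clustered hierarchically.

```python
# Minimal sketch of consensus clustering via a co-association matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=4, random_state=0)

# Base clusterings (stand-ins for clusterings of the semantic/sentiment networks).
base_labels = [KMeans(n_clusters=k, n_init=10, random_state=s).fit_predict(X)
               for k, s in [(3, 0), (4, 1), (5, 2)]]

n = X.shape[0]
co_assoc = np.zeros((n, n))
for labels in base_labels:
    co_assoc += (labels[:, None] == labels[None, :]).astype(float)
co_assoc /= len(base_labels)                    # fraction of clusterings agreeing on each pair

dist = 1.0 - co_assoc                           # dissimilarity between items
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
consensus = fcluster(Z, t=4, criterion="maxclust")
print(consensus[:20])
```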
  • Yui Matsumoto, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    IEEE Access 7 104155 - 104167 2019年 [査読有り][通常論文]
     
    A novel method for music video recommendation is presented in this paper. The contributions of this paper are two-fold. (i) The proposed method constructs a network, which not only represents relationships between music videos and users but also captures multi-modal features of music videos. This enables collaborative use of multi-modal features such as audio, visual, and textual features, and multiple social metadata that can represent relationships between music videos and users on video hosting services. (ii) A novel scheme for link prediction considering local and global structures of the network (LP-LGSN) is newly derived by fusing multiple link prediction scores based on both local and global structures. By using the LP-LGSN to predict the degrees to which users desire music videos, the proposed method can recommend users' desired music videos. The experimental results for a real-world dataset constructed from YouTube-8M show the effectiveness of the proposed method.
  • Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Access 7 87448 - 87457 2019年 [査読有り][通常論文]
     
    In this paper, a novel synthetic gastritis image generation method based on a generative adversarial network (GAN) model is presented. Sharing medical image data is a crucial issue for realizing diagnostic supporting systems. However, it is still difficult for researchers to obtain medical image data since the data include individual information. Recently proposed GAN models can learn the distribution of training images without seeing real image data, and individual information can be completely anonymized by generated images. If generated images can be used as training images in medical image classification, promoting medical image analysis will become feasible. In this paper, we targeted gastritis, which is a risk factor for gastric cancer and can be diagnosed by gastric X-ray images. Instead of collecting a large amount of gastric X-ray image data, an image generation approach was adopted in our method. We newly propose loss function-based conditional progressive growing generative adversarial network (LC-PGGAN), a gastritis image generation method that can be used for a gastritis classification problem. The LC-PGGAN gradually learns the characteristics of gastritis in gastric X-ray images by adding new layers during the training step. Moreover, the LC-PGGAN employs loss function-based conditional adversarial learning so that generated images can be used for the gastritis classification task. We show that images generated by the LC-PGGAN are effective for gastritis classification using gastric X-ray images and have clinical characteristics of the target symptom.
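    The loss function-based conditioning mentioned above builds on the standard conditional adversarial objective; as a generic reference (not the LC-PGGAN loss itself), conditioning the generator G and discriminator D on a label c gives:

        \min_{G}\max_{D}\; \mathbb{E}_{x,c}\bigl[\log D(x,c)\bigr] \;+\; \mathbb{E}_{z,c}\bigl[\log\bigl(1 - D(G(z,c),c)\bigr)\bigr]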
  • Keisuke Maeda, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    Advanced Engineering Informatics 37 79 - 87 2018年08月01日 [査読有り][通常論文]
     
    This paper presents distress classification of class-imbalanced inspection data via correlation-maximizing weighted extreme learning machine (CMWELM). For distress classification, it is necessary to extract semantic features that can effectively distinguish multiple kinds of distress from a small amount of class-imbalanced data. In recent machine learning techniques such as general deep learning methods, since effective feature transformation from visual features to semantic features can be realized by using multiple hidden layers, a large amount of training data are required. However, since the amount of training data for civil structures is small, it becomes difficult to perform successful transformation by using these multiple hidden layers. On the other hand, CMWELM consists of two hidden layers. The first hidden layer performs feature transformation, which can directly extract the semantic features from visual features, and the second hidden layer performs classification while solving the class-imbalance problem. Specifically, in the first hidden layer, the feature transformation is realized by using projections obtained by maximizing the canonical correlation between visual and text features as weight parameters of the hidden layer without designing multiple hidden layers. Furthermore, the second hidden layer enables successful training of our classifier by using weighting factors concerning the class-imbalance problem. Consequently, CMWELM realizes accurate distress classification from a small amount of class-imbalanced data.
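    The core idea above, fixing the first hidden layer with correlation-maximizing projections and training only a linear output layer, can be sketched roughly with scikit-learn as follows; the class_weight option is only a crude stand-in for the paper's imbalance weighting, and all names and settings are illustrative assumptions rather than the CMWELM implementation:

    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.linear_model import RidgeClassifier

    # X_vis: (n, d_v) visual features; X_txt: (n, d_t) inspection-text features; y: (n,) labels.
    def train_cca_elm(X_vis, X_txt, y, n_components=16, alpha=1.0):
        cca = CCA(n_components=n_components)          # n_components must not exceed the feature dimensions
        cca.fit(X_vis, X_txt)                         # correlation-maximizing projection ("first hidden layer")
        H = np.tanh(cca.transform(X_vis))             # nonlinearity on the projected features
        clf = RidgeClassifier(alpha=alpha, class_weight="balanced")  # linear output layer ("second hidden layer")
        clf.fit(H, y)
        return cca, clf

    def predict_cca_elm(cca, clf, X_vis):
        return clf.predict(np.tanh(cca.transform(X_vis)))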
  • Strategy to develop convolutional neural network-based classifier for diagnosis of whole-body FDG PET images
    Keisuke Kawauchi, Kenji Hirata, Seiya Ichikawa, Osamu Manabe, Kentaro Kobayashi, Shiro Watanabe, Miki Haseyama, Takahiro Ogawa, Ren Togo, Tohru Shiga, Chietsugu Katoh
    Society of Nuclear Medicine and Molecular Imaging Annual Meeting (SNMMI) 2018年06月 [査読有り][通常論文]
  • Use of deep convolutional neural network-based features for detection of cardiac sarcoidosis from polar map
    Ren Togo, Kenji Hirata, Osamu Manabe, Hiroshi Ohira, Ichizo Tsujino, Takahiro Ogawa, Miki Haseyama, Tohru Shiga
    Society of Nuclear Medicine and Molecular Imaging Annual Meeting (SNMMI) 2018年06月 [査読有り][通常論文]
  • Soh Yoshida, Takahiro Ogawa, Miki Haseyama, Mitsuji Muneyasu
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS E101D 5 1430 - 1440 2018年05月 [査読有り][通常論文]
     
    Video reranking is an effective way for improving the retrieval performance of text-based video search engines. This paper proposes a graph-based Web video search reranking method with local and global consistency analysis. Generally, the graph-based reranking approach constructs a graph whose nodes and edges respectively correspond to videos and their pairwise similarities. Many reranking methods are built on a scheme which regularizes the smoothness of pairwise relevance scores between adjacent nodes with regard to a user's query. However, since the overall consistency is measured by aggregating only the local consistency over each pair, errors in score estimation increase when noisy samples are included within query-relevant videos' neighbors. To deal with the noisy samples, the proposed method leverages the global consistency of the graph structure, which is different from the conventional methods. Specifically, in order to detect this consistency, the proposed method introduces a spectral clustering algorithm which can detect video groups, in which videos have strong semantic correlation, on the graph. Furthermore, a new regularization term, which smooths ranking scores within the same group, is introduced to the reranking framework. Since the score regularization is performed from both local and global aspects simultaneously, accurate score estimation becomes feasible. Experimental results obtained by applying the proposed method to a real-world video collection show its effectiveness.
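    The local-consistency part of such graph-based reranking is commonly expressed as regularized score propagation over the video graph; a standard closed form (generic notation, without the paper's additional group-smoothing term) is:

        f^{*} \;=\; (1-\alpha)\,(I - \alpha S)^{-1} y, \qquad S = D^{-1/2} W D^{-1/2},

    where W holds the pairwise video similarities, D is its degree matrix, y holds the initial text-based relevance scores, and \alpha \in (0,1) balances smoothness against fidelity to y. The proposed method additionally smooths scores within the spectrally detected video groups (global consistency).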
  • Ren Togo, Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    IEEE International Conference on Consumer Electronics – Taiwan (ICCE-TW) 2018年05月 [査読有り][通常論文]
     
    This paper presents an anonymous gastritis image generation method for improving gastritis recognition performance. We realize the generation of realistic gastritis images by considering label information. Experimental results showed that anonymous images generated by our method had potential for a gastritis recognition task. Concretely, a classifier constructed with the anonymous images outperformed a classifier based on a conventional image generation method.
  • Ryosuke Harakawa, Daichi Takehara, Takahiro Ogawa, Miki Haseyama
    Multimedia Tools and Applications 77 14 1 - 19 2018年03月29日 [査読有り][通常論文]
     
    For realizing quick and accurate access to desired information and effective advertisements or election campaigns, personalized tweet recommendation is highly demanded. Since multimedia contents including tweets are tools for users to convey their sentiment, users’ interest in tweets is strongly influenced by sentiment factors. Therefore, successful personalized tweet recommendation can be realized if sentiment in tweets can be estimated. However, sentiment factors were not taken into account in previous works and the performance of previous methods may be limited. To overcome the limitation, a method for sentiment-aware personalized tweet recommendation through multimodal Field-aware Factorization Machines (FFM) is newly proposed in this paper. Successful personalized tweet recommendation becomes feasible through the following three contributions: (i) sentiment factors are newly introduced into personalized tweet recommendation, (ii) users’ interest is modeled by deriving multimodal FFM that enables collaborative use of multiple factors in a tweet, i.e., publisher, topic and sentiment factors, and (iii) the effectiveness of using sentiment factors as well as publisher and topic factors is clarified from results of experiments using real-world datasets related to worldwide hot topics, “#trump”, “#hillaryclinton” and “#ladygaga”. In addition to showing the effectiveness of the proposed method, the applicability of the proposed method to other tasks such as advertisement and social analysis is discussed as a conclusion and future work of this paper.
  • Susumu Gerund, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2017- 3978 - 3982 2018年02月20日 [査読有り][通常論文]
     
    This paper presents an image retrieval method based on the local regression and global alignment (LRGA) algorithm and relevance feedback for insect identification. Based on the LRGA algorithm, the proposed method enables estimation of ranking scores for image retrieval in such a way that the neighborhood structure of the database can be optimally preserved. This is the biggest contribution of this paper. Then our method measures the relevance between the query image and all images in the database and retrieves images based on the measured relevance. Furthermore, if positively labeled images obtained from a user are available, they are used as query relevance information for relevance feedback to improve the retrieval results. Experimental results show the effectiveness of our method.
  • Keisuke Maeda, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2017- 2379 - 2383 2018年02月20日 [査読有り][通常論文]
     
    This paper presents an automatic estimation method of deterioration levels on transmission towers via a Deep Extreme Learning Machine based on a Local Receptive Field (DELM-LRF). Although a Convolutional Neural Network (CNN) requires a large number of training images, it is difficult to prepare a sufficient number of training images of transmission towers. Thus, we propose a novel estimation method which enables training from a small number of training images. Specifically, we automatically extract image features based on a Local Receptive Field (LRF), which combines convolution and pooling without using hand-crafted features, and estimate deterioration levels via a Deep Extreme Learning Machine (DELM), one of the efficient deep learning methods. The derivation of DELM-LRF is the biggest contribution of this paper, and it can be trained from fewer training images than a CNN. Experimental results show the effectiveness of DELM-LRF for the estimation of deterioration levels on transmission towers. Consequently, the proposed method makes it possible to approach challenging tasks that require high expertise and for which it is difficult to prepare enough images.
  • Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2017- 2055 - 2059 2018年02月20日 [査読有り][通常論文]
     
    This paper presents a novel detection method of gastric cancer risk from X-ray images using the patch-based Convolutional Neural Network (CNN). Our method enables the training of the patch-based CNN which can accurately detect gastric cancer risk even though there is only the image-level ground truth. Furthermore, the proposed method can extract a feature vector that can represent the overall symptoms associated with the presence or absence of the risk. Specifically, the proposed method selects the patches related to their true risk via the CNN, and this is the most innovative contribution of our method. Moreover, we extract the feature vector by applying the Bag-of-Feature representation to the output values from the CNN's intermediate layer obtained from the selected patches. Finally, the detection of gastric cancer risk is performed by inputting the extracted feature vector into a Support Vector Machine. Experimental results confirm that the proposed method outperforms a previously reported method that combines the detection results obtained from X-ray images taken from multiple angles even though the proposed method only uses an X-ray image taken from a single angle, and we can achieve a higher performance than that of doctors.
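    The pipeline above (patch selection, encoding of intermediate CNN activations, and an image-level SVM) can be sketched as follows; the k-means codebook and all names are illustrative assumptions, not the paper's exact configuration:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def build_codebook(all_patch_features, n_words=64, seed=0):
        """all_patch_features: (n_patches_total, d) intermediate-layer CNN outputs from training patches."""
        return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(all_patch_features)

    def image_histogram(codebook, patch_features):
        """Bag-of-features: histogram of visual-word assignments for one image's selected patches."""
        words = codebook.predict(patch_features)
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    # Hypothetical training: one histogram per image, then an SVM on the image-level labels.
    # codebook = build_codebook(np.vstack(train_patch_feats))
    # X = np.array([image_histogram(codebook, f) for f in train_patch_feats])
    # clf = SVC(kernel="rbf").fit(X, y_image_labels)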
  • Shota Hamano, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2017- 1327 - 1331 2018年02月20日 [査読有り][通常論文]
     
    This paper presents a novel method for tag refinement using multilingual sources of tagged images in an image folksonomy. The proposed method enables accurate tag refinement by effectively leveraging multilingual sources of tags and considering the hierarchical structure of tags in the following way. First, synonymous tags across different languages are detected based on similarities between tagged images. In this stage, the proposed method utilizes visual similarities to effectively detect synonymous tags since the visual features extracted from images should be similar if they are assigned tags with the same meaning in different languages. Then the hierarchical structure of the tags is extracted based on the similarity between the detected synonymous tags. The hierarchical structure provides hypernymous and hyponymous tags of the target tags, which are important for considering the relevance between tags and images. Consulting the hierarchical structure enables removal of irrelevant tags from the images and assignment of relevant tags to the images. The proposed method effectively utilizes tags in various languages in an image folksonomy. Experimental results show the effectiveness of introducing multilingual sources of tagged images for accuracy improvement in tag refinement.
  • Akira Toyoda, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2017- 635 - 639 2018年02月20日 [査読有り][通常論文]
     
    This paper presents a new method to estimate users' video preferences using complementary properties of features via Multiview Local Fisher Discriminant Analysis (MvLFDA). The proposed method first extracts multiple visual features from video frames and electroencephalogram (EEG) features from users' EEG signals recorded during watching video. Then we calculate EEG-based visual features by applying Locality Preserving Canonical Correlation Analysis (LPCCA) to each visual feature and EEG features. The EEG-based visual features reflect users' preferences since the correlation between visual features and EEG features which reflect users' preferences is maximized. Next, MvLFDA, which is newly derived in this paper, integrates multiple EEG-based visual features. Since MvLFDA explores complementary properties of different features, it can be expected that the features obtained by integrating multiple EEG-based visual features are more effective for users' preference estimation than each EEG-based visual feature. The biggest contribution of this paper is the new derivation of MvLFDA. Then successful estimation of users' video preferences becomes feasible using features obtained by MvLFDA.
  • Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    Proceedings - International Conference on Image Processing, ICIP 2017- 435 - 439 2018年02月20日 [査読有り][通常論文]
     
    This paper presents an automatic Martian dust storm detection via decision level fusion (DLF) based on deep extreme learning machine (DELM). Since Martian images are taken in multi-wavelength bands, DLF techniques which output a final classification result by integrating multiple classification results are necessary. Furthermore, since the number of Martian images taken by satellites is different for each region, the number of the classification results to be integrated is different. Thus, we present a new DLF framework based on confidence values of the classification results. Specifically, we generate multiple extreme learning machines with kernel classifiers to obtain their classification results. Moreover, we monitor the classification results as confidence values and select the same number of the classification results with high confidence for each region. Finally, these selected results can be integrated by using a DLF based on DELM, which is a multilayered ELM. This integration framework is the biggest contribution of our method. Experimental results show the effectiveness of the DLF based on DELM.
  • Ren Togo, Kenta Ishihara, Katsuhiro Mabe, Harufumi Oizumi, Takahiro Ogawa, Mototsugu Kato, Naoya Sakamoto, Shigemi Nakajima, Masahiro Asaka, Miki Haseyama
    World Journal of Gastrointestinal Oncology 10 2 62 - 70 2018年02月15日 [査読有り][通常論文]
     
    AIM: To perform automatic gastric cancer risk classification using photofluorography for realizing effective mass screening as a preliminary study. METHODS: We used data for 2100 subjects including X-ray images, pepsinogen I and II levels, the PG I/II ratio, Helicobacter pylori (H. pylori) antibody, H. pylori eradication history and interview sheets. We performed two-stage classification with our system. In the first stage, H. pylori infection status classification was performed, and H. pylori-infected subjects were automatically detected. In the second stage, we performed atrophic level classification to validate the effectiveness of our system. RESULTS: Sensitivity, specificity and Youden index (YI) of H. pylori infection status classification were 0.884, 0.895 and 0.779, respectively, in the first stage. In the second stage, sensitivity, specificity and YI of atrophic level classification for H. pylori-infected subjects were 0.777, 0.824 and 0.601, respectively. CONCLUSION: Although further improvements of the system are needed, experimental results indicated the effectiveness of machine learning techniques for estimation of gastric cancer risk.
  • Yuma Sasaka, Takahiro Ogawa, Miki Haseyama
    IEEE Access 6 8340 - 8350 2018年02月09日 [査読有り][通常論文]
     
    A reliable method to estimate viewer interest is highly sought after for human-centered video information retrieval. A method that estimates viewer interest while users are watching Web videos is presented in this paper. The method uses a framework for anomaly detection based on collaborative use of facial expression and biological signals such as electroencephalogram (EEG) signals. To the best of our knowledge, there have been no studies that have taken into account two actual mechanisms of the behavior of users while they are watching Web videos. First, whereas most Web videos garner very little attention, a small number attract millions of views. Therefore, a framework for anomaly detection is newly applied to facial expression and EEG in order to model the imbalanced distribution of popularity. Second, since the number of Web videos that are labeled by users as interesting/not interesting is generally too small to estimate viewer interest by a supervised approach, the proposed method utilizes parametric techniques for anomaly detection, which estimates viewer interest in an unsupervised way. Unlike some related studies for estimating viewer interest, our method takes into account actual mechanisms of the behavior of users while they are watching Web videos by utilizing parametric techniques for anomaly detection. Then viewer interest can be estimated on the basis of an anomaly score calculated from our proposed method. Consequently, successful estimation of viewer interest based on a framework for anomaly detection, via collaborative use of facial expression and biological signals, becomes feasible.
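    The parametric, unsupervised anomaly-detection idea above (treating responses to interesting videos as anomalies against ordinary viewing behavior) can be illustrated with a simple Gaussian model and a Mahalanobis-distance score; this is a generic stand-in with hypothetical variable names, not the paper's exact parametric technique:

    import numpy as np

    def fit_gaussian(X):
        """X: (n, d) fused facial-expression / EEG features from ordinary (non-anomalous) viewing."""
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize for numerical stability
        return mu, np.linalg.inv(cov)

    def anomaly_score(x, mu, cov_inv):
        """Squared Mahalanobis distance: larger = more anomalous = higher estimated interest."""
        diff = x - mu
        return float(diff @ cov_inv @ diff)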
  • Yoshiki Ito, Takahiro Ogawa, Miki Haseyama
    IEICE Transactions on Information and Systems E101D 2 481 - 490 2018年02月01日 [査読有り][通常論文]
     
    A method for accurate estimation of personalized video preference using multiple users' viewing behavior is presented in this paper. The proposed method uses three kinds of features: a video, user's viewing behavior and evaluation scores for the video given by a target user. First, the proposed method applies Supervised Multiview Spectral Embedding (SMSE) to obtain lower-dimensional video features suitable for the following correlation analysis. Next, supervised Multi-View Canonical Correlation Analysis (sMVCCA) is applied to integrate the three kinds of features. Then we can get optimal projections to obtain new visual features, "canonical video features" reflecting the target user's individual preference for a video based on sMVCCA. Furthermore, in our method, we use not only the target user's viewing behavior but also other users' viewing behavior for obtaining the optimal canonical video features of the target user. This unique approach is the biggest contribution of this paper. Finally, by integrating these canonical video features, Support Vector Ordinal Regression with Implicit Constraints (SVORIM) is trained in our method. Consequently, the target user's preference for a video can be estimated by using the trained SVORIM. Experimental results show the effectiveness of our method.
  • Naoki Saito, Takahiro Ogawa, Satoshi Asamizu, Miki Haseyama
    Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval, ICMR 2018, Yokohama, Japan, June 11-14, 2018 493 - 496 ACM 2018年 [査読有り][通常論文]
     
    A new tourism category classification method through estimation of the existence of reliable classification results is presented in this paper. The proposed method obtains two kinds of classification results by applying a convolutional neural network to tourism images and applying a Fuzzy K-nearest neighbor algorithm to geotags attached to the tourism images. Then the proposed method estimates whether a reliable classification result exists among the above two results. If a reliable result is included, that result is selected as the final classification result. If no reliable result is included, the final result is obtained by another approach based on a multiple-annotator logistic regression model. Consequently, the proposed method enables accurate classification based on the new estimation scheme.
  • Kazaha Horii, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2018 IEEE International Conference on Image Processing, ICIP 2018, Athens, Greece, October 7-10, 2018 2366 - 2370 IEEE 2018年 [査読有り][通常論文]
     
    This paper presents a human-centered neural network model with discriminative locality preserving canonical correlation analysis (DLPCCA) for image classification. Although construction of multiple hidden layers adopted in recent deep learning methods is effective for extracting semantic features, a large amount of training images is required. In order to extract effective features for image classification successfully from a small amount of training images, the proposed method transforms visual features by using biological information obtained from image viewers as auxiliary information. The proposed method consists of two hidden layers. By constructing the first hidden layer, which can maximize canonical correlation between visual features and features based on biological information, the effective feature transformation can be realized. Specifically, the proposed method uses DLPCCA, which considers label information and preserves local structures. The second hidden layer constructed based on Extreme Learning Machine (ELM) enables classification. Consequently, since the first hidden layer performs the effective feature transformation, the proposed neural network model realizes accurate image classification from a quite small amount of training images.
  • Ren Togo, Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    2018 IEEE International Conference on Image Processing, ICIP 2018, Athens, Greece, October 7-10, 2018 2082 - 2086 IEEE 2018年 [査読有り][通常論文]
     
    This paper presents an anonymous gastritis image generation method based on a generative adversarial network approach. Since clinical individual data include highly confidential information, they must be handled carefully. Although data sharing is demanded to construct large-scale medical image datasets for deep learning-based recognition tasks, managing and annotating these data have been conducted manually. The proposed method enables the generation of anonymous images by an adversarial learning approach. Experimental results show that generated images by our method contribute to a gastritis recognition task. This will be helpful for constructing large-scale medical image datasets effectively.
  • Array,Takahiro Ogawa, Miki Haseyama
    Multimedia Tools Appl. 77 16 20297 - 20324 2018年 [査読有り][通常論文]
     
    A method to track topic evolution via salient keyword matching with consideration of semantic broadness for Web video discovery is presented in this paper. The proposed method enables users to understand the evolution of topics over time for discovering Web videos in which they are interested. A framework that enables extraction and tracking of the hierarchical structure, which contains Web video groups with various degrees of semantic broadness, is newly derived as follows: Based on network analysis using multimodal features, i.e., features of video contents and metadata, our method extracts the hierarchical structure and salient keywords that represent contents of each Web video group. Moreover, salient keyword matching, which is newly developed by considering salient keyword distribution, semantic broadness of each Web video group and initial topic relevance, is applied to each hierarchical structure obtained in different time stamps. Unlike methods in previous works, by considering the semantic broadness as well as the salient keyword distribution, our method can overcome the problem of the desired semantic broadness of topics being different depending on each user. Also, the initial topic relevance enables correction of the gap from an initial topic at the start of tracking. Consequently, it becomes feasible to track the evolution of topics over time for finding Web videos in which the users are interested. Experimental results for real-world datasets containing YouTube videos verify the effectiveness of the proposed method.
  • Array,Sho Takahashi, Array,Array
    J. Sel. Topics Signal Processing 12 4 633 - 644 2018年 [査読有り][通常論文]
     
    This paper presents estimation of deterioration levels of transmission towers via deep learning maximizing the canonical correlation between heterogeneous features. In the proposed method, we newly construct a correlation-maximizing deep extreme learning machine (CMDELM) based on a local receptive field (LRF). For accurate deterioration level estimation, it is necessary to obtain semantic information that effectively represents deterioration levels. However, since the amount of training data for transmission towers is small, it is difficult to perform feature transformation by using many hidden layers such as general deep learning methods. In CMDELM-LRF, one hidden layer, which maximizes the canonical correlation between visual features and text features obtained from inspection text data, is newly inserted. Specifically, by using projections obtained by maximizing the canonical correlation as weight parameters of the hidden layer, feature transformation for extracting semantic information is realized without designing many hidden layers. This is the main contribution of this paper. Consequently, CMDELM-LRF realizes accurate deterioration level estimation from a small amount of training data.
  • Megumi Takezawa, Hirofumi Sanada, Takahiro Ogawa, Miki Haseyama
    IEICE Transactions 101-A 6 900 - 903 2018年 [査読有り][通常論文]
     
    In this paper, we propose a highly accurate method for estimating the quality of images compressed using fractal image compression. Using an iterated function system, fractal image compression compresses images by exploiting their self-similarity, thereby achieving high levels of performance; however, we cannot always use fractal image compression as a standard compression technique because some compressed images are of low quality. Generally, sufficient time is required for encoding and decoding an image before it can be determined whether the compressed image is of low quality or not. Therefore, in our previous study, we proposed a method to estimate the quality of images compressed using fractal image compression. Our previous method estimated the quality using image features of a given image without actually encoding and decoding the image, thereby providing an estimate rather quickly; however, estimation accuracy was not entirely sufficient. Therefore, in this paper, we extend our previously proposed method for improving estimation accuracy. Our improved method adopts a new image feature, namely lacunarity. Results of simulation showed that the proposed method achieves higher levels of accuracy than those of our previous method.
  • Array,Array,Miki Haseyama
    IEEE Access 6 32481 - 32492 2018年 [査読有り][通常論文]
     
    In this paper, we propose a novel method for estimating human emotion using functional brain images. The final goal of our study is contribution to affective brain computer interfaces (aBCIs), which use neuropsychological signals. In the proposed method, we newly derive multiview general tensor discriminant analysis (MvGTDA) in order to reveal significant brain regions and accurately estimate human emotion evoked by visual stimuli. This is because it is important to find activation of multiple brain regions for estimating emotional states. Since we regard a 'Brodmann area' as a 'view' and introduce $L_{1}$-norm regularization for these views, MvGTDA can eliminate non-crucial Brodmann areas and select significant ones. Moreover, in general studies on functional brain images based on machine learning methodologies, there is an overfitting problem caused by a small sample size. Therefore, revealing significant Brodmann areas based on MvGTDA has another important role, i.e., solving the overfitting problem. By inputting estimation results respectively obtained from the significant areas and the MvGTDA-based feature, tensor-based supervised decision-level fusion (TS-DLF) integrates them and outputs the final estimation result of the user's emotion. In experiments, we showed the effectiveness of our method by using actual functional brain images and we revealed the significant brain regions in emotional states.
  • Yui Matsumoto, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    2018 IEEE International Conference on Multimedia and Expo, ICME 2018, San Diego, CA, USA, July 23-27, 2018 2018-July 1 - 6 IEEE Computer Society 2018年 [査読有り][通常論文]
     
    To help users seek desired music videos and create attractive music videos, many methods that realize applications such as music video recommendation, captioning and generation have been proposed. In this paper, a novel method that realizes these applications simultaneously on the basis of heterogeneous network analysis via latent link estimation is proposed. To the best of our knowledge, this work is the first attempt to realize music video recommendation, captioning and generation simultaneously. The proposed method enables latent link estimation with consideration of multimodal information and multiple social metadata obtained from music videos via Laplacian multiset canonical correlation analysis. Thus, it becomes feasible to construct a heterogeneous network that enables direct comparison of audio, visual and textual information of music videos and user information on the same feature space. Furthermore, link prediction on the obtained heterogeneous network enables association with (i) user information and their desired audio information; (ii) audio information and textual information that describes contents of musical pieces; and (iii) audio information and visual information that represents contents of musical pieces visually. As a result, support for (i) music video recommendation; (ii) captioning; and (iii) generation becomes feasible, respectively. Experimental results for a real-world dataset constructed by using YouTube-8M show the effectiveness of the proposed method.
  • Tetsuya Kushima, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    2018 IEEE International Conference on Multimedia and Expo, ICME 2018, San Diego, CA, USA, July 23-27, 2018 2018-July 1 - 6 IEEE Computer Society 2018年 [査読有り][通常論文]
     
    This paper presents a novel method for interest level estimation of items via matrix completion based on adaptive user matrix construction. The proposed method introduces a new criterion for adaptively constructing a user matrix that consists of user behavior features and interest levels, which are evaluated by target users and similar users. In the estimation, the matrix completion via rank minimization using the truncated nuclear norm is applied to the constructed matrix. The proposed method enables both of the interest level estimation of the target users and the selection of the similar users suitable for the estimation by monitoring errors caused in the matrix completion algorithm. The caused errors indicate the minimum differences between the estimated interest levels and true ones, and they can be regarded as the criterion for both of the optimal estimation and the adaptive selection. Furthermore, the proposed method uses weight matrices for decreasing an influence of missing data on the estimation. Consequently, accurate estimation of the interest levels becomes feasible by using the adaptively constructed matrix. Experimental results obtained by applying the proposed method to users' behavior and interest data show the effectiveness of the proposed method.
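    Matrix completion via rank minimization, as applied above to the user matrix of behavior features and interest levels, is often approximated by iterative singular-value soft-thresholding; the sketch below is that generic SoftImpute-style routine (not the truncated-nuclear-norm solver or the weighting used in the paper), with illustrative parameter names:

    import numpy as np

    def soft_impute(M, observed, tau=1.0, n_iter=100):
        """M: user matrix with arbitrary values where 'observed' is False.
        observed: boolean mask, True for known behavior features / interest levels."""
        X = np.where(observed, M, 0.0)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s = np.maximum(s - tau, 0.0)            # soft-threshold the singular values
            X_low = (U * s) @ Vt                    # current low-rank estimate
            X = np.where(observed, M, X_low)        # keep observed entries, fill missing ones
        return X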
  • Yoshiki Ito, Takahiro Ogawa, Miki Haseyama
    2018 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2018, Calgary, AB, Canada, April 15-20, 2018 2018-April 3086 - 3090 IEEE 2018年 [査読有り][通常論文]
     
    In this paper, we present supervised fractional-order embedding multiview canonical correlation analysis (SFEMCCA). SFEMCCA is a CCA method realizing the following three points: (1) learning from noisy data with a small number of samples and a large number of dimensions, (2) multiview learning that can integrate three or more kinds of features, and (3) supervised learning using labels corresponding to the samples. In real data, it is necessary to deal with high-dimensional noisy data with a limited number of samples, and there are many cases where three or more kinds of multimodal and supervised data are treated in order to calculate more accurate projections. Therefore, SFEMCCA, which takes the above advantages (1)-(3) into account, is effective for data obtained from real environments. From experimental results, it was confirmed that accuracy improvements using SFEMCCA were statistically significant compared to several conventional methods of supervised multiview CCA.
  • Akira Toyoda, Takahiro Ogawa, Miki Haseyama
    2018 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2018, Calgary, AB, Canada, April 15-20, 2018 2018-April 891 - 895 IEEE 2018年 [査読有り][通常論文]
     
    This paper presents a new method to estimate user preferences for videos based on multiple feature fusion via semi-supervised Multiview Local Fisher Discriminant Analysis (sMvLFDA). The proposed method first extracts multiple visual features from videos and functional near-infrared spectroscopy (fNIRS) features from fNIRS signals recorded during watching videos. Next, we apply Locality Preserving Canonical Correlation Analysis (LPCCA) to each visual feature and fNIRS features and project each visual feature to the new feature spaces (fNIRS-based visual feature spaces). Consequently, since the correlation between each visual feature and fNIRS features which reflect user preferences is maximized, we can transform visual features into features which also reflect user preferences. In addition, we newly introduce sMvLFDA and fuse multiple fNIRS-based visual features via sMvLFDA. sMvLFDA fuses features while using labeled samples and unlabeled samples simultaneously to reduce overfitting to the labeled samples. Furthermore, sMvLFDA adequately uses complementary properties in multiple features. Therefore, it can be expected that the fused features are more effective for estimation of user preferences than each fNIRS-based visual feature. The main contribution of this paper is the new derivation of sMvLFDA. Consequently, by using the fused features, it becomes feasible to estimate user preferences for videos successfully.
  • Takahiro Ogawa, Sho Takahashi, Naofumi Wada, Akira Tanaka, Miki Haseyama
    IEICE Transactions 101-A 11 1776 - 1785 2018年 [査読有り][通常論文]
     
    Binary sparse representation based on arbitrary quality metrics and its applications are presented in this paper. The novelties of the proposed method are twofold. First, the proposed method newly derives sparse representation for which representation coefficients are binary values, and this enables selection of arbitrary image quality metrics. This new sparse representation can generate quality metric-independent subspaces with simplification of the calculation procedures. Second, visual saliency is used in the proposed method for pooling the quality values obtained for all of the parts within target images. This approach enables visually pleasant approximation of the target images more successfully. By introducing the above two novel approaches, successful image approximation considering human perception becomes feasible. Since the proposed method can provide lower-dimensional subspaces that are obtained by better image quality metrics, realization of several image reconstruction tasks can be expected. Experimental results showed high performance of the proposed method in terms of two image reconstruction tasks, image inpainting and super-resolution.
  • Array,Array,Miki Haseyama
    IEEE Access 6 63833 - 63842 2018年 [査読有り][通常論文]
     
    This paper presents a novel method for favorite video estimation based on multiview feature integration via kernel multiview local fisher discriminant analysis (KMvLFDA). The proposed method first extracts electroencephalogram (EEG) features from users' EEG signals recorded while watching videos and multiple visual features from videos. Then, multiple EEG-based visual features are obtained by applying locality preserving canonical correlation analysis to EEG features and each visual feature. Next, KMvLFDA, which is newly derived in this paper, explores the complementary properties of different features and integrates the multiple EEG-based visual features. In addition, by using KMvLFDA, between-class scatter is maximized and within-class scatter is minimized in the integrated feature space. Consequently, it can be expected that the new features that are obtained by the above integration are more effective than each of the EEG-based visual features for the estimation of users' favorite videos. The main contribution of this paper is the new derivation of KMvLFDA. Successful estimation of users' favorite videos becomes feasible by using the new features obtained via KMvLFDA.
  • Array,Array, Keisuke Maeda, Miki Haseyama
    IEEE Access 6 61401 - 61409 2018年 [査読有り][通常論文]
     
    Video classification based on the user's preference (information of what a user likes: WUL) is important for realizing human-centered video retrieval. A better understanding of the rationale of WUL would greatly contribute to the support for successful video retrieval. However, few studies have examined the relationship between information on what a user watches and WUL. A new method that classifies videos on the basis of WUL using video features and electroencephalogram (EEG) signals collaboratively with a multimodal bidirectional Long Short-Term Memory (Bi-LSTM) network is presented in this paper. To the best of our knowledge, there has been no study on WUL-based video classification using video features and EEG signals collaboratively with LSTM. First, we newly apply transfer learning to the WUL-based video classification since the number of labels (liked or not liked) attached to videos by users is small, which makes it difficult to classify videos based on WUL. Furthermore, we conduct a user study showing that the representation of psychophysiological signals calculated by the Bi-LSTM is effective for the WUL-based video classification. Experimental results showed that our deep neural network feature representations can distinguish WUL for each subject.
  • Tomoki Haruyama, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    IEEE 7th Global Conference on Consumer Electronics, GCCE 2018, Nara, Japan, October 9-12, 2018 710 - 711 IEEE 2018年 [査読有り][通常論文]
     
    This paper presents a novel method for estimating important scenes in soccer videos based on collaborative use of audio-visual Convolutional Neural Network (CNN) features. In soccer games, since game situations influence not only players' movements but also audiences' cheers, analyses of their audio and visual sequences are useful for the estimation of important scenes. In our method, such scenes are estimated from audio and visual CNN features via support vector machine (SVM) in each feature. Furthermore, by applying weighted majority voting based on confidences defined from the SVM-based estimation results, accurate estimation of important scenes becomes feasible. Experimental results show the effectiveness of our method.
  • Shoji Takimura, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    IEEE 7th Global Conference on Consumer Electronics, GCCE 2018, Nara, Japan, October 9-12, 2018 204 - 205 IEEE 2018年 [査読有り][通常論文]
     
    A method for Twitter followee recommendation based on multimodal field-aware factorization machines considering social relations (MFFM-SR) is presented. MFFM-SR enables collaborative use of textual and visual features and social relations unlike conventional methods. Specifically, for distinguishing users' interest, visual features are extracted from images in their tweets and icons as well as textual features and social relations. Furthermore, to construct a model that accurately represents users' interest, MFFM-SR that enables calculation of high-level features via estimation of latent relationships among the obtained features and social relations is derived. By using the constructed model, successful followee recommendation becomes feasible.
  • Yusuke Akamatsu, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    IEEE 7th Global Conference on Consumer Electronics, GCCE 2018, Nara, Japan, October 9-12, 2018 202 - 203 IEEE 2018年 [査読有り][通常論文]
     
    This paper presents a method to estimate viewed image categories via canonical correlation analysis (CCA) using human brain activity measured by functional magnetic resonance imaging (fMRI). The proposed method enables estimation of image categories that a subject viewed by using only the subject's brain activity. Specifically, the proposed method calculates the projection matrices that enable direct comparison between human brain activity and images that subjects viewed through CCA. After projecting the human brain activity and the viewed images on the same latent space, k-Nearest Neighbor (k-NN) is performed to estimate the viewed image categories from only human brain activity. Through the projection matrices, the proposed method can increase training data for k-NN even if a large number of pairs of human brain activity and images cannot be prepared. Experimental results for ten subjects show the effectiveness of the proposed method.
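    A minimal version of the pipeline above, projecting brain activity and viewed-image features into a shared CCA space and classifying with k-NN on the pooled projections, might look like the following; the feature extraction and all names are assumptions for illustration:

    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.neighbors import KNeighborsClassifier

    def train_decoder(brain_train, image_train, labels_train, n_components=20, k=5):
        cca = CCA(n_components=n_components)
        cca.fit(brain_train, image_train)                    # shared latent space for both modalities
        z_brain, z_image = cca.transform(brain_train, image_train)
        knn = KNeighborsClassifier(n_neighbors=k)
        # Projected image features enlarge the training set alongside projected brain activity.
        knn.fit(np.vstack([z_brain, z_image]),
                np.concatenate([labels_train, labels_train]))
        return cca, knn

    def decode(cca, knn, brain_test):
        return knn.predict(cca.transform(brain_test))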
  • Masanao Matsumoto, Naoki Saito, Takahiro Ogawa, Miki Haseyama
    IEEE 7th Global Conference on Consumer Electronics, GCCE 2018, Nara, Japan, October 9-12, 2018 200 - 201 IEEE 2018年 [査読有り][通常論文]
     
    A novel method for missing image data estimation is presented in this paper. The proposed method realizes accurate estimation of missing image data by iterating dictionary learning and Convolutional Sparse Coding (CSC). Specifically, our method iterates estimation of missing image data via CSC by using a dictionary that is constructed from a target image, and reconstruction of the dictionary by using the obtained estimation results. As the main contribution of our paper, the proposed method enables the missing image data estimation by using more suitable dictionaries obtained by this iterative scheme. Experimental results show high missing image data estimation performance by the proposed method.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2018 IEEE 7TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE 2018) 198 - 199 2018年 [査読有り][通常論文]
     
    Image retrieval plays an important role in the information society. Many studies have been conducted to improve accuracy of the image retrieval. However, there exists a major limitation in their input methods. For example, if users only have a vague description that does not include detailed information such as its name and do not have an appropriate input image, it is difficult to retrieve their desired images. To solve this problem, we propose a novel image retrieval method that enables retrieval of a desired image from a vague description. In the proposed method, we generate a query image from a vague description through an Attentional Generative Adversarial Network. By using the generated query image, the proposed method enables users to retrieve images even if they do not have a clear retrieval description as an input. Experimental results show the effectiveness of our method.
  • Ken Kawakami, Takahiro Ogawa, Miki Haseyama
    IEEE 7th Global Conference on Consumer Electronics, GCCE 2018, Nara, Japan, October 9-12, 2018 196 - 197 IEEE 2018年 [査読有り][通常論文]
     
    This paper presents a new method to detect deformed Photoplethysmogram (PPG) waveforms for sufficient accuracy of signal processing. PPG waveforms have been applied to many health indicators, such as blood pressure, blood viscosity and blood vessel elasticity. Usually, measurements using such sensitive signals require user awareness so that all of the PPG waveforms are kept accurate. Namely, the accuracy of the calculated indicators becomes lower when a PPG waveform is deformed by motion artifacts. In particular, methods for detecting deformed PPG waveforms are important for incorporating the health indicators into general fitness trackers, either to find the correct waveforms or to remove deformed PPG waveforms from the measurement. Therefore, we propose a new method which detects a badly formed PPG waveform by monitoring a ratio of average accelerations. Experimental results show the effectiveness of the method for detecting a deformed PPG waveform.
  • Ken Kawakami, Takahiro Ogawa, Miki Haseyama
    IEEE 7th Global Conference on Consumer Electronics, GCCE 2018, Nara, Japan, October 9-12, 2018 194 - 195 IEEE 2018年 [査読有り][通常論文]
     
    This paper presents a new index for monitoring the transition of blood circulation from Photoplethysmogram (PPG) signals for thermal comfort evaluation of users. The heat dissipation reaction through the dilation of blood vessels is a person's intrinsic ability to control thermal comfort. When body temperature is higher than normal, blood circulation changes according to the dilation of blood vessels at the distal ends of the extremities. Blood circulation is often evaluated by an index of peripheral resistance corresponding to changes in blood flow velocity, such as the systolic/diastolic (S/D) ratio of flow velocities, the resistance index and the pulsatility index. Unfortunately, such an index cannot be utilized in daily life for healthcare with fitness trackers, since the blood flow velocity is measured by either an ultrasonic Doppler blood flowmeter (UDF) or a laser Doppler flowmeter. Therefore, we propose a new index which is easily acquirable from PPG signals. First, a couple of variables correlated with the blood flow velocity are calculated from the rate of volumetric strain of the PPG signals. Then the new index is obtained as the difference of these variables. Experimental results show the effectiveness of this index by confirming its high correlation with the S/D ratio of the UDF.
  • Genki Suzuki, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    IEEE 7th Global Conference on Consumer Electronics, GCCE 2018, Nara, Japan, October 9-12, 2018 116 - 117 IEEE 2018年 [査読有り][通常論文]
     
    A method of team tactics estimation in soccer videos is presented in this paper. Our method enables estimation of basic tactics of each team on the basis of the Deep Extreme Learning Machine (DELM) by using features of the players' formation. In soccer games, the tactics of the two teams are related to each other. Therefore, the proposed method obtains the final estimation results by utilizing two DELMs, one for each team, and their relationship. Since the proposed method takes into consideration the relevance of the estimated tactics of each team, accurate tactics estimation is realized. Experimental results using actual soccer videos showed the effectiveness of our method.
  • Yuya Moroto, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    IEEE 7th Global Conference on Consumer Electronics, GCCE 2018, Nara, Japan, October 9-12, 2018 73 - 74 IEEE 2018年 [査読有り][通常論文]
     
    This paper presents a method for estimating user-centric visual attention based on the relationship between images and eye gaze data. The proposed method focuses on the relationship between visual features calculated from images and saliency values calculated from eye gaze data. Specifically, our method calculates the saliency map of each training image by using individual eye gaze data obtained from only these images. Furthermore, from the pairs of visual features and the gaze-based saliency, estimation of user-centric saliency for a new test image becomes feasible. Our contribution is the construction of a simple but successful estimation model that can learn this relationship from a limited amount of individual eye gaze data. Experimental results show the effectiveness of the proposed method.
  • Shota Hamano, Takahiro Ogawa, Miki Haseyama
    IEEE Access 6 2930 - 2942 2017年12月21日 [査読有り][通常論文]
     
    This paper presents a language-independent ontology (LION) construction method that uses tagged images in an image folksonomy. Existing multilingual frameworks that construct an ontology deal with concepts translated on the basis of parallel corpora, which are not always available. In contrast, the proposed method enables LION construction without parallel corpora by using visual features extracted from tagged images as the alternative. In the proposed method, visual similarities in tagged images are leveraged to aggregate synonymous concepts across languages. The aggregated concepts take on intrinsic semantics of themselves, while they also hold distinct characteristics in different languages. Then relationships between concepts are extracted on the basis of visual and textual features. The proposed method constructs a LION whose nodes and edges correspond to the aggregated concepts and relationships between them, respectively. The LION enables successful image retrieval across languages since each of the aggregated concepts can be referred to in different languages. Consequently, the proposed method removes the language barriers by providing an easy way to access a broader range of tagged images for users in the folksonomy, regardless of the language they use.
  • Akira Toyoda, Takahiro Ogawa, Miki Haseyama
    2017 IEEE 6th Global Conference on Consumer Electronics, GCCE 2017 2017- 1 - 2 2017年12月19日 [査読有り][通常論文]
     
    This paper presents a new method for video preference estimation using functional near-infrared spectroscopy signals (fNIRS signals). The proposed method first computes fNIRS features from fNIRS signals recorded while users are watching videos and multiple visual features from these videos. Next, by applying Locality Preserving Canonical Correlation Analysis to fNIRS features and each visual feature, we can obtain multiple new visual features. In addition, Multiview Local Fisher Discriminant Analysis fuses multiple new visual features and optimizes within and between class scatter in the fused feature space while using complementary properties in these features. Consequently, we can realize video preference estimation by using the fused features.
  • Shoji Takimura, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    2017 IEEE 6th Global Conference on Consumer Electronics, GCCE 2017 2017- 1 - 2 2017年12月19日 [査読有り][通常論文]
     
    A novel method for personalized tweet recommendation based on Field-aware Factorization Machines (FFMs) with adaptive field organization is presented in this paper. The proposed method realizes accurate recommendation of tweets in which users are interested by the following two contributions. First, sentiment factors such as opinions, thoughts and feelings included in tweets are newly introduced into FFMs in addition to their publisher and topic factors. Second, the proposed method newly enables adaptive organization of fields via canonical correlation analysis for multiple features extracted from each tweet. Experimental results for real-world datasets confirm the performance improvement of personalized tweet recommendation through the two contributions.
  • Yui Matsumoto, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    2017 IEEE 6th Global Conference on Consumer Electronics, GCCE 2017 2017- 1 - 2 2017年12月19日 [査読有り][通常論文]
     
    A novel method to construct a network based on heterogeneous features obtained from music videos and social metadata for music video recommendation is presented in this paper. The proposed method enables construction of the network that can accurately associate users with music videos corresponding to their preference by the collaborative use of audio and textual features obtained from music videos and social metadata 'related videos', 'tags', and 'keywords' through sub-sampled canonical correlation analysis. By performing link prediction on the obtained network, our method enables users to obtain desired music videos that are not linked to each other in the network but corresponding to users' preference, that is, music video recommendation becomes feasible. Experimental results for real-world datasets show the effectiveness of our method.
  • Tetsuya Kushima, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    2017 IEEE 6th Global Conference on Consumer Electronics, GCCE 2017 2017- 1 - 2 2017年12月19日 [査読有り][通常論文]
     
    This paper presents a novel method for interest level estimation based on matrix completion via rank minimization. The proposed method estimates interest levels of target objects from human behavior features which are extracted during selecting these objects. Specifically, by adopting matrix completion via rank minimization, unknown interest levels can be estimated. Furthermore, the proposed method can also estimate unknown interest levels with some missing behavior features which are not correctly extracted by sensors. Experimental results show the effectiveness of the proposed method.
  • Misaki Kanai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2017 IEEE 6th Global Conference on Consumer Electronics, GCCE 2017 2017- 1 - 2 2017年12月19日 [査読有り][通常論文]
     
    Aesthetic quality assessment plays an important role in how people organize large image collections. Many studies on aesthetic quality assessment are based on design of hand-crafted features without considering whether attributes conveyed by images can actually affect image aesthetics. This paper presents an aesthetic quality assessment method which uses new visual features. The proposed method utilizes Supervised Locality Preserving Canonical Correlation Analysis (SLPCCA) to derive the new features which maximize correlation between attributes and visual features. Finally, by applying ridge regression to the SLPCCA-based features, successful aesthetic quality assessment is realized.
  • Yoshiki Ito, Takahiro Ogawa, Miki Haseyama
    2017 IEEE 6th Global Conference on Consumer Electronics, GCCE 2017 2017- 1 - 2 2017年12月19日 [査読有り][通常論文]
     
    This paper presents a personalized preference estimation method for video recommendation. Our method not only uses deep convolutional neural network (DCNN)-based video features but also transforms them based on user's viewing behavior in order to improve accuracy of preference estimation for a video. Specifically, we adopt supervised multi-view canonical correlation analysis (sMVCCA) in order to calculate 'canonical video features', which have a maximal correlation between the following three kinds of features: a video, user's viewing behavior and user's evaluation scores for the video. By using the canonical video features, our method can estimate the user's personalized preference for a video more accurately than using only the DCNN-based video features. Experimental results show the effectiveness of our method.
  • Kazaha Horii, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2017 IEEE 6th Global Conference on Consumer Electronics, GCCE 2017 2017- 1 - 2 2017年12月19日 [査読有り][通常論文]
     
    This paper presents a novel method of image classification for trend prediction based on integration of visual and fNIRS features. It is expected that classification of images in the same object category in terms of generation enables trend prediction. However, since images in the same object category have similar visual features, a limit of accuracy exists for image classification by using only visual features. To overcome this problem, we utilize fNIRS features which represent brain activity in addition to visual features. Specifically, we apply Discriminative Locality Preserving Canonical Correlation Analysis (DLPCCA) to fNIRS and visual features for utilizing them collaboratively. The main contribution of this paper is the improvement of classification performance of images in the same object category for trend prediction by using the visual features projected to the DLPCCA-based space.
  • Takahiro Ogawa, Akira Tanaka, Miki Haseyama
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS E100D 10 2614 - 2626 2017年10月 [査読有り][通常論文]
     
    A Wiener-based inpainting quality prediction method is presented in this paper. The proposed method is the first method that can predict inpainting quality both before and after the intensities have become missing even if their inpainting methods are unknown. Thus, when the target image does not include any missing areas, the proposed method estimates the importance of intensities for all pixels, and then we can know which areas should not be removed. Interestingly, since this measure can be also derived in the same manner for its corrupted image already including missing areas, the expected difficulty in reconstruction of these missing pixels is predicted, i.e., we can know which missing areas can be successfully reconstructed. The proposed method focuses on expected errors derived from the Wiener filter, which enables least-squares reconstruction, to predict the inpainting quality. The greatest advantage of the proposed method is that the same inpainting quality prediction scheme can be used in the above two different situations, and their results have common trends. Experimental results show that the inpainting quality predicted by the proposed method can be successfully used as a universal quality measure.
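One way to read the "expected errors derived from the Wiener filter" is as the error of the linear MMSE estimate of the missing pixels given the known ones, i.e., the trace of the Schur complement of the patch covariance. The sketch below computes that quantity for a hypothetical covariance; it is an interpretation for illustration only, not the paper's exact formulation.

```python
import numpy as np

def wiener_expected_error(cov, missing_idx):
    """Expected squared error of the linear MMSE (Wiener) estimate of the
    pixels in `missing_idx` given all remaining pixels, for a patch with
    covariance `cov`: tr(S_mm - S_mk S_kk^{-1} S_km)."""
    n = cov.shape[0]
    known_idx = np.setdiff1d(np.arange(n), missing_idx)
    S_mm = cov[np.ix_(missing_idx, missing_idx)]
    S_mk = cov[np.ix_(missing_idx, known_idx)]
    S_kk = cov[np.ix_(known_idx, known_idx)]
    schur = S_mm - S_mk @ np.linalg.solve(S_kk, S_mk.T)
    return np.trace(schur)

# Hypothetical covariance estimated from vectorized training patches.
rng = np.random.default_rng(0)
patches = rng.random((500, 25))                  # 500 patches of 5x5 pixels
cov = np.cov(patches, rowvar=False)
err = wiener_expected_error(cov, missing_idx=np.array([6, 7, 8]))
```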
  • Daichi Takehara, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    MULTIMEDIA TOOLS AND APPLICATIONS 76 19 20249 - 20272 2017年10月 [査読有り][通常論文]
     
    A novel scheme for retrieving users' desired contents, i.e., contents with topics in which users are interested, from multiple social media platforms is presented in this paper. In existing retrieval schemes, users first select a particular platform and then input a query into the search engine. If users do not specify suitable platforms for their information needs and do not input suitable queries corresponding to the desired contents, it becomes difficult for users to retrieve the desired contents. The proposed scheme extracts the hierarchical structure of content groups (sets of contents with similar topics) from different social media platforms, and it thus becomes feasible to retrieve desired contents even if users do not specify suitable platforms and do not input suitable queries. This paper has two contributions: (1) A new feature extraction method, Locality Preserving Canonical Correlation Analysis with multiple social metadata (LPCCA-MSM) that can detect content groups without the boundaries of different social media platforms is presented in this paper. LPCCA-MSM uses multiple social metadata as auxiliary information unlike conventional methods that only use content-based information such as textual or visual features. (2) The proposed novel retrieval scheme can realize hierarchical content structuralization from different social media platforms. The extracted hierarchical structure shows various abstraction levels of content groups and their hierarchical relationships, which can help users select topics related to the input query. To the best of our knowledge, an intensive study on such an application has not been conducted; therefore, this paper has strong novelty. To verify the effectiveness of the above contributions, extensive experiments for real-world datasets containing YouTube videos and Wikipedia articles were conducted.
  • Kohei Tateno, Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS E100D 9 2005 - 2016 2017年09月 [査読有り][通常論文]
     
    A novel dimensionality reduction method, Fisher Discriminant Locality Preserving Canonical Correlation Analysis (FDLP-CCA), for visualizing Web images is presented in this paper. FDLP-CCA can integrate two modalities and discriminate target items in terms of their semantics by considering unique characteristics of the two modalities. In this paper, we focus on Web images with text uploaded on Social Networking Services for these two modalities. Specifically, text features have high discriminative power in terms of semantics. On the other hand, visual features of images give their perceptual relationships. In order to consider both of the above unique characteristics of these two modalities, FDLP-CCA estimates the correlation between the text and visual features with consideration of the cluster structure based on the text features and the local structures based on the visual features. Thus, FDLP-CCA can integrate the different modalities and provide separated manifolds to organize enhanced compactness within each natural cluster.
  • Miki Haseyama, Takahiro Ogawa, Sho Takahashi, Shuhei Nomura, Masatsugu Shimomura
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS E100D 8 1563 - 1573 2017年08月 [査読有り][通常論文]
     
    Biomimetics is a new research field that creates innovation through the collaboration of different existing research fields. However, the collaboration, i.e., the exchange of deep knowledge between different research fields, is difficult for several reasons such as differences in technical terms used in different fields. In order to overcome this problem, we have developed a new retrieval platform, "Biomimetics image retrieval platform," using a visualization-based image retrieval technique. A biological database contains a large volume of image data, and by taking advantage of these image data, we are able to overcome the limitations of text-only information retrieval. By realizing such a retrieval platform that does not depend on technical terms, individual biological databases of various species can be integrated. This will allow not only the use of data for the study of various species by researchers in different biological fields but also access by a wide range of researchers in fields such as materials science, mechanical engineering, and manufacturing. Therefore, our platform provides a new path bridging different fields and will contribute to the development of biomimetics since it can overcome the limitations of traditional retrieval platforms.
  • Deterioration Level Estimation on Transmission Towers via Extreme Learning Machine based on Combination Use of Local Receptive Field and Principal Component Analysis
    K. Maeda, S. Takahashi, T. Ogawa, M. Haseyama
    International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC) 457 - 458 2017年07月 [査読有り][通常論文]
  • Effectiveness Evaluation of Imaging Direction for Estimation of Gastritis Regions on Gastric X-ray Images
    Ren Togo, Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    International Technical Conference on Circuits, Systems, Computers, and Communications (ITC-CSCC) 459 - 460 2017年05月 [査読有り][通常論文]
  • Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    COMPUTERS IN BIOLOGY AND MEDICINE 84 69 - 78 2017年05月 [査読有り][通常論文]
     
    In this paper, a fully automatic method for detection of Helicobacter pylori (H. pylori) infection is presented with the aim of constructing a computer-aided diagnosis (CAD) system. In order to realize a CAD system with good performance for detection of H. pylori infection, we focus on the following characteristic of stomach X-ray examination. The accuracy of X-ray examination differs depending on the symptom of H. pylori infection that is focused on and the position from which X-ray images are taken. Therefore, doctors have to comprehensively assess the symptoms and positions. In order to introduce the idea of doctors' assessment into the CAD system, we newly propose a method for detection of H. pylori infection based on the combined use of feature fusion and decision fusion. As a feature fusion scheme, we adopt Multiple Kernel Learning (MKL). Since MKL can combine several features with determination of their weights, it can represent the differences in symptoms. By constructing an MKL classifier for each position, we can obtain several detection results. Furthermore, we introduce confidence-based decision fusion, which can consider the relationship between the classifier's performance and the detection results. Consequently, accurate detection of H. pylori infection becomes possible by the proposed method. Experimental results obtained by applying the proposed method to real X-ray images show that our method has good performance, close to the results of detection by specialists, and indicate that the realization of a CAD system for determining the risk of H. pylori infection is possible.
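A rough sketch of the pipeline is given below: a uniformly weighted sum of RBF kernels stands in for learned MKL weights, one kernel SVM is trained per imaging position, and the per-position decisions are fused with weights given by validation accuracy as a simple proxy for the confidence-based fusion. The data, positions, and all parameters are hypothetical.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def combined_kernel(A, B):
    # A uniformly weighted sum of RBF kernels stands in for learned MKL weights.
    return sum(rbf_kernel(A, B, gamma=g) for g in (0.1, 1.0, 10.0)) / 3.0

# Hypothetical features per imaging position and per-patient infection labels.
positions = [rng.random((120, 30)) for _ in range(3)]
y = rng.integers(0, 2, 120)

scores, weights = [], []
for X in positions:
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="precomputed").fit(combined_kernel(X_tr, X_tr), y_tr)
    acc = accuracy_score(y_va, clf.predict(combined_kernel(X_va, X_tr)))
    weights.append(acc)                                    # confidence of this position's classifier
    scores.append(clf.decision_function(combined_kernel(X, X_tr)))

# Confidence-weighted decision fusion across the imaging positions.
fused = np.average(np.vstack(scores), axis=0, weights=weights)
prediction = (fused > 0).astype(int)
```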
  • Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    2016 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2016 - Proceedings 1238 - 1242 2017年04月19日 [査読有り][通常論文]
     
    This paper presents a novel method to track the hierarchical structure of Web video groups on the basis of salient keyword matching including semantic broadness estimation. To the best of our knowledge, this paper is the first work to perform extraction and tracking of the hierarchical structure simultaneously. Specifically, the proposed method first extracts the hierarchical structure of Web video groups and salient keywords of them on the basis of an improved scheme of our previously reported method. Moreover, to calculate similarities between Web video groups obtained in different time stamps, salient keyword matching is newly developed by considering both co-occurrences of the salient keywords and semantic broadness of each Web video group. Consequently, tracking of the hierarchical structure over time becomes feasible to easily understand popularity trends of many Web videos for realizing effective retrieval.
  • Distress Classification of Class Imbalanced Data for Maintenance Inspection of Road Structures in Express Way
    K. Maeda, S. Takahashi, T. Ogawa, M. Haseyama
    International Conference on Civil and Building Engineering Informatics in conjunction with Conference on Computer Applications in Civil and Hydraulic Engineering (ICCBEI & CCACHE) 182 - 185 2017年04月 [査読有り][通常論文]
  • Takahiro Ogawa, Yoshiaki Yamaguchi, Satoshi Asamizu, Miki Haseyama
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS E100D 2 409 - 412 2017年02月 [査読有り][通常論文]
     
    This paper presents human-centered video feature selection via mRMR-SCMMCCA (minimum Redundancy and Maximum Relevance-Specific Correlation Maximization Multiset Canonical Correlation Analysis) algorithm for preference extraction. The proposed method derives SCMMCCA, which simultaneously maximizes two kinds of correlations, correlation between video features and users' viewing behavior features and correlation between video features and their corresponding rating scores. By monitoring the derived correlations, the selection of the optimal video features that represent users' individual preference becomes feasible.
  • Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    IEEE ACCESS 5 16963 - 16973 2017年 [査読有り][通常論文]
     
    Sentiment in multimedia contents has an influence on their topics, since multimedia contents are tools for social media users to convey their sentiment. Performance of applications such as retrieval and recommendation will be improved if sentiment in multimedia contents can be estimated; however, there have been few works in which such applications were realized by utilizing sentiment analysis. In this paper, a novel method for extracting the hierarchical structure of Web video groups based on sentiment-aware signed network analysis is presented to realize Web video retrieval. First, the proposed method estimates latent links between Web videos by using multimodal features of contents and sentiment features obtained from texts attached to Web videos. Thus, our method enables construction of a signed network that reflects not only similarities but also positive and negative relations between topics of Web videos. Moreover, an algorithm to optimize a modularity-based measure, which can adaptively adjust the balance between positive and negative edges, was newly developed. This algorithm detects Web video groups with similar topics at multiple abstraction levels; thus, successful extraction of the hierarchical structure becomes feasible. By providing the hierarchical structure, users can obtain an overview of many Web videos and it becomes feasible to successfully retrieve the desired Web videos. Results of experiments using a new benchmark dataset, YouTube-8M, validate the contributions of this paper, i.e., 1) the first attempt to utilize sentiment analysis for Web video grouping and 2) a novel algorithm for analyzing a weighted signed network derived from sentiment and multimodal features.
  • Yoshiki Ito, Takahiro Ogawa, Miki Haseyama
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) 3006 - 3010 2017年 [査読有り][通常論文]
     
    This paper presents a novel method for personalized video preference estimation based on early fusion using multiple users' viewing behavior. The proposed method adopts supervised Multi-View Canonical Correlation Analysis (sMVCCA) to estimate correlation between different types of features. Specifically, we estimate optimal projections maximizing the correlation between three features of video, target user's viewing behavior and evaluation scores for video. Then novel video features (canonical video features), which reflect the target user's individual preference, are obtained by the estimated projections. Furthermore, our method computes sMVCCA-based canonical video features by using multiple users' viewing behavior and a target user's evaluation scores. This non-conventional approach using the multiple users' viewing behavior for the preference estimation of the target user is the biggest contribution of our method, and it enables early fusion of the canonical video features. Consequently, successful video recommendation that reflects the users' individual preference can be expected via the evaluation score prediction from the integrated canonical video features. Experimental results show the effectiveness of our method.
  • Takahiro Ogawa, Miki Haseyama
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) 1827 - 1831 2017年 [査読有り][通常論文]
     
    This paper presents an exemplar-based image completion method that uses a new quality measure based on phaseless texture features. The proposed method derives the new quality measure by monitoring errors caused in power spectra, i.e., errors of phaseless texture features, converged through phase retrieval. Even if a target patch includes missing pixels, this measure enables selection of the best matched patch including the most similar texture features for realizing the exemplar-based image completion. Furthermore, since the phaseless texture features are robust to various changes such as spatial gaps and luminance changes, the new quality measure successfully provides the best matched patch from only a few training examples. Then, by solving an optimization problem that retrieves the phase of the target patch from the phaseless texture features of the best matched patch, its missing areas can be reconstructed. Consequently, accurate image completion using the new quality measure becomes feasible. Subjective and quantitative experimental results are shown to verify the effectiveness of our method using the new quality measure.
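A minimal sketch of the phaseless matching idea: patches are compared through the magnitude of their 2-D DFT, with phase discarded. The missing pixels are simply zero-filled here, whereas the paper refines the comparison and reconstruction through phase retrieval; all patches below are synthetic.

```python
import numpy as np

def power_spectrum(patch):
    # Phaseless texture feature: magnitude of the 2-D DFT, phase discarded.
    return np.abs(np.fft.fft2(patch))

def best_match(target, known_mask, candidates):
    """Pick the candidate patch whose power spectrum is closest to the target's.
    Missing pixels in the target are zero-filled in this sketch; the paper
    instead refines the comparison through phase retrieval."""
    t_spec = power_spectrum(np.where(known_mask, target, 0.0))
    dists = [np.linalg.norm(power_spectrum(c) - t_spec) for c in candidates]
    return candidates[int(np.argmin(dists))]

rng = np.random.default_rng(0)
target = rng.random((8, 8))
mask = rng.random((8, 8)) > 0.2
library = [rng.random((8, 8)) for _ in range(50)]
patch = best_match(target, mask, library)
```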
  • Kento Sugata, Takahiro Ogawa, Miki Haseyama
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) 999 - 1003 2017年 [査読有り][通常論文]
     
    This paper presents a novel method that estimates human emotion based on tensor-based supervised decision-level fusion (TS-DLF) from multiple Brodmann areas (BAs). From multiple brain data corresponding to these BAs captured by functional magnetic resonance imaging (fMRI), our method performs general tensor discriminant analysis (GTDA) to obtain features which can reflect the user's emotion. Furthermore, since the dimension of the obtained features becomes lower, this can avoid overfitting in the following training procedure of estimators. Next, by separately using the transformed BA data obtained after GTDA, we obtain multiple estimation results of the user's emotion based on logistic tensor regression (LTR). Then our method realizes the decision of the final result based on TS-DLF from the multiple estimation results. This approach, i.e., the integration of the multiple BAs' results for the whole-brain data, is the biggest contribution of this paper. TS-DLF successfully integrates the multiple estimation results with considering the performance of the LTR-based estimator constructed for each BA. Experimental results show that our method outperforms state-of-the-art approaches, and the effectiveness of our method can be confirmed.
  • Takahiro Ogawa, Miki Haseyama
    IEEE TRANSACTIONS ON IMAGE PROCESSING 25 12 5971 - 5986 2016年12月 [査読有り][通常論文]
     
    This paper presents adaptive subspace-based inverse projections via division into multiple sub-problems (ASIP-DIMSs) for missing image data restoration. In the proposed method, a target problem for estimating missing image data is divided into multiple sub-problems, and each sub-problem is iteratively solved with the constraints of other known image data. By projection into a subspace model of image patches, the solution of each sub-problem is calculated, where we call this procedure "subspace-based inverse projection" for simplicity. The proposed method can use higher dimensional subspaces for finding unique solutions in each sub-problem, and successful restoration becomes feasible, since a high level of image representation performance can be preserved. This is the biggest contribution of this paper. Furthermore, the proposed method generates several subspaces from known training examples and enables derivation of a new criterion in the above framework to adaptively select the optimal subspace for each target patch. In this way, the proposed method realizes missing image data restoration using ASIP-DIMS. Since our method can estimate any kind of missing image data, its potential in two image restoration tasks, image inpainting and super-resolution, based on several methods for multivariate analysis is also shown in this paper.
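A much-simplified sketch of one "subspace-based inverse projection": the known pixels of a patch are fit to a PCA-like subspace by least squares, and the missing pixels are read off from the reconstruction. The adaptive subspace selection and the division into sub-problems are omitted; the training patches and mask are synthetic.

```python
import numpy as np

def restore_patch(patch_vec, known_mask, basis, mean):
    """Estimate missing pixels by a least-squares fit of the known pixels
    to a linear (e.g., PCA) subspace: a simplified 'inverse projection'."""
    D = basis[known_mask]                         # basis rows at the known pixels
    coef, *_ = np.linalg.lstsq(D, patch_vec[known_mask] - mean[known_mask], rcond=None)
    rec = basis @ coef + mean
    rec[known_mask] = patch_vec[known_mask]       # keep the known intensities
    return rec

# Hypothetical subspace learned from vectorized training patches.
rng = np.random.default_rng(0)
train = rng.random((1000, 64))                    # 8x8 training patches
mean = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:20].T                                 # 64 x 20 subspace basis

target = rng.random(64)
mask = rng.random(64) > 0.25                      # known-pixel mask
restored = restore_patch(target, mask, basis, mean)
```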
  • Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    MULTIMEDIA TOOLS AND APPLICATIONS 75 24 17059 - 17079 2016年12月 [査読有り][通常論文]
     
    In this paper, we propose a Web video retrieval method that uses hierarchical structure of Web video groups. Existing retrieval systems require users to input suitable queries that identify the desired contents in order to accurately retrieve Web videos; however, the proposed method enables retrieval of the desired Web videos even if users cannot input the suitable queries. Specifically, we first select representative Web videos from a target video dataset by using link relationships between Web videos obtained via metadata "related videos" and heterogeneous video features. Furthermore, by using the representative Web videos, we construct a network whose nodes and edges respectively correspond to Web videos and links between these Web videos. Then Web video groups, i.e., Web video sets with similar topics are hierarchically extracted based on strongly connected components, edge betweenness and modularity. By exhibiting the obtained hierarchical structure of Web video groups, users can easily grasp the overview of many Web videos. Consequently, even if users cannot write suitable queries that identify the desired contents, it becomes feasible to accurately retrieve the desired Web videos by selecting Web video groups according to the hierarchical structure. Experimental results on actual Web videos verify the effectiveness of our method.
  • Takahiro Ogawa, Akihiro Takahashi, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E99A 11 1971 - 1980 2016年11月 [査読有り][通常論文]
     
    In this paper, an insect classification method using scanning electron microphotographs is presented. Images taken by a scanning electron microscope (SEM) pose a unique problem for classification in that their visual features differ depending on the magnification. Therefore, direct use of conventional methods results in inaccurate classification results. In order to successfully classify these images, the proposed method generates an optimal training dataset for constructing a classifier for each magnification. Then our method classifies images using the classifiers constructed from the optimal training dataset. In addition, several images of the same insect are generally taken by an SEM at different magnifications. Therefore, more accurate classification can be expected by integrating the results from the same insect based on Dempster-Shafer evidence theory. In this way, accurate insect classification can be realized by our method. At the end of this paper, we show experimental results to confirm the effectiveness of the proposed method.
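The Dempster-Shafer combination step can be sketched as below for mass functions whose focal elements are the insect classes plus the full frame (an "unknown" mass); the per-magnification masses are hypothetical classifier outputs, not values from the paper.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for mass functions whose focal elements are the
    singleton classes plus the full frame Theta (last index here)."""
    classes = range(len(m1) - 1)
    th1, th2 = m1[-1], m2[-1]
    combined = np.zeros_like(m1)
    for a in classes:
        combined[a] = m1[a] * m2[a] + m1[a] * th2 + th1 * m2[a]
    combined[-1] = th1 * th2
    conflict = sum(m1[a] * m2[b] for a in classes for b in classes if a != b)
    return combined / (1.0 - conflict)

# Hypothetical per-magnification classifier outputs for one insect,
# expressed as masses over 3 classes plus an "unknown" (Theta) mass.
m_low_mag  = np.array([0.6, 0.2, 0.1, 0.1])
m_high_mag = np.array([0.5, 0.1, 0.2, 0.2])
fused = dempster_combine(m_low_mag, m_high_mag)
predicted_class = int(np.argmax(fused[:-1]))
```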
  • Ren Togo, Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    COMPUTERS IN BIOLOGY AND MEDICINE 77 9 - 15 2016年10月 [査読有り][通常論文]
     
    Since technical knowledge and a high degree of experience are necessary for diagnosis of chronic gastritis, computer-aided diagnosis (CAD) systems that analyze gastric X-ray images are desirable in the field of medicine. Therefore, a new method that estimates salient regions related to chronic gastritis/non-gastritis for supporting diagnosis is presented in this paper. In order to estimate salient regions related to chronic gastritis/non-gastritis, the proposed method monitors the distance between a target image feature and Support Vector Machine (SVM)-based hyperplane for its classification. Furthermore, our method realizes removal of the influence of regions outside the stomach by using positional relationships between the stomach and other organs. Consequently, since the proposed method successfully estimates salient regions of gastric X-ray images for which chronic gastritis and non-gastritis are unknown, visual support for inexperienced clinicians becomes feasible. (C) 2016 Elsevier Ltd. All rights reserved.
  • 斉藤 直輝, 小川 貴弘, 浅水 仁, 長谷山 美紀
    電子情報通信学会論文誌D 情報・システム J99-D 9 848 - 860 2016年09月01日 [査読有り]
     
    This paper proposes a method for classifying images posted to image sharing services into categories related to tourist attractions (tourism categories). The proposed method focuses on the fact that, among the various data posted together with images on image sharing services, location coordinates are the most effective for tourism category classification. Furthermore, cases in which the classification based on these data fails are detected on the basis of a confidence measure derived from the classification result. When a misclassification is detected, the proposed method obtains classification results based on image and tag features and integrates them while taking the accuracy of each into account, which enables highly accurate estimation of the final classification result. In doing so, the candidate tourism categories are narrowed down on the basis of the location-based classification result, which reduces the number of classes in the multi-class classification problem and enables more accurate classification. The proposed method thus makes it possible to obtain highly accurate final classification results in cases where the classification accuracies of the different types of data differ greatly from each other, which was difficult for conventional methods.
  • Takahiro Ogawa, Yuta Igarashi, Miki Haseyama
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 26 5 855 - 867 2016年05月 [査読有り][通常論文]
     
    In this paper, a novel spectral reflectance estimation method from image pairs including near-infrared (NIR) components based on nonnegative matrix factorization (NMF) is presented. The proposed method enables estimation of spectral reflectance from only two kinds of input images: 1) an image including both visible light components and NIR components and 2) an image including only NIR components. These two images can be easily obtained using a general digital camera without an infrared-cut filter and one with a visible light-cut filter, respectively. Since RGB values of these images are obtained according to spectral sensitivity of the image sensor, the spectrum power distribution of the light source and the spectral reflectance, we have to solve the inverse problem for estimating the spectral reflectance. Therefore, our method approximates spectral reflectance by a linear combination of several bases obtained by applying NMF to a known spectral reflectance data set. Then estimation of the optimal solution to the above problem becomes feasible based on this approximation. In the proposed method, NMF is used for obtaining the bases used in this approximation from a characteristic that the spectral reflectance is a nonnegative component. Furthermore, the proposed method realizes simple approximation of the spectrum power distribution of the light source with direct and scattered light components. Therefore, estimation of spectral reflectance becomes feasible using the spectrum power distribution of the light source in our method. In the last part of this paper, we show some simulation results to verify the performance of the proposed method. The effectiveness of the proposed method is also shown using the method for several applications that are closely related to spectral reflectance estimation. Although our method is based on a simple scheme, it is the first method that realizes the estimation of the spectral reflectance and the spectrum power distribution of the light source from the above two kinds of images taken by general digital cameras and provides breakthroughs to several fundamental applications.
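A compact sketch of the estimation idea under a simplified linear camera model: NMF bases are learned from a known reflectance set, and the reflectance of a target surface is recovered as a nonnegative combination of those bases via nonnegative least squares. The sensitivity/illuminant matrix and all data are synthetic placeholders, and the light-source estimation described above is not reproduced.

```python
import numpy as np
from sklearn.decomposition import NMF
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical known reflectance dataset: 300 surfaces x 31 wavelength bands.
reflectances = rng.random((300, 31))
bases = NMF(n_components=8, init="nndsvda", max_iter=500).fit(reflectances).components_  # 8 x 31

# Hypothetical linear camera model: observations = A @ reflectance, where A stacks
# (sensor sensitivity x illuminant) rows for the RGB+NIR shot and the NIR-only shot.
A = rng.random((6, 31))
true_r = rng.random(31)
observed = A @ true_r

# Reflectance is approximated as a nonnegative combination of the NMF bases,
# so the inverse problem reduces to nonnegative least squares on the coefficients.
coef, _ = nnls(A @ bases.T, observed)
estimated_reflectance = bases.T @ coef
```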
  • Yasutaka Hatakeyama, Takahiro Ogawa, Hironori Ikeda, Miki Haseyama
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS E99D 3 763 - 768 2016年03月 [査読有り][通常論文]
     
    In this paper, we propose a method to estimate the most resource-consuming disease from electronic claim data based on Labeled Latent Dirichlet Allocation (Labeled LDA). The proposed method models each electronic claim from its medical procedures as a mixture of resource-consuming diseases. Thus, the most resource-consuming disease can be automatically estimated by applying Labeled LDA to the electronic claim data. Although our method is composed of a simple scheme, this is the first trial for realizing estimation of the most resource-consuming disease.
  • Distress Classification of Road Structures via Multiple Classifier-based Bayesian Network
    K. Maeda, S. Takahashi, T. Ogawa, M. Haseyama
    International Workshop on Advanced Image Technology (IWAIT) 1 - 4 2016年 [査読有り][通常論文]
  • Genki Suzuki, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    2016 IEEE 5TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS 1 - 2 2016年 [査読有り][通常論文]
     
    A decision-level fusion (DLF)-based team tactics estimation method in soccer videos is newly presented. In our method, tactics estimation based on audio-visual and formation features is newly adopted since the tactics of the soccer game are closely related to the audio-visual sequences and player positions. Therefore, by using these features, we classify the tactics via Support Vector Machine (SVM). Furthermore, by applying DLF to the SVM-based classification results, the two modalities are integrated to obtain more accurate tactics estimation results. Some results of experiments verify the superiority of our method.
  • Kento Sugata, Takahiro Ogawa, Miki Haseyama
    2016 IEEE 5TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS 1 - 2 2016年 [査読有り][通常論文]
     
    This paper presents a method that estimates human emotion evoked by visual stimuli using functional magnetic resonance imaging (fMRI) data. First, in our method, preprocessing and masking procedures are applied to the fMRI data. These procedures provide the multiple brain data corresponding to Brodmann areas (BA). In most cases, the dimensionality of fMRI data and the BA data is larger than the number of observations, and this results in overfitting. Thus, in order to reduce the dimensionality, we apply general tensor discriminant analysis (GTDA), which can take into account the information related to the users' emotion. Then multiple estimation results of the users' emotion are obtained from a support vector machine by separately using the multiple BA data obtained after the dimensionality reduction via GTDA. Furthermore, our method obtains the final estimation result from effective supervised decision-level fusion of the above estimation results.
  • Ryota Saito, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    2016 IEEE 5TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS 1 - 2 2016年 [査読有り][通常論文]
     
    This paper presents a method for retrieving similar inspection records of road structures based on metric learning using experienced inspectors' evaluations. Inspection records of road structures include images and text-based information such as category of distress, damaged parts and degree of damage. The proposed method calculates distances from query inspection records, and rank lists of retrieval results are obtained for each feature. In this approach, the distance metric is updated on the basis of experienced inspectors' evaluations. Finally, the proposed method obtains retrieval results by integrating the multiple rank lists. The experimental results show the effectiveness of the proposed method.
  • Naoki Saito, Takahiro Ogawa, Satoshi Asamizu, Miki Haseyama
    2016 IEEE 5TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS 1 - 2 2016年 [査読有り][通常論文]
     
    In this paper, we propose a tourism category classification method based on estimation of a reliable decision. The proposed method performs tourism category classification using location, visual, and textual tag features obtained from tourism images in image sharing services. As the biggest contribution of this paper, the proposed method performs successful classification based on two classification results obtained from a fuzzy K-nearest neighbor algorithm using the location features and a decision level fusion approach using the visual and textual tag features. The proposed method enables estimation of a reliable decision from the above two classifiers.
  • Yoshiki Ito, Takahiro Ogawa, Miki Haseyama
    2016 IEEE 5TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS 1 - 2 2016年 [査読有り][通常論文]
     
    This paper presents a novel video feature-based favorite video estimation method. In the proposed method, we use three types of features: videos, users' viewing behavior and users' evaluation scores for these videos. In order to calculate the novel video features, Multiset Canonical Correlation Analysis (MCCA) is applied to these features to integrate the different types of features. Specifically, MCCA maximizes the sum of three kinds of correlations between three pairs of these features. Then the novel video features that represent the users' individual preference can be obtained by using the projection maximizing the three correlations. Finally, Support Vector Ordinal Regression (SVOR) is trained by using the novel video features to estimate favorite videos. Experimental results show the effectiveness of our method.
  • Shota Hamano, Takahiro Ogawa, Miki Haseyama
    2016 IEEE 5TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS 1 - 2 2016年 [査読有り][通常論文]
     
    This paper presents a method for associating tags in one language with the tags representing the same meaning in another language. Since recent image search and sharing services highly rely on annotations like tags with images for obtaining the desired images, the proposed method utilizes the visual features extracted from images with tags. In the proposed method, mutual information between tags and visual features is calculated. Tag similarity is then calculated based on the mutual information. Mutual information takes into consideration the relevance between tags and visual features. Therefore, the similarity based on the mutual information represents tag-to-tag relationships more effectively than direct use of the visual features. Experimental results show the effectiveness of the proposed method in associating English tags with Japanese tags representing the same meanings.
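One plausible reading of the mutual-information step is sketched below: visual features are quantized into visual words, each tag's mutual information with every visual word forms a profile, and tags across languages are associated by the cosine similarity of these profiles. This is an interpretation with synthetic data, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

# Hypothetical data: 500 images with visual features and binary tag indicators.
features = rng.random((500, 32))
tags_en = rng.integers(0, 2, (500, 10))     # 10 English tags
tags_ja = rng.integers(0, 2, (500, 10))     # 10 Japanese tags

# Quantize visual features into "visual words".
words = KMeans(n_clusters=16, n_init=10, random_state=0).fit_predict(features)

def mi_profile(tag_vec):
    # Mutual information between a binary tag and each visual-word indicator.
    return np.array([mutual_info_score(tag_vec, (words == k).astype(int))
                     for k in range(16)])

prof_en = np.array([mi_profile(tags_en[:, t]) for t in range(10)])
prof_ja = np.array([mi_profile(tags_ja[:, t]) for t in range(10)])

# Cosine similarity between MI profiles associates each English tag with the
# Japanese tag whose visual content is most similar.
norm_en = prof_en / np.linalg.norm(prof_en, axis=1, keepdims=True)
norm_ja = prof_ja / np.linalg.norm(prof_ja, axis=1, keepdims=True)
similarity = norm_en @ norm_ja.T
best_ja_for_each_en = similarity.argmax(axis=1)
```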
  • Susumu Genma, Takahiro Ogawa, Miki Haseyama
    2016 IEEE 5TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS 1 - 2 2016年 [査読有り][通常論文]
     
    This paper presents an image retrieval method for insect identification based on saliency map and distance metric learning. First, the proposed method extracts regions of insects from target images by using saliency map and calculates visual features from the extracted insect regions. Next, in order to realize accurate retrieval of insects based on the calculated features, distance metric learning is newly adopted. Consequently, through users' evaluation in the retrieval, optimal distance can be obtained for the calculated visual features to obtain successful retrieval results, and the identification of insects becomes feasible. Experimental results show the effectiveness of our method.
  • Keisuke Maeda, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    2016 IEEE INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP) 2016 DSP 589 - 593 2016年 [査読有り][通常論文]
     
    A distress classification method of road structures via decision level fusion is presented in this paper. In order to classify various kinds of distresses accurately, the proposed method integrates multiple classification results while considering their performance, and this is the biggest contribution of this paper. By introducing this approach, it becomes feasible to adaptively integrate the multiple classification results based on the accuracy of each classifier for a target sample. Consequently, realization of accurate distress classification can be expected. Experimental results show that our method outperforms existing methods.
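A minimal sketch of performance-weighted decision-level fusion: one classifier per feature type, with class probabilities weighted by each classifier's validation accuracy. The two feature views, labels, and weighting rule are hypothetical simplifications of the scheme described above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical distress data: two feature types (e.g., texture and geometry)
# for the same samples, with multi-class distress labels.
views = [rng.random((300, 20)), rng.random((300, 12))]
y = rng.integers(0, 4, 300)

probas, weights = [], []
for X in views:
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
    weights.append(accuracy_score(y_va, clf.predict(X_va)))   # classifier reliability
    probas.append(clf.predict_proba(X))

# Decision-level fusion: class probabilities weighted by each classifier's accuracy.
fused = np.average(np.stack(probas), axis=0, weights=weights)
predicted = fused.argmax(axis=1)
```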
  • Yuma Sasaka, Takahiro Ogawa, Miki Haseyama
    MM'16: PROCEEDINGS OF THE 2016 ACM MULTIMEDIA CONFERENCE 387 - 391 2016年 [査読有り][通常論文]
     
    This paper presents a method which estimates interest level while watching videos, based on collaborative use of facial expression and biological signals such as electroencephalogram (EEG) and electrocardiogram (ECG). To the best of our knowledge, no studies have been carried out on the collaborative use of facial expression and biological signals for estimating interest level. Since training data, which is used for estimating interest level, is generally small and imbalanced, Variational Bayesian Mixture of Robust Canonical Correlation Analysis (VBMRCCA) is newly applied to facial expression and biological signals, which are obtained from users while they are watching the videos. Unlike some related works, VBMRCCA is used to obtain the posterior distributions which represent the latent correlation between facial expression and biological signals in our method. Then, the users' interest level can be estimated by comparing the posterior distributions of the positive class data with those of the negative. Consequently, successful interest level estimation, via collaborative use of facial expression and biological signals, becomes feasible.
  • Yasutaka Hatakeyama, Takahiro Ogawa, Hirokazu Tanaka, Miki Haseyama
    PROCEEDINGS OF 2016 INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY AND ITS APPLICATIONS (ISITA 2016) 126 - 130 2016年 [査読有り][通常論文]
     
    In this paper, we propose a mortality prediction method based on decision-level fusion (DLF) of existing intensive care unit (ICU) scoring systems. First, the proposed method obtains severity scores from the existing ICU scoring systems. Furthermore, we construct classifiers that categorize patients into survivors or non-survivors. Next, patient feature vectors are extracted based on the mortality rates that are estimated from the obtained severity scores by using a non-linear least squares method to obtain other types of classification results. In order to obtain the final severity score for each patient, we integrate the obtained multiple classification results based on DLF that can estimate the final severity scores. Finally, we applied the proposed method to actual ICU patient data and verified its effectiveness. Thus, the proposed method can realize accurate mortality prediction without any additional work by using the existing ICU scoring systems.
  • Soh Yoshida, Takahiro Ogawa, Miki Haseyama
    2016 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO (ICME) 2016-August 1 - 6 2016年 [査読有り][通常論文]
     
    This paper proposes a graph-based Web video search reranking method through consistency analysis using spectral clustering. Graph-based reranking is effective for refining text-based video search results. Generally, this approach constructs a graph where the vertices are videos and the edges reflect their pairwise similarities. Many reranking methods are built based on a scheme which regularizes the smoothness of pairwise ranking scores between adjacent nodes. However, since the overall consistency is measured by aggregating the individual consistency over each pair, errors in score estimation increase when noisy samples are included within their neighbors. To deal with the noisy samples, different from the conventional methods, the proposed method models the global consistency of the graph structure. Specifically, in order to detect this consistency, the proposed method introduces a spectral clustering algorithm which can detect video groups, whose videos have strong semantic correlation, on the graph. Furthermore, a new regularization term, which smooths ranking scores within the same group, is introduced to the reranking framework. Since score regularization is performed from both local and global aspects simultaneously, accurate score estimation becomes feasible. Experimental results obtained by applying the proposed method to a real-world video collection show its effectiveness.
  • Daichi Takehara, Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) 2016-August 479 - 483 2016年 [査読有り][通常論文]
     
    This paper presents a method for hierarchical content group detection from different social media platforms, which can reveal the hierarchical structure of content groups. In this paper, content groups are defined as sets of contents with similar topics. Based on the revealed hierarchical structure, our method enables users to efficiently find the desired contents from the large amount of content placed in diversified social media platforms. The main contributions of this paper are twofold. First, effective latent features for comparing the contents placed in different social media platforms can be extracted by the combined use of the correlation between features obtained from different social media platforms and the Web link structure. Second, the hierarchical structure of the content groups, which captures their various abstraction levels, can be revealed by hierarchically detecting their content groups. Experimental results on the real-world dataset containing YouTube videos and Wikipedia articles show the effectiveness of our method.
  • Ryosuke Sawata, Takahiro Ogawa, Miki Haseyama
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS 2016 ICASSP 759 - 763 2016年 [査読有り][通常論文]
     
    This paper presents a novel method of favorite music classification using EEG-based optimal audio features. To select audio features related to a user's music preference, our method utilizes the relationship between EEG features obtained from the user's EEG signals while listening to music and their corresponding audio features, since a person's EEG signals reflect his/her music preference. Specifically, cross-loadings, whose components denote the degree of the relationship, are calculated based on Kernel Discriminative Locality Preserving Canonical Correlation Analysis (KDLPCCA), which is newly derived in the proposed method. In contrast with standard CCA, KDLPCCA can consider (1) non-linear correlation, (2) class information and (3) local structures of input EEG and audio features, simultaneously. Therefore, KDLPCCA-based cross-loadings can reflect the best correlation between the user's EEG and corresponding audio signals. Then an optimal set of audio features related to his/her music preference can be obtained by employing the cross-loadings as novel criteria for feature selection. Consequently, our method realizes favorite music classification successfully by using the EEG-based optimal audio features.
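Since KDLPCCA is not available off the shelf, the sketch below uses plain CCA to illustrate how cross-loadings can drive feature selection: audio features are ranked by their correlation with the EEG-side canonical variates. The EEG and audio arrays are synthetic placeholders, and the kernel, discriminative, and locality-preserving extensions are omitted.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
eeg   = rng.random((150, 40))    # placeholder EEG features per music excerpt
audio = rng.random((150, 60))    # placeholder audio features per excerpt

# Plain CCA stands in for KDLPCCA.
cca = CCA(n_components=3).fit(eeg, audio)
eeg_c, audio_c = cca.transform(eeg, audio)

def corr_matrix(X, Y):
    # Column-wise correlations between two standardized feature matrices.
    Xs = (X - X.mean(0)) / X.std(0)
    Ys = (Y - Y.mean(0)) / Y.std(0)
    return Xs.T @ Ys / len(X)

# Cross-loadings: correlation of each original audio feature with the
# EEG-side canonical variates; large values suggest preference-related features.
cross_loadings = corr_matrix(audio, eeg_c)             # 60 x 3
relevance = np.abs(cross_loadings).max(axis=1)
selected_audio_features = np.argsort(relevance)[::-1][:10]
```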
  • Zaixing He, Takahiro Ogawa, Sho Takahashi, Miki Haseyama, Xinyue Zhao
    NEUROCOMPUTING 173 P3 1898 - 1907 2016年01月 [査読有り][通常論文]
     
    This paper presents a new method for improving video coding efficiency based on a sparse contractive mapping approach. The proposed method introduces a new sparse contractive mapping approach to replace the traditional intra prediction in video coding standards such as H.264/AVC. Specifically, the intra-frame and its following inter-frame are respectively approximated by the sparse representation, satisfying contractive mapping. Then these two frames can be reconstructed from an arbitrary initial image by utilizing a few representation coefficients. With this advantage, the proposed method reduces the total amount of bits by removing MBs in the target I frame whose approximation performance is higher than the others in the encoder. Furthermore, by transmitting the representation coefficients of the removed MBs, these MBs can be accurately reconstructed in the decoder. Since the reconstruction performance is better than that of the conventional approach, the proposed method can remove more MBs from the target video sequences, and reduction of the total amount of bits becomes feasible. Therefore, the proposed method realizes the improvement of video coding efficiency. Some experimental results are shown to verify the superior performance of the proposed method over that of H.264/AVC. The results also demonstrate that the bit-saving performance of the proposed method is comparable to that of H.265/HEVC. (C) 2015 Elsevier B.V. All rights reserved.
  • Alameen Najjar, Takahiro Ogawa, Miki Haseyama
    International Journal of Multimedia Information Retrieval 4 4 247 - 259 2015年12月01日 [査読有り][通常論文]
     
    In this paper, we propose a novel feature-space local pooling method for the commonly adopted architecture of image classification. While existing methods partition the feature space based on visual appearance to obtain pooling bins, learning more accurate space partitioning that takes semantics into account boosts performance even for a smaller number of bins. To this end, we propose partitioning the feature space over clusters of visual prototypes common to semantically similar images (i.e., images belonging to the same category). The clusters are obtained by Bregman co-clustering applied offline on a subset of training data. Therefore, being aware of the semantic context of the input image, our features have higher discriminative power than do those pooled from appearance-based partitioning. Testing on four datasets (Caltech-101, Caltech-256, 15 Scenes, and 17 Flowers) belonging to three different classification tasks showed that the proposed method outperforms methods in previous works on local pooling in the feature space with lower feature dimensionality. Moreover, when implemented within a spatial pyramid, our method achieves comparable results on three of the datasets used.
  • Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E98A 8 1709 - 1717 2015年08月 [査読有り][通常論文]
     
    Perceptually optimized missing texture reconstruction via neighboring embedding (NE) is presented in this paper. The proposed method adopts the structural similarity (SSIM) index as a measure for representing texture reconstruction performance of missing areas. This provides a solution to the problem of previously reported methods not being able to perform perceptually optimized reconstruction. Furthermore, in the proposed method, a new scheme for selection of the known nearest neighbor patches for reconstruction of target patches including missing areas is introduced. Specifically, by monitoring the SSIM index observed by the proposed NE-based reconstruction algorithm, selection of known patches optimal for the reconstruction becomes feasible even if target patches include missing pixels. The above novel approaches enable successful reconstruction of missing areas. Experimental results show improvement of the proposed method over previously reported methods.
  • Takuya Kawakami, Takahiro Ogawa, Miki Haseyama
    2015 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP) 957 - 961 2015年 
    This paper presents a novel image classification method based on integration of EEG and visual features. In the proposed method, we obtain classification results by separately using EEG and visual features. Furthermore, we merge the above classification results based on a kernelized version of Supervised learning from multiple experts and obtain the final classification result. In order to generate feature vectors used for the final image classification, we apply Multiset supervised locality preserving canonical correlation analysis (MSLPCCA), which is newly derived in the proposed method, to EEG and visual features. Our method realizes successful multimodal classification of images by the object categories that they contain based on MSLPCCA-based feature integration.
  • Takahiro Ogawa, Miki Haseyama
    2015 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP) 1628 - 1632 2015年 
    A missing intensity restoration method via adaptive selection of perceptually optimized subspaces is presented in this paper. In order to realize adaptive and perceptually optimized restoration, the proposed method generates several subspaces of known textures optimized in terms of the structural similarity (SSIM) index. Furthermore, the SSIM-based missing intensity restoration is performed by a projection onto convex sets (POCS) algorithm whose constraints are the obtained subspace and known intensities within the target image. In this approach, a non-convex maximization problem for calculating the projection onto the subspace is reformulated as a quasi-convex problem, and the restoration of the missing intensities becomes feasible. Furthermore, the selection of the optimal subspace is realized by monitoring the SSIM index converged in the POCS algorithm, and the adaptive restoration becomes feasible. Experimental results show that our method outperforms existing methods.
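The alternating-projection core of a POCS scheme can be sketched as below, projecting in turn onto a fixed orthonormal texture subspace and onto the set of patches consistent with the known intensities. The SSIM-based subspace generation and selection described above are not reproduced, and the basis here is a random orthonormal placeholder.

```python
import numpy as np

def pocs_restore(patch, known_mask, basis, mean, n_iters=50):
    """Alternate projections between (1) the linear subspace spanned by the
    columns of `basis` (assumed orthonormal) and (2) the set of patches that
    agree with the known intensities, in the spirit of a POCS algorithm."""
    x = np.where(known_mask, patch, mean)
    for _ in range(n_iters):
        x = basis @ (basis.T @ (x - mean)) + mean     # project onto the subspace
        x[known_mask] = patch[known_mask]             # enforce the known intensities
    return x

# Hypothetical orthonormal texture basis (e.g., obtained from training patches).
rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.random((64, 10)))
mean = rng.random(64)
patch = rng.random(64)
known = rng.random(64) > 0.3
restored = pocs_restore(patch, known, basis, mean)
```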
  • Maeda Keisuke, Ogawa Takahiro, Haseyama Miki
    Information and Media Technologies 10 3 473 - 477 Information and Media Technologies Editorial Board 2015年 
    This paper presents automatic Martian dust storm detection from multiple wavelength data based on decision level fusion. In our proposed method, visual features are first extracted from multiple wavelength data, and optimal features are selected for Martian dust storm detection based on the minimal-Redundancy-Maximal-Relevance algorithm. Second, the selected visual features are used to train the Support Vector Machine classifiers that are constructed on each data. Furthermore, as a main contribution of this paper, the proposed method integrates the multiple detection results obtained from heterogeneous data based on decision level fusion, while considering each classifier's detection performance to obtain accurate final detection results. Consequently, the proposed method realizes successful Martian dust storm detection.
  • Kento Sugata, Takahiro Ogawa, Miki Haseyama
    2015 IEEE 4TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE) 513 - 514 2015年 [査読有り][通常論文]
     
    This paper presents a novel image classification method based on the integration of EEG and visual features. In the proposed method, we first obtain classification results by separately using EEG and visual features. Then we merge the above classification results based on a kernelized version of Supervised Learning from Multiple Experts (KSLME) via Multiset Supervised Locality Preserving Canonical Correlation Analysis (MSLPCCA) to obtain final classification results. It should be noted that when the number of samples is smaller than the dimensionality of the sample data used in MSLPCCA, we have to reduce the dimensionality. Therefore, we propose MSLPCCA based on Local Fisher Discriminant Analysis (LFDA), which can take class information into account. Then the integration of all of the classification results becomes feasible by MSLPCCA based on LFDA.
  • Kouhei Tateno, Takahiro Ogawa, Miki Haseyama
    2015 IEEE 4TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE) 254 - 255 2015年 [査読有り][通常論文]
     
    This paper presents a Web image visualization method that considers image content based on visual and tag features. In this paper, we focus on tagged images on social media websites. Since these tags represent the image content according to the subjectivity of the user, using these tags is effective for image visualization. Thus, by using visual and tag features, the proposed method can take into account the semantic content. Specifically, the proposed method applies Locality Preserving Canonical Correlation Analysis (LPCCA) to these two features to obtain the dimensionality reduction results, i.e., the visualization result.
  • Yuma Sasaka, Takahiro Ogawa, Miki Haseyama
    2015 IEEE 4TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE) 250 - 251 2015年 [査読有り][通常論文]
     
    In this paper, we propose an efficient video genre estimation method based on the relationship between facial features and motion features. In the proposed method, we utilize supervised locality preserving canonical correlation analysis (SLPCCA), which is derived in the proposed method, to maximize the correlation between facial features and motion features. Moreover, by using SLPCCA, we can consider not only the correlation but also class information. Finally, by applying Support Vector Machine (SVM) to the SLPCCA-based feature vectors, we realize a successful video genre estimation. Experimental results show the effectiveness of our method.
  • Yuma Tanaka, Takahiro Ogawa, Miki Haseyama
    2015 IEEE 4TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE) 221 - 222 2015年 [査読有り][通常論文]
     
    This paper presents a method for missing texture reconstruction via power spectrum-based sparse representation. We reconstruct missing areas based on minimizing the mean square error between power spectra (P-MSE). In our method, missing areas are reconstructed by embedding some known patches. Mathematically, we obtain the optimal linear combination of measurement patches by P-MSE minimization. The optimization can be solved as a combinatorial problem based on sparse representation. In this way, the optimal approximation which minimizes the P-MSE is obtained and we embed it in the missing area. Experimental results show the effectiveness of our method for reconstructing texture images.
  • Shohei Kinoshita, Takahiro Ogawa, Miki Haseyama
    2015 IEEE 4TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE) 215 - 216 2015年 [査読有り][通常論文]
     
    This paper presents a Latent Dirichlet Allocation (LDA)-based music recommendation method with collaborative filtering (CF)-based similar user selection. By applying LDA to music, we can estimate latent topics of music. However, we have to effectively reduce the size of the target dataset applied to LDA in order to recommend music from a large dataset. Hence, we use CF techniques, which recommend items using evaluation information of users who have similar tastes to a target user. Therefore, the proposed method limits the size of the dataset by using information of similar users and enables the recommendation of music considering latent topics of music. By using the idea of CF, our method can use LDA for music recommendation. Experimental results show the effectiveness of our method.
  • Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    2015 IEEE 4TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE) 204 - 205 2015年 [査読有り][通常論文]
     
    This paper presents the performance improvement of Helicobacter pylori (H.pylori) infection detection using Kernel Local Fisher Discriminant Analysis (KLFDA)-based decision fusion. As the biggest contribution of this paper, the proposed method extracts more discriminative features based on KLFDA for the decision fusion. Since the decision fusion employed in this paper can consider not only the detection results but also the visual features, by calculating more discriminative features via KLFDA, more accurate decision fusion becomes feasible. Furthermore, experimental results show the effectiveness of the proposed method.
  • Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    IPSJ Transactions on Computer Vision and Applications 7 79 - 83 2015年 [査読有り][通常論文]
     
    This paper presents automatic Martian dust storm detection from multiple wavelength data based on decision level fusion. In our proposed method, visual features are first extracted from multiple wavelength data, and optimal features are selected for Martian dust storm detection based on the minimal-Redundancy-Maximal-Relevance algorithm. Second, the selected visual features are used to train the Support Vector Machine classifiers that are constructed on each data. Furthermore, as a main contribution of this paper, the proposed method integrates the multiple detection results obtained from heterogeneous data based on decision level fusion, while considering each classifier's detection performance to obtain accurate final detection results. Consequently, the proposed method realizes successful Martian dust storm detection.
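The mRMR selection used here (and in several related entries above) can be sketched with a greedy search on histogram-discretized features, scoring each candidate by its mutual information with the label minus its average mutual information with the already-selected features; the binning choices and data below are arbitrary placeholders.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mrmr(X, y, n_select, n_bins=10):
    """Greedy minimal-Redundancy-Maximal-Relevance selection on
    histogram-discretized features (a common simplification)."""
    Xd = np.stack(
        [np.digitize(col, np.histogram_bin_edges(col, bins=n_bins)[1:-1]) for col in X.T],
        axis=1)
    relevance = np.array([mutual_info_score(Xd[:, j], y) for j in range(X.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        scores = []
        for j in remaining:
            redundancy = np.mean([mutual_info_score(Xd[:, j], Xd[:, s]) for s in selected])
            scores.append(relevance[j] - redundancy)    # relevance minus redundancy
        selected.append(remaining[int(np.argmax(scores))])
    return selected

rng = np.random.default_rng(0)
X = rng.random((200, 30))                  # hypothetical visual features
y = rng.integers(0, 2, 200)                # dust storm / no dust storm
chosen = mrmr(X, y, n_select=8)
```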
  • Tetsushi Kaburagi, Masashi Kurose, Takahiro Ogawa, Hiroki Kuroiwa, Tomoyuki Iwasawa
    International Journal of Automation Technology 9 1 10 - 18 2015年 [査読有り][通常論文]
     
    Injection molding has faults, or sinks, caused by the shrinkage of materials. Sinks should be inhibited since they greatly affect the dimensions of moldings. In this study, a mold that allows visual observation is employed, and the sink initiation process is analyzed and predicted based on the results of that analysis. This mold has two sections, one flat and one curved. The difference between the deformations in the flat and curved sections is investigated. Methods of inhibiting sinks are considered from the results of the analysis and injection molding experiments. Packing pressure is found to have a great effect on sinks.
  • Soh Yoshida, Takahiro Ogawa, Miki Haseyama
    MM'15: PROCEEDINGS OF THE 2015 ACM MULTIMEDIA CONFERENCE 871 - 874 2015年 [査読有り][通常論文]
     
    Graph-based reranking is effective for refining text-based video search results by making use of the social network structure. Unlike previous works which only focus on an individual video graph, the proposed method leverages the mutual reinforcement of heterogeneous graphs, such as videos and their associated tags obtained by social influence mining. Specifically, propagation of information relevancy across different modalities is performed by exchanging information of inter- and intra-relations among heterogeneous graphs. The proposed method then formulates the video search reranking as an optimization problem from the perspective of Bayesian framework. Furthermore, in order to model the consistency over the modified video graph topology, a local learning regularization with a social community detection scheme is introduced to the framework. Since videos within the same social community have strong semantic correlation, the consistency score estimation becomes feasible. Experimental results obtained by applying the proposed method to a real-world video collection show its effectiveness.
  • Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) 2015-December 4728 - 4732 2015年 [査読有り][通常論文]
     
    This paper presents a detection method of Helicobacter pylori (H. pylori) infection from multiple gastric X-ray images based on combination use of Support Vector Machine (SVM) and Multiple Kernel Learning (MKL). The proposed method firstly computes some types of visual features from multiple gastric X-ray images taken in several specific directions in order to represent the characteristics of X-ray images with H. pylori infection. Second, based on the minimal-Redundancy-Maximal-Relevance algorithm, we select the effective features for H. pylori infection detection from each type of visual feature and all visual features. The selected features are used to train the SVM classifier and the MKL classifier for each direction of gastric X-ray images. Finally, the proposed method integrates multiple detection results based on a late fusion scheme considering the detection performance of each classifier. Experimental results obtained by applying the proposed method to real X-ray images prove its effectiveness.
  • Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) 2015-December 2246 - 2250 2015年 [査読有り][通常論文]
     
    This paper presents automatic detection of Martian dust storms from heterogeneous data (raw data, reflectance data and background subtraction data of the reflectance data) based on decision level fusion. Specifically, the proposed method first extracts image features from these data and selects optimal features for dust storm detection based on the minimal-Redundancy-Maximal-Relevance algorithm. Second, the selected image features are used to train the Support Vector Machine classifier that is constructed on each data. Furthermore, as a main contribution of this paper, the proposed method combines the multiple detection results obtained from the heterogeneous data based on decision level fusion while considering each classifier's detection performance to obtain accurate final detection results. Consequently, the proposed method realizes automatic and accurate detection of Martian dust storms.

  • Ryosuke Harakawa, Takahiro Ogawa, Miki Haseyama
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) 2015-December 1021 - 1025 2015年 [査読有り][通常論文]
     
    In this paper, we present a method for extraction of hierarchical structure of Web communities including salient keyword estimation for Web video retrieval. The following two contributions of the proposed method enable retrieval of the desired Web videos even if users cannot input suitable queries that identify the desired contents. First, our method realizes the extraction of hierarchical structure of Web communities, i.e., Web video sets with similar topics by using heterogeneous features of Web videos and link relationships between Web videos obtained via metadata "related videos". Second, we can estimate salient keywords to identify the contents of each obtained Web community at a glance based on text attached to Web videos such as title, the heterogeneous features of Web videos and the link relationships between Web videos. Experimental results on actual Web videos verify that our method can realize accurate retrieval of the desired Web videos via the hierarchical structure of Web communities with their salient keywords.
  • Ryosuke Sawata, Takahiro Ogawa, Miki Haseyama
    2015 IEEE INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP) 2015-September 818 - 822 2015年 [査読有り][通常論文]
     
    This paper presents a human-centered method for favorite music estimation using EEG-based audio features. In order to estimate user's favorite musical pieces, our method utilizes his/her EEG signals for calculating new audio features suitable for representing the user's music preference. Specifically, projection, which transforms original audio features into the features reflecting the preference, is calculated by applying kernel Canonical Correlation Analysis (CCA) to the audio features and the EEG features which are extracted from the user's EEG signals during listening to favorite musical pieces. By using the obtained projection, the new EEG-based audio features can be derived since this projection provides the best correlation between the user's EEG signals and their corresponding audio signals. Thus, successful estimation of user's favorite musical pieces via a Support Vector Machine (SVM) classifier using the new audio features becomes feasible. Since our method does not need acquisition of EEG signals for obtaining new audio features from new musical pieces after calculating the projection, this indicates the high practicability of our method. Experimental results show that our method outperforms methods using original audio features or EEG features.
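A minimal sketch of the projection idea above, assuming paired audio and EEG feature matrices have already been extracted. Linear CCA from scikit-learn is used for brevity where the paper uses kernel CCA, and the function and variable names are illustrative.

```python
# Sketch: learn a correlated latent space between audio and EEG features recorded
# while listening, then project new audio features without needing EEG signals.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

# audio_train: (n_pieces, d_audio), eeg_train: (n_pieces, d_eeg), labels: favorite or not
def fit_preference_model(audio_train, eeg_train, labels, n_components=8):
    cca = CCA(n_components=n_components)
    cca.fit(audio_train, eeg_train)                   # learn audio<->EEG correlation
    audio_latent = cca.transform(audio_train)         # EEG-informed audio features
    clf = SVC(kernel="rbf").fit(audio_latent, labels)
    return cca, clf

def predict_favorite(cca, clf, audio_new):
    # New pieces need only audio features; the learned projection carries the
    # EEG-derived preference information.
    return clf.predict(cca.transform(audio_new))
```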
  • Yuma Tanaka, Takahiro Ogawa, Miki Haseyama
    2015 IEEE INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP) 2015-September 618 - 622 2015年 [査読有り][通常論文]
     
    Sparse representation approximates a target signal by a linear combination of a small number of sample signals, and it is utilized in various research fields. In this paper, we evaluate the approximation error of signals by the mean square error of power spectrograms (P-MSE). Specifically, we propose a P-MSE minimization algorithm for sparse representation. Our method minimizes the P-MSE by an iterative approach: in each iteration, we find the optimal sample signal and optimize the corresponding coefficients by a gradient-based method. In this approach, our method can utilize the result of the previous iteration for fast and stable convergence in the optimization of the coefficients. Based on this algorithm, the sparse representation which minimizes the P-MSE becomes feasible. Experimental results show the effectiveness of our method in terms of the P-MSE minimization.
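A toy sketch of the iterative idea above: candidate atoms are scored by the error between power spectrograms rather than the waveform error, and atoms are added greedily. A small coefficient grid replaces the gradient-based refinement used in the paper; the STFT settings and all names are illustrative.

```python
# Toy greedy sparse approximation scored by the power-spectrogram error (P-MSE).
import numpy as np

def power_spec(x, frame=256, hop=128):
    frames = [x[i:i + frame] * np.hanning(frame) for i in range(0, len(x) - frame, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)) ** 2

def pmse(x, y):
    return np.mean((power_spec(x) - power_spec(y)) ** 2)

def greedy_pmse(target, dictionary, sparsity=5, grid=np.linspace(-1.0, 1.0, 21)):
    approx = np.zeros_like(target, dtype=float)
    support = []
    for _ in range(sparsity):
        best = (np.inf, None, None)
        for k, atom in enumerate(dictionary):          # try every atom...
            for c in grid:                             # ...with a small grid of coefficients
                err = pmse(target, approx + c * atom)
                if err < best[0]:
                    best = (err, k, c)
        _, k, c = best
        support.append((k, c))
        approx = approx + c * dictionary[k]            # greedily add the best (atom, coefficient)
    return support, approx
```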
  • Takahiro Ogawa, Miki Haseyama
    2015 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP) 2015-August 1628 - 1632 2015年 [査読有り][通常論文]
     
    A missing intensity restoration method via adaptive selection of perceptually optimized subspaces is presented in this paper. In order to realize adaptive and perceptually optimized restoration, the proposed method generates several subspaces of known textures optimized in terms of the structural similarity (SSIM) index. Furthermore, the SSIM-based missing intensity restoration is performed by a projection onto convex sets (POCS) algorithm whose constraints are the obtained subspace and known intensities within the target image. In this approach, a non-convex maximization problem for calculating the projection onto the subspace is reformulated as a quasi-convex problem, and the restoration of the missing intensities becomes feasible. Furthermore, the selection of the optimal subspace is realized by monitoring the SSIM index converged in the POCS algorithm, and the adaptive restoration becomes feasible. Experimental results show that our method outperforms existing methods.
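The alternating-projection structure of the POCS restoration above can be sketched as follows. For brevity a plain PCA subspace of known patches replaces the SSIM-optimized, adaptively selected subspaces, so this only illustrates the two constraints (subspace membership and known intensities); the names are illustrative.

```python
# Sketch of a POCS-style restoration loop: alternately project the patch onto a
# learned subspace and re-impose the known intensities, until convergence.
import numpy as np

def pocs_restore(patch, known_mask, known_patches, n_components=16, n_iter=100):
    """patch, known_mask: 1-D arrays (flattened patch); known_patches: (n, d)."""
    mean = known_patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(known_patches - mean, full_matrices=False)
    basis = Vt[:n_components]                        # PCA basis of known textures

    x = np.where(known_mask, patch, mean)            # initialize missing pixels
    for _ in range(n_iter):
        x = mean + basis.T @ (basis @ (x - mean))    # projection onto the subspace
        x = np.where(known_mask, patch, x)           # projection onto known intensities
    return x
```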
  • Takuya Kawakami, Takahiro Ogawa, Miki Haseyama
    2015 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP) 2015-August 957 - 961 2015年 [査読有り][通常論文]
     
    This paper presents a novel image classification method based on integration of EEG and visual features. In the proposed method, we obtain classification results by separately using EEG and visual features. Furthermore, we merge the above classification results based on a kernelized version of Supervised learning from multiple experts and obtain the final classification result. In order to generate feature vectors used for the final image classification, we apply Multiset supervised locality preserving canonical correlation analysis (MSLPCCA), which is newly derived in the proposed method, to EEG and visual features. Our method realizes successful multimodal classification of images by the object categories that they contain based on MSLPCCA-based feature integration.
  • Zaixing He, Xinyue Zhao, Shuyou Zhang, Takahiro Ogawa, Miki Haseyama
    NEUROCOMPUTING 145 160 - 173 2014年12月 [査読有り][通常論文]
     
    In compressed sensing and sparse representation-based pattern recognition, random projection with a dense random transform matrix is widely used for information extraction. However, the complicated structure makes dense random matrices computationally expensive and difficult in hardware implementation. This paper considers the simplification of the random projection method. First, we propose a simple random method, random combination, for information extraction to address the issues of dense random methods. The theoretical analysis and the experimental results show that it can provide comparable performance to those of dense random methods. Second, we analyze another simple random method, random choosing, and give its applicable occasions. The comparative analysis and the experimental results show that it works well in dense cases but worse in sparse cases. Third, we propose a practical method for measuring the effectiveness of the feature transform matrix in sparse representation-based pattern recognition. A matrix satisfying the Representation Residual Restricted Isometry Property can provide good recognition results. (C) 2014 Elsevier B.V. All rights reserved.
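The contrast between a dense Gaussian random projection and a sparse "random combination" can be illustrated as below; the exact construction (number of nonzeros per row, ±1 entries) is an assumption for illustration rather than the matrix analyzed in the paper.

```python
# Dense Gaussian projection vs. a sparse "random combination" matrix, where each
# row mixes only a few randomly chosen input dimensions, so the projection is
# cheap to compute and easier to realize in hardware.
import numpy as np

rng = np.random.default_rng(0)

def dense_random_matrix(m, n):
    return rng.standard_normal((m, n)) / np.sqrt(m)

def random_combination_matrix(m, n, k=8):
    A = np.zeros((m, n))
    for i in range(m):
        cols = rng.choice(n, size=k, replace=False)     # combine only k inputs per output
        A[i, cols] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)
    return A

x = rng.standard_normal(1024)
y_dense = dense_random_matrix(128, 1024) @ x
y_sparse = random_combination_matrix(128, 1024) @ x     # comparable features, far fewer multiplications
```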
  • Takahiro Ogawa, Miki Haseyama
    SIGNAL PROCESSING 103 69 - 83 2014年10月 [査読有り][通常論文]
     
    This paper presents an adaptive missing texture reconstruction method based on kernel cross-modal factor analysis (KCFA) with a new evaluation criterion. The proposed method estimates the latent relationship between two areas, which correspond to a missing area and its neighboring area, respectively, from known parts within the target image and realizes reconstruction of the missing textures. In order to obtain this relationship, KCFA is applied to each cluster containing similar known textures, and the optimal cluster is used for reconstructing each target missing area. Specifically, a new criterion obtained by monitoring errors caused in the latent space enables selection of the optimal cluster. Then each missing texture is adaptively estimated by the optimal cluster's latent relationship, which enables accurate reconstruction of similar textures. In our method, the above criterion is also used for estimating patch priority, which determines the reconstruction order of missing areas within the target image. Since patches, whose textures are accurately modeled by our KCFA-based method, can be selected by using the new criterion, it becomes feasible to perform successful reconstruction of the missing areas. Experimental results show improvements of our KCFA-based reconstruction method over previously reported methods. (C) 2013 Elsevier B.V. All rights reserved.
  • Kazuya Iwai, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS E97D 7 1885 - 1892 2014年07月 [査読有り][通常論文]
     
    In this paper, an accurate player tracking method in far-view soccer videos based on a composite energy function is presented. In far-view soccer videos, player tracking methods that perform processing based only on visual features cannot accurately track players since each player region becomes small, and video coding causes color bleeding between player regions and the soccer field. In order to solve this problem, the proposed method performs player tracking on the basis of the following three elements. First, we utilize visual features based on uniform colors and player shapes. Second, since soccer players play in such a way as to maintain a formation, which is a positional pattern of players, we use this characteristic for player tracking. Third, since the movement direction of each player tends to change smoothly in successive frames of soccer videos, we also focus on this characteristic. Then we adopt three energies: a potential energy based on visual features, an elastic energy based on formations and a movement direction-based energy. Finally, we define a composite energy function that consists of the above three energies and track players by minimizing this energy function. Consequently, the proposed method achieves accurate player tracking in far-view soccer videos.
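The composite energy can be pictured as a weighted sum of the three terms named above, minimized over candidate positions for each player in each frame. The concrete forms of the terms and the weights below are illustrative placeholders, not the definitions used in the paper.

```python
# Toy sketch: visual + elastic-formation + movement-direction energies summed and
# minimized over candidate positions of one player for the next frame.
import numpy as np

def track_step(candidates, cand_visual_cost, prev_pos, prev_dir, formation_offset,
               team_center, w_visual=1.0, w_form=0.5, w_dir=0.5):
    best, best_e = None, np.inf
    for pos, e_visual in zip(candidates, cand_visual_cost):
        expected = team_center + formation_offset          # position implied by the formation
        e_form = np.sum((pos - expected) ** 2)             # elastic (formation) energy
        new_dir = pos - prev_pos
        e_dir = np.sum((new_dir - prev_dir) ** 2)          # penalize abrupt direction change
        e = w_visual * e_visual + w_form * e_form + w_dir * e_dir
        if e < best_e:
            best, best_e = pos, e
    return best, best_e
```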
  • Takahiro Ogawa, Shintaro Takahashi, Sho Takahashi, Miki Haseyama
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING 2014 1 115 - 115 2014年07月 [査読有り][通常論文]
     
    This paper presents a new method for estimating error degrees in numerical weather prediction via multiple kernel discriminant analysis (MKDA)-based ordinal regression. The proposed method tries to estimate how large prediction errors will occur in each area from known observed data. Therefore, ordinal regression based on KDA is used for estimating the prediction error degrees. Furthermore, the following points are introduced into the proposed approach. Since several meteorological elements are related to each other based on atmospheric movements, the proposed method merges such heterogeneous features in the target and neighboring areas based on a multiple kernel algorithm. This approach is based on the characteristics of actual meteorological data. Then, MKDA-based ordinal regression for estimating the prediction error degree of a target meteorological element in each area becomes feasible. Since the amount of training data obtained from known observed data becomes very large in the training stage of MKDA, the proposed method performs simple sampling of those training data to reduce the number of samples. We effectively use the remaining training data for determining the parameters of MKDA to realize successful estimation of the prediction error degree.
  • Marie Katsurai, Takahiro Ogawa, Miki Haseyama
    IEEE TRANSACTIONS ON MULTIMEDIA 16 4 1059 - 1074 2014年06月 [査読有り][通常論文]
     
    This paper presents a cross-modal approach for extracting semantic relationships between concepts using tagged images. In the proposed method, we first project both text and visual features of the tagged images to a latent space using canonical correlation analysis (CCA). Then, under the probabilistic interpretation of CCA, we calculate a representative distribution of the latent variables for each concept. Based on the representative distributions of the concepts, we derive two types of measures: the semantic relatedness between the concepts and the abstraction level of each concept. Because these measures are derived from a cross-modal scheme that enables the collaborative use of both text and visual features, the semantic relationships can successfully reflect semantic and visual contexts. Experiments conducted on tagged images collected from Flickr show that our measures are more coherent to human cognition than the conventional measures that use either text or visual features, or the WordNet-based measures. In particular, a new measure of semantic relatedness, which satisfies the triangle inequality, obtains the best results among different distance measures in our framework. The applicability of our measures to multimedia-related tasks such as concept clustering, image annotation and tag recommendation is also shown in the experiments.
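A rough sketch of the measurement pipeline above: project visual and tag features into a shared latent space with CCA, summarize each concept by a distribution over its latent codes, and derive relatedness and abstraction level from those distributions. Diagonal Gaussians and a symmetric KL distance are simplifications chosen here; the paper derives its own measures under the probabilistic interpretation of CCA.

```python
# Concept relationships from a CCA latent space: per-concept Gaussians, a
# distribution distance for relatedness, and the spread as an abstraction proxy.
import numpy as np
from sklearn.cross_decomposition import CCA

def concept_models(visual, tags, concept_of_image, n_components=10):
    cca = CCA(n_components=n_components).fit(visual, tags)
    z = cca.transform(visual)
    models = {}
    for c in set(concept_of_image):
        zc = z[np.asarray(concept_of_image) == c]
        models[c] = (zc.mean(axis=0), zc.var(axis=0) + 1e-6)   # diagonal Gaussian
    return models

def sym_kl(m1, v1, m2, v2):
    kl = lambda ma, va, mb, vb: 0.5 * np.sum(np.log(vb / va) + (va + (ma - mb) ** 2) / vb - 1)
    return kl(m1, v1, m2, v2) + kl(m2, v2, m1, v1)

def relatedness(models, a, b):
    return -sym_kl(*models[a], *models[b])        # larger = more related

def abstraction_level(models, c):
    return float(np.sum(models[c][1]))            # broader distribution = more abstract
```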
  • Kouhei Tateno, Takahiro Ogawa, Miki Haseyama
    2014 IEEE 3RD GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE) 182 - 183 2014年 [査読有り][通常論文]
     
    This paper presents a multiple feature fusion method using topic model for social image visualization. Images in social media are represented from several aspects such as their visual information and tags. The proposed method extracts low-level features from social images and their tags and calculates their integrated high-level features. Specifically, the proposed method applies multilayer multimodal probabilistic Latent Semantic Analysis (mm-pLSA) to the low-level visual and tag features to obtain the high-level features. Then, by applying dimensionality reduction techniques to the obtained features, successful visualization becomes feasible.
  • Keisuke Maeda, Sho Takahashi, Takahiro Ogawa, Miki Haseyama
    2014 IEEE 3RD GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE) 169 - 170 2014年 [査読有り][通常論文]
     
    This paper presents a Bayesian network-based method for estimating the distress of road structures from inspection data. The distress is represented by the damage of road structures and its degree. In previous work, the distress was estimated by utilizing a Bayesian network based on categories of road structures, details of road structures and damaged parts. However, inspection data include not only the above items but also images of the distress. Therefore, by introducing the use of the images into the previous work, improvement of the distress estimation accuracy can be expected. The proposed method constructs a Bayesian network from inspection items and their corresponding images to perform the distress estimation. Experimental results show the effectiveness of the proposed method.
  • Shohei Kinoshita, Takahiro Ogawa, Miki Haseyama
    2014 IEEE 3RD GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE) 102 - 103 2014年 [査読有り][通常論文]
     
    This paper presents popular music estimation based on a topic model using time information and audio features. The proposed method calculates latent topic distributions using Latent Dirichlet Allocation to obtain more accurate music features. In this approach, we also use the release date of each musical piece as time information to capture the relationship between music trends and each era. Then, by using the obtained latent topic distribution features, the estimation of popular music becomes feasible based on a Support Vector Machine classifier. Experimental results show the effectiveness of our method.
  • Yuma Tanaka, Takahiro Ogawa, Miki Haseyama
    2014 IEEE 3RD GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE) 86 - 87 2014年 [査読有り][通常論文]
     
    This paper presents a method for reconstructing missing audio segments based on sparse representation with power spectrograms. In the proposed method, an error of power spectrograms is utilized as a quality measure representing reconstruction performance. Then the proposed method estimates missing segments based on sparse representation optimized with respect to the error of power spectrograms. This error minimization problem can be solved with a greedy algorithm by limiting the solution to only a sparse one. By using our method, perceptually optimized reconstruction becomes feasible since missing segments are estimated by using the quality measure which represents auditory properties. Experimental results obtained by applying the proposed method to actual music signals from the RWC Music Database show its effectiveness.
  • Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) 2769 - 2773 2014年 [査読有り][通常論文]
     
    This paper presents an automatic detection method of Helicobacter pylori (H. pylori) infection from multiple gastric X-ray images. As the biggest contribution of this paper, we combine multiple detection results based on a decision level fusion. In order to obtain multiple detection results, the proposed method first focuses on characteristics of gastric X-ray images with H. pylori infection and computes several visual features from multiple X-ray images taken in several specific directions. Second, we select effective features for H. pylori infection detection from all features based on the minimal-Redundancy-Maximal-Relevance algorithm, and the selected features are used to train the Support Vector Machine (SVM) classifiers that are constructed for each direction of gastric radiography. Therefore, the detection of H. pylori infection becomes feasible, and we can obtain multiple detection results from the SVM classifiers. Furthermore, we combine multiple detection results based on the decision level fusion scheme considering the detection performance of each SVM classifier. Experimental results obtained by applying the proposed method to real X-ray images prove the effectiveness of the proposed method.
  • Takahiro Ogawa, Miki Haseyama
    2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) 1837 - 1841 2014年 [査読有り][通常論文]
     
    This paper presents an inpainting method based on 2D semi-supervised canonical correlation analysis (2D semi-CCA) including new priority estimation. The proposed method estimates the relationship, i.e., the optimal correlation, between a missing area and its neighboring area from known parts within the target image by using 2D CCA. In this approach, we newly introduce a semi-supervised scheme into the 2D CCA for deriving the 2D semi-CCA, which corresponds to a hybrid version of 2D CCA and 2D principal component analysis (2D PCA). This enables successful relationship estimation even if a sufficient number of training pairs cannot be provided. Then, by using the obtained relationship, accurate estimation of the missing intensities can be realized. Furthermore, in the proposed method, errors caused in the new variate space obtained by the 2D semi-CCA are effectively used for deriving the patch priority determining the inpainting order of missing areas. Experimental results show our inpainting method can outperform previously reported methods.
  • Takuya Kawakami, Takahiro Ogawa, Miki Haseyama
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) 2014 Vol.8 5874 - 5878 2014年 [査読有り][通常論文]
     
    This paper presents a novel image classification method based on decision-level fusion of EEG and visual features. In the proposed method, we extract the EEG features from EEG signals recorded while users stare at images, and the visual features are computed from these images. Then the classification of images is performed based on Support Vector Machine (SVM) by separately using the EEG and visual features. Furthermore, we merge the above classification results based on Supervised Learning from Multiple Experts to obtain the final classification result. This method focuses on the classification accuracy calculated from each classification result. Therefore, although the classification accuracies based on EEG and visual features are different from each other, our method realizes effective integration of these classification results. In addition, we newly derive a kernelized version of the method in order to realize more accurate integration of the classification results. Consequently, our method realizes successful multimodal classification of images by the object categories that they contain.
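A compact sketch of the decision-level fusion of the two modality-specific classifiers described above. The accuracy-weighted probability average below is a simplified stand-in for the supervised learning from multiple experts scheme (and its kernelized variant) used in the paper; inputs and names are assumed.

```python
# Decision-level fusion of an EEG-based and a visual-feature-based SVM classifier,
# weighting each modality by its cross-validated reliability.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fit_multimodal(eeg_feats, vis_feats, labels):
    experts = []
    for X in (eeg_feats, vis_feats):
        clf = SVC(kernel="rbf", probability=True)
        w = cross_val_score(clf, X, labels, cv=5).mean()   # per-modality reliability
        experts.append((clf.fit(X, labels), w))
    return experts

def fused_predict(experts, eeg_feats, vis_feats):
    weighted = [w * clf.predict_proba(X) for (clf, w), X in zip(experts, (eeg_feats, vis_feats))]
    classes = experts[0][0].classes_
    return classes[np.argmax(sum(weighted), axis=1)]
```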
  • Takahiro Ogawa, Miki Haseyama
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) 2014 Vol.1 175 - 179 2014年 [査読有り][通常論文]
     
    A missing intensity restoration method via perceptually optimized subspace projection based on entropy component analysis (ECA) is presented in this paper. The proposed method calculates the optimal subspace of known patches within a target image based on structural similarity (SSIM) index, and the optimal bases are determined based on ECA. Then missing intensity estimation whose results maximize the SSIM index is realized by using a projection onto convex sets (POCS) algorithm whose constraints are the obtained subspace and known intensities within the target image. In this approach, a non-convex maximization problem for calculating the projection onto the subspace is reformulated as a quasi-convex problem, and the restoration of the missing intensities becomes feasible. Experimental results show that our restoration method outperforms previously reported methods.
  • Takahiro Ogawa, Miki Haseyama
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING 2013 179 - 179 2013年12月 [査読有り][通常論文]
     
    This paper presents an image inpainting method based on sparse representations optimized with respect to a perceptual metric. In the proposed method, the structural similarity (SSIM) index is utilized as a criterion to optimize the representation performance of image data. Specifically, the proposed method enables the formulation of two important procedures in the sparse representation problem, 'estimation of sparse representation coefficients' and 'update of the dictionary', based on the SSIM index. Then, using the generated dictionary, approximation of target patches including missing areas via the SSIM-based sparse representation becomes feasible. Consequently, image inpainting for which procedures are totally derived from the SSIM index is realized. Experimental results show that the proposed method enables successful inpainting of missing areas.
  • Takuya Kawakami, Takahiro Ogawa, Miki Haseyama
    ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2013 Vol.2 1197 - 1201 2013年10月18日 [査読有り][通常論文]
     
    This paper presents a novel estimation method of segments including vocals in music pieces based on collaborative use of features extracted from electroencephalogram (EEG) signals recorded while users are listening to music pieces and features extracted from these audio signals. From extracted EEG features and audio features, we estimate segments including vocals based on Support Vector Machine (SVM) by separately utilizing these two features. Furthermore, the final classification results are obtained by integrating these estimation results based on supervised learning from multiple experts. Therefore, our method realizes multimodal estimation of segments including vocals in music pieces. Experimental results show the improvement of our method over the methods utilizing only EEG or audio features. © 2013 IEEE.
  • Takahiro Ogawa, Miki Haseyama
    IEEE TRANSACTIONS ON IMAGE PROCESSING 22 3 1252 - 1257 2013年03月 [査読有り][通常論文]
     
    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme of Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
  • Takahiro Ogawa, Daisuke Izumi, Akane Yoshizaki, Miki Haseyama
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING 2013 1 - 17 2013年02月 [査読有り][通常論文]
     
    A super-resolution method for simultaneously realizing resolution enhancement and motion blur removal based on adaptive prior settings is presented in this article. In order to obtain high-resolution (HR) video sequences from motion-blurred low-resolution video sequences, both the resolution enhancement and the motion blur removal have to be performed. However, if one is performed after the other, errors in the first process may cause performance deterioration of the subsequent process. Therefore, in the proposed method, a new problem, which simultaneously performs the resolution enhancement and the motion blur removal, is derived. Specifically, a maximum a posteriori estimation problem which estimates original HR frames with motion blur kernels is introduced into our method. Furthermore, in order to obtain the posterior probability based on Bayes' rule, a prior probability of the original HR frame, whose distribution can adaptively be set for each area, is newly defined. By adaptively setting the distribution of the prior probability, preservation of the sharpness in edge regions and suppression of the ringing artifacts in smooth regions are realized. Consequently, based on these novel approaches, the proposed method can perform successful reconstruction of the HR frames. Experimental results show impressive improvements of the proposed method over previously reported methods.
  • Ryosuke Harakawa, Yasutaka Hatakeyama, Takahiro Ogawa, Miki Haseyama
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013) 4397 - 4401 2013年 [査読有り][通常論文]
     
    This paper presents an extraction method of hierarchical Web communities for Web video retrieval. In the proposed method, Web communities containing Web videos whose topics are similar to each other are extracted by using hyperlinks between Web pages including Web videos and their video features. Furthermore, we focus on graph structure of hyperlinks between Web pages including Web videos which belong to the Web communities. Then, by using strongly connected components and betweenness centrality of the graph, hierarchical structure of the Web communities can be estimated. Consequently, users can easily find Web videos including related topics in each hierarchy, and desired Web videos can be effectively retrieved.
  • Akihiro Takahashi, Takahiro Ogawa, Miki Haseyama
    2013 IEEE International Conference on Image Processing, ICIP 2013 - Proceedings 3269 - 3273 2013年 [査読有り][通常論文]
     
    This paper presents a method of insect classification using images taken by Scanning Electron Microscope (SEM) considering magnifications. Generally, when images of the same insects are taken by SEM with different magnifications, visual features of these images are different from each other. Thus, the proposed method adopts a new scheme which groups images of different magnifications in such a way that the classification performance becomes the highest. Then a classifier is constructed for each group, and the insect classification becomes feasible based on a target image magnification. In addition, by integrating the classification results of several images obtained from the same sample, i.e., the same insect, performance improvement of the insect classification considering magnifications can be realized. Experimental results show the effectiveness of the proposed method. © 2013 IEEE.
  • Yuta Igarashi, Takahiro Ogawa, Miki Haseyama
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013) 2388 - 2392 2013年 [査読有り][通常論文]
     
    This paper presents a novel method for estimating a spectral reflectance from two kinds of input images: an image including both visible light components and near-infrared (NIR) components, and an image including only NIR components. From these input images, we estimate the spectral reflectance based on the Non-negative Matrix Factorization algorithm using spectral sensitivities of a digital camera. The estimated spectral reflectance enables several important applications. In this paper, the effectiveness of the proposed method is verified by using the estimated spectral reflectance in the two image processing applications.
  • Takahiro Ogawa, Miki Haseyama
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013) 704 - 708 2013年 [査読有り][通常論文]
     
    A kernel cross-modal factor analysis (KCFA) based missing area restoration method including a new priority estimation scheme is presented in this paper. The proposed method estimates latent relationship between missing areas and their neighboring areas by deriving projection matrices minimizing their errors in the latent space based on KCFA. This latent relationship represented by the derived projection matrices is optimal for accurately restoring missing areas within the target image. Furthermore, the proposed method adopts a new priority estimation scheme which determines the restoration order of missing areas. Specifically, this priority is estimated based on the criterion representing the restoration performance derived from KCFA, and it enables adaptive selection of missing areas successfully restored by our method. Consequently, it becomes feasible to accurately perform the restoration of missing areas by using the proposed KCFA-based method. Experimental results show subjective and quantitative improvements of the proposed method over previously reported restoration methods.
  • Miki Haseyama, Takahiro Ogawa
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION 29 2 96 - 109 2013年01月 [査読有り][通常論文]
     
    A trial realization of human-centered navigation for video retrieval is presented in this article. This system consists of the following functions: (a) multimodal analysis for collaborative use of multimedia data, (b) preference extraction for the system to adapt to users' individual demands, and (c) adaptive visualization for users to be guided to their desired contents. By using these functions, users can find their desired video contents more quickly and accurately than with the conventional retrieval schemes since our system can provide new pathways to the desired contents. Experimental results verify the effectiveness of the proposed system.
  • Miki Haseyama, Takahiro Ogawa, Nobuyuki Yagi
    ITE Transactions on Media Technology and Applications 1 1 2 - 9 2013年 [査読有り][通常論文]
     
    Research trends in new video retrieval based on image and video semantic understanding are presented in this paper. First, recent studies related to image and video semantic analysis are introduced to understand leading-edge multimedia retrieval technologies. Several works related to visualization interfaces for multimedia retrieval are also presented. Finally, trends in state-of-the-art studies and the future outlook are described.
  • Soh Yoshida, Hiroshi Okada, Takahiro Ogawa, Miki Haseyama
    ITE Transactions on Media Technology and Applications 1 3 237 - 243 2013年 [査読有り][通常論文]
     
    This paper presents a new method to improve performance of SVM-based classification, which contains a target object detection scheme. The proposed method tries to detect target objects from training images and improve the performance of the image classification by calculating the hyperplane from the detection results. Specifically, the proposed method calculates a Support Vector Machine (SVM) hyperplane, and detects rectangular areas surrounding the target objects based on the distances between their feature vectors and the separating hyperplane in the feature space. Then modification of feature vectors becomes feasible by removing features that exist only in background areas. Furthermore, a new hyperplane is calculated by using the modified feature vectors. Since the removed features are not part of the target object, they are not relevant to the learning process. Therefore, their removal can improve the performance of the image classification. Experimental results obtained by applying the proposed method to several existing SVM-based classification methods show its effectiveness.
  • Zaixing He, Takahiro Ogawa, Miki Haseyama, Xinyue Zhao, Shuyou Zhang
    Radioengineering 22 3 851 - 860 2013年 [査読有り][通常論文]
     
    In this paper, we propose a novel low-density parity-check real-number code, based on compressed sensing. A real-valued message is encoded by a coding matrix (with more rows than columns) and transmitted over an erroneous channel, where sparse errors (impulsive noise) corrupt the codeword. In the decoding procedure, we apply a structured sparse (low-density) parity-check matrix, the Permuted Block Diagonal matrix, to the corrupted output, and the errors can be corrected by solving a compressed sensing problem. A compressed sensing algorithm, Cross Low-dimensional Pursuit, is used to decode the code by solving this compressed sensing problem. The proposed code has high error correction performance and decoding efficiency. The comparative experimental results demonstrate both advantages of our code. We also apply our code to cryptography.
  • Katsuki Kobayashi, Takahiro Ogawa, Miki Haseyama
    ITE Transactions on Media Technology and Applications 1 4 333 - 342 2013年 [査読有り][通常論文]
     
    This paper presents a new evaluation criterion for visualization of image search results based on the feature integration theory. This criterion is derived by combining two elements, visual saliency on visualization and grouping degree of similar images. Visual saliency, which is calculated from the feature integration theory, on visualization of image search results enables representation of users' attention, which is closely related to the effectiveness of finding images. Furthermore, since users perceive similar images that are close to each other as one group, grouping degree of similar images enables evaluation of the effectiveness when users find images similar to a desired image. Therefore, by combining visual saliency on visualization and grouping degree of similar images, we can derive the novel criterion and evaluate the effectiveness of visualization of image search results.
  • Hirokazu Tanaka, Sunmi Kim, Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E95A 11 2015 - 2022 2012年11月 [査読有り][通常論文]
     
    A new spatial and temporal error concealment method for three-dimensional discrete wavelet transform (3D DWT) video coding is analyzed. 3D DWT video coding employing dispersive grouping (DG) and two-step error concealment is an efficient method in a packet loss channel [20], [21]. In the two-step error concealment method, the interpolations are applied only spatially; however, higher efficiency of the interpolation can be expected by utilizing spatial and temporal similarities. In this paper, we propose an enhanced spatial and temporal error concealment method in order to achieve higher error concealment (EC) performance in packet loss networks. In the temporal error concealment method, the structural similarity (SSIM) index is employed for inter group of pictures (GOP) EC, and minimum mean square error (MMSE) is used for intra GOP EC. Experimental results show that the proposed method can obtain remarkable performance compared with the conventional methods.
  • Takahiro Ogawa, Hiroshi Hasegawa, Ken-ichi Sato
    IEICE TRANSACTIONS ON COMMUNICATIONS E95B 10 3139 - 3148 2012年10月 [査読有り][通常論文]
     
    We propose a novel dynamic hierarchical optical path network architecture that achieves efficient optical fast circuit switching. In order to complete wavelength path setup/teardown efficiently, the proposed network adaptively manages waveband paths and bundles of optical paths, which provide virtual mesh connectivity between node pairs for wavelength paths. Numerical experiments show that operational and facility costs are significantly reduced by employing the adaptive virtual waveband connections.
  • Marie Katsurai, Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E95A 5 927 - 937 2012年05月 [査読有り][通常論文]
     
    In this paper, a novel framework for extracting visual feature-based keyword relationships from an image database is proposed. From the characteristic that a set of relevant keywords tends to have common visual features, the keyword relationships in a target image database are extracted by using the following two steps. First, the relationship between each keyword and its corresponding visual features is modeled by using a classifier. This step enables detection of visual features related to each keyword. In the second step, the keyword relationships are extracted from the obtained results. Specifically, in order to measure the relevance between two keywords, the proposed method removes visual features related to one keyword from training images and monitors the performance of the classifier obtained for the other keyword. This measurement is the biggest difference from other conventional methods that focus on only keyword co-occurrences or visual similarities. Results of experiments conducted using an image database showed the effectiveness of the proposed method.
  • Marie Katsurai, Takahiro Ogawa, Miki Haseyama
    2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) 2012 Vol.4 2373 - 2376 2012年 [査読有り][通常論文]
     
    This paper presents a cross-modal approach for extracting semantic relationships of concepts from an image database. First, canonical correlation analysis (CCA) is used to capture the cross-modal correlations between visual features and tag features in the database. Then, in order to measure inter-concept relationships and estimate semantic levels, the proposed method focuses on the distributions of images under the probabilistic interpretation of CCA. Results of experiments conducted by using an image database showed the improvement of the proposed method over existing methods.
  • Takahiro Ogawa, Miki Haseyama
    2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) 2012 Vol.2 1141 - 1144 2012年 [査読有り][通常論文]
     
    This paper presents a perceptually optimized subspace estimation method for missing texture reconstruction. The proposed method calculates the optimal subspace of known patches within a target image based on the structural similarity (SSIM) index instead of calculating a mean square error (MSE)-based eigenspace. Furthermore, from the obtained subspace, missing texture reconstruction whose results maximize the SSIM index is performed. In this approach, the non-convex maximization problem is reformulated as a quasi-convex problem, and the reconstruction of the missing textures becomes feasible. Experimental results show that our method outperforms previously reported MSE-based reconstruction methods.
  • 長谷川尭史, 小川貴弘, 渡邉日出海, 長谷山美紀
    The Journal of the Institute of Image Information and Television Engineers (Web) 66 7 2012年 [査読有り][通常論文]
  • Akira Tanaka, Takahiro Ogawa, Miki Haseyama
    2012 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC) 1 - 4 2012年 [査読有り][通常論文]
     
    Estimation of missing entries in multivariate data is one of the classical problems in the field of statistical science. One of the most popular approaches for this problem is linear regression based on the EM algorithm. When we consider applying this approach to block-based image inpainting problems, we have additional information: a target lost pixel could be included in multiple blocks, which implies that we have multiple candidate estimates for the pixel. In such cases, we have to choose a good estimate among the multiple candidates. In this paper, we propose a novel image inpainting method incorporating optimal block selection in terms of the expected squared errors among multiple candidate estimates for the target pixel. Results of numerical examples are also shown to verify the efficacy of the proposed method.
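The block-selection idea can be illustrated with a toy regression setup: each overlapping block yields its own estimate of the lost pixel together with a proxy for its expected squared error, and the estimate with the smallest expected error is kept. The linear-regression proxy below is an assumption for illustration; the paper works within an EM framework.

```python
# Toy block selection: keep the candidate estimate whose block model has the
# smallest expected squared error (approximated by its residual variance).
import numpy as np
from sklearn.linear_model import LinearRegression

def best_candidate(blocks):
    """blocks: list of dicts with keys
       'X', 'y'   -- training pairs (known-pixel context -> target-position value)
       'x_query'  -- the context vector of the block that contains the lost pixel."""
    best_val, best_err = None, np.inf
    for b in blocks:
        reg = LinearRegression().fit(b["X"], b["y"])
        resid_var = np.mean((reg.predict(b["X"]) - b["y"]) ** 2)   # expected squared error proxy
        if resid_var < best_err:
            best_err = resid_var
            best_val = float(reg.predict(b["x_query"].reshape(1, -1))[0])
    return best_val, best_err
```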
  • Takahiro Ogawa, Miki Haseyama
    IEEE TRANSACTIONS ON MULTIMEDIA 13 5 974 - 992 2011年10月 [査読有り][通常論文]
     
    In this paper, a missing image data reconstruction method based on an adaptive inverse projection via sparse representation is proposed. The proposed method utilizes sparse representation for obtaining low-dimensional subspaces that approximate target textures including missing areas. Then, by using the obtained low-dimensional subspaces, inverse projection for reconstructing missing areas can be derived to solve the problem of not being able to directly estimate missing intensities. Furthermore, in this approach, the proposed method monitors errors caused by the derived inverse projection, and the low-dimensional subspaces optimal for target textures are adaptively selected. Therefore, we can apply adaptive inverse projection via sparse representation to target missing textures, i.e., their adaptive reconstruction becomes feasible. The proposed method also introduces some schemes for color processing into the calculation of subspaces on the basis of sparse representation and attempts to avoid spurious color caused in the reconstruction results. Consequently, successful reconstruction of missing areas by the proposed method can be expected. Experimental results show impressive improvement of our reconstruction method over previously reported reconstruction methods.
  • Zaixing He, Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E94A 9 1793 - 1803 2011年09月 [査読有り][通常論文]
     
    In this paper, a novel algorithm, Cross Low-dimension Pursuit, based on a new structured sparse matrix, the Permuted Block Diagonal (PBD) matrix, is proposed in order to recover sparse signals from incomplete linear measurements. The main idea of the proposed method is using the PBD matrix to convert a high-dimension sparse recovery problem into two (or more) groups of highly low-dimension problems and crossly recover the entries of the original signal from them in an iterative way. By sampling a sufficiently sparse signal with a PBD matrix, the proposed algorithm can recover it efficiently. It has the following advantages over conventional algorithms: (1) low complexity, i.e., the algorithm has linear complexity, which is much lower than that of existing algorithms including greedy algorithms such as Orthogonal Matching Pursuit, and (2) high recovery ability, i.e., the proposed algorithm can recover much less sparse signals than even ℓ1-norm minimization algorithms can. Moreover, we demonstrate both theoretically and empirically that the proposed algorithm can reliably recover a sparse signal from highly incomplete measurements.
  • Takahiro Ogawa, Miki Haseyama
    IEEE TRANSACTIONS ON IMAGE PROCESSING 20 2 417 - 432 2011年02月 [査読有り][通常論文]
     
    A missing intensity interpolation method using a kernel principal component analysis (PCA)-based projection onto convex sets (POCS) algorithm and its applications are presented in this paper. In order to interpolate missing intensities within a target image, the proposed method reconstructs local textures containing the missing pixels by using the POCS algorithm. In this reconstruction process, a nonlinear eigenspace is constructed from each kind of texture, and the optimal subspace for the target local texture is introduced into the constraint of the POCS algorithm. In the proposed method, the optimal subspace can be selected by monitoring errors converged in the reconstruction process. This approach provides a solution to the problem in conventional methods of not being able to effectively perform adaptive reconstruction of the target textures due to missing intensities, and successful interpolation of the missing intensities by the proposed method can be realized. Furthermore, since our method can restore any images including arbitrary-shaped missing areas, its potential in two image reconstruction tasks, image enlargement and missing area restoration, is also shown in this paper.
  • Takahiro Ogawa, Miki Haseyama
    2011 18TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) 1133 - 1136 2011年 [査読有り][通常論文]
     
    This paper presents an adaptive kernel principal component analysis (KPCA) based missing texture reconstruction approach including a classification scheme via difference subspaces. The proposed method utilizes a KPCA-based nonlinear eigenspace, which is obtained from each kind of known texture within a target image, as a constraint for reconstructing missing textures with a constraint of known neighboring areas. Then since these two constraints are convex, we can estimate missing textures based on a projection onto convex sets (POCS) algorithm. Furthermore, in this approach, the proposed method derives a new criterion for selecting the optimal eigenspace by monitoring errors caused in the projection via a difference subspace of each kind of known texture. This provides a solution to conventional problems of not being able to perform accurate texture classification, and the adaptive reconstruction of missing textures can be realized by the proposed method. Experimental results show subjective and quantitative improvement of the proposed method over previously reported reconstruction methods.
  • Zaixing He, Takahiro Ogawa, Miki Haseyama
    2011 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING 3172 - 3175 2011年 [査読有り][通常論文]
     
    This paper proposes a novel algorithm for decoding real-field codes over erroneous channels, where the encoded message is corrupted by sparse errors, i.e., impulsive noise. The main problem of decoding such a corrupted encoded message is to reconstruct the error vector; recently, a common way to reconstruct it is to find the sparsest solution to an underdetermined system that is constructed using a parity-check matrix. Unlike the conventional approaches reconstructing the high-dimensional error vector directly, the proposed method crossly recovers the elements of error vector from two (or several) groups of low-dimensional equations. Compared with the traditional algorithms, the proposed method can decode an encoded message with a much higher corruption rate. Furthermore, the complexity of our method is linear, which is much lower than those of the traditional methods. The experimental results verified the high error correction ability and speed of the proposed method.
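The conventional route mentioned in the abstract above (reconstructing the sparse error vector from the syndrome computed with a parity-check matrix) can be sketched with off-the-shelf sparse recovery; the paper's cross low-dimensional decoding itself is not reproduced here, and the names are illustrative.

```python
# Baseline real-field decoding: the syndrome H @ r = H @ e depends only on the
# sparse error vector e, so e can be recovered by sparse approximation and
# subtracted from the received word. OMP is used as the sparse solver here.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def decode(received, H, n_errors):
    syndrome = H @ received                                   # depends only on the error
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_errors, fit_intercept=False)
    omp.fit(H, syndrome)                                      # solve H e = syndrome with sparse e
    e_hat = omp.coef_
    return received - e_hat                                   # corrected codeword
```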
  • Takahiro Ogawa, Miki Haseyama
    2011 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING 2011 Vol.2 1157 - 1160 2011年 [査読有り][通常論文]
     
    This paper presents an adaptive reconstruction method of missing textures based on structural similarity (SSIM) index. The proposed method firstly performs SSIM-based selection of the optimal known local textures to adaptively obtain subspaces for reconstructing missing textures. Furthermore, from the selected known textures, the missing texture reconstruction maximizing the SSIM index is performed. In this approach, the non-convex maximization problem is reformulated as a quasi convex problem, and the adaptive reconstruction of the missing textures becomes feasible. Experimental results show impressive improvement of the proposed method over previously reported reconstruction methods.
  • Takahiro Ogawa, Miki Haseyama
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING 2011 1 - 29 2011年 [査読有り][通常論文]
     
    An adaptive example-based super-resolution (SR) using kernel principal component analysis (PCA) with a novel classification approach is presented in this paper. In order to enable estimation of missing high-frequency components for each kind of texture in target low-resolution (LR) images, the proposed method performs clustering of high-resolution (HR) patches clipped from training HR images in advance. Based on two nonlinear eigenspaces, respectively, generated from HR patches and their corresponding low-frequency components in each cluster, an inverse map, which can estimate missing high-frequency components from only the known low-frequency components, is derived. Furthermore, by monitoring errors caused in the above estimation process, the proposed method enables adaptive selection of the optimal cluster for each target local patch, and this corresponds to the novel classification approach in our method. Then, by combining the above two approaches, the proposed method can adaptively estimate the missing high-frequency components, and successful reconstruction of the HR image is realized.
  • Hiroyuki Ohkushi, Takahiro Ogawa, Miki Haseyama
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING 2011 121 - 121 2011年 [査読有り][通常論文]
     
    In this article, a method for recommendation of music pieces according to human motions based on their kernel canonical correlation analysis (CCA)-based relationship is proposed. In order to perform the recommendation between different types of multimedia data, i.e., recommendation of music pieces from human motions, the proposed method tries to estimate their relationship. Specifically, the correlation based on kernel CCA is calculated as the relationship in our method. Since human motions and music pieces have various time lengths, it is necessary to calculate the correlation between time series having different lengths. Therefore, new kernel functions for human motions and music pieces, which can provide similarities between data that have different time lengths, are introduced into the calculation of the kernel CCA-based correlation. This approach effectively provides a solution to the conventional problem of not being able to calculate the correlation from multimedia data that have various time lengths. Therefore, the proposed method can perform accurate recommendation of best matched music pieces according to a target human motion from the obtained correlation. Experimental results are shown to verify the performance of the proposed method.
  • Takahiro Ogawa, Miki Haseyama
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING 2011 2011年 [査読有り][通常論文]
     
    An adaptive single image superresolution (SR) method using a support vector data description (SVDD) is presented. The proposed method represents the prior on high-resolution (HR) images by hyperspheres of the SVDD obtained from training examples and reconstructs HR images from low-resolution (LR) observations based on the following schemes. First, in order to perform accurate reconstruction of HR images containing various kinds of objects, training HR examples are previously clustered based on the distance from a center of a hypersphere obtained for each cluster. Furthermore, missing high-frequency components of the target image are estimated in order that the reconstructed HR image minimizes the above distances. In this approach, the minimized distance obtained for each cluster is utilized as a criterion to select the optimal hypersphere for estimating the high-frequency components. This approach provides a solution to the problem of conventional methods not being able to perform adaptive estimation of the high-frequency components. In addition, local patches in the target low-resolution (LR) image are utilized as the training HR examples from the characteristic of self-similarities between different resolution levels in general images, and our method can perform the SR without utilizing any other HR images.
  • 田中章, 小川貴弘, 長谷山美紀, 宮腰政明
    IEICE Transactions on Fundamentals (Japanese Edition) J94-A 2 2011年 [査読有り][通常論文]
  • KIM Sunmi, TANAKA Hirokazu, OGAWA Takahiro, HASEYAMA Miki
    ITE Technical Report 35 165 - 170 The Institute of Image Information and Television Engineers 2011年 
    In this paper, we present an adaptive spatial-temporal error concealment method for wavelet-based video coding in wireless networks. A three-dimensional discrete wavelet transform (3-D DWT) performs 2-D spatial DWT coding and temporal DWT coding on a group of pictures (GOP). The transmission of compressed video suffers from errors such as packet losses that not only corrupt a frame but also propagate to successive frames. The proposed adaptive spatial-temporal error concealment method consists of spatial EC and temporal EC to overcome error propagation problems. The proposed method conceals erroneous coefficients of the spatiotemporal low-frequency subband using their duplicated information, and applies the adaptive spatial-temporal concealment to recover errors over the entire video sequence. The performance of the proposed method was evaluated over wireless packet transmission networks. Experimental results show the proposed method provides robust and stable performance in error-prone environments.
  • Sunmi Kim, Hirokazu Tanaka, Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E93A 11 2173 - 2183 2010年11月 [査読有り][通常論文]
     
    In this paper we propose a two-step error concealment algorithm based on an error-resilient three-dimensional discrete wavelet transform (3-D DWT) video coding scheme. The proposed scheme consists of an error-resilient encoder, which duplicates the lowest-subband bitstreams for dispersively grouped frames, and an error concealment decoder. The error concealment in this decoder is composed of two steps: the first step is replacement of erroneous coefficients in the lowest subband by the duplicated coefficients, and the second step is interpolation of the missing wavelet coefficients by minimum mean square error (MMSE) estimation. The proposed scheme can achieve robust transmission over unreliable channels. Experimental results provide performance comparisons in terms of peak signal-to-noise ratio (PSNR) and demonstrate improved performance compared with state-of-the-art error concealment schemes.
  • Takahiro Ogawa, Miki Haseyama
    2010 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME 2010) 352 - 357 2010年 [査読有り][通常論文]
     
    This paper presents an adaptive reconstruction method of missing textures based on an inverse projection via sparse representation. The proposed method approximates original and corrupted textures in lower-dimensional subspaces by using the sparse representation technique. Then, this approach effectively solves problems of not being able to directly estimate an inverse projection for reconstructing missing textures. Furthermore, even if target textures contain missing areas, the proposed method enables adaptive generation of the subspaces by monitoring errors caused in their known neighboring textures by the estimated inverse projection. Consequently, since the optimal inverse projection is adaptively estimated for each texture, successful reconstruction of the missing areas can be expected. Experimental results show impressive improvement of the proposed reconstruction technique over previously reported reconstruction techniques.
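The reconstruction above builds on sparse representation of texture patches over a learned basis. As a generic illustration of sparse coding only, and not the paper's inverse-projection scheme, the following sketch implements a small orthogonal matching pursuit (OMP) routine on hypothetical data.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with at most k
    columns (atoms) of the dictionary D."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((32, 64))
    D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
    x_true = np.zeros(64); x_true[[3, 17, 40]] = [1.0, -2.0, 0.5]
    x_hat = omp(D, D @ x_true, k=3)
    print(np.nonzero(x_hat)[0])                       # typically recovers {3, 17, 40}
```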
  • Zaixing He, Takahiro Ogawa, Miki Haseyama
    2010 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING 4301 - 4304 2010年 [査読有り][通常論文]
     
    There are two main problems in currently existing measurement matrices for compressed sensing of natural images: the difficulty of hardware implementation and low sensing efficiency. In this paper, we present a novel, simple and efficient measurement matrix, the Binary Permuted Block Diagonal (BPBD) matrix. The BPBD matrix is binary and highly sparse (all entries except one or several "1"s in each column are "0"s). Therefore, it can simplify the compressed sensing procedure dramatically. The proposed measurement matrix has the following advantages, which cannot be entirely satisfied by existing measurement matrices: (1) it allows easy hardware implementation because of its binary elements; (2) it has high sensing efficiency because of its highly sparse structure; (3) it is incoherent with popular sparsity bases such as the wavelet basis and the gradient basis; (4) it provides fast and nearly optimal reconstructions. Moreover, the simulation results demonstrate the advantages of the proposed measurement matrix.
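The construction suggested by the matrix's name can be illustrated roughly as follows. This is a guess at one simple binary permuted block-diagonal layout with a single "1" per column, not necessarily the exact construction in the paper; sizes and the sparse test signal are hypothetical.

```python
import numpy as np

def bpbd_matrix(m, n, seed=0):
    """Toy binary permuted block-diagonal measurement matrix: a block-diagonal
    matrix of all-ones row blocks whose columns are then randomly permuted,
    so each column holds exactly one '1'."""
    assert n % m == 0, "for simplicity, let the block width divide n"
    block = n // m
    A = np.zeros((m, n))
    for i in range(m):
        A[i, i * block:(i + 1) * block] = 1.0      # one all-ones block per row
    rng = np.random.default_rng(seed)
    return A[:, rng.permutation(n)]                # permute the columns

if __name__ == "__main__":
    Phi = bpbd_matrix(4, 16)
    x = np.zeros(16); x[[2, 9]] = [1.5, -0.7]      # a sparse test signal
    print(Phi @ x)                                 # compressed measurements
    print((Phi != 0).sum(axis=0))                  # exactly one '1' per column
```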
  • Takahiro Ogawa, Miki Haseyama
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING 2010 2010年 [査読有り][通常論文]
     
    This paper presents a simple and effective missing texture reconstruction method based on a perceptually optimized algorithm. The proposed method utilizes the structural similarity (SSIM) index as a new visual quality measure for reconstructing missing areas. Furthermore, in order to adaptively reconstruct target images containing several kinds of textures, the following two novel approaches are introduced into the SSIM-based reconstruction algorithm. First, the proposed method performs SSIM-based selection of the optimal known local textures to adaptively obtain subspaces for reconstructing missing textures. Second, missing texture reconstruction that maximizes the SSIM index in the known neighboring areas is performed. In this approach, the nonconvex maximization problem is reformulated as a quasi-convex problem, and adaptive reconstruction of the missing textures based on the perceptually optimized algorithm becomes feasible. Experimental results show impressive improvements of the proposed method over previously reported reconstruction methods.
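Since the reconstruction above is driven by the structural similarity (SSIM) index, a minimal single-window (global) SSIM computation using the standard constants is sketched below. It illustrates the quality measure only, not the paper's quasi-convex optimization; the test patches are hypothetical.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Global SSIM between two same-sized patches (single window), using the
    standard constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((32, 32))
    print(ssim_global(a, a))                                    # 1.0 for identical patches
    print(ssim_global(a, a + 0.1 * rng.standard_normal((32, 32))))
```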
  • KIM Sunmi, TANAKA Hirokazu, OGAWA Takahiro, HASEYAMA Miki
    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences 93 12 2763_e1 - 2763_e1 The Institute of Electronics, Information and Communication Engineers 2010年
  • Yasutaka Hatakeyama, Takahiro Ogawa, Satoshi Asamizu, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E92A 8 1961 - 1969 2009年08月 [査読有り][通常論文]
     
    A novel video retrieval method based on Web community extraction using audio and visual features and textual features of video materials is proposed in this paper. In this proposed method, canonical correlation analysis is applied to these three features calculated from video materials and their Web pages, and transformation of each feature into the same variate space is possible. The transformed variates are based on the relationships between visual, audio and textual features of video materials, and the similarity between video materials in the same feature space for each feature can be calculated. Next, the proposed method introduces the obtained similarities of video materials into the link relationship between their Web pages. Furthermore, by performing link analysis of the obtained weighted link relationship, this approach extracts Web communities including similar topics and provides the degree of attribution of video materials in each Web community for each feature. Therefore, by calculating similarities of the degrees of attribution between the Web communities extracted from the three kinds of features, the desired ones are automatically selected. Consequently, by monitoring the degrees of attribution of the obtained Web communities, the proposed method can perform effective video retrieval. Some experimental results obtained by applying the proposed method to video materials obtained from actual Web pages are shown to verify the effectiveness of the proposed method.
  • Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E92A 8 1950 - 1960 2009年08月 [査読有り][通常論文]
     
    In this paper, a method for adaptive reconstruction of missing textures based on kernel canonical correlation analysis (CCA) with a new clustering scheme is presented. The proposed method estimates the correlation between two areas, which respectively correspond to a missing area and its neighboring area, from known parts within the target image and realizes reconstruction of the missing texture. In order to obtain this correlation, the kernel CCA is applied to each cluster containing the same kind of textures, and the optimal result is selected for the target missing area. Specifically, a new approach monitoring errors caused in the above kernel CCA-based reconstruction process enables selection of the optimal result. This approach provides a solution to the problem in traditional methods of not being able to perform adaptive reconstruction of the target textures due to missing intensities. Consequently, all of the missing textures are successfully estimated by the optimal cluster's correlation, which provides accurate reconstruction of the same kinds of textures. In addition, the proposed method can obtain the correlation more accurately than our previous works, and more successful reconstruction performance can be expected. Experimental results show impressive improvement of the proposed reconstruction technique over previously reported reconstruction techniques.
  • Tomoki Hiramatsu, Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E92A 8 1939 - 1949 2009年08月 [査読有り][通常論文]
     
    In this paper, an ER (Error-Reduction) algorithm-based method for removal of adherent water drops from images obtained by a rear view camera mounted on a vehicle in rainy conditions is proposed. Since Fourier-domain and object-domain constraints are needed for any ER algorithm-based method, the proposed method introduces the following two novel constraints for the removal of adherent water drops. The first one is the Fourier-domain constraint that utilizes the Fourier transform magnitude of the previous frame in the obtained images as that of the target frame. Noting that images obtained by the rear view camera have the unique characteristic of objects moving like ripples, because the rear view camera generally uses a fish-eye lens for a wide view angle, the proposed method assumes that the Fourier transform magnitudes of the target frame and the previous frame are the same in the polar coordinate system. The second constraint is the object-domain constraint that utilizes intensities in an area of the target frame to which water drops have adhered. Specifically, the proposed method models a deterioration process of intensities that are corrupted by the water drop adhering to the rear view camera lens. By utilizing these novel constraints, the proposed ER algorithm can remove adherent water drops from images obtained by the rear view camera. Experimental results that verify the performance of the proposed method are presented.
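The error-reduction (ER) iteration mentioned above alternates a Fourier-domain magnitude constraint with an object-domain constraint. Below is a generic, heavily simplified ER sketch in which the full Fourier magnitude is assumed known and the object-domain constraint simply re-imposes known pixels; the paper's ripple-motion assumption and water-drop deterioration model are not reproduced, and the test image is synthetic.

```python
import numpy as np

def error_reduction(magnitude, known, known_values, n_iter=200, seed=0):
    """Generic ER iteration: enforce the given Fourier magnitude, then
    re-impose the known pixels in the image domain."""
    rng = np.random.default_rng(seed)
    img = np.where(known, known_values, rng.random(known.shape))
    for _ in range(n_iter):
        spectrum = np.fft.fft2(img)
        phase = np.exp(1j * np.angle(spectrum))
        img = np.real(np.fft.ifft2(magnitude * phase))    # Fourier-domain constraint
        img = np.where(known, known_values, img)          # object-domain constraint
    return img

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    original = rng.random((64, 64))
    known = rng.random((64, 64)) > 0.2                    # 20% of pixels missing
    restored = error_reduction(np.abs(np.fft.fft2(original)), known,
                               np.where(known, original, 0.0))
    print(float(np.mean(np.abs(restored - original))))    # reconstruction error
```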
  • Shigeki Takahashi, Takahiro Ogawa, Hirokazu Tanaka, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E92A 3 779 - 787 2009年03月 [査読有り][通常論文]
     
    A novel error concealment method using a Kalman filter is presented in this paper. In order to successfully utilize the Kalman filter, its state transition and observation models suitable for video error concealment are newly defined as follows. The state transition model represents the video decoding process by motion-compensated prediction. Furthermore, a new observation model that represents an image blurring process is defined, and calculation of the Kalman gain becomes possible. The problem of the traditional methods is solved by using the Kalman filter in the proposed method, and accurate reconstruction of corrupted video frames is achieved. Consequently, an effective error concealment method using the Kalman filter is realized. Experimental results showed that the proposed method has better performance than that of traditional methods.
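To make the state-transition/observation structure described above concrete, a generic linear Kalman filter predict-and-update step is sketched below on a toy one-dimensional tracking example. The paper's motion-compensated prediction and blurring models are not reproduced; F, H, Q and R here are placeholder matrices.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict + update step of a linear Kalman filter.
    x: state estimate, P: state covariance, z: new observation."""
    # Predict with the state-transition model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the observation model.
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    # Hypothetical 1-D constant-velocity toy example.
    F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
    Q = 1e-3 * np.eye(2); R = np.array([[0.1]])
    x, P = np.zeros(2), np.eye(2)
    for z in [0.9, 2.1, 3.0, 4.2]:
        x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
    print(x)   # position/velocity estimate after four noisy observations
```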
  • Tomoki Hiramatsu, Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E92A 2 577 - 584 2009年02月 [査読有り][通常論文]
     
    In this paper, a Kalman filter-based method for restoration of video images acquired by an in-vehicle camera in foggy conditions is proposed. In order to realize Kalman filter-based restoration, the proposed method clips local blocks from the target frame by using a sliding window and regards the intensities in each block as elements of the state variable of the Kalman filter. Furthermore, the proposed method designs the following two models for restoration of foggy images. The first one is an observation model, which represents a fog deterioration model. The proposed method automatically determines all parameters of the fog deterioration model from only the foggy images to design the observation model. The second one is a non-linear state transition model, which represents the target frame in the original video image from its previous frame based on motion vectors. By utilizing the observation and state transition models, the correlation between successive frames can be effectively utilized for restoration, and accurate restoration of images obtained in foggy conditions can be achieved. Experimental results show that the proposed method has better performance than that of the traditional method based on the fog deterioration model.
  • Yasutaka Hatakeyama, Takahiro Ogawa, Satoshi Asamizu, Miki Haseyama
    2009 16TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-6 805 - + 2009年 [査読有り][通常論文]
     
    This paper presents a Web community-based video retrieval method using canonical correlation analysis (CCA). In the proposed method, two novel approaches are introduced into the retrieval scheme of video materials on the Web. First, the CCA is applied to three kinds of video features: visual and audio features of video materials and textual features obtained from Web pages containing those video materials. This approach provides a solution to the problem of traditional methods not being able to calculate similarities between different kinds of video features. Furthermore, from the obtained similarities and link relationships of Web pages, a new adjacency matrix is defined, and link analysis can be applied to this matrix. Then, the Web communities of the video materials whose topics are similar to each other can be automatically extracted based on their features. Therefore, by ranking the video materials in the obtained Web community, accurate video retrieval can be realized.
  • Takahiro Ogawa, Miki Haseyama
    2009 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOLS 1- 8, PROCEEDINGS 1165 - 1168 2009年 [査読有り][通常論文]
     
    This paper presents an adaptive reconstruction method of missing textures based on kernel canonical correlation analysis (CCA). The proposed method calculates the correlation between two areas, which respectively correspond to a missing area and its neighbor area, from known parts within the target image and realizes the estimation of the missing textures. In order to obtain this correlation, the kernel CCA is applied to each set containing the same kind of textures, and the optimal result is selected for the target missing area. Specifically, a new approach monitoring errors caused in the above estimation process enables the selection of the optimal result. This approach provides a solution to the problem in traditional methods of not being able to perform adaptive reconstruction of the target textures due to the missing intensities. Experimental results show subjective and quantitative improvement of the proposed reconstruction technique over previously reported reconstruction techniques.
  • Norihiro Kakukou, Takahiro Ogawa, Miki Haseyama
    2009 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOLS 1- 8, PROCEEDINGS 949 - 952 2009年 [査読有り][通常論文]
     
    This paper proposes a novel flow estimation method with a particle filter based on a Helmholtz decomposition theorem. The proposed method extends a model of the Helmholtz decomposition theorem and enables the decomposition of flows into rotational, divergent, and translational components. From the extended model, the proposed method defines a state transition model and an observation model of the particle filter. Furthermore, the proposed method derives an observation density of the particle filter from an energy function based on the Helmholtz decomposition theorem. By utilizing these novel approaches, the proposed method provides a solution to the problem in the traditional ones of not being able to realize an effective flow estimation with the particle filter based on rotation, divergence, and translation, which are important geometric features. Consequently, the proposed method can accurately estimate the flows.
  • 覚幸典弘, 小川貴弘, 長谷山美紀
    電子情報通信学会論文誌 D J92-D 3 2009年 [査読有り][通常論文]
  • Takahiro Ogawa, Miki Haseyama
    ISCE: 2009 IEEE 13TH INTERNATIONAL SYMPOSIUM ON CONSUMER ELECTRONICS, VOLS 1 AND 2 342 - 343 2009年 [査読有り][通常論文]
     
    This paper presents a projection onto convex sets (POCS)-based semantic image retrieval method and its performance verification. The main contributions of the proposed method are twofold: introduction of a nonlinear eigenspace of visual and semantic features into the constraint of the POCS-based semantic image retrieval algorithm, and adaptive selection of the annotated images utilized for this algorithm. Then, by combining these two approaches, the semantic features of the query image are successfully estimated, and accurate image retrieval can be expected. Finally, the relationship between the performance of the proposed method and the kinds of kernel functions utilized for the kernel PCA is shown in this paper.
  • Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E91A 8 1915 - 1923 2008年08月 [査読有り][通常論文]
     
    A projection onto convex sets (POCS)-based annotation method for semantic image retrieval is presented in this paper. Utilizing database images previously annotated by keywords, the proposed method estimates unknown semantic features of a query image from its known visual features based on a POCS algorithm, which includes two novel approaches. First, the proposed method semantically assigns database images to some clusters and introduces a nonlinear eigenspace of visual and semantic features in each cluster into the constraint of the POCS algorithm. This approach accurately provides semantic features for each cluster by using its visual features in the least squares sense. Furthermore, the proposed method monitors the error converged by the POCS algorithm in order to select the optimal cluster including the query image. By introducing the above two approaches into the POCS algorithm, the unknown semantic features of the query image are successfully estimated from its known visual features. Consequently, similar images can be easily retrieved from the database based on the obtained semantic features. Experimental results verify the effectiveness of the proposed method for semantic image retrieval.
  • Tomoki Hiramatsu, Takahiro Ogawa, Miki Haseyama
    2008 15TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-5 3160 - 3163 2008年 [査読有り][通常論文]
     
    In this paper, a Kalman filter-based approach for adaptive restoration of video images acquired by an in-vehicle camera in foggy conditions is proposed. In order to realize Kalman filter-based restoration, the proposed method regards the intensities in each frame as elements of the state variable of the Kalman filter and designs the following two models for restoration of foggy images. The first one is an observation model, which represents a fog deterioration model. The second one is a non-linear state transition model, which represents the target frame in the original video image from its previous frame based on motion vectors. By utilizing the observation and state transition models, the correlation between successive frames can be effectively utilized for restoration. Further, the proposed method introduces a new estimation scheme of the parameter, which determines the deterioration characteristic in foggy conditions, into the Kalman filter algorithm. Consequently, since automatic determination of the fog deterioration model, which specifies the observation model, from only the foggy images is realized, the accurate restoration can be achieved. Experimental results show that the proposed method has better performance than that of the traditional method based on the fog deterioration model.
  • Norihiro Kakukou, Takahiro Ogawa, Miki Haseyama
    2008 15TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-5 2336 - 2339 2008年 [査読有り][通常論文]
     
    This paper proposes a novel detection method of rotational and divergent structures in still images based on Helmholtz decomposition. These structures are mathematical features in vector analysis. Traditionally, some detection methods of these structures in image sequences have been proposed. By using the Helmholtz decomposition, which can decompose flows into rotational and divergent components, the traditional methods can detect these structures in image sequences. However, the rotational and divergent structures in still images cannot be detected with the traditional methods. Therefore, the proposed method introduces a new criterion into the traditional schemes in order to realize the detection of the rotational and divergent structures in still images. This criterion is derived from two properties based on relation between still images and the flows, which are composed of the rotational and divergent components. Consequently, the detection of the rotational and divergent structures in still images can be achieved.
  • Takahiro Ogawa, Miki Haseyama
    2008 15TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-5 965 - 968 2008年 [査読有り][通常論文]
     
    A kernel PCA-based semantic feature estimation approach for similar image retrieval is presented in this paper. Utilizing database images previously annotated by keywords, the proposed method estimates unknown semantic features of a query image. First, our method performs semantic clustering of the database images and derives a new map from a nonlinear eigenspace of visual and semantic features in each cluster. This map accurately provides the semantic features for the images belonging to each cluster by using their visual features. Further, in order to select the optimal cluster including the query image, the proposed method monitors errors of the visual features caused by the semantic feature estimation process. Then, even if any semantics of the query image are unknown, its semantic features are successfully estimated by the optimal cluster. Experimental results verify the effectiveness of the proposed method for semantic image retrieval.
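The nonlinear eigenspace used above is obtained with kernel PCA. The following sketch shows the textbook kernel PCA procedure (RBF Gram matrix, double centering, eigendecomposition, projection) on hypothetical data; the paper's semantic clustering and retrieval pipeline is not reproduced.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Minimal RBF kernel PCA: returns projections of X onto the leading
    kernel principal components."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # double-centre the Gram matrix
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                           # projections of the training points

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 5))
    print(kernel_pca(X, n_components=3).shape)   # (100, 3)
```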
  • Takahiro Ogawa, Miki Haseyama
    2008 IEEE International Conference on Image Processing, Proceedings 969 - 972 2008年 [査読有り][通常論文]
     
    A kernel PCA-based semantic feature estimation approach for similar image retrieval is presented in this paper. Utilizing database images previously annotated by keywords, the proposed method estimates unknown semantic features of a query image. First, our method performs semantic clustering of the database images and derives a new map from a nonlinear eigenspace of visual and semantic features in each cluster. This map accurately provides the semantic features for the images belonging to each cluster by using their visual features. Further, in order to select the optimal cluster including the query image, the proposed method monitors errors of the visual features caused by the semantic feature estimation process. Then, even if any semantics of the query image are unknown, its semantic features are successfully estimated by the optimal cluster. Experimental results verify the effectiveness of the proposed method for semantic image retrieval.
  • Masao Hiramoto, Takahiro Ogawa, Miki Haseyama
    Systems and Computers in Japan 38 13 15 - 27 2007年11月30日 [査読有り][通常論文]
     
    This paper proposes a method for general image recognition, motivated by the progress in increasing the pixel counts of image sensors and improving image quality. The method can also handle images that have undergone geometric transformations such as rotation and translation. The proposed method uses a voting system based on vectors: images are expressed by vectors representing intensity gradients and vectors indicating positions, and voting vectors and a similarity measure are defined for recognition. In addition, the proposed method has the property that, for identical images, voting locations concentrate at the origin, so that the voting results are not affected by geometric transformations. In experiments on natural images, including images that had undergone processing such as Gaussian or median filtering and JPEG compression, we found that distinct differences appeared in the similarities and that recognition was possible even when artificial processing had been applied to the images. Furthermore, when we examined recognition of images using the greatest number of voting points as an application of this method, we were able to show that the recognition capability was high and that a partial image contained in another image could also be recognized. © 2007 Wiley Periodicals, Inc.
  • Takahiro Ogawa, Miki Haseyama
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E90A 8 1519 - 1527 2007年08月 [査読有り][通常論文]
     
    A new framework for reconstruction of missing textures in digital images is introduced in this paper. The framework is based on a projection onto convex sets (POCS) algorithm including a novel constraint. In the proposed method, a nonlinear eigenspace of each cluster obtained by classification of known textures within the target image is applied to the constraint. The main advantage of this approach is that the eigenspace can approximate the textures classified into the same cluster in the least-squares sense. Furthermore, by monitoring the errors converged by the POCS algorithm, a selection of the optimal cluster to reconstruct the target texture including missing intensities can be achieved. This POCS-based approach provides a solution to the problem in traditional methods of not being able to perform the selection of the optimal cluster due to the missing intensities within the target texture. Consequently, all of the missing textures are successfully reconstructed by the selected cluster's eigenspaces which correctly approximate the same kinds of textures. Experimental results show subjective and quantitative improvement of the proposed reconstruction technique over previously reported reconstruction techniques.
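The POCS iteration described above alternates projections onto constraint sets. The sketch below shows the generic alternation with two deliberately simple sets, a linear PCA eigenspace and the set of images agreeing with the known pixels, rather than the paper's nonlinear (kernel) eigenspace constraint and cluster selection; all data are synthetic and the names are hypothetical.

```python
import numpy as np

def pocs_inpaint(y, known, mean, basis, n_iter=100):
    """Alternate projections: (1) onto the affine subspace mean + span(basis),
    (2) onto the set of vectors equal to y on the known entries."""
    x = np.where(known, y, mean)
    for _ in range(n_iter):
        x = mean + basis @ (basis.T @ (x - mean))   # project onto the eigenspace
        x = np.where(known, y, x)                   # re-impose the known pixels
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical training patches lying near a low-dimensional subspace.
    latent = rng.standard_normal((500, 4))
    mix = rng.standard_normal((4, 64))
    train = latent @ mix
    mean = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    basis = Vt[:4].T                                # orthonormal eigenspace basis
    target = rng.standard_normal(4) @ mix
    known = rng.random(64) > 0.3                    # 30% of entries missing
    restored = pocs_inpaint(np.where(known, target, 0.0), known, mean, basis)
    print(float(np.mean(np.abs(restored - target))))
```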
  • Takahiro Ogawa, Miki Haseyama
    2007 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-7 3 1229 - 1232 2007年 [査読有り][通常論文]
     
    In this paper, a new framework for texture reconstruction of missing areas, which exist all over the target image, is presented. The framework is based on a projection onto convex sets (POCS) algorithm including a novel constraint. In the proposed method, a nonlinear eigenspace of each cluster obtained by texture classification is applied to the constraint. Furthermore, by monitoring the errors converged by the POCS algorithm, selection of the optimal cluster for the target texture including missing intensities is realized in order to reconstruct it adaptively. Then, iterating the POCS-based procedures, our method renews the nonlinear eigenspaces and the reconstruction image, and outputs the reliable result. This approach provides a solution to the problem in traditional methods of not being able to perform adaptive reconstruction of the target textures due to the missing intensities. Experimental results show subjective and quantitative improvement of the proposed reconstruction technique over previously reported reconstruction techniques.
  • Takahiro Ogawa, Miki Haseyama
    2007 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL I, PTS 1-3, PROCEEDINGS 1 697 - 700 2007年 [査読有り][通常論文]
     
    This paper presents a missing texture reconstruction method based on projection onto convex sets (POCS). The proposed method classifies textures within the target image into several clusters in a high-dimensional texture feature space. Further, for the target missing texture, our method adopts a novel approach that monitors the errors caused by the POCS algorithm in the feature space and adaptively selects the optimal cluster including similar textures. Then, the missing texture is restored from these similar textures by a new POCS-based nonlinear subspace projection scheme. Consequently, since the proposed method realizes a nonconventional adaptive technique using the optimal nonlinear subspace, an accurate restoration result can be obtained. Experimental results show that our method achieves higher performance than the traditional method.
  • Takahiro Ogawa, Miki Haseyama, Hideo Kitajima
    Systems and Computers in Japan 37 3 49 - 57 2006年03月 [査読有り][通常論文]
     
    This paper proposes an accurate method for the restoration of missing intensities of still images by using the optical flow. It is important in restoration to reconstruct missing edges correctly. Therefore, this paper modifies the optical flow conventionally used for motion analysis in video images and applies it to the restoration of missing intensities. Further, the proposed method introduces a new index expressing the correlation of intensities between two pixels into the scheme for calculation of the optical flow in order to obtain a flow which gives more accurate estimated values. The optical flow calculated by this index provides the pixel from the neighborhood whose intensity is most similar to that of the target pixel, so that the estimated intensity is not affected by pixels whose intensities are quite different. Consequently, even when multiple edges pass through the missing area or the direction of the edge changes significantly inside the area, the proposed method can reconstruct the edges correctly. Some experimental results are presented in order to verify the high performance of the proposed method. © 2006 Wiley Periodicals, Inc.
  • Norihiro Kakukou, Takahiro Ogawa, Miki Haseyama, Hideo Kitajima
    2006 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP 2006, PROCEEDINGS 2701 - + 2006年 [査読有り][通常論文]
     
    This paper proposes an effective image enlargement method based on an Iterated Function System (IFS), which is traditionally used for image coding. The IFS can reconstruct an image of a size different from that of the coding target image. Based on this property, several image enlargement methods using the IFS have been proposed. However, the images enlarged by the traditional methods suffer from block noise and edge discontinuity at the boundaries between neighboring range blocks, which are the processing units in the IFS. The reasons for these problems are that the traditional methods use non-overlapping range blocks and do not consider edge continuity at the boundaries between neighboring range blocks. Therefore, the proposed method allows selection of overlapping range blocks in order to avoid the block noise. Further, the proposed method introduces a line process, which is used for edge detection, into the enlargement procedure. The edges obtained by using the line process retain edge continuity, and therefore the images enlarged by the proposed method retain edge continuity as well. Consequently, accurate image enlargement can be achieved.
  • T Ogawa, M Haseyama, H Kitajima
    2005 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), VOLS 1-6, CONFERENCE PROCEEDINGS 4931 - 4934 2005年 [査読有り][通常論文]
     
    This paper proposes a GMRF-model based restoration method of missing areas in still images. The GMRF model used in the proposed method is realized by a new assumption that reasonably holds for an image source. This model can express important image features such as edges because of the use of the new assumption. Therefore, the proposed method restores the missing areas using the modified GMRF model and can correctly reconstruct the missing edges. Consequently, the proposed method achieves more accurate restoration than those of the traditional methods on both objective and subjective measures. Extensive experimental results demonstrate the improvement of the proposed method over the previous methods.
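As a very loose illustration of MRF-style restoration of missing intensities, and not the paper's modified GMRF model, the sketch below fills unknown pixels by Jacobi iterations of 4-neighbour averaging: the fill implied by a simple first-order GMRF prior with the known pixels held fixed (periodic boundaries are used for brevity, and the test image is synthetic).

```python
import numpy as np

def gmrf_fill(img, known, n_iter=500):
    """Fill unknown pixels by repeatedly replacing each with the mean of its
    4-neighbours, keeping the known pixels fixed."""
    x = np.where(known, img, img[known].mean())
    for _ in range(n_iter):
        up    = np.roll(x,  1, axis=0)
        down  = np.roll(x, -1, axis=0)
        left  = np.roll(x,  1, axis=1)
        right = np.roll(x, -1, axis=1)
        avg = (up + down + left + right) / 4.0
        x = np.where(known, img, avg)                 # only unknown pixels change
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:32, 0:32]
    smooth = np.sin(yy / 8.0) + np.cos(xx / 6.0)      # a smooth test image
    known = rng.random((32, 32)) > 0.3                # 30% of pixels missing
    filled = gmrf_fill(smooth, known)
    print(float(np.mean(np.abs(filled - smooth)[~known])))
```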
  • T Ogawa, M Haseyama, H Kitajima
    2005 International Conference on Image Processing (ICIP), Vols 1-5 2 1389 - 1392 2005年 [査読有り][通常論文]
     
    This paper presents a novel reconstruction method for missing textures using an error-reduction algorithm, which is one of the phase retrieval methods. The proposed method estimates the Fourier transform magnitude of the missing area from another area in the obtained image whose texture is similar. In order to realize this, a novel approach that monitors the errors caused by the error-reduction algorithm is introduced into the selection scheme for the similar texture. Further, the proposed method estimates the phase of the target area by using the error-reduction algorithm modified for texture reconstruction and can restore the missing area accurately. Experimental results show that the proposed method achieves more accurate restoration than the traditional methods.
  • Takahiro Ogawa, Satomi Ota, Shin-Ichi Ito, Yasue Mitsukura, Minoru Fukumi, Norio Akamatsu
    Knowledge-Based Intelligent Information and Engineering Systems 657 - 663 2005年
  • Takahiro Ogawa, Miki Haseyama, Hideo Kitajima
    Proceedings - IEEE International Symposium on Circuits and Systems 4931 - 4934 2005年 [査読有り][通常論文]
     
    This paper proposes a GMRF-model based restoration method of missing areas in still images. The GMRF model used in the proposed method is realized by a new assumption that reasonably holds for an image source. This model can express important image features such as edges because of the use of the new assumption. Therefore, the proposed method restores the missing areas using the modified GMRF model and can correctly reconstruct the missing edges. Consequently, the proposed method achieves more accurate restoration than those of the traditional methods on both objective and subjective measures. Extensive experimental results demonstrate the improvement of the proposed method over the previous methods. © 2005 IEEE.
  • M Hiramoto, T Ogawa, M Haseyama
    ICIP: 2004 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1- 5 2 3049 - 3052 2004年 [査読有り][通常論文]
     
    This paper introduces a method for recognizing images using a new approach to expressing images as vectors. Using this expression method, an image is constructed from 2 types of vectors - vectors indicating positions and vectors denoting intensity gradients for those positions. When investigating the amount of difference between two images, similarities are evaluated by calculating voting densities in the image space, using the vectors making up the sample image in relation to the vectors expressing the reference image. The expression proposed is invariant to image rotation and by changing the resolution hierarchically, recognition using this expression is also adaptable to perspective and detail. Using this method, we carried out experimentation recognizing representative images from various fields and the results show that the method is effective in discriminating between them.
  • Kalman Filter-Based Error Concealment for Video Transmission
    [査読有り][通常論文]
  • オプティカルフローを用いた静止画像における失われた輝度値の復元
    [査読有り][通常論文]
  • 輝度勾配ベクトルを用いた画像識別方法
    [査読有り][通常論文]
  • GMRFモデルを用いた静止画像における失われた輝度値の復元
    [査読有り][通常論文]
  • 携帯電話を用いた救急救命のための情報提供システム
    [査読有り][通常論文]

その他活動・業績

受賞

  • 2023年10月 2023 IEEE 12th Global Conference on Consumer Electronics, Bronze Prize GCCE2023 Excellent Student Poster Award
  • 2023年10月 2023 IEEE 12th Global Conference on Consumer Electronics, Silver Prize GCCE2023 Excellent Paper Award
  • 2023年07月 2023 ICCE-TW Best Paper Award Honorable Mention
  • 2023年02月 2022 IEEE Sapporo Section Encouragement Award 3件
  • 2023年02月 2022 IEEE Sapporo Section Student Paper Contest, Encouraging Prize
  • 2023年01月 International Workshop on Advanced Image Technology (IWAIT2023) Best Paper Award
  • 2022年12月 映像情報メディア学会 優秀研究発表賞
  • 2022年12月 令和4年度電気・情報関係学会北海道支部連合大会 3件
  • 2022年10月 2022 IEEE 11th Global Conference on Consumer Electronics, Bronze Prize GCCE2022 Excellent Student Paper Award
  • 2022年10月 2022 IEEE 11th Global Conference on Consumer Electronics, Silver Prize GCCE2022 Excellent Poster Award
  • 2022年10月 2022 IEEE 11th Global Conference on Consumer Electronics, Silver Prize GCCE2022 Excellent Student Poster Award
  • 2022年09月 土木学会 土木情報学システム開発賞
  • 2022年08月 MIRU 2022 学生奨励賞2件
  • 2022年03月 IEEE LifeTech 2022 WIE Excellent Poster Award
  • 2022年02月 2021 IEEE Sapporo Section Student Paper Contest, Best Presentation Award
  • 2022年02月 2021 IEEE Sapporo Section Encouragement Award 2件
  • 2022年01月 International Workshop on Advanced Image Technology (IWAIT2022) Best Paper Award
  • 2021年12月 映像情報メディア学会 優秀研究発表賞
  • 2021年12月 令和3年度電気・情報関係学会北海道支部連合大会 若手優秀論文発表賞 3件
  • 2021年10月 The 1st Hokkaido Young Professionals Workshop Best Student Presentation Award
  • 2021年10月 2021 IEEE 10th Global Conference on Consumer Electronics, Gold Prize GCCE2021 Excellent Poster Award
  • 2021年10月 2021 IEEE 10th Global Conference on Consumer Electronics, Gold Prize GCCE2021 Excellent Student Poster Award
  • 2021年10月 2021 IEEE 10th Global Conference on Consumer Electronics, Silver Prize GCCE2021 Excellent Student Poster Award
  • 2021年10月 2021 IEEE 10th Global Conference on Consumer Electronics, GCCE2021 Outstanding Paper Award
  • 2021年06月 映像情報メディア学会丹羽高柳賞論文賞
  • 2021年03月 IEEE LifeTech 2021 Excellent Student Paper Award for Oral Presentation, 2nd Prize
  • 2021年03月 2021 IEEE 3rd Global Conference on Life Sciences and Technologies, Excellent Poster (On-site) Award Winners: Bronze Prize
  • 2021年03月 ACM Multimedia Asia 2020, Best Paper Runner-up Award
  • 2021年02月 2020 IEEE Sapporo Section Student Paper Awards, Best Paper Award
  • 2021年02月 2020 IEEE Sapporo Section Student Paper Awards, Encouragement Paper Award
  • 2020年11月 令和2年度電気・情報関係学会北海道支部連合大会 若手優秀論文発表賞 3件
  • 2020年10月 2020 IEEE 9th Global Conference on Consumer Electronics, Gold Prize IEEE GCCE2020 Excellent Student Paper Award
  • 2020年10月 2020 IEEE 9th Global Conference on Consumer Electronics, Gold Prize GCCE2020 Excellent Poster Award
  • 2020年10月 2020 IEEE 9th Global Conference on Consumer Electronics, Gold Prize IEEE GCCE2020 Excellent Demo! Award
  • 2020年10月 2020 IEEE 9th Global Conference on Consumer Electronics, Silver Prize IEEE GCCE2020 Excellent Paper Award
  • 2020年10月 2020 IEEE 9th Global Conference on Consumer Electronics, Bronze Prize GCCE2020 Excellent Paper Award
  • 2020年06月 映像情報メディア学会丹羽高柳賞論文賞
  • 2020年05月 2020 ICCE-TW Best Paper Award Honorable Mention
  • 2020年02月 The 2019 IEEE Sapporo Section Encouragement Award
  • 2020年02月 The 2019 IEEE Sapporo Section Student Paper Contest Encouraging Prize 3件
  • 2019年12月 令和元年度電気・情報関係学会北海道支部連合大会 若手優秀論文発表賞 2件
     
    受賞者: 小川 貴弘
  • 2019年12月 映像情報メディア学会 優秀研究発表賞
     
    受賞者: 小川 貴弘
  • 2019年10月 2019 IEEE 8th Global Conference on Consumer Electronics, Silver Prize IEEE GCCE 2019 Excellent Paper Award
     
    受賞者: 小川 貴弘
  • 2019年10月 2019 IEEE 8th Global Conference on Consumer Electronics, Outstanding Prize IEEE GCCE 2019 Excellent Demo! Award
     
    受賞者: 小川 貴弘
  • 2019年10月 2019 IEEE 8th Global Conference on Consumer Electronics, Silver Prize IEEE GCCE 2019 Excellent Poster Award
     
    受賞者: 小川 貴弘
  • 2019年03月 2019 IEEE 1st Global Conference on Life Sciences and Technologies, 2nd Prize IEEE Lifetech 2019 Excellent Paper Award
     
    受賞者: 小川 貴弘
  • 2019年02月 The 2018 IEEE Sapporo Section Encouragement Award 2件
     
    受賞者: 小川 貴弘
  • 2019年02月 The 2018 IEEE Sapporo Section Student Paper Contest Encouraging Prize
     
    受賞者: 小川 貴弘
  • 2019年01月 The 2019 joint International Workshop on Advanced Image Technology & International Forum on Medical Imaging in Asia IWAIT Best Paper Award
     
    受賞者: 小川 貴弘
  • 2018年12月 映像情報メディア学会 優秀研究発表賞
     
    受賞者: 小川 貴弘
  • 2018年12月 平成30年度電気・情報関係学会北海道支部連合大会 優秀論文発表賞
     
    受賞者: 小川 貴弘
  • 2018年10月 2018 IEEE 7th Global Conference on Consumer Electronics, 1st Prize IEEE GCCE 2018 Excellent Poster Award
     
    受賞者: 小川 貴弘
  • 2018年10月 2018 IEEE 7th Global Conference on Consumer Electronics, IEEE GCCE 2018 Outstanding Paper Award
     
    受賞者: 小川 貴弘
  • 2018年 The 2017 IEEE Sapporo Section Encouragement Award (2件)
     
    受賞者: 小川貴弘
  • 2018年 The 2017 IEEE Sapporo Section Student Paper Contest Encouraging Prize
     
    受賞者: 小川貴弘
  • 2018年 平成29年度電気・情報関係学会北海道支部連合大会 優秀論文発表賞
     
    受賞者: 小川貴弘
  • 2017年 2017 IEEE 6th Global Conference on Consumer Electronics, IEEE GCCE 2017 Outstanding Poster Award
     
    受賞者: 小川貴弘
  • 2017年 精密工学会画像応用技術専門委員会・映像情報メディア学会メディア工学研究委員会合同サマーセミナー 優秀発表賞
     
    受賞者: 小川貴弘
  • 2017年 電子情報通信学会 学術奨励賞
     
    受賞者: 小川貴弘
  • 2017年 The 2016 IEEE Sapporo Section Encouragement Award
     
    受賞者: 小川貴弘
  • 2017年 The 2016 IEEE Sapporo Section Student Paper Contest Encouraging Prize
     
    受賞者: 小川貴弘
  • 2017年 平成28年度電気・情報関係学会北海道支部連合大会 優秀論文発表賞
     
    受賞者: 小川貴弘
  • 2017年 International Workshop on Advanced Image Technology (IWAIT2017) Best Paper Award
     
    受賞者: 小川貴弘
  • 2016年 平成27年度 SIP若手奨励賞
     
    受賞者: 小川貴弘
  • 2016年 The 2015 IEEE Sapporo Section Encouragement Award (2件)
     
    受賞者: 小川貴弘
  • 2016年 The 2015 IEEE Sapporo Section Student Paper Contest Encouraging Prize
     
    受賞者: 小川貴弘
  • 2016年 平成27年度電気・情報関係学会北海道支部連合大会 優秀論文発表賞 (2件)
     
    受賞者: 小川貴弘
  • 2016年 2016 IEEE 5th Global Conference on Consumer Electronics 1st Prize IEEE GCCE 2016 Excellent Poster Award
     
    受賞者: 小川貴弘
  • 2016年 映像情報メディア学会 論文査読功労賞
     
    受賞者: 小川貴弘
  • 2015年 平成27年度 映像情報メディア学会 優秀研究発表賞
     
    受賞者: 小川貴弘
  • 2015年 The 2014 IEEE Sapporo Section Student Paper Contest Best Presentation Award
     
    受賞者: 小川貴弘
  • 2015年 International Workshop on Advanced Image Technology (IWAIT2015) Best Paper Award
     
    受賞者: 小川貴弘
  • 2015年 IEEE GCCE 2015 Excellent Poster Award
     
    受賞者: 小川貴弘
  • 2015年 IEEE GCCE 2015 Outstanding Poster Award
     
    受賞者: 小川貴弘
  • 2014年 IEEE GCCE 2014 Undergraduate Poster Award
     
    受賞者: 小川貴弘
  • 2013年 平成25年度電気・情報関係学会北海道支部連合大会 優秀論文発表賞
     
    受賞者: 小川貴弘
  • 2011年 平成23年度信号処理学生奨励賞 (2件)
     
    受賞者: 小川貴弘
  • 2011年 平成23年度電気関係学会北海道支部連合大会 若手優秀論文発表賞
     
    受賞者: 小川貴弘
  • 2011年 映像情報メディア学会 学生優秀発表賞
     
    受賞者: 小川貴弘
  • 2010年 平成22年度電気関係学会北海道支部連合大会 若手優秀論文発表賞
     
    受賞者: 小川貴弘
  • 2010年 2010 IEEE Sapporo Section Student Member Best Presentation Award
     
    受賞者: 小川貴弘
  • 2009年 電子情報通信学会論文賞
     
    受賞者: 小川貴弘
  • 2008年 平成20年度電気関係学会北海道支部連合大会 若手優秀論文発表賞
     
    受賞者: 小川貴弘
  • 2008年 2008 IEEE Sapporo Section Student Member Encouraging Prize
     
    受賞者: 小川貴弘
  • 2007年 平成19年度電気関係学会北海道支部連合大会 若手優秀論文発表賞
     
    受賞者: 小川貴弘
  • 2007年 IEEE International Conference on Consumer Electronics, IEEE Consumer Electronics Society Japan Chapter Young Scientist Paper Award
     
    受賞者: 小川貴弘
  • 2006年 2006 IEEE Sapporo Section Student Paper Contest Award
     
    受賞者: 小川貴弘
  • 2005年 精密工学会画像応用技術専門委員会・映像情報メディア学会メディア工学研究委員会合同サマーセミナー優秀発表賞
     
    受賞者: 小川貴弘
  • 2005年 平成17年度電気情報関係学会北海道支部連合大会 若手優秀論文発表賞
     
    受賞者: 小川貴弘
  • 2005年 映像情報メディア学会 研究奨励賞
     
    受賞者: 小川貴弘

共同研究・競争的資金等の研究課題

  • 日本学術振興会:科学研究費助成事業 基盤研究(B)
    研究期間 : 2021年04月 -2026年03月 
    代表者 : 小川 貴弘, 前田 圭介, 藤後 廉
     
    This project aims to construct a general-purpose deep learning theory that achieves ultra-low computational cost and small model size for the edge-AI era. By fusing the principal investigator's work on low-complexity, low-capacity binary sparse representation with cross-modal embedding techniques, a new theory is built that can drastically reduce both the computational cost and the amount of training data required by AI. Specifically, state-of-the-art deep learning models are imitated by binary sparse representations, and knowledge is further transferred from other modalities, so that the high accuracy that is the strength of deep learning is retained while the computational cost and the scale of the training data are reduced simultaneously. The project also demonstrates the generality of the constructed theory and evaluates it on edge devices. The work is conducted together with the co-investigators: of the two work items, (1) reduction of computational cost by realizing model-cloning techniques is carried out by Ogawa and Togo, and (2) reduction of the training data scale by realizing cross-modal knowledge transfer is carried out by Ogawa and Maeda. In FY2021, a cross-modal embedding theory was constructed that maximizes the correlation between the intermediate-layer outputs of deep learning models and binary sparse representation coefficients. Specifically, cross-modal embedding was performed between real-valued data in the source domain and binary sparse representation coefficients, and dictionary learning for the binary sparse representation was made possible so that their correlation is maximized. Noting that the binary sparse representation coefficients are sparse data taking values of 0 or 1, a new cross-modal embedding theory with the constraint that the observed data take binary sparse values was realized. In addition, results on the constructed theory and its applications were actively published, and work applying the cross-modal embedding theory has been accepted at ICIP, one of the leading international conferences in the image processing field.
  • 国立研究開発法人日本医療研究開発機構(AMED)・医療機器等研究成果展開事業
    研究期間 : 2022年 -2025年
  • 日本学術振興会:科学研究費助成事業 基盤研究(B)
    研究期間 : 2020年04月 -2024年03月 
    代表者 : 安斉 俊久, 永井 利幸, 小川 貴弘, 横田 勲, 清水 厚志, 平田 健司, 小柴 生造, 櫻井 美佳
     
    Targeting a total of 500 HFpEF cases, patients were enrolled with detailed clinical information, using an Electronic Data Capture system, from 24 institutions nationwide including Hokkaido University Hospital under the following inclusion and exclusion criteria, and the various analyses were carried out in parallel. Inclusion criteria (outpatient or hospitalized heart failure cases): (1) age 20 years or older with heart failure symptoms/findings satisfying the Framingham heart failure diagnostic criteria; (2) left ventricular ejection fraction of 50% or higher and BNP above 100 pg/mL or N-terminal proBNP above 400 pg/mL; (3) written informed consent obtained from the patient. Exclusion criteria: (1) sepsis, (2) myocarditis, (3) obstructive hypertrophic cardiomyopathy, (4) restrictive cardiomyopathy, (5) severe valvular disease, (6) post heart transplantation or on the waiting list, (7) cardiac surgery scheduled within one month. Analyses: (1) heart failure multi-biomarker analysis, (2) array (genome-wide association) analysis, (3) comprehensive metabolome analysis, (4) artificial intelligence analysis. This year, following on from the previous year, enrollment of heart failure cases meeting the above criteria continued. As of the end of March 2022, 664 HFpEF cases, exceeding the target number, had been enrolled, and the biomarker, array, and metabolome analyses were completed. Regarding recording of gait videos under standardized conditions, using the recording application for which a patent application was filed last year, machine learning-based cluster analysis of gait patterns is being performed on the 192 cases for which gait videos have been collected; it has been found that unsupervised learning can predict the clinical frailty scale judged by clinicians with high discriminative ability, and a relationship with prognosis is also becoming apparent.
  • JSPS 研究拠点形成事業
    研究期間 : 2021年04月 -2024年03月
  • 国立研究開発法人日本医療研究開発機構(AMED)ムーンショット型研究開発事業
    研究期間 : 2021年 -2023年
  • 日本学術振興会:科学研究費助成事業 基盤研究(C)
    研究期間 : 2018年04月 -2022年03月 
    代表者 : 小川 貴弘, 長谷山 美紀
     
    This project aims to construct a convolutional sparse representation technique with low computational cost and small memory footprint. Specifically, by making the representation coefficients binary, sparse approximation based on nearest-neighbor basis search and dictionary learning using only simple additions become possible, realizing a low-complexity convolutional sparse representation that does not depend on any particular image quality metric. In FY2019, research and development on realizing the convolutional binary sparse representation was carried out. Specifically, a convolutional sparse representation scheme was introduced into the binary sparse representation theory established in the previous years, aiming at improved representation ability and a further reduction in computational cost. As a result, since the convolutional sparse representation can approximate target images accurately with sparser coefficients, a reduction in computational cost (fewer nearest-neighbor basis searches and fewer additions in dictionary learning) and improved approximation performance were achieved at the same time. In addition, the generality of the binary sparse representation was verified: it was shown that the representation can be applied without depending on the image quality metric, and that problems that previously could be handled only with metrics based on the mean squared error can also be handled with other metrics, in particular metrics whose partial derivatives are difficult to compute. Although many image quality metrics have been proposed, analytic optimization of sparse representations using them has been difficult, and this work provides a breakthrough for that problem. It was also shown that, by applying the method to actual image restoration problems, specifically super-resolution and inpainting, images can be reconstructed with high accuracy based on new image quality metrics.
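To make the "nearest-neighbour basis search with binary coefficients and addition-only dictionary update" idea described above concrete, here is a heavily simplified, non-convolutional sketch in which each patch is assigned a single binary coefficient; the names and data are hypothetical, and this is not the project's actual formulation.

```python
import numpy as np

def binary_sparse_code(patches, dictionary):
    """Return, for each patch, the index of the single atom whose binary
    coefficient is 1 (nearest atom by correlation); all other coefficients are 0."""
    norms = np.linalg.norm(dictionary, axis=0, keepdims=True) + 1e-12
    sims = patches @ (dictionary / norms)            # correlation with each atom
    return np.argmax(sims, axis=1)

def update_dictionary(patches, assign, n_atoms):
    """Addition-only style dictionary update: each atom becomes the mean of
    the patches assigned to it."""
    D = np.zeros((patches.shape[1], n_atoms))
    for k in range(n_atoms):
        members = patches[assign == k]
        if len(members):
            D[:, k] = members.mean(axis=0)
    return D

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patches = rng.standard_normal((1000, 16))        # hypothetical 4x4 patches
    D = rng.standard_normal((16, 8))                 # 8 atoms
    for _ in range(10):                              # alternate coding and update
        assign = binary_sparse_code(patches, D)
        D = update_dictionary(patches, assign, 8)
    print(np.bincount(assign, minlength=8))          # atom usage counts
```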
  • 日本学術振興会:科学研究費助成事業 基盤研究(B)
    研究期間 : 2017年04月 -2022年03月 
    代表者 : 長谷山 美紀, 高橋 翔, 小川 貴弘, 畠山 泰貴
     
    This project aims at next-generation high-accuracy retrieval that can accurately estimate user interest by exploiting sensor data, and realizes a super-multimodal human information analysis platform for this purpose. The platform enables integrated analysis not only of the multimedia content to be retrieved but also of users' behavior histories and the data from the diverse sensors surrounding them, and realized next-generation high-accuracy retrieval that exceeds the conventional limits of interest estimation accuracy. Specifically, it was built from the following four technologies. Technology 1: integrated analysis of multiple kinds of sensor data, which estimates interest using the sensors surrounding the user. Technology 2: hyper-graph analysis, which represents the relationships among different kinds of data as graphs to improve the accuracy of interest estimation. Technology 3: dynamic hyper-graph analysis, which further improves the interest estimation of Technology 2 by taking temporal changes of the data into account. Technology 4: heterogeneous-source data retrieval, which presents content from different information sources such as SNS based on the user interest estimated by Technologies 1 to 3 (in progress). Among the results of this research, the following achievement is noteworthy. Image generation and captioning models capable of representing relationships between heterogeneous modalities were constructed, realizing a retrieval method that does not depend on the kind of query given. Specifically, by introducing recent deep learning models centered on generative adversarial networks, content that accurately captures user interest can be generated, and retrieval results more accurate than those of state-of-the-art image retrieval methods can be presented. These theories have actually been implemented as systems, and it has been shown, for example, that multimedia content can be retrieved accurately even when the user gives a relatively unconstrained text sentence as the query.
  • 自治体による観光情報発信支援のためのサイバーフィジカルデータ解析プラットフォームに関する研究開発
    総務省:戦略的情報通信研究開発推進制度 (SCOPE) 重点領域型研究開発 (ICT重点研究開発分野推進型 3年枠)
    研究期間 : 2018年04月 -2021年03月 
    代表者 : 小川 貴弘
  • インフラ維持管理データサイエンスの高度化と体系化
    総務省:戦略的情報通信研究開発推進制度 (SCOPE) 重点領域型研究開発(ICT重点研究開発分野推進型 2年枠)
    研究期間 : 2018年04月 -2020年03月 
    代表者 : 小川 貴弘
  • 日本学術振興会:科学研究費助成事業 挑戦的萌芽研究
    研究期間 : 2015年04月 -2018年03月 
    代表者 : 長谷山 美紀, 小川 貴弘
     
    This project constructed a theory for deriving highly general cross-media bases that break through the accuracy limits in various fields of image processing such as image coding, restoration, recognition, and retrieval/recommendation. Specifically, by analyzing images and the data describing their captured content in an integrated manner, semantic understanding of each region of an image was realized, and at the same time bases giving its optimal approximation were derived. Since the cross-media bases derived in this research are highly general, they can be applied to various fields of image processing, and improvements in accuracy can be expected there. The project therefore showed that applying the obtained cross-media bases to a wide range of application fields can produce breakthroughs in each field.
  • 日本学術振興会:科学研究費助成事業 基盤研究(B)
    研究期間 : 2013年04月 -2017年03月 
    代表者 : 長谷山 美紀, 小川 貴弘, 八木 伸行
     
    This project constructed a system that generates multimedia content adaptively according to user behavior. Specifically, by extracting the relationships between user behavior and multimedia content based on a cross-media correlation analysis method, the constantly changing preferences of users were modeled, and a system that generates new kinds of multimedia content from the results was realized. Furthermore, by having multiple users use the realized system, a system in which users can recommend content to one another was constructed. The principal investigator has demonstrated the recommendation system constructed in this research in various settings and has been evaluating it.
  • 日本学術振興会:科学研究費助成事業 新学術領域研究(研究領域提案型)
    研究期間 : 2012年06月 -2017年03月 
    代表者 : 野村 周平, 長谷山 美紀, 古崎 晃司, 篠原 現人, 溝口 理一郎, 来村 徳信, 松浦 啓一, 上田 恵介, 松原 始, 山崎 剛史, 小川 貴弘, 土屋 広司, 河合 俊郎
     
    The biology-side researchers accumulated a dataset of more than 30,000 items consisting of SEM images and text data of insects, birds, and fish. The informatics-side researchers realized an ontology-assisted image retrieval system based on these data. As originally planned, the system was released on the Internet. As outreach activities, exhibitions were held at museums, including a special exhibition on biomimetics at the National Museum of Nature and Science (April-June 2016), and a book for general readers was published in March 2016.
  • 日本学術振興会:科学研究費助成事業 若手研究(B)
    研究期間 : 2010年04月 -2014年03月 
    代表者 : 小川 貴弘
     
    This project constructed algorithms that reconstruct high-quality images based on representation bases obtained from training images in a database. Specifically, it realized (1) simultaneous removal of multiple degradation factors by modeling high-quality images, (2) derivation of representation bases for obtaining high-quality images, and (3) adaptive selection of the representation bases optimal for the target content. As a result, the limits of reconstruction accuracy that had existed in the conventional restoration fields of error concealment, coding noise removal, and resolution enhancement were raised.
  • 日本学術振興会:科学研究費助成事業 基盤研究(B)
    研究期間 : 2009年 -2012年 
    代表者 : 長谷山 美紀, 小川 貴弘, 荒木 健治
     
    Based on the overall concept of constructing retrieval theories that respond to the ambiguous requests users have for media such as images, video, and music, a method was realized that, when a user gives video, images, or music as a query, estimates the desired content and presents it effectively. Specifically, the retrieval targets were restricted to the three media of images, video, and music, cross-media retrieval was realized, and the desired content can be obtained even when the user cannot provide a keyword or a query of the same medium as the content.
  • デジタルコンテンツの印象語(感性メタデータ)を付加する処理の研究開発
    総務省:戦略的情報通信研究開発推進制度
    研究期間 : 2009年04月 -2010年03月 
    代表者 : 長谷山 美紀
  • 日本学術振興会:科学研究費助成事業 特別研究員奨励費
    研究期間 : 2005年 -2007年 
    代表者 : 小川 貴弘
     
    The algorithms previously used for restoring missing areas were applied to image resolution enhancement and to the estimation of semantic features of images for similar-image retrieval. First, for resolution enhancement, the kernel principal component analysis-based estimation of missing information proposed so far was applied by regarding the known information as the low-frequency components of the low-resolution image and the unknown information as the high-frequency components that the original high-resolution image should have had, which made estimation of the lost high-frequency components possible. Whereas recent single-frame super-resolution methods require high-resolution training images in order to estimate the lost high-frequency components, our approach uses the correlation between different resolution levels of the image itself and can therefore estimate the lost high-frequency components without training data, which was confirmed to be very effective. Furthermore, we attempted an application not only to resolution enhancement but also to similar-image retrieval, a field different from restoration and reconstruction. In general, in order to perform similar-image retrieval with high accuracy for a target image, it is necessary not simply to compute distances between image features but to estimate semantic features from the image features and compute distances between them. We proposed a kernel principal component analysis-based method for estimating semantic features of images, making it possible to estimate semantic features with high accuracy from the visual features of a given image. By using the obtained semantic features, similar-image retrieval based on image content, which had been difficult before, became possible. In this way, we have proposed new approaches not only in the fields of restoration and reconstruction but also in the rapidly developing field of retrieval.

教育活動情報

主要な担当授業

  • メディア表現論特論
    開講年度 : 2021年
    課程区分 : 修士課程
    開講学部 : 情報科学研究科
    キーワード : 情報の変換, 情報の符号化, メディア表現, メディア処理
  • メディア表現論特論
    開講年度 : 2021年
    課程区分 : 修士課程
    開講学部 : 情報科学院
    キーワード : 情報の変換, 情報の符号化, メディア表現, メディア処理
  • メディア表現論特論
    開講年度 : 2021年
    課程区分 : 博士後期課程
    開講学部 : 情報科学研究科
    キーワード : 情報の変換, 情報の符号化, メディア表現, メディア処理
  • メディア表現論特論
    開講年度 : 2021年
    課程区分 : 博士後期課程
    開講学部 : 情報科学院
    キーワード : 情報の変換, 情報の符号化, メディア表現, メディア処理
  • 信号処理
    開講年度 : 2021年
    課程区分 : 学士課程
    開講学部 : 工学部
    キーワード : 離散時間信号、フーリエ変換、離散時間フーリエ変換、z変換、離散時間システム
  • 画像処理応用
    開講年度 : 2021年
    課程区分 : 学士課程
    開講学部 : 工学部
    キーワード : 確率信号 フーリエ変換 線形予測 自己回帰モデル   
  • 画像解析論
    開講年度 : 2021年
    課程区分 : 学士課程
    開講学部 : 工学部
    キーワード : 画像処理 信号解析 画像圧縮 画像符号化

大学運営

委員歴

  • 2017年04月 - 現在   電子情報通信学会   イメージ・メディア・クォリティ研究専門委員会
  • 2012年04月 - 現在   The Institute of Image Information and Television Engineers   ITE Transactions on Media Technology and Applications, Associate Editor
  • 2012年04月 - 現在   映像情報メディア学会   メディア工学研究会 専門委員
  • 2011年04月 - 現在   電子情報通信学会   電子情報通信学会論文誌 常任査読委員
  • 2008年04月 - 現在   電気・情報関係学会北海道支部連合大会編集委員
  • 2023年01月 - 2023年12月   IEEE International Conference on Multimedia & Expo 2023 (ICME2023) Workshop (Fourth ICME Workshop on Artificial Intelligence in Sports (AI-Sports)) Organizer
  • 2022年04月 - 2023年03月   令和4年度電気・情報関係学会北海道支部連合大会 実行委員
  • 2023年 - 2023年   MIRU2023 エリアチェア
  • 2022年01月 - 2022年12月   IEEE International Conference on Multimedia & Expo 2022 (ICME2022) Workshop (Third ICME Workshop on Artificial Intelligence in Sports (AI-Sports)) Organizer
  • 2021年04月 - 2022年03月   令和3年度電気・情報関係学会北海道支部連合大会 実行委員
  • 2022年 - 2022年   MIRU2022 エリアチェア
  • 2021年01月 - 2021年12月   IEEE GCCE 2021 Organized Session Chair
  • 2021年01月 - 2021年12月   IEEE Lifetech2021 Organized Session Chair
  • 2021年01月 - 2021年12月   IEEE International Conference on Multimedia & Expo 2021 (ICME2021) Workshop (Second ICME Workshop on Artificial Intelligence in Sports (AI-Sports)) Organizer
  • 2021年 - 2021年   MIRU2021 エリアチェア
  • 2020年01月 - 2020年12月   IEEE Lifetech2020 Organized Session Chair
  • 2020年01月 - 2020年12月   IEEE GCCE2020 TPC Chair
  • 2020年01月 - 2020年12月   IEEE GCCE 2020 Organized Session Chair
  • 2020年01月 - 2020年12月   IEEE International Conference on Multimedia & Expo 2020 (ICME2020) Workshop (IEEE International Workshop of Artificial Intelligence in Sports (AI-Sports)) Organizer
  • 2020年01月 - 2020年12月   ICIPRob2020 International Program Committee Member
  • 2019年01月 - 2019年12月   IEEE GCCE2019 Conference Chair
  • 2018年 - 2018年   IEEE GCCE 2018   TPC Vice Chair
  • 2018年 - 2018年   ACM ICMR2018   Doctoral Symposium Chair
  • 2011年04月 - 2017年03月   電子情報通信学会   画像工学研究会 専門委員
  • 2017年 - 2017年   IEEE GCCE 2017   Organized Session Chair
  • 2009年 - 2009年   ISCE 2009 (13th International Symposium on Consumer Electronics)   Special Session Chair

