Researcher Database

Ren Togo (藤後 廉)
Education and Research Center for Mathematical and Data Science
Specially Appointed Assistant Professor

Basic Information

Affiliation

  • Education and Research Center for Mathematical and Data Science

Position

  • Specially Appointed Assistant Professor

Degree

  • Ph.D. (Information Science)

Author Name on Publications

  • Ren Togo

KAKENHI Researcher Number

  • 60840395

Profile

  • Mar. 2015: Graduated from the Radiological Technology program, Department of Health Sciences, School of Medicine, Hokkaido University.
    Mar. 2017: Completed the master's course, Graduate School of Information Science and Technology, Hokkaido University.
    Mar. 2019: Completed the doctoral course, Graduate School of Information Science and Technology, Hokkaido University (early completion).
    Apr. 2019: JSPS Research Fellow (PD).
    Feb. 2020: Specially Appointed Assistant Professor, Education and Research Center for Mathematical and Data Science, Hokkaido University.

    Engaged in research on interdisciplinary collaboration centered on medical images.
    Nationally licensed radiological technologist.
    IEEE member.

Research Keywords

  • Helicobacter pylori, X-ray, retrieval, signal processing, image processing, gastric cancer, MRI, medical imaging, PET, machine learning, image generation, deep learning

Research Fields

  • Information and Communication / Intelligent Informatics

Research Activities

Papers

  • Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    CoRR abs/2104.02864, 2021
  • Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    CoRR abs/2104.02857, 2021
  • Keigo Sakurai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    53-54, 2021
  • Saya Takada, Ren Togo, Takahiro Ogawa, Miki Haseyama
    51-52, 2021
  • Ren Togo, Naoki Saito, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama
    Sensors 21(6), 2088, 2021
  • Zongyao Li, Kazuhiro Kitajima, Kenji Hirata, Ren Togo, Junki Takenaka, Yasuo Miyoshi, Kohsuke Kudo, Takahiro Ogawa, Miki Haseyama
    EJNMMI RESEARCH 11(1), Jan. 2021 [Refereed]
     
    Background: To improve the diagnostic accuracy of axillary lymph node (LN) metastasis in breast cancer patients using 2-[F-18]FDG-PET/CT, we constructed an artificial intelligence (AI)-assisted diagnosis system that uses deep-learning technologies. Materials and methods: Two clinicians and the new AI system retrospectively analyzed and diagnosed 414 axillae of 407 patients with biopsy-proven breast cancer who had undergone 2-[F-18]FDG-PET/CT before a mastectomy or breast-conserving surgery with a sentinel LN biopsy and/or axillary LN dissection. We designed and trained a deep 3D convolutional neural network (CNN) as the AI model. The diagnoses from the clinicians were blended with the diagnoses from the AI model to improve the diagnostic accuracy. Results: Although the AI model did not outperform the clinicians, the diagnostic accuracies of the clinicians were considerably improved by collaborating with the AI model: the two clinicians' sensitivities of 59.8% and 57.4% increased to 68.6% and 64.2%, respectively, whereas the clinicians' specificities of 99.0% and 99.5% remained unchanged. Conclusions: It is expected that AI using deep-learning technologies will be useful in diagnosing axillary LN metastasis using 2-[F-18]FDG-PET/CT. Even if the diagnostic performance of AI is not better than that of clinicians, taking AI diagnoses into consideration may positively impact the overall diagnostic accuracy. [See the decision-fusion sketch after this list.]
  • Ren Togo, Haruna Watanabe, Takahiro Ogawa, Miki Haseyama
    COMPUTERS IN BIOLOGY AND MEDICINE 123, Aug. 2020 [Refereed][Regular paper]
     
    Aim: The aim of this study was to determine whether our deep convolutional neural network-based anomaly detection model can distinguish differences between esophagus images and stomach images obtained from gastric X-ray examinations. Methods: A total of 6012 subjects were analyzed as our study subjects. Since the number of esophagus X-ray images is much smaller than the number of gastric X-ray images taken in X-ray examinations, we took an anomaly detection approach to realize the task of organ classification. We constructed a deep autoencoding Gaussian mixture model (DAGMM) with a convolutional autoencoder architecture. The trained model can produce an anomaly score for a given test X-ray image. For comparison, the original DAGMM, AnoGAN, and a one-class support vector machine (OCSVM) trained with features obtained from a pre-trained Inception-v3 network were used. Results: Sensitivity, specificity, and the calculated harmonic mean of the proposed method were 0.956, 0.980, and 0.968, respectively. Those of the original DAGMM were 0.932, 0.883, and 0.907, respectively; those of AnoGAN were 0.835, 0.833, and 0.834, respectively; and those of OCSVM were 0.932, 0.935, and 0.934, respectively. Experimental results showed the effectiveness of the proposed method for an organ classification task. Conclusion: Our deep convolutional neural network-based anomaly detection model has shown potential for clinical use in organ classification. [See the autoencoder anomaly-scoring sketch after this list.]
  • Misaki Kanai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    WORLD JOURNAL OF GASTROENTEROLOGY 26(25), 3650-3659, Jul. 2020 [Refereed][Regular paper]
     
    BACKGROUND: The risk of gastric cancer increases in patients with Helicobacter pylori-associated chronic atrophic gastritis (CAG). X-ray examination can evaluate the condition of the stomach, and it can be used for gastric cancer mass screening. However, the number of doctors skilled in interpreting X-ray examinations is decreasing due to the diversification of inspection methods. AIM: To evaluate the effectiveness of stomach regions that are automatically estimated by a deep learning-based model for CAG detection. METHODS: We used 815 gastric X-ray images (GXIs) obtained from 815 subjects. The ground truth of this study was the diagnostic results in X-ray and endoscopic examinations. For part of the GXIs used for training, the stomach regions were manually annotated, and a model for automatic estimation of the stomach regions was trained with them. For the rest, the stomach regions were estimated automatically. Finally, a model for automatic CAG detection was trained with all the training GXIs. RESULTS: In the cases where the stomach regions were manually annotated for only 10 GXIs and 30 GXIs, the harmonic means of sensitivity and specificity of CAG detection were 0.955 +/- 0.002 and 0.963 +/- 0.004, respectively. CONCLUSION: By estimating stomach regions automatically, our method contributes to the reduction of the workload of manual annotation and the accurate detection of CAG.
  • Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING 58(6), 1239-1250, Jun. 2020 [Refereed][Regular paper]
     
    High-quality annotations for medical images are always costly and scarce. Many applications of deep learning in the field of medical image analysis face the problem of insufficient annotated data. In this paper, we present a semi-supervised learning method for chronic gastritis classification using gastric X-ray images. The proposed semi-supervised learning method, based on tri-training, can leverage unannotated data to boost the performance achieved with a small amount of annotated data. We utilize a novel learning method named Between-Class learning (BC learning) that can considerably enhance the performance of our semi-supervised learning method. As a result, our method can effectively learn from unannotated data and achieve high diagnostic accuracy for chronic gastritis. [See the tri-training sketch after this list.]
  • Saya Takada, Ren Togo, Takahiro Ogawa, Miki Haseyama
    99-100, 2020
  • Saya Takada, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2521-2525, 2020
  • Ren Togo, Takahiro Ogawa, Miki Haseyama
    2466-2470, 2020
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2431-2435, 2020
  • Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2426-2430, 2020
  • Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    305-309, 2020
  • Saya Takada, Ren Togo, Takahiro Ogawa, Miki Haseyama
    61-65, 2020
  • Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2263-2267, 2020
  • Keigo Sakurai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    942-943, 2020
  • Saya Takada, Ren Togo, Takahiro Ogawa, Miki Haseyama
    712-713, 2020
  • Nao Nakagawa, Ren Togo, Takahiro Ogawa, Miki Haseyama
    692-693, 2020
  • Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    667-669, 2020
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE ACCESS 8, 96777-96786, 2020 [Refereed][Regular paper]
     
    A new approach that drastically improves cross-modal retrieval performance in vision and language (hereinafter referred to as "vision and language retrieval") is proposed in this paper. Vision and language retrieval takes data of one modality as a query to retrieve relevant data of another modality, and it enables flexible retrieval across different modalities. Most of the existing methods learn optimal embeddings of visual and lingual information to a single common representation space. However, we argue that the forced embedding optimization results in loss of key information for sentences and images. In this paper, we propose an effective utilization of representation spaces in a simple but robust vision and language retrieval method. The proposed method makes use of multiple individual representation spaces through text-to-image and image-to-text models. Experimental results showed that the proposed approach enhances the performance of existing methods that embed visual and lingual information to a single common representation space.
  • Aesthetic style transfer through text-to-image synthesis and image-to-image translation
    Megumi Kotera, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE), 492-493, Oct. 2019 [Refereed][Regular paper]
  • Voice-input multimedia information retrieval system based on text-to-image GAN
    Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE), Oct. 2019 [Refereed][Regular paper]
  • Estimation of drilling energy from tunnel cutting face image based on online learning
    Kentaro Yamamoto, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE), 794-795, Oct. 2019 [Refereed][Regular paper]
  • Detection of distress region from subway tunnel images via U-net-based deep semantic segmentation
    An Wang, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE), Oct. 2019 [Refereed][Regular paper]
  • Ren Togo, Nobutake Yamamichi, Katsuhiro Mabe, Yu Takahashi, Chihiro Takeuchi, Mototsugu Kato, Naoya Sakamoto, Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    JOURNAL OF GASTROENTEROLOGY 54(4), 321-329, Apr. 2019 [Refereed][Regular paper]
     
    Background: Deep learning has become a new trend in image recognition tasks in the field of medicine. We developed an automated gastritis detection system using double-contrast upper gastrointestinal barium X-ray radiography. Methods: A total of 6520 gastric X-ray images obtained from 815 subjects were analyzed. We designed a deep convolutional neural network (DCNN)-based gastritis detection scheme and evaluated the effectiveness of our method. The detection performance of our method was compared with that of ABC (D) stratification. Results: Sensitivity, specificity, and the harmonic mean of sensitivity and specificity of our method were 0.962, 0.983, and 0.972, respectively; those of ABC (D) stratification were 0.925, 0.998, and 0.960, respectively. Although there were 18 false-negative cases in ABC (D) stratification, 14 of those 18 cases were correctly classified into the positive group by our method. Conclusions: Deep learning techniques may be effective for evaluation of gastritis/non-gastritis. Collaborative use of DCNN-based gastritis detection systems and ABC (D) stratification will provide more reliable gastric cancer risk information. [See the harmonic-mean metric sketch after this list.]
  • Ren Togo, Takahiro Ogawa, Osamu Manabe, Kenji Hirata, Tohru Shiga, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies (LifeTech 2019), 237-238, Mar. 2019 [Refereed][Regular paper]
     
    This paper presents a method for extracting regions that are important to deep learning models in the identification of cardiac sarcoidosis using polar map images. Although deep learning-based detection methods have been widely studied, they are still often called black boxes. Since high reliability of the results provided by computer-aided diagnosis systems is important for clinical applications, this problem should be solved. In this paper, we try to visualize regions that are important to deep learning-based models in order to improve clinicians' understanding. We monitor the variance of the confidence of a model constructed with deep learning-based features and define it as a contribution value toward the estimated label. We then visualize important regions for models based on the contribution value. [See the occlusion-style contribution-map sketch after this list.]
  • Haruna Watanabe, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies (LifeTech 2019), 235-236, Mar. 2019 [Refereed][Regular paper]
     
    In this paper, we propose a method to detect bone metastatic tumors using computed tomography (CT) images. Bone metastatic tumors spread from a primary cancer to other organs, and they can cause severe pain. Therefore, it is important to detect metastatic tumors early in addition to the primary cancer. However, since metastatic tumors are very small and emerge in unpredictable regions of the body, collecting metastatic tumor images is difficult compared to primary cancer. In such a case, the idea of anomaly detection is suitable. The proposed method, based on a generative adversarial network model, trains with only non-metastatic bone images and detects bone metastatic tumors in an unsupervised manner. An anomaly score is then defined for each test CT image. Experimental results show that the anomaly scores of non-metastatic and metastatic bone images are clearly different. The anomaly detection approach may be effective for the detection of bone metastatic tumors in CT images.
  • Misaki Kanai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies (LifeTech 2019), 196-197, Mar. 2019 [Refereed][Regular paper]
     
    This paper presents a method for detecting gastritis in gastric X-ray images using fine-tuning techniques. With the development of deep convolutional neural networks (DCNNs), DCNN-based methods have achieved more accurate performance than conventional machine learning methods using hand-crafted features in the field of medical image analysis. However, a lack of training images often occurs in clinical situations, even though DCNNs require a large amount of training data to avoid overfitting. Therefore, the proposed method targets clinical situations in which only a limited amount of training images is available. By fine-tuning a DCNN pre-trained with a large amount of annotated natural images, we avoid overfitting and realize accurate detection of gastritis with a small amount of training images. [See the fine-tuning sketch after this list.]
  • Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE 1st Global Conference on Life Sciences and Technologies (LifeTech 2019), 273-274, Mar. 2019 [Refereed][Regular paper]
     
    In this paper, we present a deep learning method for classifying subcellular protein patterns in human cells. Our method is mainly based on transfer learning and utilizes a recently proposed loss function named focal loss to deal with the severe class imbalance in this task. The performance of our method is evaluated by the macro F1 score over all 28 classes, and the final macro F1 score of our method is 0.706. [See the focal loss sketch after this list.]
  • Ren Togo, Naoki Saito, Takahiro Ogawa, Miki Haseyama
    IEEE ACCESS 7, 162395-162404, 2019 [Refereed][Regular paper]
     
    A method for estimating regions of deterioration in electron microscope images of rubber materials is presented in this paper. Deterioration of rubber materials is caused by molecular cleavage, external force, and heat. An understanding of these characteristics is essential in the field of material science for the development of durable rubber materials. Rubber material deterioration can be observed by using an electron microscope, but finding regions of deterioration requires much effort and specialized knowledge. In this paper, we propose an automated deterioration region estimation method based on deep learning and anomaly detection techniques to support such material development. Our anomaly detection model, called Transfer Learning-based Deep Autoencoding Gaussian Mixture Model (TL-DAGMM), uses only normal regions for training, since obtaining training data for regions of deterioration is difficult. TL-DAGMM makes use of high-representation features extracted from a pre-trained deep learning model and can automatically learn the characteristics of normal rubber material regions. Regions of deterioration are estimated at the pixel level from calculated anomaly scores. Experiments on real rubber material electron microscope images demonstrated the effectiveness of our model.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE ACCESS 7, 169920-169930, 2019 [Refereed][Regular paper]
     
    In this paper, we propose a novel scene retrieval and re-ranking method based on a text-to-image Generative Adversarial Network (GAN). The proposed method generates an image from an input query sentence based on the text-to-image GAN and then retrieves a scene that is the most similar to the generated image. By utilizing the image generated from the input query sentence as a query, we can control semantic information of the query image at the text level. Furthermore, we introduce a novel interactive re-ranking scheme to our retrieval method. Specifically, users can consider the importance of each word within the first input query sentence. Then the proposed method re-generates the query image that reflects the word importance provided by users. By updating the generated query image based on the word importance, it becomes feasible for users to revise retrieval results through this re-ranking process. In experiments, we showed that our retrieval method including the re-ranking scheme outperforms recently proposed retrieval methods.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE ACCESS 7, 153183-153193, 2019 [Refereed][Regular paper]
     
    Scene retrieval from input descriptions has been one of the most important applications with the increasing number of videos on the Web. However, this is still a challenging task since semantic gaps between features of texts and videos exist. In this paper, we try to solve this problem by utilizing a text-to-image Generative Adversarial Network (GAN), which has become one of the most attractive research topics in recent years. The text-to-image GAN is a deep learning model that can generate images from their corresponding descriptions. We propose a new retrieval framework, Query is GAN, based on the text-to-image GAN that drastically improves scene retrieval performance through simple procedures. Our novel idea makes use of images generated by the text-to-image GAN as queries for the scene retrieval task. In addition, unlike many studies on text-to-image GANs that mainly focused on the generation of high-quality images, we reveal that the generated images have reasonable visual features suitable for the queries even though they are not visually pleasant. We show the effectiveness of the proposed framework through experimental evaluation in which scene retrieval is performed on real video datasets. [See the generated-query retrieval sketch after this list.]
  • Saya Takada, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE), 479-480, 2019 [Refereed][Regular paper]
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE Global Conference on Consumer Electronics (GCCE), 13-14, 2019 [Refereed][Regular paper]
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 1825-1829, 2019 [Refereed][Regular paper]
     
    We present a new scene retrieval method based on a text-to-image Generative Adversarial Network (GAN) and its application to query-based video summarization. A text-to-image GAN is a deep learning method that can generate images from their corresponding sentences. In this paper, we reveal that deep learning-based visual features extracted from images generated by a text-to-image GAN contain sufficient semantic information. By utilizing the generated images as queries, the proposed method achieves higher scene retrieval performance than state-of-the-art methods. In addition, we introduce a novel architecture that can consider the order relationship of the input sentences for realizing target video summarization. Specifically, the proposed method generates multiple images through the text-to-image GAN from multiple sentences summarizing the target videos. The summarized video can then be obtained by retrieving the corresponding scenes from the target videos according to the generated images while considering the order relationship. Experimental results show the effectiveness of the proposed method in retrieval and summarization performance.
  • Misaki Kanai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 1371-1375, 2019 [Refereed][Regular paper]
     
    This paper presents a method for gastritis detection from gastric X-ray images via fine-tuning techniques using a deep convolutional neural network (DCNN). DCNNs can learn parameters that capture high-dimensional features expressing the semantic contents of images by training on a large number of labeled images. However, a lack of gastric X-ray images for training often occurs. To realize accurate detection with a small number of gastric X-ray images, the proposed method adopts fine-tuning techniques and newly introduces simple annotation of stomach regions to the gastric X-ray images used for training. The proposed method fine-tunes a pre-trained DCNN with patches and three kinds of patch-level class labels, considering not only the image-level ground truth ("gastritis"/"non-gastritis") but also the regions of the stomach, since the outside of the stomach is not related to the image-level ground truth. In the test phase, by estimating the patch-level class labels with the fine-tuned DCNN, the proposed method enables image-level class label estimation that excludes the effect of the unnecessary regions. Experimental results show the effectiveness of the proposed method.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 1-5, 2019 [Refereed][Regular paper]
     
    A text-to-image Generative Adversarial Network (GAN) is a deep learning model that generates an image from an input sentence. It has been attracting particular attention because of the applicability of the generated images. However, many existing studies have focused on the generation of high-quality images, and there are few studies on applications of the generated images, since text-to-image GANs still cannot produce visually pleasing images for complicated tasks. In this paper, we apply a text-to-image GAN as a generator of query images for a scene retrieval task to show the usefulness of the visually non-pleasant images. The proposed method utilizes a low-resolution generated image that focuses on the sentence and a high-resolution generated image that focuses on each word of the sentence to retrieve a desired scene. With this mechanism, the proposed method realizes high-accuracy scene retrieval from a sentence input. Experimental results show the effectiveness of our method.
  • Zongyao Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 1-5, 2019 [Refereed][Regular paper]
     
    This paper presents a semi-supervised learning method based on tri-training for gastritis classification using gastric X-ray images. The proposed method is constructed on the tri-training architecture, and label smoothing regularization and random erasing augmentation are utilized to enhance the performance. Although the task of gastritis classification is challenging, we report that the proposed semi-supervised learning method, using only a small number of labeled data, achieves a harmonic mean of sensitivity and specificity of 0.888 on test data composed of 615 patients.
  • Misaki Kanai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2019 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 1-5, 2019 [Refereed][Regular paper]
     
    With the development of convolutional neural networks (CNNs), CNN-based methods for medical image analysis have achieved more accurate performance than conventional machine learning methods using hand-crafted features. Although these methods utilize a large number of training images and realize high performance, a lack of training images often occurs in medical image analysis for several reasons. This paper presents a novel image generation method to construct a dataset for gastritis detection from gastric X-ray images. The proposed method effectively utilizes two kinds of training images (gastritis and non-gastritis images) to generate images of each domain by introducing label conditioning into a generative model. Experimental results using real-world gastric X-ray images show the effectiveness of the proposed method.
  • Ren Togo, Takahiro Ogawa, Miki Haseyama
    IEEE ACCESS 7, 87448-87457, 2019 [Refereed][Regular paper]
     
    In this paper, a novel synthetic gastritis image generation method based on a generative adversarial network (GAN) model is presented. Sharing medical image data is a crucial issue for realizing diagnostic support systems, but it is still difficult for researchers to obtain medical image data since the data include personal information. Recently proposed GAN models can learn the distribution of training images, and individual information can be completely anonymized in the generated images. If generated images can be used as training images in medical image classification, promoting medical image analysis will become feasible. In this paper, we targeted gastritis, which is a risk factor for gastric cancer and can be diagnosed from gastric X-ray images. Instead of collecting a large amount of gastric X-ray image data, an image generation approach was adopted in our method. We newly propose the loss function-based conditional progressive growing generative adversarial network (LC-PGGAN), a gastritis image generation method that can be used for a gastritis classification problem. The LC-PGGAN gradually learns the characteristics of gastritis in gastric X-ray images by adding new layers during the training step. Moreover, the LC-PGGAN employs loss function-based conditional adversarial learning so that generated images can be used for the gastritis classification task. We show that images generated by the LC-PGGAN are effective for gastritis classification using gastric X-ray images and have clinical characteristics of the target symptom.
  • Ren Togo, Kenji Hirata, Osamu Manabe, Hiroshi Ohira, Ichizo Tsujino, Keiichi Magota, Takahiro Ogawa, Miki Haseyama, Tohru Shiga
    COMPUTERS IN BIOLOGY AND MEDICINE 104, 81-86, Jan. 2019 [Refereed][Regular paper]
     
    Aims: The aim of this study was to determine whether deep convolutional neural network (DCNN)-based features can represent the difference between cardiac sarcoidosis (CS) and non-CS using polar maps. Methods: A total of 85 patients (33 CS patients and 52 non-CS patients) were analyzed as our study subjects. One radiologist reviewed PET/CT images and defined the left ventricle region for the construction of polar maps. We extracted high-level features from the polar maps through the Inception-v3 network and evaluated their effectiveness by applying them to a CS classification task. Then we introduced the ReliefF algorithm in our method. The standardized uptake value (SUV)-based classification method and the coefficient of variance (CoV)-based classification method were used as comparative methods. Results: Sensitivity, specificity and the harmonic mean of sensitivity and specificity of our method with the ReliefF algorithm were 0.839, 0.870 and 0.854, respectively. Those of the SUVmax-based classification method were 0.468, 0.710 and 0.564, respectively, and those of the CoV-based classification method were 0.655, 0.750 and 0.699, respectively. Conclusion: The DCNN-based high-level features may be more effective than low-level features used in conventional quantitative analysis methods for CS classification.
  • Takahiro Ogawa, Kento Sugata, Ren Togo, Miki Haseyama
    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS 7(1), 36-44, 2019 [Refereed][Regular paper]
     
    A novel method that integrates brain activity-based classifications obtained from multiple users is presented in this paper. The proposed method performs decision-level fusion (DLF) of the classifications using a kernelized version of extended supervised learning from multiple experts (KESLME), which is newly derived in this paper. In this approach, feature-level fusion of multiuser electroencephalogram (EEG) features is performed by multiset supervised locality preserving canonical correlation analysis (MSLPCCA). In the proposed method, the multiple classification results are obtained by classifiers separately constructed for the multiuser EEG features. Then DLF of these classification results becomes feasible based on KESLME, which can provide the final decision with consideration of the relationship between the MSLPCCA-based integrated EEG features and each classifier's performance. In this way, a new multi-classifier decision technique, which depends only on users' brain activities, is realized, and the performance in an image classification task becomes comparable to that of Inception-v3, one of the state-of-the-art deep convolutional neural networks.
  • Ren Togo, Kenji Hirata, Osamu Manabe, Hiroshi Ohira, Ichizo Tsujino, Takahiro Ogawa, Miki Haseyama, Tohru Shiga
    JOURNAL OF NUCLEAR MEDICINE 59, May 2018 [Refereed][Regular paper]
  • Keisuke Kawauchi, Kenji Hirata, Seiya Ichikawa, Osamu Manabe, Kentaro Kobayashi, Shiro Watanabe, Miki Haseyama, Takahiro Ogawa, Ren Togo, Tohru Shiga, Chietsugu Katoh
    JOURNAL OF NUCLEAR MEDICINE 59, May 2018 [Refereed][Regular paper]
  • Ren Togo, Kenta Ishihara, Katsuhiro Mabe, Harufumi Oizumi, Takahiro Ogawa, Mototsugu Kato, Naoya Sakamoto, Shigemi Nakajima, Masahiro Asaka, Miki Haseyama
    WORLD JOURNAL OF GASTROINTESTINAL ONCOLOGY 10(2), 62-70, Feb. 2018 [Refereed][Regular paper]
     
    AIM: To perform automatic gastric cancer risk classification using photofluorography for realizing effective mass screening, as a preliminary study. METHODS: We used data for 2100 subjects including X-ray images, pepsinogen I and II levels, the PG I/PG II ratio, Helicobacter pylori (H. pylori) antibody, H. pylori eradication history, and interview sheets. We performed two-stage classification with our system. In the first stage, H. pylori infection status classification was performed, and H. pylori-infected subjects were automatically detected. In the second stage, we performed atrophic level classification to validate the effectiveness of our system. RESULTS: Sensitivity, specificity, and the Youden index (YI) of H. pylori infection status classification were 0.884, 0.895, and 0.779, respectively, in the first stage. In the second stage, sensitivity, specificity, and YI of atrophic level classification for H. pylori-infected subjects were 0.777, 0.824, and 0.601, respectively. CONCLUSION: Although further improvements of the system are needed, experimental results indicated the effectiveness of machine learning techniques for estimation of gastric cancer risk. [See the two-stage cascade sketch after this list.]
  • Ren Togo, Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    2018 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS-TAIWAN (ICCE-TW), 2018 [Refereed][Regular paper]
     
    This paper presents an anonymous gastritis image generation method for improving gastritis recognition performance. We realize the generation of realistic gastritis images by considering label information. Experimental results showed that anonymous images generated by our method have potential for a gastritis recognition task. Concretely, the recognition performance of a classifier constructed with the anonymous images outperformed that of a classifier based on a conventional image generation method.
  • Ren Togo, Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2082-2086, 2018 [Refereed][Regular paper]
     
    This paper presents an anonymous gastritis image generation method based on a generative adversarial network approach. Since clinical individual data include highly confidential information, they must be handled carefully. Although data sharing is in demand for constructing large-scale medical image datasets for deep learning-based recognition tasks, managing and annotating these data has been conducted manually. The proposed method enables the generation of anonymous images through an adversarial learning approach. Experimental results show that images generated by our method contribute to a gastritis recognition task. This will be helpful for constructing large-scale medical image datasets effectively.
  • Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2018 IEEE 7TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE 2018), 198-199, 2018 [Refereed][Regular paper]
     
    Image retrieval plays an important role in the information society, and many studies have been conducted to improve its accuracy. However, there is a major limitation in the input methods of existing systems. For example, if users only have a vague description that does not include detailed information such as a name, and do not have an appropriate input image, it is difficult to retrieve the desired images. To solve this problem, we propose a novel image retrieval method that enables retrieval of a desired image from a vague description. In the proposed method, we generate a query image from the vague description through an Attentional Generative Adversarial Network. By using the generated query image, the proposed method enables users to retrieve images even if they do not have a clear retrieval description as an input. Experimental results show the effectiveness of our method.
  • Effectiveness Evaluation of Imaging Direction for Estimation of Gastritis Regions on Gastric X-ray Images
    Ren Togo, Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    International Technical Conference on Circuits, Systems, Computers, and Communications (ITC-CSCC), 459-460, May 2017 [Refereed][Regular paper]
  • Misaki Kanai, Ren Togo, Takahiro Ogawa, Miki Haseyama
    2017 IEEE 6TH GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE), 1-2, 2017 [Refereed][Regular paper]
     
    Aesthetic quality assessment plays an important role in how people organize large image collections. Many studies on aesthetic quality assessment are based on design of hand-crafted features without considering whether attributes conveyed by images can actually affect image aesthetics. This paper presents an aesthetic quality assessment method which uses new visual features. The proposed method utilizes Supervised Locality Preserving Canonical Correlation Analysis (SLPCCA) to derive the new features which maximize correlation between attributes and visual features. Finally, by applying ridge regression to the SLPCCA-based features, successful aesthetic quality assessment is realized.
  • Ren Togo, Kenta Ishihara, Takahiro Ogawa, Miki Haseyama
    COMPUTERS IN BIOLOGY AND MEDICINE 77, 9-15, Oct. 2016 [Refereed][Regular paper]
     
    Since technical knowledge and a high degree of experience are necessary for the diagnosis of chronic gastritis, computer-aided diagnosis (CAD) systems that analyze gastric X-ray images are desirable in the field of medicine. Therefore, a new method that estimates salient regions related to chronic gastritis/non-gastritis for supporting diagnosis is presented in this paper. In order to estimate salient regions related to chronic gastritis/non-gastritis, the proposed method monitors the distance between a target image feature and a Support Vector Machine (SVM)-based hyperplane for its classification. Furthermore, our method removes the influence of regions outside the stomach by using positional relationships between the stomach and other organs. Consequently, since the proposed method successfully estimates salient regions of gastric X-ray images for which chronic gastritis/non-gastritis status is unknown, visual support for inexperienced clinicians becomes feasible. [See the SVM-distance saliency sketch after this list.]
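
The short Python sketches below illustrate several techniques that recur in the publications above. All of them are illustrative reconstructions under stated assumptions, not the authors' released code. First, the clinician-AI decision-level fusion referenced in the EJNMMI Research entry: the paper does not spell out its blending rule, so the rule below (keep the clinicians' positive calls and add the model's high-confidence positives) is only one plausible choice, and high_conf is an assumed threshold.

    import numpy as np

    def fuse_decisions(clinician_positive, model_prob, high_conf=0.9):
        """Toy decision-level fusion of a clinician's binary calls and an AI
        model's probabilities. The actual blending rule used in the paper is
        not specified here; this is only one possibility."""
        clinician_positive = np.asarray(clinician_positive, dtype=bool)
        model_prob = np.asarray(model_prob)
        return clinician_positive | (model_prob >= high_conf)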
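
A sketch of the anomaly-detection idea behind the organ-classification and rubber-deterioration entries: train an autoencoder on normal images only and score test images by reconstruction error. This is a deliberate simplification; the papers' DAGMM variants additionally fit a Gaussian mixture over latent and reconstruction features, which is omitted here.

    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        """Simplified stand-in for the convolutional DAGMM; input is 1 x 64 x 64."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def anomaly_score(model, x):
        """Mean squared reconstruction error per image; higher = more anomalous."""
        with torch.no_grad():
            recon = model(x)
        return ((x - recon) ** 2).flatten(1).mean(dim=1)

    scores = anomaly_score(ConvAutoencoder(), torch.rand(4, 1, 64, 64))  # untrained demo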
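
A minimal one-round version of tri-training, the semi-supervised strategy named in the gastritis classification entries. The published method operates on deep features of gastric X-ray images and adds BC learning and other regularization, none of which is reproduced here; the base classifier is an arbitrary choice.

    import numpy as np
    from sklearn.base import clone
    from sklearn.linear_model import LogisticRegression

    def tri_training_round(X_lab, y_lab, X_unlab, base=LogisticRegression(max_iter=1000)):
        rng = np.random.default_rng(0)
        # Three classifiers trained on bootstrap replicates of the labeled set.
        clfs = []
        for _ in range(3):
            idx = rng.integers(0, len(X_lab), len(X_lab))
            clfs.append(clone(base).fit(X_lab[idx], y_lab[idx]))
        preds = np.array([c.predict(X_unlab) for c in clfs])
        # Where the other two classifiers agree, their prediction becomes a
        # pseudo-label used to retrain the third.
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            agree = preds[j] == preds[k]
            if agree.any():
                X_aug = np.vstack([X_lab, X_unlab[agree]])
                y_aug = np.concatenate([y_lab, preds[j][agree]])
                clfs[i] = clone(base).fit(X_aug, y_aug)
        return clfs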
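
Many of the medical entries report the harmonic mean of sensitivity and specificity as their summary metric; a minimal computation from binary labels and predictions:

    import numpy as np

    def harmonic_mean_sens_spec(y_true, y_pred):
        """Harmonic mean of sensitivity and specificity for labels in {0, 1}."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = np.sum((y_true == 1) & (y_pred == 1))
        fn = np.sum((y_true == 1) & (y_pred == 0))
        tn = np.sum((y_true == 0) & (y_pred == 0))
        fp = np.sum((y_true == 0) & (y_pred == 1))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        return 2 * sens * spec / (sens + spec)

For example, the sensitivity of 0.962 and specificity of 0.983 reported in the Journal of Gastroenterology entry combine to a harmonic mean of about 0.972, matching the reported value.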
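
One common way to realize the "contribution value" visualization described in the LifeTech 2019 polar-map entry is an occlusion sweep: mask each region and record how much the model's confidence changes. The paper defines its contribution value via the variance of confidence, so this stand-in only approximates the idea; predict_prob is an assumed callable.

    import numpy as np

    def occlusion_map(predict_prob, image, patch=8, baseline=0.0):
        """Confidence drop per occluded patch; large values mark regions the
        model relies on. `predict_prob` maps an HxW array to a scalar confidence."""
        h, w = image.shape
        ref = predict_prob(image)
        heat = np.zeros((h // patch, w // patch))
        for i in range(0, h - h % patch, patch):
            for j in range(0, w - w % patch, patch):
                occluded = image.copy()
                occluded[i:i + patch, j:j + patch] = baseline
                heat[i // patch, j // patch] = ref - predict_prob(occluded)
        return heat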
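
A fine-tuning sketch in the spirit of the gastritis detection entries: start from a network pre-trained on a large set of annotated natural images and retrain only a new classification head for the two-class task. The ResNet-18 backbone and all hyperparameters are assumptions, not the papers' choices.

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False                    # freeze the pre-trained backbone
    model.fc = nn.Linear(model.fc.in_features, 2)  # new head: gastritis / non-gastritis

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    # Training then proceeds over (image, label) batches as usual.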
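
The focal loss named in the subcellular-protein entry (Lin et al.) down-weights easy examples so that training concentrates on hard, rare classes; the class-balancing alpha term is omitted for brevity.

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, gamma=2.0):
        """Multi-class focal loss; `targets` holds integer class indices."""
        log_p = F.log_softmax(logits, dim=1)
        log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()
        return (-(1 - pt) ** gamma * log_pt).mean()

    loss = focal_loss(torch.randn(8, 28), torch.randint(0, 28, (8,)))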
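
The "Query is GAN" retrieval idea reduced to its core: generate an image from the query sentence, embed it with a pre-trained feature extractor, and rank indexed scenes by cosine similarity. Here generate_image and extract_feature are placeholders for a text-to-image GAN and a CNN feature extractor, both assumptions.

    import numpy as np

    def retrieve(generate_image, extract_feature, sentence, scene_features, top_k=5):
        """Rank pre-extracted scene features against a generated query image."""
        q = extract_feature(generate_image(sentence))
        q = q / np.linalg.norm(q)
        db = scene_features / np.linalg.norm(scene_features, axis=1, keepdims=True)
        sims = db @ q                       # cosine similarity to every indexed scene
        return np.argsort(-sims)[:top_k]    # indices of the best-matching scenes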
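
A two-stage cascade mirroring the World Journal of Gastrointestinal Oncology study: stage 1 classifies H. pylori infection status, and stage 2 grades atrophic level only for subjects predicted infected. The random-forest classifiers and the feature matrix X are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    stage1 = RandomForestClassifier(random_state=0)   # H. pylori infection status
    stage2 = RandomForestClassifier(random_state=0)   # atrophic level

    def fit_cascade(X, y_infection, y_atrophy):
        stage1.fit(X, y_infection)
        infected = y_infection == 1
        stage2.fit(X[infected], y_atrophy[infected])

    def predict_cascade(X):
        infected = stage1.predict(X) == 1
        atrophy = np.full(len(X), -1)                 # -1 = not graded (stage 1 negative)
        if infected.any():
            atrophy[infected] = stage2.predict(X[infected])
        return infected, atrophy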
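
Finally, a sketch of the SVM hyperplane-distance monitoring used for salient region estimation in the 2016 Computers in Biology and Medicine entry: remove each region's contribution from the image feature and measure how far the decision value shifts. How region-wise features are built from gastric X-ray images is not reproduced; additive region features are an assumption.

    import numpy as np

    def region_saliency(svm, features_per_region, full_feature):
        """Shift of the SVM decision value when one region's contribution is
        removed; a large shift marks a salient region. Assumes the image
        feature is (approximately) a sum of region features."""
        base = svm.decision_function([full_feature])[0]
        shifts = []
        for region_feat in features_per_region:
            perturbed = full_feature - region_feat
            shifts.append(base - svm.decision_function([perturbed])[0])
        return np.array(shifts)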

Other Activities and Achievements

Awards

  • 2019: Silver Prize, IEEE GCCE 2019 Excellent Poster Award

    Recipient: Megumi Kotera
  • 2019: Outstanding Prize, IEEE GCCE 2019 Excellent Demo! Award

    Recipient: Rintaro Yanagi
  • 2017: IEICE Hokkaido Section Student Encouragement Award

    Recipient: Ren Togo
  • 2015: Excellent Paper Presentation Award, 2015 Joint Convention of the Hokkaido Chapters of the Institutes of Electrical and Information Engineers

    Recipient: Ren Togo

Research Projects (Joint Research and Competitive Funding)

  • Construction of a sequential data-cleansing technique based on machine learning for medical images
    Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research (Early-Career Scientists)
    Period: Apr. 2020 - Mar. 2024
    Principal investigator: Ren Togo
  • Construction of a multimodal image generation method based on machine learning
    Japan Society for the Promotion of Science: Grants-in-Aid for Scientific Research (Grant-in-Aid for JSPS Fellows)
    Period: Apr. 2019 - Mar. 2021
    Principal investigator: Ren Togo

