
Multimodal Biomedical Imaging and System

Research Area

  • 1
    Multimodal Biomedical System
    Our primary application of multimodal imaging and analysis technologies is cancer research. Our short-term goal is to build a translational multimodal endoscopic imaging and analysis tool that combines high-frequency ultrasound, spectral, confocal, and stereoscopic 3D imaging for the detection and characterization of tumor lesions with enhanced image contrast and high specificity. Its applications will range from gastrointestinal to ovarian cancer. We previously built a preclinical multimode optical imaging and analysis system capable of fluorescence intensity, lifetime, spectral, two-photon, and confocal imaging of small animals, and we used this system to assess novel nanoparticles for both breast tumor detection and treatment. Currently, we are combining high-frequency ultrasound with multiple optical imaging and analysis technologies to determine the metastatic potential of breast cancer cells in vitro and in vivo. Over the last several years, various multimodal imaging and analysis tools have emerged and gained great attention in the biomedical imaging field. However, innovative imaging and analysis tools that can be translated into the clinic with high sensitivity and specificity are still needed for better cancer detection and characterization. Therefore, building on our experience, knowledge, and technologies, we will develop an innovative multimodal endoscopic imaging and analysis tool that can be rapidly translated into the clinic. We believe this tool will offer complementary information for the detection of tumor lesions and thus has the potential to become an accurate decision-making tool for cancer detection.
  • 2
    Mobile Healthcare System
    We are currently developing various mobile healthcare systems, including smartphone-based mobile multispectral imaging systems and deep learning-based ECG classification algorithms. In particular, the smartphone-based mobile multispectral imaging systems are used to detect various skin lesions quantitatively, thus offering better diagnostic outcomes. We are also collaborating with Bionet on the development of an advanced ECG classification algorithm.
  • 3
    Deep learning-based Image Analysis and Enhancement
    In the medical imaging and remote sensing fields, our research focuses on the development of innovative deep learning- and machine learning-based image analysis techniques. In the biomedical field, we are developing novel deep learning technologies for 2D and 3D ultrasound and optical images to better detect and diagnose various human diseases. Our projects in intelligent biomedical image analysis include deep learning-based spectral image analysis of tumor regions and various skin lesions, 3D semantic segmentation of diseased regions in ultrasound images, and time-resolved object tracking and segmentation in various biomedical optical images. In the remote sensing field, we are building deep learning networks for automatic digital map generation through better semantic segmentation of objects in aerial and satellite images. Current remote sensing projects include deep learning models for semantic segmentation of buildings and roads and for change detection in aerial and satellite images. For these projects, our lab is collaborating with Dabeeo Inc.
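A common preprocessing step in deep learning-based ECG classification (research area 2 above) is locating R-peaks so that individual heartbeats can be segmented and fed to a classifier. A minimal sketch of such a detector, using a synthetic signal and simple amplitude thresholding; all function names and parameter values here are illustrative, not our actual pipeline:

```python
import numpy as np

def detect_r_peaks(ecg, fs, threshold_ratio=0.6, refractory_s=0.2):
    """Return sample indices of R-peaks using simple amplitude
    thresholding plus a refractory period (illustrative only)."""
    threshold = threshold_ratio * np.max(ecg)
    refractory = int(refractory_s * fs)  # minimum samples between beats
    peaks = []
    for i in range(1, len(ecg) - 1):
        # local maximum that rises above the threshold
        if ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if not peaks or i - peaks[-1] > refractory:
                peaks.append(i)
    return np.array(peaks)

# Synthetic ECG at 250 Hz: flat baseline with a sharp "R-peak" once per second
fs = 250
ecg = np.zeros(fs * 5)
ecg[fs // 2 :: fs] = 1.0  # beats at samples 125, 375, 625, 875, 1125
peaks = detect_r_peaks(ecg, fs)
print(peaks.tolist())  # → [125, 375, 625, 875, 1125]
```

Real ECG is noisier, so production pipelines typically add band-pass filtering and adaptive thresholding (e.g., Pan-Tompkins-style processing) before beat segmentation.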
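The semantic segmentation projects above (buildings, roads, diseased regions) are typically evaluated with per-class intersection-over-union (IoU). A minimal NumPy sketch of that metric; the class IDs and the toy label maps are illustrative only (e.g., 0 = background, 1 = building, 2 = road):

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Per-class intersection-over-union between two integer label maps.
    Returns NaN for classes absent from both prediction and target."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious.append(intersection / union if union > 0 else np.nan)
    return np.array(ious)

# Toy 2x4 label maps: 0 = background, 1 = building, 2 = road
pred   = np.array([[0, 1, 1, 2],
                   [0, 1, 2, 2]])
target = np.array([[0, 1, 1, 2],
                   [0, 0, 2, 2]])
print(per_class_iou(pred, target, 3))  # class IoUs ≈ [0.667, 0.667, 1.0]
```

The mean of the per-class values (ignoring NaNs) gives the mIoU commonly reported for aerial and medical segmentation benchmarks.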