🧍‍♂️ About Me

Hangbei Cheng (成航北)

Master’s Student (Class of 2023)

College of Computer Science and Technology, Taiyuan University of Technology (TYUT)

Advisors: Assoc. Prof. Yongfei Wu and Assoc. Prof. Xueyu Liu

Lab: IMBR Lab @ TYUT

“I hope to stay focused on visual understanding, to build models that truly understand the world, and to create work worth remembering.”

📫 Contact: chenghangbei0702@163.com

🔗 Links: GitHub


🔍 Research Interests

My current research focuses on vision modeling and learning with limited annotations in medical image scenarios. Specifically:

  • Weakly Supervised & Semi-supervised Medical Image Segmentation
    → Multi-scale lesion feature learning, label noise robustness, shape prior modeling.

  • Multi-modal Prompt Engineering & Visual Foundation Models
    → Leveraging large-scale models (e.g., CLIP, SAM) for domain adaptation, semantic alignment, and efficient transfer to medical domains.

  • Embodied Intelligence & Multi-agent Collaboration in Medical AI (Exploratory)
    → Modeling decision-making and robustness in multi-agent, multi-modal diagnostic systems.

My long-term vision is to design intelligent, trustworthy, and interpretable visual systems that can truly “see and understand” the world, not just recognize pixels.


🎓 Education

  • 2019.09 – 2023.07 — B.Eng. in Data Science and Big Data Technology, College of Big Data, TYUT.
  • 2023.09 – Present — M.Eng. in Computer Science and Technology, College of Computer Science and Technology, TYUT.

📝 Publications

✍️ Representative Papers

  • First Author. GLMKD: Joint Global and Local Mutual Knowledge Distillation for Weakly Supervised Lesion Segmentation in Histopathology Images.
    Expert Systems with Applications (ESWA) (SCI-Q1 TOP, IF=7.5), Published
    Paper | Code

Highlights:

  • Proposed a novel dual-stream knowledge distillation framework for weakly supervised pathology segmentation.
  • Combined global supervision and multi-scale MIL for the first time in WSSS.
  • Introduced a shape transfer loss for improved global-local feature alignment (a minimal loss sketch follows this list).
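
To make the global-local mutual distillation idea more concrete, below is a minimal, self-contained sketch of what such a loss term could look like. It is an illustration only, not the published GLMKD code: the function name, the symmetric KL formulation, and the pooled-mask stand-in for the shape transfer term are simplifying assumptions.

```python
# Illustrative sketch of a global-local mutual distillation loss
# (hypothetical simplification, not the official GLMKD implementation).
import torch
import torch.nn.functional as F

def mutual_distillation_loss(global_logits, local_logits,
                             shape_weight=0.1, temperature=2.0):
    """Symmetric KL between a global-stream and a local (MIL) stream prediction,
    plus a crude shape-alignment term on the soft masks."""
    # Soft probability maps from each stream; logits have shape (B, C, H, W).
    log_p_global = F.log_softmax(global_logits / temperature, dim=1)
    log_p_local = F.log_softmax(local_logits / temperature, dim=1)
    p_global = F.softmax(global_logits / temperature, dim=1)
    p_local = F.softmax(local_logits / temperature, dim=1)

    # Mutual (bidirectional) distillation: each stream acts as the other's teacher.
    kd_global_to_local = F.kl_div(log_p_local, p_global.detach(), reduction="batchmean")
    kd_local_to_global = F.kl_div(log_p_global, p_local.detach(), reduction="batchmean")

    # Stand-in for a shape transfer term: align coarse mask silhouettes.
    shape_loss = F.mse_loss(
        F.adaptive_avg_pool2d(p_global, output_size=8),
        F.adaptive_avg_pool2d(p_local, output_size=8),
    )
    return kd_global_to_local + kd_local_to_global + shape_weight * shape_loss

if __name__ == "__main__":
    # Random tensors standing in for the two streams' pixel-wise logits.
    g = torch.randn(2, 2, 64, 64)
    l = torch.randn(2, 2, 64, 64)
    print(mutual_distillation_loss(g, l).item())
```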

  • First Author. FMaMIL: FFT-enhanced Vision Mamba Multi-instance Learning for Weakly Supervised Lesion Segmentation in Medical Images.
    Medical Image Analysis (MedIA) (SCI-Q1 TOP | CCF-B, IF=10.7), 1st RRC
    arXiv | Paper | Code

Highlights:

  • Introduced the first Mamba-based MIL segmentation model with learnable frequency encoding (a toy illustration of such an encoding follows this list).
  • Bidirectional scanning captures contextual pathology structures.
  • CAM-guided soft label correction boosts robustness to label noise.
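
For the frequency-encoding idea, the toy layer below shows one simple way a learnable frequency response can be applied to spatial features via the 2D FFT. This is a hypothetical sketch under my own assumptions (class name, per-bin complex gain, residual connection), not the FMaMIL implementation.

```python
# Toy learnable frequency-domain filter (illustrative only, not FMaMIL code).
import torch
import torch.nn as nn

class LearnableFrequencyFilter(nn.Module):
    """Applies a learnable per-frequency complex gain to feature maps via rfft2/irfft2."""
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # One learnable complex weight per channel and frequency bin
        # (rfft2 keeps width // 2 + 1 bins along the last axis).
        self.weight = nn.Parameter(torch.randn(channels, height, width // 2 + 1, 2) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) spatial features from the backbone.
        freq = torch.fft.rfft2(x, norm="ortho")            # complex, (B, C, H, W//2 + 1)
        gain = torch.view_as_complex(self.weight)          # learnable frequency response
        out = torch.fft.irfft2(freq * gain, s=x.shape[-2:], norm="ortho")
        return x + out                                     # residual keeps the spatial path intact

if __name__ == "__main__":
    # Enhance a batch of 64-channel feature maps before MIL aggregation.
    layer = LearnableFrequencyFilter(channels=64, height=32, width=32)
    feats = torch.randn(4, 64, 32, 32)
    print(layer(feats).shape)  # torch.Size([4, 64, 32, 32])
```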

📚 Other Publications

🔬 Medical Imaging

  • First Author. SegMind: Dual-Brain Collaboration with Multi-Modal Prompts and Multi-Teacher Knowledge Integration for Semi-Supervised Medical Segmentation. AAAI 2026 (CCF-A) Submitted

  • First Author. FALMIL: Frequency-aware Linear MIL for Efficient Weakly Supervised Lesion Segmentation in Gigapixel Pathology Images. PRCV 2025 (CCF-C) Submitted

  • Co-First Author. DSAGL: Dual-Stream Attention-Guided Learning for Weakly Supervised Whole Slide Image Classification. Biomedical Signal Processing and Control (BSPC) (SCI-Q2) 1st RRC

  • Second Author. A Dual-branch Network with Cross-scale Feature Interaction and Alignment for WSIs Classification. Information Fusion (SCI-Q1 TOP, IF=15.5) 1st UR

  • Second Author. Multi-stained Renal Histopathology Image Segmentation via Meta-learning with Guided Collaborative Distillation. AAAI 2026 (CCF-A) Submitted

  • Second Author. DGMCN: Depth-Guided Multi-modal Collaboration Network for Robust Polyp Segmentation in Endoscopic Images. JVCIR (SCI-Q4) 1st UR

  • Third Author. LEPG-SAM: Local Enhancement Perception and Guidance for Weakly Supervised Medical Image Segmentation. To be submitted to CMPB (SCI-Q2), Manuscript in Preparation

  • Co-author. Fourier Transform-Based Shape Constrained Framework for Generalizable Medical Image Segmentation. PRCV 2025 (CCF-C) Submitted

  • Co-author. MSMTSeg: Multi-Stained Multi-Tissue Segmentation of Kidney Histology Images via Generative Self Supervised Meta Learning Framework. IEEE JBHI (SCI-Q2 TOP) Published

  • Co-author. Diagnosis of diabetic kidney disease in whole slide images via AI-driven quantification of pathological indicators. CIBM (SCI-Q2) Published


🤖 Multimodal Learning, Federated Security & Embodied AI

  • Co-First Author. AlignBreaker: Multimodal Feature Perturbation Attacks on Embodied AI via Alignment Disruption. To be submitted to USENIX (CCF-A), Manuscript in Preparation

  • Third Author. MTSec: AIGC-Enhanced Security Model Training for Multimodal Federated Learning. Knowledge-Based Systems (SCI-Q1 TOP, IF=7.2) 2nd RRC

  • Third Author. Face Anti-Spoofing Model with Online Distillation of Local and Global Features. ACM MM 2025 (CCF-A) Planned Resubmission


💬 Research Projects

  • [PI] Multi-domain Visual Feature Fusion for Weakly Supervised Pathology Segmentation.

    Shanxi Graduate Innovation Project (2024KY232), 09/2024–Present
    → Designed a dual-stream framework based on a Mamba encoder to enhance spatial-frequency modeling for multi-scale lesion detection.

  • [Core Member] High-resolution Kidney Pathology AI Diagnosis System.

    Shanxi Natural Science Project (General Project), 09/2023–Present
    → Led MIL model development for lesion segmentation and KW-node recognition with interpretability constraints.

  • [Team Lead] Auxiliary Diagnosis System for Kidney Pathology via Missing-Modality Generation.

    National Innovation Training Program for College Students, 06/2025–Present
    → Developed a stain transfer pipeline with meta-learning for style consistency and domain generalization.

  • [Team Lead] GAN-based Pathology Stain Quality Enhancement and Evaluation.

    Shanxi Undergraduate Innovation Project, 06/2022–06/2023
    → Developed a stain transfer pipeline with meta-learning for style consistency and domain generalization.


🎖 Honors & Awards

  • 🏅 2023.06 — Outstanding Undergraduate Graduate, TYUT
  • 🏅 2023.07 — Outstanding Undergraduate Thesis, TYUT
  • 🏅 2022.05 — “May Fourth” Outstanding Communist Youth League Member, TYUT
  • 🏅 2021.09 — Excellent Practical Team Member, TYUT
  • 🏅 2021 – 2024 — “Internet+” / “Challenge Cup” Innovation Contests — 5× Shanxi Gold, 2× Shanxi Silver
  • 🏅 2022.06 — 12th National “ZhengDa Cup” Market Research Competition — 3rd Prize
  • 🏅 2023.06 – Present — National Scholarship ×1 (Postgrad), 1st-Class Postgrad Scholarship ×2
  • 🏅 2019.06 – 2023.06 — 1st-Class Undergrad Scholarship ×6, National Encouragement Scholarship ×3

🧑‍🎓 Student Leadership & Activities

  • 2021.06 – 2023.06 — Head, University Science and Technology Association, TYUT
  • 2021.09 – 2023.06 — TYUT RuinoYun Innovation Base Student Leader
  • 2022.09 – 2023.06 — Committee Member, Undergraduate Party Branch, College of Big Data

🧭 PhD Motivation & Direction

During my master’s studies, I have built a strong foundation in medical image modeling, weakly supervised learning, and large vision model tuning. I aim to further explore explainable, robust, and adaptive vision systems for complex real-world environments.
In the PhD phase, I hope to:

  • Delve into multimodal and embodied perception
  • Address trustworthiness and generalization in medical AI
  • Bridge model-level innovation and real-world application

I’m open to collaboration or joint projects with labs working on related topics.