• Fangyu Liu received the B.S. degree in software engineering from the University of South China, Hengyang, China, in 2020, and the M.S. degree in computer application technology from Shanghai University, Shanghai, China, in 2023. He is currently pursuing the Ph.D. degree in computer application technology at the University of Chinese Academy of Sciences, Beijing, China. His current research interests include biomedical signal processing, sensor information fusion, wearable health-monitoring devices, medical image analysis, and machine learning.
  • Biomedical Signal Processing
  • Sensor Information Fusion
  • Wearable Health-monitoring Devices
  • Medical Image Analysis
  • Machine Learning
  • 2023.09.19 - 2024.05.07: Jointly trained student at BGI-Shenzhen
  • 2019.11.15 - 2020.02.15: Intern at Nuclear Industry Engineering Research and Design Co., Ltd.
  1. Topic: Research on wearable intelligent sensing and computing methods for individualized sports health [Link]
    Source: Shenzhen International Cooperation Project
    Date: 2023 - 2025
    Duty: Student participation
  2. Topic: Tensor-based multimodal data fusion with causal reasoning and interpretable aid diagnosis for Alzheimer's disease [Link]
    Source: Key Program for International Science and Technology Cooperation Projects of China
    Date: 2024 - Present
    Duty: Student participation
Graphical Abstract
A Transfer Learning Method for Cross-Scenario Abnormal Running Posture Detection Based on Wearable Sensors
Xiang Li, Hao Wang, Fangyu Liu, Ye Li, Han Zhang*, Fangmin Sun*
IEEE Transactions on Consumer Electronics, 2025
Q2 of CAS, Q1 of JCR, IF:10.9, Accepted

This paper proposes a Cross-Scenario Transfer Learning (CSTL) framework to address the poor adaptability and high complexity of deep learning-based abnormal running posture detection using wearable inertial sensors. The framework includes: (1) A LightNorm Transformer (LNT) model pre-trained on single-scenario data for initial feature extraction; (2) A fine-tuning stage with contribution-guided freezing and dynamic adapters to transfer the model to five running scenarios, evaluated via leave-one-out cross-validation. Results show that CSTL achieves 87.01% accuracy with only 10 fine-tuning epochs, outperforming non-transfer methods and optimizing the accuracy-efficiency trade-off. The solution enhances robustness for real-world applications.
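The contribution-guided freezing step above can be sketched as follows. This is a hypothetical illustration only: the layer names, the mean-absolute-gradient "contribution" score, and the keep ratio are my assumptions, not the authors' code.

```python
import numpy as np

def freeze_plan(layer_grads, keep_ratio=0.5):
    """Rank layers by mean absolute gradient (a simple 'contribution' proxy)
    and mark the lowest-contributing ones as frozen for fine-tuning."""
    scores = {name: float(np.abs(g).mean()) for name, g in layer_grads.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_trainable = max(1, int(len(ranked) * keep_ratio))
    trainable = set(ranked[:n_trainable])
    return {name: (name not in trainable) for name in scores}  # True = frozen

# Toy gradients: later blocks have larger magnitudes, so they stay trainable.
rng = np.random.default_rng(0)
grads = {f"block{i}": rng.normal(scale=0.1 * (i + 1), size=64) for i in range(4)}
plan = freeze_plan(grads, keep_ratio=0.5)
```

With a keep ratio of 0.5, half of the blocks are frozen during fine-tuning, which is one way to trade adaptation capacity for efficiency as the abstract describes.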

Graphical Abstract
MixMatch-based semi-supervised learning approach for cross-domain locomotion and transportation mode recognition
Hao Wang1, Fangyu Liu1, Xiang Li, Huazhen Huang, Ye Li, Fangmin Sun* (1 equal contribution)
Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2025
UbiComp Workshop 2025, CCF-A, EI, Accepted

Cross-domain human activity recognition using wearable inertial sensors remains challenging, especially when no labeled data are available in the target domain. To address this, our team (SIAT-BIT) proposes a semi-supervised learning framework based on MixMatch for locomotion and transportation mode recognition. Our approach leverages labeled data from multiple public HAR datasets and unlabeled data from the Sussex-Huawei Locomotion-Transportation Recognition Challenge Task 2 (Kyutech IMU) dataset. The framework integrates pseudo-label generation, data augmentation, soft-label sharpening, and cross-sample mixing to mitigate domain shift and label scarcity. Experimental results show that the proposed method achieves competitive performance in Task 2, with an accuracy of 76.8% and an F1 score of 76.5%, confirming its effectiveness in modeling real-world cross-domain activity scenarios without relying on target-domain labels.
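Two of the MixMatch ingredients mentioned above, soft-label sharpening and cross-sample mixing (MixUp), can be sketched in a few lines. The temperature, the Beta parameter, and the toy inputs are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sharpen(p, T=0.5):
    """Lower the entropy of a soft pseudo-label: p_i^(1/T) / sum_j p_j^(1/T)."""
    p = np.asarray(p, dtype=float) ** (1.0 / T)
    return p / p.sum(axis=-1, keepdims=True)

def mixup(x1, y1, x2, y2, alpha=0.75):
    """Convex combination of two samples; lam is drawn from Beta(alpha, alpha)
    and folded above 0.5 so the first sample dominates, as in MixMatch."""
    lam = np.random.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

p = sharpen([0.5, 0.3, 0.2], T=0.5)  # peaks the dominant class
```

Sharpening pushes a pseudo-label toward a one-hot vector, while MixUp interpolates labeled and unlabeled samples; together they are what lets the method exploit unlabeled target-domain data.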

Graphical Abstract
A sensor-site hybrid algorithm pipeline for locomotion and transportation mode recognition
Fangyu Liu1, Hao Wang1, Huazhen Huang, Xiang Li, Ye Li, Fangmin Sun* (1 equal contribution)
Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2025
UbiComp Workshop 2025, CCF-A, EI, Accepted

Human Activity Recognition (HAR) has been widely applied in mobile analysis, mobile health, and intelligent sensing. However, existing HAR algorithms still face challenges from sensor modality dropout, varying device placement, and limited model generalization. To address these issues, the Sussex-Huawei Locomotion (SHL) recognition challenge provides a complex real-world dataset for algorithm development. In this study, our team (SIAT-BIT) proposes a classification framework based on handcrafted features, extracting 156 statistical, time-domain, and frequency-domain features from a total of 12 channels: the tri-axial signals, and their amplitudes, of the accelerometers, gyroscopes, and magnetometers. Experimental results show that our approach achieves strong recognition performance across multiple device placements, with an accuracy of 71.4% and an F1 score of 71.8% on the validation dataset, confirming the effectiveness and efficiency of our method.
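The handcrafted-feature idea above can be illustrated with a minimal sketch: per-channel statistical, time-domain, and frequency-domain features computed on tri-axial signals plus their amplitude (vector magnitude) channel. The specific features, sampling rate, and toy signal here are assumptions; the paper's full 156-feature set is not reproduced.

```python
import numpy as np

def channel_features(sig, fs=100.0):
    """A few representative features for one 1-D channel."""
    spectrum = np.abs(np.fft.rfft(sig - sig.mean()))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    return {
        "mean": float(sig.mean()),
        "std": float(sig.std()),
        "rms": float(np.sqrt(np.mean(sig ** 2))),
        "zero_cross": int(np.sum(np.diff(np.sign(sig - sig.mean())) != 0)),
        "dom_freq": float(freqs[np.argmax(spectrum)]),
    }

def imu_features(acc_xyz, fs=100.0):
    """Features for the x, y, z axes and the amplitude channel of one sensor."""
    amp = np.linalg.norm(acc_xyz, axis=0)  # vector magnitude channel
    channels = list(acc_xyz) + [amp]
    feats = {}
    for name, ch in zip(["x", "y", "z", "amp"], channels):
        for k, v in channel_features(ch, fs).items():
            feats[f"{name}_{k}"] = v
    return feats

# Toy 2-second tri-axial signal at 100 Hz with a 2 Hz oscillation.
t = np.arange(0, 2, 1 / 100.0)
acc = np.stack([np.sin(2 * np.pi * 2 * t), np.cos(2 * np.pi * 2 * t), 0.1 * t])
f = imu_features(acc)
```

Repeating this over the three sensor modalities yields the kind of fixed-length feature vector a classical classifier can consume.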

Graphical Abstract
A Key Feature Screening Method for Human Activity Recognition Based on Multi-head Attention Mechanism
Hao Wang1, Fangyu Liu1, Xiang Li, Ye Li, Fangmin Sun* (1 equal contribution)
IEEE International Joint Conference on Biometrics, 2025
IJCB 2025, CCF-C, EI, Accepted

Human activity recognition (HAR) using wearable sensors is crucial for ubiquitous computing applications in healthcare, fitness monitoring, and smart environments. Sensor-based HAR faces challenges from high-dimensional, multi-channel time series data containing redundant or irrelevant features that degrade performance and interpretability. We propose a lightweight feature screening framework guided by multi-head attention to address this. The model employs channel-wise linear transformations to extract localized representations from each sensor axis, then uses a multi-head attention module to dynamically assess feature importance across channels. This approach emphasizes informative components while suppressing noise and redundancy. On the KU-HAR dataset, our method achieves 96.0% accuracy using only 60 selected features. The selected features also provide valuable references for future research in feature selection, model simplification and multimodal sensor fusion.
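A toy, single-head version of attention-guided feature scoring in the spirit of the screening idea above might look like the following. The random projections, embedding scheme, and "attention received" score are my assumptions for illustration; the paper uses a trained multi-head model.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_importance(X, d_k=8, seed=0):
    """Score each feature (column of X) by the average attention it receives
    across samples; returns feature indices sorted by importance."""
    rng = np.random.default_rng(seed)
    n, f = X.shape
    # Treat each feature as a token: embed its scalar value with a random vector.
    E = X[:, :, None] * rng.normal(size=(f, d_k))          # (n, f, d_k)
    Wq, Wk = rng.normal(size=(d_k, d_k)), rng.normal(size=(d_k, d_k))
    Q, K = E @ Wq, E @ Wk
    A = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_k))   # (n, f, f) attention
    received = A.mean(axis=(0, 1))  # average attention each feature receives
    return np.argsort(received)[::-1]

X = np.random.default_rng(1).normal(size=(32, 10))
order = attention_importance(X)
top5 = order[:5]  # indices of the five highest-scoring features
```

Keeping only the top-ranked features (60 in the paper's KU-HAR experiments) is what turns the attention weights into a screening rule.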

Graphical Abstract
Influencing Factors Mining and Modeling of Energy Expenditure in Running Based on Wearable Sensors
Fangyu Liu, Hao Wang, Weilin Zang, Ye Li, Fangmin Sun*
Proceedings of the 2024 International Conference on Sports Technology and Performance Analysis, 2024
ICSTPA 2024, EI

This study addresses the interpretability limitations of deep learning-based wearable energy expenditure monitoring by developing an explainable regression model for real-time running energy prediction. Through systematic analysis of demographic, physical activity, and physiological features, the study proposes a novel hand-crafted feature selection method identifying 743 key features. Among the machine learning algorithms tested, Gradient Boosted Regression (GBR) achieved the best performance (CC=0.970, RMSE=1.004, MAE=0.729) in five-fold cross-validation with 34 volunteers. The study not only achieves accurate real-time energy expenditure prediction but also offers a new technical approach combining manual feature engineering with interpretable algorithms for sports monitoring applications.
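The five-fold cross-validation protocol above can be sketched as a subject-wise split, so that no volunteer's data leaks between training and testing. The fold assignment below is a generic illustration (the paper's exact splitting and the GBR model itself are omitted).

```python
import numpy as np

def kfold_by_subject(subject_ids, k=5, seed=0):
    """Yield (train_subjects, test_subjects) pairs so that no volunteer
    appears on both sides of any split."""
    rng = np.random.default_rng(seed)
    subjects = np.array(sorted(set(subject_ids)))
    rng.shuffle(subjects)
    folds = np.array_split(subjects, k)
    for i in range(k):
        test = set(folds[i].tolist())
        train = set(subjects.tolist()) - test
        yield train, test

ids = list(range(34))  # e.g. 34 volunteers, as in the study
splits = list(kfold_by_subject(ids, k=5))
```

Each volunteer appears in exactly one test fold, which is the standard way to report subject-independent performance for wearable models.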

Graphical Abstract
Multi-task joint learning network based on adaptive patch pruning for Alzheimer's disease diagnosis and clinical score prediction
Fangyu Liu1, Shizhong Yuan1, Weimin Li*, Qun Xu, Xing Wu, Ke Han*, Jingchao Wang, Shang Miao (1 equal contribution)
Biomedical Signal Processing and Control, 2024
Q2 of CAS, Q1 of JCR, IF:4.9

This study proposes an innovative Multi-Task Joint Learning Network (MTJLN) to address key challenges in brain disease diagnosis and clinical score prediction. The method divides brain images into 216 local patches covering all potential lesion areas and employs a patch pruning algorithm to automatically select informative regions, overcoming the limitations of pre-determining discriminative locations. The novel framework integrates fine-grained patch-based multimodal features with coarse-grained non-image features at intermediate layers, effectively utilizing intrinsic correlations between multimodal data and multitask variables. A specially designed weighted loss function enables the inclusion of subjects with incomplete clinical scores, significantly improving data utilization. Experiments on 842 ADNI subjects demonstrate the method's effectiveness in predicting pathological stages and clinical scores, providing a new solution for precise brain disease diagnosis and progression assessment.
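The patch-division-and-pruning idea above can be illustrated with a minimal sketch: split a volume into fixed-size patches and keep the most informative ones by a simple score. The patch size, the variance-based score, and the number kept are assumptions for illustration, not the paper's learned pruning algorithm.

```python
import numpy as np

def extract_patches(vol, size):
    """Non-overlapping cubic patches of shape (size, size, size)."""
    s = size
    patches, coords = [], []
    for x in range(0, vol.shape[0] - s + 1, s):
        for y in range(0, vol.shape[1] - s + 1, s):
            for z in range(0, vol.shape[2] - s + 1, s):
                patches.append(vol[x:x + s, y:y + s, z:z + s])
                coords.append((x, y, z))
    return np.array(patches), coords

def prune_patches(patches, coords, keep=8):
    """Keep the `keep` patches with the highest intensity variance,
    a crude stand-in for an 'informative region' score."""
    scores = patches.reshape(len(patches), -1).var(axis=1)
    top = np.argsort(scores)[::-1][:keep]
    return patches[top], [coords[i] for i in top]

vol = np.random.default_rng(0).normal(size=(24, 24, 24))
patches, coords = extract_patches(vol, size=8)   # 3x3x3 = 27 patches
kept, kept_coords = prune_patches(patches, coords, keep=8)
```

Only the retained patches (with their coordinates, so spatial context survives) would then feed the downstream multimodal, multi-task network.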

Graphical Abstract
Patch-based deep multi-modal learning framework for Alzheimer's disease diagnosis using multi-view neuroimaging
Fangyu Liu1, Shizhong Yuan1, Weimin Li*, Qun Xu**, Bin Sheng (1 equal contribution)
Biomedical Signal Processing and Control, 2023
Q2 of CAS, Q1 of JCR, IF:4.9

This study addresses the limitations of single-modality methods and spatial information loss in patch-based approaches for Alzheimer's disease (AD) diagnosis by proposing a Patch-based Deep Multi-Modal Learning (PDMML) framework. The framework features a prior-free discriminative location discovery strategy to automatically identify potential lesion areas, eliminating reliance on anatomical landmarks and expert experience. It innovatively integrates multimodal features at the patch level to capture comprehensive disease representations while preserving spatial information through joint patch learning, overcoming the flattening-induced information loss. Evaluated on 842 ADNI subjects, PDMML demonstrates superior performance in both discriminative region localization and brain disease diagnosis, offering a robust computer-aided solution for early mild cognitive impairment (MCI) detection.

Reviewer

  • Reviewer for the journal "IEEE Journal of Biomedical and Health Informatics"
  • Reviewer for the journal "Knowledge-Based Systems"
  • Reviewer for the journal "Scientific Reports"
  • Reviewer for the journal "The Journal of Supercomputing"
  • Reviewer for the 21st IEEE International Conference on Ubiquitous Intelligence and Computing (UIC 2024)

Volunteer

  • Volunteer for the 15th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics (ACM BCB 2024)
  • Volunteer for the 20th Public Science Day of the Chinese Academy of Sciences, a national science popularization event

Awards

  • 2020, Outstanding Graduate of Hunan Province, China
  • 2020, Outstanding Graduate of the University of South China