no code implementations • 31 Jan 2024 • Mengxi Liu, Vitor Fortes Rey, Yu Zhang, Lala Shakti Swarup Ray, Bo Zhou, Paul Lukowicz
While IMUs are currently the prominent fitness tracking modality, through iMove we show that bio-impedance can help improve IMU-based fitness tracking through sensor fusion and contrastive learning. To evaluate our methods, we conducted an experiment in which ten subjects performed six upper-body fitness activities over five days, collecting synchronized data from bio-impedance across the two wrists and an IMU on the left wrist. The contrastive learning framework uses the two modalities to train a better IMU-only classification model, with bio-impedance required only during the training phase; this improved the average macro F1 score with a single-IMU input by 3.22%, reaching 84.71% compared to 81.49% for the IMU baseline model.
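The abstract describes contrastive learning between synchronized modalities where only one modality is kept at inference. A minimal sketch of the cross-modal contrastive objective (an InfoNCE-style loss, a common choice; the paper's exact loss, embedding sizes, and names here are assumptions) might look like:

```python
import numpy as np

def info_nce(z_imu, z_bio, temperature=0.1):
    """Cross-modal InfoNCE-style loss: embeddings of the same time window
    from the IMU and bio-impedance encoders (rows of z_imu, z_bio) are
    treated as positive pairs; all other pairings in the batch are negatives."""
    # L2-normalize each modality's embeddings
    z_imu = z_imu / np.linalg.norm(z_imu, axis=1, keepdims=True)
    z_bio = z_bio / np.linalg.norm(z_bio, axis=1, keepdims=True)
    logits = z_imu @ z_bio.T / temperature  # pairwise cosine similarities
    # Positives lie on the diagonal (same window, two modalities)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z_imu))
    return -log_probs[idx, idx].mean()
```

At training time this loss pulls the IMU encoder's features toward the bio-impedance features of the same activity window; at inference only the IMU encoder is kept, which matches the paper's claim that bio-impedance is needed only during training.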
no code implementations • 11 Jan 2024 • Sizhen Bian, Mengxi Liu, Bo Zhou, Paul Lukowicz, Michele Magno
To this end, we first sorted the explorations into three domains according to the body forms involved: body-part electric field, whole-body electric field, and body-to-body electric field, and enumerated the state-of-the-art works in each domain with a detailed survey of the underlying sensing techniques and targeted applications.
no code implementations • 3 Jan 2024 • Daniel Geißler, Bo Zhou, Mengxi Liu, Sungho Suh, Paul Lukowicz
This work offers a heuristic evaluation of how variations in machine learning training regimes and learning paradigms affect the energy consumption of computing, especially on HPC hardware, from a life-cycle-aware perspective.
no code implementations • 3 Jan 2024 • Mengxi Liu, Zimin Zhao, Daniel Geißler, Bo Zhou, Sungho Suh, Paul Lukowicz
Recent advancements in Artificial Neural Networks have significantly improved human activity recognition using multiple time-series sensors.
no code implementations • 22 May 2023 • Mengxi Liu, Bo Zhou, Zimin Zhao, Hyeonseok Hong, Hyun Kim, Sungho Suh, Vitor Fortes Rey, Paul Lukowicz
In this work, we propose FieldHAR, an open-source, scalable, end-to-end RTL framework for complex human activity recognition (HAR) from heterogeneous sensors using artificial neural networks (ANNs) optimized for FPGA or ASIC integration.
no code implementations • 10 Nov 2022 • Mengxi Liu, Sizhen Bian, Paul Lukowicz
This work describes a novel non-contact, wearable, real-time eye blink detection solution based on capacitive sensing technology.
no code implementations • 8 Oct 2022 • Mengxi Liu, Sizhen Bian, Bo Zhou, Agnes Grünerbl, Paul Lukowicz
We studied the frequency sensitivity of the electrochemical impedance spectrum regarding distinct beverages and the importance of features like amplitude, phase, and real and imaginary components for beverage classification.
no code implementations • 3 Oct 2022 • Mengxi Liu, Sungho Suh, Bo Zhou, Agnes Gruenerbl, Paul Lukowicz
Meanwhile, we evaluate the impact of the infrared array sensor on the recognition accuracy of these activities.
no code implementations • 18 Jul 2022 • Sizhen Bian, Kexuan Guo, Mengxi Liu, Bo Zhou, Paul Lukowicz
In more detail, the transmitters generate oscillating magnetic fields with a registered sequence. The receiver senses the strength of the induced magnetic field with a customized three-axis coil, configured as an LC oscillator at the same oscillating frequency, so that an induced current appears when the receiver is located within the generated magnetic field.
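The receiver coil and its capacitor form an LC tank tuned to the transmitter frequency. A minimal sketch of the standard resonance relation, with illustrative component values not taken from the paper, might look like:

```python
import math

def lc_resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency (Hz) of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C)).
    The coil's inductance and the tuning capacitance must be matched so this
    frequency equals the transmitter's oscillating frequency."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical values: a 1 mH receiver coil with a 1 nF capacitor
f = lc_resonant_frequency(1e-3, 1e-9)  # about 159 kHz
```

When the tank is tuned this way, the induced current at the receiver peaks at the transmitter frequency, which is what makes the registered transmission sequence detectable.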
1 code implementation • IEEE International Geoscience and Remote Sensing Symposium IGARSS 2021 • Mengxi Liu, Qian Shi
In view of the shortcomings of current change detection methods, we propose a deeply supervised attention metric-based network (DSAMNet) for bi-temporal image change detection.
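In a metric-based change detection network, the decision is made by comparing deep features of the two temporal images rather than classifying their concatenation. A minimal sketch of that metric step, using hypothetical per-pixel feature maps and a threshold (DSAMNet's learned attention and deep supervision are omitted), might look like:

```python
import numpy as np

def change_map(feat_t1, feat_t2, threshold=1.0):
    """Per-pixel Euclidean distance between bi-temporal feature maps of
    shape (H, W, C); pixels whose feature distance exceeds the threshold
    are flagged as changed."""
    dist = np.linalg.norm(feat_t1 - feat_t2, axis=-1)
    return dist, dist > threshold

# Hypothetical 2x2 feature maps with 3 channels: one pixel differs
f1 = np.zeros((2, 2, 3))
f2 = np.zeros((2, 2, 3))
f2[0, 0] = [2.0, 0.0, 0.0]  # changed pixel
dist, mask = change_map(f1, f2)
```

In the full network, the feature maps would come from a shared (siamese) backbone applied to both images, so that unchanged regions map to nearby points in feature space.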
1 code implementation • 27 Feb 2021 • Mengxi Liu, Qian Shi, Andrea Marinoni, Da He, Xiaoping Liu, Liangpei Zhang
The experimental results demonstrate the superiority of the proposed method, which not only outperforms all baselines, with the highest F1 scores of 87.40% on the building change detection dataset and 92.94% on the change detection dataset, but also obtains the best accuracies in experiments performed with images having a 4x and 8x resolution difference.