Search Results for author: Xinyi Tong

Found 6 papers, 3 papers with code

SSLCL: An Efficient Model-Agnostic Supervised Contrastive Learning Framework for Emotion Recognition in Conversations

1 code implementation • 25 Oct 2023 • Tao Shi, Xiao Liang, Yaoyuan Liang, Xinyi Tong, Shao-Lun Huang

To address these challenges, we propose an efficient and model-agnostic SCL framework named Supervised Sample-Label Contrastive Learning with Soft-HGR Maximal Correlation (SSLCL), which eliminates the need for a large batch size and can be seamlessly integrated with existing ERC models without introducing any model-specific assumptions.

Contrastive Learning • Emotion Recognition
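The Soft-HGR maximal correlation objective that SSLCL builds on is documented in prior work: maximize the expected inner product of paired (zero-mean) feature embeddings minus half the trace of the product of their covariance matrices. The sketch below computes that objective for paired sample/label embeddings; treating it as SSLCL's exact loss is an assumption based only on the abstract.

```python
def soft_hgr(f, g):
    """Soft-HGR maximal correlation objective (sketch).

    f, g: lists of paired feature vectors, one per sample
    (e.g. sample embeddings and label embeddings in SSLCL's setting).
    Returns E[f^T g] - 0.5 * tr(Cov(f) @ Cov(g)) after centering.
    """
    n, d = len(f), len(f[0])

    def center(m):
        # subtract the per-dimension mean so features are zero-mean
        means = [sum(row[j] for row in m) / n for j in range(d)]
        return [[row[j] - means[j] for j in range(d)] for row in m]

    fc, gc = center(f), center(g)

    # expected inner product of paired features
    corr = sum(sum(a * b for a, b in zip(fr, gr))
               for fr, gr in zip(fc, gc)) / n

    def cov(m):
        # unbiased d x d covariance matrix
        return [[sum(row[i] * row[j] for row in m) / (n - 1)
                 for j in range(d)] for i in range(d)]

    cf, cg = cov(fc), cov(gc)
    # trace of cov_f @ cov_g
    tr = sum(cf[i][j] * cg[j][i] for i in range(d) for j in range(d))
    return corr - 0.5 * tr
```

The trace term acts as a soft whitening penalty, which is what removes the large-batch requirement that plain supervised contrastive losses tend to have.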

Personalized Federated Learning with Feature Alignment and Classifier Collaboration

3 code implementations • 20 Jun 2023 • Jian Xu, Xinyi Tong, Shao-Lun Huang

Data heterogeneity is one of the most challenging issues in federated learning, which motivates a variety of approaches to learn personalized models for participating clients.

Personalized Federated Learning • Representation Learning
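The title names two ingredients: feature alignment and classifier collaboration. A common way to realize the first (and a plausible reading of this paper; the exact formulation here is an assumption) is a prototype-style regularizer that pulls each client's features toward global class centroids during local training:

```python
def feature_alignment_loss(features, labels, global_protos):
    """Hedged sketch of a feature-alignment penalty for personalized FL.

    features: list of local feature vectors (one per sample)
    labels:   list of class labels, aligned with `features`
    global_protos: dict mapping class label -> server-aggregated
                   class centroid (hypothetical name)

    Penalizes squared distance between each sample's feature and the
    global centroid of its class, averaged over matched samples.
    """
    total, count = 0.0, 0
    for feat, y in zip(features, labels):
        proto = global_protos.get(y)
        if proto is None:
            continue  # client may lack some classes under heterogeneity
        total += sum((a - b) ** 2 for a, b in zip(feat, proto))
        count += 1
    return total / max(count, 1)
```

Added to the local cross-entropy objective, such a term keeps heterogeneous clients' representation spaces compatible, which is what makes sharing or combining classifier heads across clients meaningful.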

Real Spike: Learning Real-valued Spikes for Spiking Neural Networks

1 code implementation • 13 Oct 2022 • Yufei Guo, Liwen Zhang, Yuanpei Chen, Xinyi Tong, Xiaode Liu, YingLei Wang, Xuhui Huang, Zhe Ma

Motivated by this assumption, a training-inference decoupling method for SNNs named Real Spike is proposed, which enjoys unshared convolution kernels and binary spikes at inference time while maintaining shared convolution kernels and real-valued spikes during training.
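The decoupling described above rests on a linearity argument: convolving a shared kernel with a scaled (real-valued) spike gives the same output as convolving a re-parameterized kernel with the binary spike. The toy 1-D example below illustrates that equivalence; the kernel, scale, and spike values are made up for the demo, not taken from the paper.

```python
def conv1d(kernel, signal):
    """Valid 1-D convolution (cross-correlation; enough for the demo)."""
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# Training time (assumption, following the abstract): a shared kernel w
# sees real-valued spikes a * s, where s is binary and a is a learned scale.
w = [0.5, -1.0, 2.0]
s = [1, 0, 1, 1, 0]   # binary spike train
a = 1.5               # learned real-valued scale
train_out = conv1d(w, [a * x for x in s])

# Inference time: fold the scale into a (now unshared) kernel so the
# network propagates only binary spikes. By linearity the outputs match.
w_folded = [a * wi for wi in w]
infer_out = conv1d(w_folded, s)
```

Folding the scale per neuron is what makes the inference-time kernels unshared even though a single kernel was trained, which is the trade the abstract describes.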

RecDis-SNN: Rectifying Membrane Potential Distribution for Directly Training Spiking Neural Networks

no code implementations • CVPR 2022 • Yufei Guo, Xinyi Tong, Yuanpei Chen, Liwen Zhang, Xiaode Liu, Zhe Ma, Xuhui Huang

Unfortunately, with the propagation of binary spikes, the distribution of membrane potential shifts, leading to degeneration, saturation, and gradient mismatch problems that are disadvantageous to network optimization and convergence.

Quantization

A Mathematical Framework for Quantifying Transferability in Multi-source Transfer Learning

no code implementations • NeurIPS 2021 • Xinyi Tong, Xiangxiang Xu, Shao-Lun Huang, Lizhong Zheng

Current transfer learning algorithm designs mainly focus on the similarities between source and target tasks, while the impacts of the sample sizes of these tasks are often not sufficiently addressed.

Image Classification • Transfer Learning
