1 code implementation • 19 Mar 2024 • Jiyi Chen, Pengyu Li, Yutong Wang, Pei-Cheng Ku, Qing Qu
This work proposes a deep learning (DL)-based framework, namely Sim2Real, for spectral signal reconstruction in reconstructive spectroscopy, focusing on efficient data sampling and fast inference time.
1 code implementation • 12 Mar 2024 • Yutong Wang, Rishi Sonthalia, Wei Hu
Under a random matrix theoretic assumption on the data distribution and an eigendecay assumption on the data covariance matrix $\boldsymbol{\Sigma}$, we demonstrate that any near-interpolator exhibits rapid norm growth: for $\tau$ fixed, $\boldsymbol{\beta}$ has squared $\ell_2$-norm $\mathbb{E}[\|{\boldsymbol{\beta}}\|_{2}^{2}] = \Omega(n^{\alpha})$, where $n$ is the number of samples and $\alpha > 1$ is the exponent of the eigendecay, i.e., $\lambda_i(\boldsymbol{\Sigma}) \sim i^{-\alpha}$.
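The norm-growth phenomenon is easy to observe numerically. The sketch below draws rows with covariance eigenvalues $\lambda_i = i^{-\alpha}$, fits the minimum-norm interpolator of random labels (an extreme special case of a near-interpolator, not the paper's exact setting), and reports how its squared norm grows with $n$; all names and parameter values here are illustrative.

```python
import numpy as np

def min_norm_interpolator_norm(n, d, alpha, rng):
    """Squared l2-norm of the minimum-norm interpolator for one random draw.

    Rows x_i have covariance Sigma with eigendecay lambda_i = i^{-alpha};
    labels are random signs, so interpolation is pure memorization.
    """
    lam = np.arange(1, d + 1) ** (-alpha)           # lambda_i(Sigma) ~ i^{-alpha}
    X = rng.standard_normal((n, d)) * np.sqrt(lam)  # rows ~ N(0, Sigma)
    y = rng.choice([-1.0, 1.0], size=n)
    beta = X.T @ np.linalg.solve(X @ X.T, y)        # min-norm solution of X beta = y
    return float(beta @ beta)

rng = np.random.default_rng(0)
alpha, d = 2.0, 4000
for n in (25, 50, 100):
    norms = [min_norm_interpolator_norm(n, d, alpha, rng) for _ in range(5)]
    print(n, np.mean(norms))
```

With $\alpha = 2$ the averaged squared norm should increase steeply as $n$ grows, consistent with the $\Omega(n^{\alpha})$ lower bound quoted above.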
1 code implementation • 21 Feb 2024 • Yutong Wang, Chaoyang Jiang, Xieyuanli Chen
Meanwhile, local bundle adjustment is performed using the object- and point-based covisibility graphs in our visual object mapping process.
no code implementations • 24 Dec 2023 • Jianqiang Ren, Chao He, Lin Liu, Jiahao Chen, Yutong Wang, Yafei Song, Jianfang Li, Tangli Xue, Siqi Hu, Tao Chen, Kunkun Zheng, Jianjing Xiang, Liefeng Bo
With the emergence of AI agents and the Metaverse, there is a growing demand for customized and expressive 3D characters, but creating them with traditional computer graphics tools is a complex and time-consuming task.
no code implementations • 29 Nov 2023 • Yutong Wang, Clayton Scott
The notion of margin loss has been central to the development and analysis of algorithms for binary classification.
1 code implementation • 24 Oct 2023 • Pengyu Li, Yutong Wang, Xiao Li, Qing Qu
We study deep neural networks for the multi-label classification (MLab) task through the lens of neural collapse (NC).
no code implementations • 10 Oct 2023 • Ren-Jian Wang, Ke Xue, Yutong Wang, Peng Yang, Haobo Fu, Qiang Fu, Chao Qian
DivHF learns a behavior descriptor consistent with human preference by querying human feedback.
no code implementations • 4 Oct 2023 • Zhiwei Xu, Yutong Wang, Spencer Frei, Gal Vardi, Wei Hu
Second, they can undergo a period of classical, harmful overfitting -- achieving a perfect fit to training data with near-random performance on test data -- before transitioning ("grokking") to near-optimal generalization later in training.
1 code implementation • 3 Aug 2023 • Minhao Zou, Zhongxue Gan, Yutong Wang, Junheng Zhang, Dongyan Sui, Chun Guan, Siyang Leng
In this work, we design UniG-Encoder, a universal feature encoder for both graph and hypergraph representation learning.
no code implementations • 14 Feb 2023 • Yutong Wang, Clayton D. Scott
Gamma-Phi losses constitute a family of multiclass classification loss functions that generalize the logistic and other common losses, and have found application in the boosting literature.
no code implementations • 9 Aug 2022 • Ke Xue, Yutong Wang, Cong Guan, Lei Yuan, Haobo Fu, Qiang Fu, Chao Qian, Yang Yu
Generating agents that can achieve zero-shot coordination (ZSC) with unseen partners is a new challenge in cooperative multi-agent reinforcement learning (MARL).
1 code implementation • 1 Jun 2022 • Yutong Wang, Renze Lou, Kai Zhang, MaoYan Chen, Yujiu Yang
To address these problems, in this work, we propose a novel learning framework named MORE (Metric learning-based Open Relation Extraction).
no code implementations • 19 May 2022 • Yutong Wang, Clayton D. Scott
Recent research in the theory of overparametrized learning has sought to establish generalization guarantees in the interpolating regime.
no code implementations • 7 Apr 2022 • Yutong Wang, Mehul Damani, Pamela Wang, Yuhong Cao, Guillaume Sartoretti
This review aims to provide an analysis of the state-of-the-art in distributed MARL for multi-robot cooperation.
1 code implementation • 4 Mar 2022 • Jianxin Zhang, Yutong Wang, Clayton Scott
Learning from label proportions (LLP) is a weakly supervised classification problem where data points are grouped into bags, and the label proportions within each bag are observed instead of the instance-level labels.
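The LLP supervision signal can be made concrete with a generic bag-level proportion-matching objective. This is a minimal illustrative sketch, not the paper's actual method or loss; the function names and the squared-error choice are assumptions.

```python
import numpy as np

def bag_proportion_loss(scores, bags, target_props):
    """Mean squared error between predicted and observed positive
    proportions per bag.

    scores: per-instance probabilities of the positive class, shape (n,)
    bags: bag index of each instance, shape (n,)
    target_props: observed positive proportion of each bag, shape (num_bags,)
    """
    loss = 0.0
    for b, p in enumerate(target_props):
        in_bag = scores[bags == b]          # instances belonging to bag b
        loss += (in_bag.mean() - p) ** 2    # match the bag's label proportion
    return loss / len(target_props)

# Two bags of two instances each: the first is all-positive, the second all-negative.
scores = np.array([1.0, 1.0, 0.0, 0.0])
bags = np.array([0, 0, 1, 1])
print(bag_proportion_loss(scores, bags, np.array([1.0, 0.0])))  # proportions match
```

Only the aggregate proportions enter the loss, which is exactly what makes LLP weakly supervised: the instance-level labels never appear.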
1 code implementation • 28 Jan 2022 • Yutong Wang, Guillaume Sartoretti
Our comparison results show that FCMNet outperforms state-of-the-art communication-based reinforcement learning methods in all StarCraft II micromanagement tasks, and value decomposition methods in certain tasks.
1 code implementation • ICLR 2022 • Yutong Wang, Clayton D. Scott
Indeed, existing applications of VC theory to large networks obtain upper bounds on VC dimension that are proportional to the number of weights, and for a large class of networks, these upper bounds are known to be tight.
no code implementations • ICLR 2022 • Yutong Wang, Ke Xue, Chao Qian
However, due to their inefficient selection mechanisms, these methods cannot fully guarantee both high quality and diversity.
no code implementations • 17 May 2021 • Andrey Ignatov, Andres Romero, Heewon Kim, Radu Timofte, Chiu Man Ho, Zibo Meng, Kyoung Mu Lee, Yuxiang Chen, Yutong Wang, Zeyu Long, Chenhao Wang, Yifei Chen, Boshen Xu, Shuhang Gu, Lixin Duan, Wen Li, Wang Bofei, Zhang Diankai, Zheng Chengjian, Liu Shaoli, Gao Si, Zhang Xiaofeng, Lu Kaidi, Xu Tianyu, Zheng Hui, Xinbo Gao, Xiumei Wang, Jiaming Guo, Xueyi Zhou, Hao Jia, Youliang Yan
Video super-resolution has recently become one of the most important mobile-related problems due to the rise of video communication and streaming services.
1 code implementation • 10 Feb 2021 • Yutong Wang, Clayton D. Scott
Recent empirical evidence suggests that the Weston-Watkins support vector machine is among the best performing multiclass extensions of the binary SVM.
no code implementations • 23 Jan 2021 • Ziqi Tang, Yutong Wang, Jiebo Luo
Next, we perform exploratory data analysis on the collected data.
no code implementations • NeurIPS 2020 • Yutong Wang, Clayton D. Scott
A recent empirical comparison of nine such formulations [Doğan et al., 2016] recommends the variant proposed by Weston and Watkins (WW), despite the fact that the WW-hinge loss is not calibrated with respect to the 0-1 loss.
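For reference, the WW-hinge loss discussed above sums the pairwise margin violations against the true class. A minimal sketch for a single example (vectorized form and names are my own):

```python
import numpy as np

def ww_hinge(scores, y):
    """Weston-Watkins multiclass hinge loss for one example.

    scores: class scores f(x), shape (k,); y: true class index.
    Sums max(0, 1 - (f_y - f_j)) over all j != y.
    """
    margins = scores[y] - scores             # relative margins f_y - f_j
    losses = np.maximum(0.0, 1.0 - margins)  # hinge on each margin
    losses[y] = 0.0                          # exclude the j == y term
    return float(losses.sum())

print(ww_hinge(np.array([3.0, 0.0, 0.0]), 0))  # all margins >= 1: loss 0.0
print(ww_hinge(np.zeros(3), 0))                # two margin violations: loss 2.0
```

The loss is zero exactly when the true class beats every other class by margin at least 1, which is the property the calibration analysis in this line of work interrogates.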
no code implementations • 1 Jul 2019 • Yutong Wang, Jiyuan Zheng, Qijiong Liu, Zhou Zhao, Jun Xiao, Yueting Zhuang
More specifically, we devise a discriminator, Relation Guider, to capture the relations between the whole passage and the associated answer; the Multi-Interaction mechanism is then deployed to transfer this knowledge dynamically to our question generation system.