1 code implementation • 26 Feb 2024 • Hao Wang, Zeyu Gao, Chao Zhang, Zihan Sha, Mingyang Sun, Yuchen Zhou, Wenyu Zhu, Wenju Sun, Han Qiu, Xi Xiao
At its core, our approach achieves superior transfer learning capability by effectively aligning binary code with its semantic explanation (in natural language), resulting in a model able to generate better embeddings for binary code.
1 code implementation • 5 Dec 2023 • Yuchen Zhou, Jiayuan Gu, Xuanlin Li, Minghua Liu, Yunhao Fang, Hao Su
Open-world 3D part segmentation is pivotal in diverse applications such as robotics and AR/VR.
1 code implementation • 21 Nov 2023 • Zeyu Gao, Hao Wang, Yuchen Zhou, Wenyu Zhu, Chao Zhang
Given the significant successes of large language models (LLMs) in various tasks, there is growing anticipation of their efficacy in vulnerability detection.
1 code implementation • 15 Nov 2023 • Yuchen Zhou, Emmy Liu, Graham Neubig, Michael J. Tarr, Leila Wehbe
In this work, we systematically explore the divergences between human and machine language processing by examining the differences between LM representations and human brain responses to language as measured by Magnetoencephalography (MEG) across two datasets in which subjects read and listened to narrative stories.
no code implementations • 4 Nov 2023 • Yuchen Zhou, Yuxin Chen
Tensor clustering, which seeks to extract underlying cluster structures from noisy tensor observations, has gained increasing attention.
1 code implementation • 31 Oct 2023 • Yichi Zhang, Jiayi Pan, Yuchen Zhou, Rui Pan, Joyce Chai
Vision-Language Models (VLMs) are trained on vast amounts of data captured by humans emulating our understanding of the world.
1 code implementation • 17 Oct 2023 • Xinyu Li, Yao Xiao, Yuchen Zhou
By performing SPLEE to obtain a high-dimensional embedding of the large-scale graph and then using t-SGNE to reduce its dimensionality for visualization, we are able to visualize graphs with up to 300K nodes and 1M edges within 5 minutes, achieving approximately a 10% improvement in visualization quality.
1 code implementation • 24 Aug 2023 • Wenyu Zhu, Hao Wang, Yuchen Zhou, JiaMing Wang, Zihan Sha, Zeyu Gao, Chao Zhang
By feeding explicit knowledge as additional inputs to the Transformer and fusing implicit knowledge with a novel pre-training task, kTrans provides a new perspective on incorporating domain knowledge into a Transformer framework.
1 code implementation • 5 Apr 2023 • Yuchen Zhou, Michael J. Tarr, Daniel Yurovsky
Based on these results, we conclude that verb acquisition is influenced by all three sources of complexity, but that the variability of visual structure poses the most significant challenge for verb learning.
no code implementations • 10 Mar 2023 • Yuchen Zhou, Yuxin Chen
This paper is concerned with estimating the column subspace of a low-rank matrix $\boldsymbol{X}^\star \in \mathbb{R}^{n_1\times n_2}$ from contaminated data.
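As a generic illustration of the subspace-estimation task described here (plain truncated SVD of the observed matrix, not the paper's specific estimator or its treatment of contamination), a minimal numpy sketch:

```python
import numpy as np

def estimate_column_subspace(Y, r):
    """Estimate the rank-r column subspace of a contaminated
    low-rank matrix Y via its top-r left singular vectors."""
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    return U[:, :r]  # orthonormal basis, shape (n1, r)

# Demo: rank-2 ground truth plus small additive noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 40))  # rank-2 signal
Y = X + 0.01 * rng.normal(size=(50, 40))                 # contaminated data
U_hat = estimate_column_subspace(Y, 2)

# Residual of projecting the true columns onto the estimated subspace
# should be small (on the order of the noise level).
residual = X - U_hat @ (U_hat.T @ X)
print(np.linalg.norm(residual) / np.linalg.norm(X))
```

Under heavier or structured contamination, a plain SVD like this degrades, which is precisely the regime such papers analyze.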
no code implementations • ACL 2021 • Ruipeng Jia, Yanan Cao, Fang Fang, Yuchen Zhou, Zheng Fang, Yanbing Liu, Shi Wang
In this paper, we conceptualize single-document extractive summarization as a rebalancing problem and present a deep differential amplifier framework.
no code implementations • 29 Dec 2020 • Dong Xia, Anru R. Zhang, Yuchen Zhou
In all these models, we observe that, unlike many matrix/vector settings in existing work, debiasing is not required to establish the asymptotic distribution of estimates or to make statistical inference on low-rank tensors.
1 code implementation • 6 Oct 2020 • Yuchen Zhou, Anru R. Zhang, Lili Zheng, Yazhen Wang
This paper studies a general framework for high-order tensor SVD.
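As a textbook instance of high-order tensor SVD (the classical HOSVD via mode unfoldings; a generic illustration, not the framework proposed in this paper), a minimal numpy sketch:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Higher-order SVD: top-r singular vectors of each unfolding,
    then the core tensor obtained by multilinear projection."""
    U = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
         for k, r in enumerate(ranks)]
    core = T
    for k, Uk in enumerate(U):
        core = np.moveaxis(np.tensordot(Uk.T, core, axes=(1, k)), 0, k)
    return core, U

# Demo: a tensor with multilinear rank (2, 2, 2) is recovered exactly.
rng = np.random.default_rng(1)
T = np.einsum('ir,jr,kr->ijk', rng.normal(size=(6, 2)),
              rng.normal(size=(7, 2)), rng.normal(size=(8, 2)))
core, U = hosvd(T, ranks=(2, 2, 2))

# Reconstruct by multiplying the core back along each mode.
T_hat = core
for k, Uk in enumerate(U):
    T_hat = np.moveaxis(np.tensordot(Uk, T_hat, axes=(1, k)), 0, k)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))  # near zero
```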
no code implementations • NeurIPS 2019 • Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep K. Ravikumar
We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$.
4 code implementations • 6 Nov 2019 • Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, Maximilien Breughe, Mark Charlebois, William Chou, Ramesh Chukka, Cody Coleman, Sam Davis, Pan Deng, Greg Diamos, Jared Duke, Dave Fick, J. Scott Gardner, Itay Hubara, Sachin Idgunji, Thomas B. Jablin, Jeff Jiao, Tom St. John, Pankaj Kanwar, David Lee, Jeffery Liao, Anton Lokhmotov, Francisco Massa, Peng Meng, Paulius Micikevicius, Colin Osborne, Gennady Pekhimenko, Arun Tejusve Raghunath Rajan, Dilip Sequeira, Ashish Sirasao, Fei Sun, Hanlin Tang, Michael Thomson, Frank Wei, Ephrem Wu, Lingjie Xu, Koichi Yamada, Bing Yu, George Yuan, Aaron Zhong, Peizhao Zhang, Yuchen Zhou
Machine-learning (ML) hardware and software system demand is burgeoning.
no code implementations • 30 Oct 2019 • Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep Ravikumar
We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$.
no code implementations • 21 Sep 2019 • T. Tony Cai, Anru R. Zhang, Yuchen Zhou
We study sparse group Lasso for high-dimensional double sparse linear regression, where the parameter of interest is simultaneously element-wise and group-wise sparse.
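The sparse group Lasso penalty combines an element-wise $\ell_1$ term with a group-wise $\ell_2$ term, inducing both kinds of sparsity at once. A minimal sketch of the penalty itself (the grouping and weights here are illustrative, not taken from the paper):

```python
import numpy as np

def sparse_group_lasso_penalty(beta, groups, lam1, lam2):
    """lam1 * ||beta||_1  +  lam2 * sum_g ||beta_g||_2,
    where `groups` is a list of index arrays partitioning beta."""
    l1 = lam1 * np.sum(np.abs(beta))
    l2 = lam2 * sum(np.linalg.norm(beta[g]) for g in groups)
    return l1 + l2

beta = np.array([1.0, -2.0, 0.0, 0.0, 3.0, 0.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
val = sparse_group_lasso_penalty(beta, groups, lam1=1.0, lam2=1.0)
print(val)  # 6.0 + sqrt(5) + 3.0, since the middle group is entirely zero
```

Note the middle group contributes nothing to either term, which is exactly the group-wise sparsity the penalty encourages.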
no code implementations • 21 Oct 2018 • Anru R. Zhang, Yuchen Zhou
The non-asymptotic tail bounds of random variables play crucial roles in probability, statistics, and machine learning.
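A standard example of the kind of bound in question is Hoeffding's inequality for bounded independent variables (a classical result, not one specific to this work):

```latex
\Pr\!\Big(\Big|\sum_{i=1}^n (X_i - \mathbb{E}X_i)\Big| \ge t\Big)
  \le 2\exp\!\Big(-\frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\Big),
\qquad X_i \in [a_i, b_i] \ \text{independent}.
```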
no code implementations • ICML 2018 • Wenlong Mou, Yuchen Zhou, Jun Gao, Li-Wei Wang
We study the problem of generalization guarantees for dropout training.