no code implementations • 28 Nov 2023 • Xiaohui Chen, Yongfei Liu, Yingxiang Yang, Jianbo Yuan, Quanzeng You, Li-Ping Liu, Hongxia Yang
Recent advancements in text-to-image (T2I) generative models have shown remarkable capabilities in producing diverse and imaginative visuals based on text prompts.
no code implementations • 14 Nov 2023 • Han Gao, Xu Han, Xiantao Fan, Luning Sun, Li-Ping Liu, Lian Duan, Jian-Xun Wang
A notable feature of our approach is the method proposed for long-span flow sequence generation, which is based on autoregressive gradient-based conditional sampling, eliminating the need for cumbersome retraining processes.
no code implementations • 22 Oct 2023 • Mingyang Wu, Xiaohui Chen, Li-Ping Liu
Recently developed deep neural models like NetGAN, CELL, and Variational Graph Autoencoders have made progress but face limitations in replicating key graph statistics when generating large graphs.
1 code implementation • NeurIPS 2023 • Xiaohui Chen, Yinkai Wang, Yuanqi Du, Soha Hassoun, Li-Ping Liu
Self-supervised training methods for transformers have demonstrated remarkable performance across various domains.
1 code implementation • 15 Jun 2023 • Gabriel Appleby, Linfeng Liu, Li-Ping Liu
Spatial interpolation is a class of estimation problems where locations with known values are used to estimate values at other locations, with an emphasis on harnessing spatial locality and trends.
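As a minimal illustration of the spatial-interpolation setting (a classic baseline, not the model proposed in the paper), inverse-distance weighting estimates each unknown value as a distance-weighted average of known values, directly exploiting spatial locality. The function name and parameters below are illustrative.

```python
import numpy as np

def idw_interpolate(known_xy, known_vals, query_xy, power=2.0, eps=1e-12):
    """Inverse-distance weighting: each query point's value is a weighted
    average of known values, with weights decaying as distance**-power,
    so nearby observations dominate the estimate."""
    # Pairwise distances between query points and known points.
    d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)          # eps guards exact coincidence
    w /= w.sum(axis=1, keepdims=True)     # normalize weights per query
    return w @ known_vals
```

A query point equidistant from two known values recovers their mean, and a query at a known location recovers that value.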
no code implementations • 25 May 2023 • Xiaohui Chen, Jiankai Sun, Taiqing Wang, Ruocheng Guo, Li-Ping Liu, Aonan Zhang
Most subsampling methods are model-based and often require a pre-trained pilot model to measure data importance via, e.g., sample hardness.
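A generic sketch of the pilot-model-based subsampling that the snippet describes (the baseline approach, not this paper's method): a pilot model's per-example losses serve as hardness scores, and examples are sampled with probability proportional to hardness. The function name is hypothetical.

```python
import numpy as np

def hardness_subsample(pilot_losses, m, rng=None):
    """Generic model-based subsampling sketch: sample m examples without
    replacement, with probability proportional to the pilot model's
    per-example loss ('hardness')."""
    rng = rng or np.random.default_rng(0)
    p = pilot_losses / pilot_losses.sum()   # hardness -> sampling probs
    return rng.choice(len(pilot_losses), size=m, replace=False, p=p)
```

An example with zero pilot loss has zero probability and is never selected, which shows how the pilot model's judgments shape the subsample.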
1 code implementation • 6 May 2023 • Xiaohui Chen, Jiaxing He, Xu Han, Li-Ping Liu
The empirical study shows that EDGE is much more efficient than competing methods and can generate large graphs with thousands of nodes.
1 code implementation • 26 Nov 2022 • Han Gao, Xu Han, Jiaoyang Huang, Jian-Xun Wang, Li-Ping Liu
Recently, the Transformer architecture has shown good performance in graph learning tasks.
no code implementations • 19 Nov 2022 • Xiaohui Chen, Yukun Li, Aonan Zhang, Li-Ping Liu
Learning to generate graphs is challenging as a graph is a set of pairwise connected, unordered nodes encoding complex combinatorial structures.
1 code implementation • 19 Oct 2022 • Linfeng Liu, Xu Han, Dawei Zhou, Li-Ping Liu
In this work, we convert graph pruning to a problem of node relabeling and then relax it to a differentiable problem.
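One common way to make such a node-relabeling formulation differentiable (a sketch of the general relaxation idea, not necessarily the paper's exact construction): replace each node's binary keep/drop label with a sigmoid probability, and down-weight every edge by the product of its endpoints' keep probabilities, so a pruning objective becomes differentiable in the node logits.

```python
import numpy as np

def soft_prune_adjacency(adj, node_logits):
    """Relax binary keep/drop node labels to probabilities p_i = sigmoid(z_i);
    edge (i, j) is scaled by p_i * p_j, yielding a 'soft-pruned' adjacency
    matrix that is differentiable in the logits z."""
    p = 1.0 / (1.0 + np.exp(-node_logits))   # soft "keep" probabilities
    return adj * np.outer(p, p)              # soft-pruned adjacency
```

Driving a node's logit to negative infinity recovers hard pruning: all of that node's edge weights go to zero.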
2 code implementations • 23 Jun 2022 • Patrick Feeney, Sarah Schneider, Panagiotis Lymperopoulos, Li-Ping Liu, Matthias Scheutz, Michael C. Hughes
In order for artificial agents to successfully perform tasks in changing environments, they must be able to both detect and adapt to novelty.
no code implementations • 25 Mar 2022 • Xinmeng Li, Hao Zhu, Li-Ping Liu, Soha Hassoun
We show that annotation performance, for ESP and other models, is a strong function of the number of molecules in the candidate set and their similarity to the target molecule.
no code implementations • ICLR 2022 • Xu Han, Han Gao, Tobias Pfaff, Jian-Xun Wang, Li-Ping Liu
Graph-based next-step prediction models have recently been very successful in modeling complex high-dimensional physical systems on irregular meshes.
1 code implementation • 28 Sep 2021 • Xinmeng Li, Li-Ping Liu, Soha Hassoun
We show that each of our auxiliary tasks boosts learning of the embedding vectors, and that contrastive learning using Boost-RS outperforms attribute concatenation and multi-label learning.
no code implementations • 25 Jun 2021 • Li-Ping Liu, Ruiyuan Gu, Xiaozhe Hu
In particular, this work constructs polynomial feedforward neural networks using the product activation, a new activation function built from multiplications.
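One plausible reading of a product activation (a sketch of the general idea, not the paper's exact formulation): multiply two linear maps of the input elementwise, so each unit computes a degree-2 polynomial of the input, and stacking such layers yields polynomials of growing degree.

```python
import numpy as np

def product_activation_layer(x, W1, W2):
    """Each output unit is (x @ W1) * (x @ W2) elementwise, i.e. a product
    of two linear functions of x, hence a degree-2 polynomial in x.
    Composing L such layers gives polynomials of degree 2**L."""
    return (x @ W1) * (x @ W2)
```

For x = (1, 2) with W1 selecting the first coordinate and W2 the second, the unit computes 1 * 2 = 2, i.e. the monomial x1 * x2.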
1 code implementation • 4 Jun 2021 • Linfeng Liu, Michael C. Hughes, Soha Hassoun, Li-Ping Liu
In this work, we propose a new model, Stochastic Iterative Graph MAtching (SIGMA), to address the graph matching problem.
no code implementations • 29 Mar 2021 • Linfeng Liu, Michael C. Hughes, Li-Ping Liu
We propose a new model, the Neighbor Mixture Model (NMM), for modeling node labels in a graph.
1 code implementation • 14 Dec 2020 • Xu Han, Xiaohui Chen, Li-Ping Liu
Motivated by the observation that GAN ensembles often outperform single GANs in generation tasks, we propose to construct GAN ensembles for anomaly detection.
1 code implementation • 9 Feb 2020 • Julie Jiang, Li-Ping Liu, Soha Hassoun
We develop in this work a technique, Enzymatic Link Prediction (ELP), for predicting the likelihood of an enzymatic transformation between two molecules.
1 code implementation • 12 Dec 2019 • Ramtin Hosseini, Neda Hassanpour, Li-Ping Liu, Soha Hassoun
Annotation results are in agreement with those obtained using other tools that utilize additional information in the form of spectral signatures.
no code implementations • 27 Nov 2019 • Zhi Fengy, Haoyi Xiong, Chuanyuan Song, Sijia Yang, Baoxin Zhao, Licheng Wang, Zeyu Chen, Shengwen Yang, Li-Ping Liu, Jun Huan
Our experiments on real-world data showed that SecureGBM can secure the communication and computation of LightGBM training and inference for both parties while losing less than 3% AUC, using the same number of gradient-boosting iterations, on a wide range of benchmark datasets.
no code implementations • 22 Nov 2019 • Shengwen Yang, Bing Ren, Xuhui Zhou, Li-Ping Liu
The system is built on the parameter-server architecture and aims to speed up model training by utilizing a cluster of servers when the volume of training data is large.
no code implementations • 18 Nov 2019 • Qiang Huang, Jianhui Bu, Weijian Xie, Shengwen Yang, Weijia Wu, Li-Ping Liu
Sentence matching is an essential task in QA systems and is usually reformulated as a Paraphrase Identification (PI) problem.
Ranked #13 on Paraphrase Identification on Quora Question Pairs (Accuracy metric)
2 code implementations • ICLR 2019 • Xingjian Li, Haoyi Xiong, Hanchao Wang, Yuxuan Rao, Li-Ping Liu, Zeyu Chen, Jun Huan
Instead of constraining the weights of the neural network, DELTA aims to preserve the outer-layer outputs of the target network.
no code implementations • 8 Sep 2018 • Linfeng Liu, Li-Ping Liu
Many recent inference methods approximate the posterior distribution with a simpler distribution defined on a small number of inducing points.
no code implementations • ICML 2017 • Li-Ping Liu, David M. Blei
In this paper, we develop zero-inflated embeddings, a new embedding method that is designed to learn from sparse observations.
no code implementations • 20 Oct 2015 • Li-Ping Liu, Thomas G. Dietterich, Nan Li, Zhi-Hua Zhou
This paper introduces a new approach, Transductive Top K (TTK), that seeks to minimize the hinge loss over all training instances under the constraint that exactly $k$ test instances are predicted as positive.
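The combinatorial constraint in TTK's setup can be illustrated directly (this sketch shows only the exactly-k prediction rule, not the hinge-loss optimization): given real-valued scores on the test set, exactly the k highest-scoring instances are labeled positive.

```python
import numpy as np

def topk_predict(scores, k):
    """Predict +1 for exactly the k test instances with the largest scores
    and -1 for the rest -- the hard constraint under which TTK minimizes
    the hinge loss over training instances."""
    idx = np.argsort(scores)[::-1][:k]        # indices of top-k scores
    labels = -np.ones_like(scores, dtype=int)
    labels[idx] = 1
    return labels
```

Unlike thresholding at zero, this rule guarantees the positive count is k regardless of how the scores are calibrated, which is the transductive constraint TTK enforces.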
no code implementations • 20 May 2014 • Li-Ping Liu, Daniel Sheldon, Thomas G. Dietterich
The Collective Graphical Model (CGM) models a population of independent and identically distributed individuals when only collective statistics (i.e., counts of individuals) are observed.