Search Results for author: Dapeng Wu

Found 30 papers, 7 papers with code

Spatial-Temporal DAG Convolutional Networks for End-to-End Joint Effective Connectivity Learning and Resting-State fMRI Classification

no code implementations • 16 Dec 2023 • Rui Yang, Wenrui Dai, Huajun She, Yiping P. Du, Dapeng Wu, Hongkai Xiong

To address these issues in an end-to-end manner, we model the brain network as a directed acyclic graph (DAG) to discover direct causal connections between brain regions and propose the Spatial-Temporal DAG Convolutional Network (ST-DAGCN), which jointly infers effective connectivity and classifies rs-fMRI time series by learning brain representations based on a nonlinear structural equation model.

Time Series Time Series Classification
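As a rough illustration of the DAG modeling mentioned in the ST-DAGCN abstract above, the minimal NumPy/SciPy sketch below evaluates a NOTEARS-style acyclicity score h(A) = tr(exp(A∘A)) − d, one common way to test whether a weighted adjacency matrix encodes a DAG; the paper's actual constraint and optimization may differ, and the example matrices are invented.

import numpy as np
from scipy.linalg import expm

def acyclicity(A):
    # NOTEARS-style score h(A) = tr(exp(A*A)) - d; it is zero exactly when the
    # weighted adjacency matrix A contains no directed cycles, i.e. encodes a DAG.
    return np.trace(expm(A * A)) - A.shape[0]

# Three hypothetical brain regions with directed influences 0 -> 1 -> 2 (a DAG).
A_dag = np.array([[0.0, 0.8, 0.0],
                  [0.0, 0.0, 0.5],
                  [0.0, 0.0, 0.0]])

# Adding a feedback edge 2 -> 0 creates a cycle, and the score becomes positive.
A_cyclic = A_dag.copy()
A_cyclic[2, 0] = 0.6

print(round(acyclicity(A_dag), 6))     # 0.0
print(round(acyclicity(A_cyclic), 6))  # > 0, so the graph is not a DAG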

scBiGNN: Bilevel Graph Representation Learning for Cell Type Classification from Single-cell RNA Sequencing Data

no code implementations • 16 Dec 2023 • Rui Yang, Wenrui Dai, Chenglin Li, Junni Zou, Dapeng Wu, Hongkai Xiong

A gene-level GNN is established to adaptively learn gene-gene interactions and cell representations via the self-attention mechanism, and a cell-level GNN builds on the cell-cell graph that is constructed from the cell representations generated by the gene-level GNN.

Classification Graph Representation Learning
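A minimal NumPy sketch of the bilevel idea described in the scBiGNN abstract above, with made-up dimensions and random, untrained weights: gene-level self-attention pools gene interactions into per-cell representations, a kNN cell-cell graph is built from those representations, and a single graph-convolution layer plus a softmax head yields cell-type probabilities. This mirrors only the data flow, not scBiGNN's exact architecture or training.

import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes, d, n_types = 30, 12, 8, 4

# Toy expression matrix (cells x genes), standing in for normalized scRNA-seq counts.
X = rng.random((n_cells, n_genes))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Gene-level module: self-attention over genes within each cell.
gene_emb = rng.normal(size=(n_genes, d))              # one embedding per gene
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

cell_repr = np.zeros((n_cells, d))
for i in range(n_cells):
    Hg = X[i][:, None] * gene_emb                     # expression-scaled gene tokens
    Q, K, V = Hg @ Wq, Hg @ Wk, Hg @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))              # gene-gene interaction weights
    cell_repr[i] = (attn @ V).mean(axis=0)            # pool attended genes into a cell vector

# Cell-level module: graph convolution over a kNN cell-cell graph built from the
# representations produced by the gene-level module.
k = 5
sim = cell_repr @ cell_repr.T
np.fill_diagonal(sim, -np.inf)
adj = np.zeros((n_cells, n_cells))
for i in range(n_cells):
    adj[i, np.argsort(sim[i])[-k:]] = 1.0
adj = np.maximum(adj, adj.T) + np.eye(n_cells)        # symmetrize and add self-loops

deg_inv = 1.0 / adj.sum(axis=1, keepdims=True)
W_gcn = rng.normal(size=(d, d))
H_cell = np.tanh(deg_inv * (adj @ cell_repr) @ W_gcn) # one mean-aggregation GCN layer

# Classification head over hypothetical cell types (weights untrained).
W_out = rng.normal(size=(d, n_types))
probs = softmax(H_cell @ W_out)
print(probs.shape)                                    # (30, 4) per-cell type probabilities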

Deep Learning Enables Large Depth-of-Field Images for Sub-Diffraction-Limit Scanning Superlens Microscopy

no code implementations • 27 Oct 2023 • Hui Sun, Hao Luo, Feifei Wang, Qingjiu Chen, Meng Chen, Xiaoduo Wang, Haibo Yu, Guanglie Zhang, Lianqing Liu, JianPing Wang, Dapeng Wu, Wen Jung Li

Scanning electron microscopy (SEM) is indispensable in diverse applications ranging from microelectronics to food processing because it provides large depth-of-field images with a resolution beyond the optical diffraction limit.

Defect Detection Image-to-Image Translation +1

Spatial-Temporal Enhanced Transformer Towards Multi-Frame 3D Object Detection

1 code implementation • 1 Jul 2023 • Yifan Zhang, Zhiyu Zhu, Junhui Hou, Dapeng Wu

Specifically, to model the inter-object spatial interaction and complex temporal dependencies, we introduce the spatial-temporal graph attention network, which represents queries as nodes in a graph and enables effective modeling of object interactions within a social context.

3D Object Detection Graph Attention +2
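The abstract above represents object queries as nodes in a graph attention network; below is a generic single-head graph-attention layer over a fully connected graph of query features, written in NumPy with arbitrary sizes and random weights. It omits the temporal side of the paper's spatial-temporal design and is only a structural sketch.

import numpy as np

rng = np.random.default_rng(0)
n_queries, d_in, d_out = 6, 16, 8

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

H = rng.normal(size=(n_queries, d_in))     # stand-in per-object query features
W = rng.normal(size=(d_in, d_out))         # shared linear transform
a_src = rng.normal(size=d_out)             # attention vector, source half
a_dst = rng.normal(size=d_out)             # attention vector, target half

Wh = H @ W
# GAT-style logits e_ij = LeakyReLU(a_src . Wh_i + a_dst . Wh_j) over a fully
# connected graph, so every object query can attend to every other one.
logits = leaky_relu((Wh @ a_src)[:, None] + (Wh @ a_dst)[None, :])
alpha = softmax(logits, axis=1)            # normalized attention weights per node
H_out = np.tanh(alpha @ Wh)                # updated query features after one layer
print(H_out.shape)                         # (6, 8)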

Distributed Pruning Towards Tiny Neural Networks in Federated Learning

no code implementations • 5 Dec 2022 • Hong Huang, Lan Zhang, Chaoyue Sun, Ruogu Fang, Xiaoyong Yuan, Dapeng Wu

To address these challenges, we propose FedTiny, a distributed pruning framework for federated learning that generates specialized tiny models for memory- and computing-constrained devices.

Federated Learning Network Pruning

FedZKT: Zero-Shot Knowledge Transfer towards Resource-Constrained Federated Learning with Heterogeneous On-Device Models

no code implementations • 8 Sep 2021 • Lan Zhang, Dapeng Wu, Xiaoyong Yuan

To achieve knowledge transfer across these heterogeneous on-device models, a zero-shot distillation approach is designed without any prerequisites for private on-device data, in contrast to prior research that relies on a public dataset or a pre-trained data generator.

Federated Learning Transfer Learning
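A toy, data-free distillation loop in the spirit of the zero-shot transfer described above: synthetic inputs (plain noise here, standing in for a trained generator) are fed through several stand-in "on-device" models, and a linear server-side student is fitted to their averaged predictions. All models, sizes, and the learning rate are assumptions for illustration; FedZKT's generator training and model heterogeneity are not reproduced.

import numpy as np

rng = np.random.default_rng(0)
d, n_classes, n_synth = 10, 5, 64

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-ins for heterogeneous on-device teacher models: linear classifiers of
# different scales (real devices could hold arbitrary architectures).
teachers = [rng.normal(size=(d, n_classes)) * s for s in (0.5, 1.0, 2.0)]

def generate(batch):
    # Placeholder "generator": plain noise, so no private or public data is needed.
    return rng.normal(size=(batch, d))

W_student = np.zeros((d, n_classes))   # server-side global model
lr = 0.5
for _ in range(200):
    x = generate(n_synth)                                            # synthetic inputs
    p_teacher = np.mean([softmax(x @ Wt) for Wt in teachers], axis=0)
    p_student = softmax(x @ W_student)
    # Cross-entropy gradient for a linear-softmax student: x^T (p_student - p_teacher).
    W_student -= lr * x.T @ (p_student - p_teacher) / n_synth
print(np.round(W_student[:2], 2))      # first rows of the distilled student weights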

Server Averaging for Federated Learning

no code implementations • 22 Mar 2021 • George Pu, Yanlin Zhou, Dapeng Wu, Xiaolin Li

Federated learning allows distributed devices to collectively train a model without sharing or disclosing their local datasets to a central server.

Federated Learning
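For context on the federated setup described above, here is a minimal FedAvg-style round in NumPy: each simulated client runs a few local least-squares gradient steps on its private data, and the server aggregates only the resulting weights. This is the standard baseline, not the server-averaging variant the paper proposes; the data and hyperparameters are synthetic.

import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 5, 4

def local_update(w, X, y, lr=0.1, epochs=5):
    # A few steps of least-squares gradient descent on one client's private data;
    # only the resulting weights ever leave the device.
    for _ in range(epochs):
        w = w - lr * 2.0 * X.T @ (X @ w - y) / len(y)
    return w

# Synthetic private datasets, one per client.
clients = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(n_clients)]
sizes = np.array([len(y) for _, y in clients], dtype=float)

w_global = np.zeros(d)
for _ in range(10):                    # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    # FedAvg aggregation: data-size-weighted average of the client weights.
    w_global = np.average(local_ws, axis=0, weights=sizes / sizes.sum())
print(np.round(w_global, 3))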

ES Attack: Model Stealing against Deep Neural Networks without Data Hurdles

no code implementations • 21 Sep 2020 • Xiaoyong Yuan, Leah Ding, Lan Zhang, Xiaolin Li, Dapeng Wu

The experimental results reveal the severity of ES Attack: i) ES Attack successfully steals the victim model without data hurdles, and even outperforms most existing model-stealing attacks that use auxiliary data in terms of model accuracy; ii) most countermeasures are ineffective in defending against ES Attack; iii) ES Attack facilitates further attacks that rely on the stolen model.

BIG-bench Machine Learning

Distilled One-Shot Federated Learning

1 code implementation • 17 Sep 2020 • Yanlin Zhou, George Pu, Xiyao Ma, Xiaolin Li, Dapeng Wu

DOSFL serves as an inexpensive method to quickly converge on a performant pre-trained model with less than 0.1% of the communication cost of traditional methods.

Federated Learning One-Shot Learning

Asking Complex Questions with Multi-hop Answer-focused Reasoning

1 code implementation • 16 Sep 2020 • Xiyao Ma, Qile Zhu, Yanlin Zhou, Xiaolin Li, Dapeng Wu

Asking questions from natural language text has attracted increasing attention recently, and several schemes have been proposed with promising results by asking the right question words and copying relevant words from the input to the question.

Question Generation Question-Generation

PRI-VAE: Principle-of-Relevant-Information Variational Autoencoders

no code implementations • 13 Jul 2020 • Yanjun Li, Shujian Yu, Jose C. Principe, Xiaolin Li, Dapeng Wu

Although substantial efforts have been made to learn disentangled representations under the variational autoencoder (VAE) framework, the fundamental properties of the learning dynamics of most VAE models remain unknown and under-investigated.

Application of Deep Interpolation Network for Clustering of Physiologic Time Series

no code implementations • 27 Apr 2020 • Yanjun Li, Yuanfang Ren, Tyler J. Loftus, Shounak Datta, M. Ruppert, Ziyuan Guan, Dapeng Wu, Parisa Rashidi, Tezcan Ozrazgat-Baslanti, Azra Bihorac

Interpretation: In a heterogeneous cohort of hospitalized patients, a deep interpolation network extracted representations from vital sign data measured within six hours of hospital admission.

Clustering Time Series +1

A Batch Normalized Inference Network Keeps the KL Vanishing Away

1 code implementation • ACL 2020 • Qile Zhu, Jianlin Su, Wei Bi, Xiaojiang Liu, Xiyao Ma, Xiaolin Li, Dapeng Wu

The Variational Autoencoder (VAE) is widely used as a generative model to approximate a model's posterior on latent variables by combining amortized variational inference and deep neural networks.

Dialogue Generation Language Modelling +3
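A small NumPy demonstration of the batch-normalization idea behind the paper above, as I understand it: normalizing the posterior means across the batch with a fixed scale gamma pins their batch variance, which keeps the averaged KL term above roughly 0.5 · d · gamma^2 and so away from zero. The specific gamma, dimensions, and encoder outputs below are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
batch, d = 256, 16
gamma = 0.5                      # fixed BN scale (an illustrative choice)

# Pretend encoder outputs for a batch: nearly collapsed posterior means and log-variances.
mu_raw = 0.01 * rng.normal(size=(batch, d))
logvar = 0.1 * rng.normal(size=(batch, d))

def mean_kl(mu, logvar):
    # Batch-averaged KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(mu**2 + np.exp(logvar) - logvar - 1.0, axis=1).mean()

# Batch-normalize the posterior means with a fixed scale gamma (zero shift), pinning
# each dimension's batch variance to gamma^2 and hence bounding the KL from below.
mu_bn = gamma * (mu_raw - mu_raw.mean(axis=0)) / (mu_raw.std(axis=0) + 1e-8)

print("mean KL without BN:", np.round(mean_kl(mu_raw, logvar), 4))
print("mean KL with BN   :", np.round(mean_kl(mu_bn, logvar), 4))
print("lower bound 0.5*d*gamma^2 =", 0.5 * d * gamma**2)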

Improving Question Generation with Sentence-level Semantic Matching and Answer Position Inferring

no code implementations • 2 Dec 2019 • Xiyao Ma, Qile Zhu, Yanlin Zhou, Xiaolin Li, Dapeng Wu

Taking an answer and its context as input, sequence-to-sequence models have made considerable progress on question generation.

Position Question Generation +2

DeepAtom: A Framework for Protein-Ligand Binding Affinity Prediction

1 code implementation • 1 Dec 2019 • Yanjun Li, Mohammad A. Rezaei, Chenglong Li, Xiaolin Li, Dapeng Wu

The cornerstone of computational drug design is the calculation of binding affinity between two biological counterparts, especially between a chemical compound, i.e., a ligand, and a protein.

Drug Discovery Feature Engineering +1

A Federated Filtering Framework for Internet of Medical Things

no code implementations • 17 Apr 2019 • Sunny Sanyal, Dapeng Wu, Boubakr Nour

Under the dominant paradigm, the wearable IoT devices used in the healthcare sector, collectively known as the Internet of Medical Things (IoMT), are resource-constrained in power and computational capability.

Networking and Internet Architecture

Turbo Learning for Captionbot and Drawingbot

no code implementations • NeurIPS 2018 • Qiuyuan Huang, Pengchuan Zhang, Dapeng Wu, Lei Zhang

We study in this paper the problems of both image captioning and text-to-image generation, and present a novel turbo learning approach to jointly training an image-to-text generator (a.k.a.

Image Captioning Text Generation +1

Hierarchically Structured Reinforcement Learning for Topically Coherent Visual Story Generation

no code implementations • 21 May 2018 • Qiuyuan Huang, Zhe Gan, Asli Celikyilmaz, Dapeng Wu, Jian-Feng Wang, Xiaodong He

We propose a hierarchically structured reinforcement learning approach to address the challenges of planning for generating coherent multi-sentence stories for the visual storytelling task.

reinforcement-learning Reinforcement Learning (RL) +2

Attentive Tensor Product Learning

no code implementations • 20 Feb 2018 • Qiuyuan Huang, Li Deng, Dapeng Wu, Chang Liu, Xiaodong He

This paper proposes a new architecture - Attentive Tensor Product Learning (ATPL) - to represent grammatical structures in deep learning models.

Constituency Parsing Image Captioning +4

Structured Memory based Deep Model to Detect as well as Characterize Novel Inputs

no code implementations • 30 Jan 2018 • Pratik Prabhanjan Brahma, Qiuyuan Huang, Dapeng Wu

While deep learning has pushed the boundaries in various machine learning tasks, the current models are still far away from replicating many functions that a normal human brain can do.

Memorization

A Neural-Symbolic Approach to Design of CAPTCHA

no code implementations • 29 Oct 2017 • Qiuyuan Huang, Paul Smolensky, Xiaodong He, Li Deng, Dapeng Wu

To address this, this paper promotes image/visual-captioning-based CAPTCHAs, which are robust against machine-learning-based attacks.

BIG-bench Machine Learning Image Captioning +1

Tensor Product Generation Networks for Deep NLP Modeling

2 code implementations • NAACL 2018 • Qiuyuan Huang, Paul Smolensky, Xiaodong He, Li Deng, Dapeng Wu

We present a new approach to the design of deep networks for natural language processing (NLP), based on the general technique of Tensor Product Representations (TPRs) for encoding and processing symbol structures in distributed neural networks.

Caption Generation
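A tiny NumPy example of the Tensor Product Representation idea referenced above: symbol fillers are bound to orthonormal role (position) vectors via outer products, summed into a single tensor, and recovered exactly by unbinding with the corresponding role vector. The three-word "sequence" and vector sizes are arbitrary, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

roles = np.eye(3)                                                  # orthonormal role (position) vectors
fillers = {w: rng.normal(size=4) for w in ("the", "cat", "sat")}   # symbol filler vectors
sequence = ["the", "cat", "sat"]

# Binding: each filler is bound to its role by an outer product, and the whole
# structure is the sum of the bindings, i.e. a single tensor of shape (4, 3).
T = sum(np.outer(fillers[w], roles[i]) for i, w in enumerate(sequence))

# Unbinding: with orthonormal roles, multiplying by a role vector recovers the filler.
recovered = T @ roles[1]
print(np.allclose(recovered, fillers["cat"]))                      # True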

Context-Aware Online Learning for Course Recommendation of MOOC Big Data

no code implementations • 11 Oct 2016 • Yifan Hou, Pan Zhou, Ting Wang, Li Yu, Yuchong Hu, Dapeng Wu

In this respect, the key challenge is how to realize personalized course recommendation while reducing the computing and storage costs for the massive course data.

Recommendation Systems

Differentially Private Online Learning for Cloud-Based Video Recommendation with Multimedia Big Data in Social Networks

no code implementations • 1 Sep 2015 • Pan Zhou, Yingxue Zhou, Dapeng Wu, Hai Jin

In addition, none of them has considered both the privacy of users' contexts (e.g., social status, age, and hobbies) and video service vendors' repositories, which are extremely sensitive and of significant commercial value.

Privacy Preserving Recommendation Systems

Joint Association Graph Screening and Decomposition for Large-scale Linear Dynamical Systems

no code implementations • 17 Nov 2014 • Yiyuan She, Yuejia He, Shijie Li, Dapeng Wu

In particular, our method can pre-determine and remove unnecessary edges based on the joint graphical structure, referred to as JAG screening, and can decompose a large network into smaller subnetworks in a robust manner, referred to as JAG decomposition.

Learning Topology and Dynamics of Large Recurrent Neural Networks

no code implementations • 5 Oct 2014 • Yiyuan She, Yuejia He, Dapeng Wu

Large-scale recurrent networks have drawn increasing attention recently because of their capabilities in modeling a large variety of real-world phenomena and physical mechanisms.
