Search Results for author: Zhenyu Yang

Found 15 papers, 4 papers with code

Skeleton Ground Truth Extraction: Methodology, Annotation Tool and Benchmarks

1 code implementation • 10 Oct 2023 • Cong Yang, Bipin Indurkhya, John See, Bo Gao, Yan Ke, Zeyd Boukhers, Zhenyu Yang, Marcin Grzegorzek

However, most existing shape and image datasets suffer from the lack of skeleton GT and inconsistency of GT standards.

A Radiomics-Incorporated Deep Ensemble Learning Model for Multi-Parametric MRI-based Glioma Segmentation

no code implementations • 19 Mar 2023 • Yang Chen, Zhenyu Yang, Jingtong Zhao, Justus Adamson, Yang Sheng, Fang-Fang Yin, Chunhao Wang

Four deep neural networks following the U-Net architecture were trained as sub-models for segmenting a region of interest (ROI): each sub-model takes the mp-MRI and 1 of the 4 principal components (PCs) as a 5-channel input for 2D execution.

Dimensionality Reduction • Ensemble Learning • +4
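The excerpt above describes stacking the four mp-MRI sequences with one radiomic principal component per sub-model and ensembling four U-Nets. A minimal numpy sketch of that 5-channel input assembly and prediction averaging is below; the sub-model callables and the 0.5 threshold are hypothetical stand-ins, not the authors' code.

```python
import numpy as np

def build_inputs(mp_mri, radiomic_pcs):
    """Stack the 4 mp-MRI channels with one radiomic principal component (PC)
    per sub-model, yielding four 5-channel inputs (assumed layout: C x H x W)."""
    return [np.concatenate([mp_mri, pc[None]], axis=0) for pc in radiomic_pcs]

def ensemble_segment(mp_mri, radiomic_pcs, sub_models):
    """Average the probability maps of the four U-Net sub-models and threshold
    at 0.5 to obtain the final ROI mask."""
    inputs = build_inputs(mp_mri, radiomic_pcs)
    probs = np.stack([model(x) for model, x in zip(sub_models, inputs)])
    return probs.mean(axis=0) > 0.5

# Hypothetical usage with dummy data and placeholder sub-models.
mp_mri = np.random.rand(4, 128, 128)        # e.g. T1, T1c, T2, FLAIR (assumed)
radiomic_pcs = np.random.rand(4, 128, 128)  # 4 radiomic PC maps
sub_models = [lambda x: np.random.rand(128, 128) for _ in range(4)]  # stand-ins
mask = ensemble_segment(mp_mri, radiomic_pcs, sub_models)
```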

State of the Art and Potentialities of Graph-level Learning

no code implementations • 14 Jan 2023 • Zhenyu Yang, Ge Zhang, Jia Wu, Jian Yang, Quan Z. Sheng, Shan Xue, Chuan Zhou, Charu Aggarwal, Hao Peng, Wenbin Hu, Edwin Hancock, Pietro Liò

Traditional approaches to learning a set of graphs heavily rely on hand-crafted features, such as substructures.

Graph Learning

Shadow-Oriented Tracking Method for Multi-Target Tracking in Video-SAR

no code implementations • 29 Nov 2022 • Xiaochuan Ni, Xiaoling Zhang, Xu Zhan, Zhenyu Yang, Jun Shi, Shunjun Wei, Tianjiao Zeng

To avoid missed tracking, a deep-learning-based detection method is designed to thoroughly learn shadow features, thus improving estimation accuracy.

Quantifying U-Net Uncertainty in Multi-Parametric MRI-based Glioma Segmentation by Spherical Image Projection

no code implementations • 12 Oct 2022 • Zhenyu Yang, Kyle Lafata, Eugene Vaios, Zongsheng Hu, Trey Mullikin, Fang-Fang Yin, Chunhao Wang

The SPU-Net model was compared with (1) the classic U-Net model with test-time augmentation (TTA) and (2) linear scaling-based U-Net (LSU-Net) segmentation models in terms of both segmentation accuracy (Dice coefficient, sensitivity, specificity, and accuracy) and segmentation uncertainty (uncertainty map and uncertainty score).

Segmentation • Specificity
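For context on the test-time augmentation (TTA) baseline mentioned above, here is a minimal sketch of flip-based TTA uncertainty estimation with numpy; the `predict` callable is a hypothetical stand-in for a trained U-Net, and this is not the SPU-Net method itself.

```python
import numpy as np

def tta_uncertainty(predict, image, n_aug=8, rng=None):
    """Test-time augmentation (TTA) uncertainty: run the model on randomly
    flipped copies of the image, undo the flips, and report the per-pixel
    mean probability and standard deviation as an uncertainty map."""
    rng = rng or np.random.default_rng(0)
    probs = []
    for _ in range(n_aug):
        flip_axes = tuple(ax for ax in (0, 1) if rng.random() < 0.5)
        aug = np.flip(image, axis=flip_axes) if flip_axes else image
        p = predict(aug)
        probs.append(np.flip(p, axis=flip_axes) if flip_axes else p)
    probs = np.stack(probs)
    return probs.mean(axis=0), probs.std(axis=0)

# Hypothetical usage with a placeholder model returning a probability map.
image = np.random.rand(128, 128)
mean_prob, uncertainty = tta_uncertainty(lambda x: np.clip(x + 0.1, 0, 1), image)
```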

Complicated Background Suppression of ViSAR Image For Moving Target Shadow Detection

no code implementations • 21 Sep 2022 • Zhenyu Yang, Xiaoling Zhang, Xu Zhan

Existing Video Synthetic Aperture Radar (ViSAR) moving-target shadow detection methods based on deep neural networks mostly generate numerous false alarms and missed detections because of foreground-background indistinguishability.

Shadow Detection

Generating Coherent Narratives by Learning Dynamic and Discrete Entity States with a Contrastive Framework

1 code implementation • 8 Aug 2022 • Jian Guan, Zhenyu Yang, Rongsheng Zhang, Zhipeng Hu, Minlie Huang

Despite advances in generating fluent texts, existing pretraining models tend to attach incoherent event sequences to involved entities when generating narratives such as stories and news.

Sentence

Shadow-Background-Noise 3D Spatial Decomposition Using Sparse Low-Rank Gaussian Properties for Video-SAR Moving Target Shadow Enhancement

no code implementations • 7 Jul 2022 • Xiaowo Xu, Xiaoling Zhang, Tianwen Zhang, Zhenyu Yang, Jun Shi, Xu Zhan

Moving target shadows in video synthetic aperture radar (Video-SAR) images are always interfered with by low-scattering backgrounds and cluttered noise, causing poor detection-tracking accuracy.

Shadow Detection
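The title points to a shadow/background/noise split based on sparse and low-rank properties. As an illustration of that general family (not the authors' formulation), a simplified robust-PCA-style alternating-thresholding split of a vectorized frame stack might look like this:

```python
import numpy as np

def soft_threshold(x, tau):
    """Element-wise soft thresholding (proximal operator of the L1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def sparse_lowrank_split(D, lam=None, n_iter=50):
    """Split a frame stack D (pixels x frames) into a low-rank background L and
    a sparse component S by alternating singular-value thresholding on D - S
    and soft thresholding on D - L (a simplified robust-PCA-style iteration)."""
    lam = lam or 1.0 / np.sqrt(max(D.shape))
    S = np.zeros_like(D)
    tau = 0.5 * np.abs(D).max()
    for _ in range(n_iter):
        U, sigma, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * soft_threshold(sigma, tau)) @ Vt   # shrink singular values
        S = soft_threshold(D - L, lam * tau)        # shrink residual entries
        tau *= 0.9                                  # continuation: relax thresholds
    return L, S

# Hypothetical usage: 20 vectorized Video-SAR frames of size 64x64.
frames = np.random.rand(64 * 64, 20)
background, foreground = sparse_lowrank_split(frames)
```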

Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation

1 code implementation • 6 Jun 2022 • Pei Ke, Haozhe Ji, Zhenyu Yang, Yi Huang, Junlan Feng, Xiaoyan Zhu, Minlie Huang

Despite the success of text-to-text pre-trained models in various natural language generation (NLG) tasks, generation performance is largely restricted by the amount of labeled data available in downstream tasks, particularly in data-to-text generation.

Data-to-Text Generation • Unsupervised Pre-training

LaMemo: Language Modeling with Look-Ahead Memory

1 code implementation • NAACL 2022 • Haozhe Ji, Rongsheng Zhang, Zhenyu Yang, Zhipeng Hu, Minlie Huang

Although Transformers with fully connected self-attention are powerful at modeling long-term dependencies, they struggle to scale to long texts with thousands of words in language modeling.

Language Modelling

A Neural Ordinary Differential Equation Model for Visualizing Deep Neural Network Behaviors in Multi-Parametric MRI based Glioma Segmentation

no code implementations • 1 Mar 2022 • Zhenyu Yang, Zongsheng Hu, Hangjie Ji, Kyle Lafata, Scott Floyd, Fang-Fang Yin, Chunhao Wang

Methods: By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we designed a novel deep learning model, neural ODE, in which deep feature extraction was governed by an ODE without explicit expression.

Segmentation
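As a rough illustration of the idea above that deep feature extraction can be viewed as a continuous-time process, the sketch below evolves a feature vector through a learned vector field with fixed-step Euler integration; the random linear `dynamics` is a hypothetical stand-in for a trained block, and the authors' actual neural ODE solver is not reproduced here.

```python
import numpy as np

def neural_ode_features(x, dynamics, t0=0.0, t1=1.0, n_steps=20):
    """Evolve a feature vector x through a learned vector field dynamics(x, t)
    with forward-Euler steps, so network depth is replaced by continuous
    'time' (a minimal stand-in for an adaptive ODE solver)."""
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        x = x + dt * dynamics(x, t)   # Euler update: x(t+dt) ~ x(t) + dt * f(x, t)
        t += dt
    return x

# Hypothetical dynamics: a fixed random linear map standing in for a trained block.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((16, 16))
features = rng.standard_normal(16)
evolved = neural_ode_features(features, lambda x, t: np.tanh(W @ x))
```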

A Radiomics-Boosted Deep-Learning Model for COVID-19 and Non-COVID-19 Pneumonia Classification Using Chest X-ray Image

no code implementations • 19 Jul 2021 • Zongsheng Hu, Zhenyu Yang, Kyle J. Lafata, Fang-Fang Yin, Chunhao Wang

To develop a deep-learning model that integrates radiomics analysis for enhanced COVID-19 and non-COVID-19 pneumonia detection from chest X-ray images, two deep-learning models were trained on a pre-trained VGG-16 architecture: in the 1st model, the X-ray image was the sole input; in the 2nd model, the X-ray image and 2 radiomic feature maps (RFMs), selected by saliency map analysis of the 1st model, were stacked as the input.

Pneumonia Detection • Specificity
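The excerpt above stacks the X-ray with two saliency-selected radiomic feature maps (RFMs) as the input to the second VGG-16-based model. A minimal sketch of that channel stacking follows; the normalization and the `vgg16_like` classifier are hypothetical stand-ins, not the paper's pipeline.

```python
import numpy as np

def stack_radiomics_input(xray, rfm_a, rfm_b):
    """Stack the chest X-ray with the two saliency-selected radiomic feature
    maps (RFMs) into a 3-channel array (H x W x 3), matching the RGB input
    shape expected by a VGG-16-style backbone."""
    def norm(img):  # rescale each channel to [0, 1]
        return (img - img.min()) / (img.max() - img.min() + 1e-8)
    return np.stack([norm(xray), norm(rfm_a), norm(rfm_b)], axis=-1)

# Hypothetical usage with dummy 224x224 maps and a placeholder classifier.
xray, rfm_a, rfm_b = (np.random.rand(224, 224) for _ in range(3))
x = stack_radiomics_input(xray, rfm_a, rfm_b)
vgg16_like = lambda t: float(t.mean() > 0.5)   # stand-in for the trained model
covid_vs_noncovid = vgg16_like(x)
```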

Semantic-Enhanced Explainable Finetuning for Open-Domain Dialogues

no code implementations • 6 Jun 2021 • Yinhe Zheng, Yida Wang, Pei Ke, Zhenyu Yang, Minlie Huang

This paper proposes to combine pretrained language models with the modular dialogue paradigm for open-domain dialogue modeling.

Informativeness • Language Modelling • +1

The distance between the weights of the neural network is meaningful

no code implementations • 31 Jan 2021 • Liqun Yang, Yijun Yang, Yao Wang, Zhenyu Yang, Wei Zeng

In the application of neural networks, we need to select a suitable model based on the problem complexity and the dataset scale.

A t-SNE Based Classification Approach to Compositional Microbiome Data

no code implementations • Frontiers 2020 • Xueli Xu, Zhongming Xie, Zhenyu Yang, Dongfang Li, Ximing Xu

This study presented a t-SNE based classification approach for compositional microbiome data, which enabled us to build classifiers and classify new samples in the reduced dimensional space produced by t-SNE.

Classification • Dimensionality Reduction • +1
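A minimal scikit-learn sketch of the general workflow described above (embed with t-SNE, then classify in the low-dimensional space) is shown below; the centered log-ratio step, the joint embedding of labeled and unlabeled samples, and the k-NN classifier are assumptions for illustration, not necessarily the paper's exact procedure.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import KNeighborsClassifier

def clr(counts, pseudo=0.5):
    """Centered log-ratio transform for compositional counts (assumed step)."""
    x = np.log(counts + pseudo)
    return x - x.mean(axis=1, keepdims=True)

# Dummy microbiome count table: 100 samples x 50 taxa, binary labels for 80 samples.
rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(100, 50)).astype(float)
labels = rng.integers(0, 2, size=80)

# Embed all samples jointly (scikit-learn's TSNE has no out-of-sample transform),
# then classify the unlabeled samples in the 2-D embedded space with k-NN.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(clr(counts))
clf = KNeighborsClassifier(n_neighbors=5).fit(embedding[:80], labels)
predicted = clf.predict(embedding[80:])
```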
