Search Results for author: Zhenzhen Wang

Found 6 papers, 1 paper with code

Label Cleaning Multiple Instance Learning: Refining Coarse Annotations on Single Whole-Slide Images

1 code implementation • 22 Sep 2021 • Zhenzhen Wang, Carla Saoud, Sintawat Wangsiricharoen, Aaron W. James, Aleksander S. Popel, Jeremias Sulam

Annotating cancerous regions in whole-slide images (WSIs) of pathology samples plays a critical role in clinical diagnosis, biomedical research, and the development of machine learning algorithms.

Tasks: Deep Attention, Multiple Instance Learning (+1 more)

Attention-Aware Noisy Label Learning for Image Classification

no code implementations • 30 Sep 2020 • Zhenzhen Wang, Chunyan Xu, Yap-Peng Tan, Junsong Yuan

In this paper, the attention-aware noisy label learning approach ($A^2NL$) is proposed to improve the discriminative capability of the network trained on datasets with potential label noise. (A generic illustrative sketch of this idea appears after this entry.)

Tasks: Classification, General Classification (+2 more)
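The abstract gives no implementation details, so the following is only a rough, hypothetical sketch of the general family of ideas at play (down-weighting the loss of samples the network is unsure about), not the $A^2NL$ mechanism itself; the weighting scheme and all function names below are assumptions.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_loss(logits, labels):
    """Hypothetical sketch: per-sample cross-entropy, down-weighted by the
    model's own confidence in the annotated label. A generic noisy-label
    heuristic for illustration only, NOT the A^2NL method from the paper."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    with torch.no_grad():
        probs = F.softmax(logits, dim=1)
        # Weight = predicted probability of the annotated class, so
        # low-confidence (possibly mislabeled) samples contribute less.
        weights = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    return (weights * per_sample).mean()

# Toy usage with random data.
logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(confidence_weighted_loss(logits, labels))
```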

Conditional Generative Adversarial Network for Structured Domain Adaptation

no code implementations • CVPR 2018 • Weixiang Hong, Zhenzhen Wang, Ming Yang, Junsong Yuan

In recent years, deep neural nets have triumphed over many computer vision problems, including semantic segmentation, which is a critical task in emerging autonomous driving and medical image diagnostics applications.

Tasks: Autonomous Driving, Domain Adaptation (+2 more)

Compressive Quantization for Fast Object Instance Search in Videos

no code implementations • ICCV 2017 • Tan Yu, Zhenzhen Wang, Junsong Yuan

Most current visual search systems focus on image-to-image (point-to-point) search, such as image and object retrieval.

Tasks: Instance Search, Object (+3 more)

Additive Nearest Neighbor Feature Maps

no code implementations • ICCV 2015 • Zhenzhen Wang, Xiao-Tong Yuan, Qingshan Liu, Shuicheng Yan

In this paper, we present a concise framework to approximately construct feature maps for nonlinear additive kernels such as the Intersection, Hellinger's, and Chi^2 kernels.
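The paper's construction is not reproduced here; as a minimal, self-contained illustration of the underlying idea (replacing an additive kernel evaluation with an inner product of explicitly mapped features), the numpy sketch below uses the Hellinger kernel, which happens to admit an exact elementwise square-root feature map. The intersection and Chi^2 maps treated in the paper are approximate and more involved.

```python
import numpy as np

# Hellinger's kernel k(x, y) = sum_i sqrt(x_i * y_i) admits an exact
# finite-dimensional feature map: phi(x) = sqrt(x) elementwise.
rng = np.random.default_rng(0)
x = rng.random(8)
y = rng.random(8)

k_exact = np.sum(np.sqrt(x * y))           # kernel evaluated directly
k_mapped = np.dot(np.sqrt(x), np.sqrt(y))  # inner product of mapped features

assert np.isclose(k_exact, k_mapped)
print(k_exact, k_mapped)
```

With such a map, a nonlinear kernel machine reduces to a linear model on the mapped features, which is the efficiency argument motivating approximate feature maps for the other additive kernels.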
