no code implementations • 19 Oct 2023 • Xiang Zhang, Senyu Li, Zijun Wu, Ning Shi
Expanding on our findings, we introduce "Vision Description Prompting," a method that effectively improves performance in challenging vision-related tasks.
1 code implementation • 2 Oct 2023 • Zijun Wu, Yongkang Wu, Lili Mou
Prompt tuning in natural language processing (NLP) has become an increasingly popular method for adapting large language models to specific tasks.
1 code implementation • 10 Sep 2023 • Zijun Wu, Anup Anand Deshmukh, Yongkang Wu, Jimmy Lin, Lili Mou
Our approach involves a two-stage training process: pretraining with an unsupervised parser and finetuning on downstream NLP tasks.
1 code implementation • 16 Apr 2023 • Yangyi Liu, Huan Liu, Liangyan Li, Zijun Wu, Jun Chen
Although it is possible to augment the NH-HAZE23 dataset by leveraging other non-homogeneous dehazing datasets, we find it necessary to design a proper data-preprocessing approach that reduces the distribution gap between the target dataset and the augmented one.
no code implementations • 24 Feb 2022 • Zhize Wu, Huanyi Li, XiaoFeng Wang, Zijun Wu, Le Zou, Lixiang Xu, Ming Tan
Household garbage images typically exhibit complex backgrounds, variable illumination, diverse viewing angles, and changeable shapes, all of which make garbage image classification difficult.
no code implementations • CVPR 2022 • Huan Liu, Zijun Wu, Liangyan Li, Sadaf Salehkalaibar, Jun Chen, Keyan Wang
Motivated by this observation, we propose a test-time training method which leverages a helper network to assist the dehazing model in better adapting to a domain of interest.
1 code implementation • 18 Sep 2021 • Zijun Wu, Zi Xuan Zhang, Atharva Naik, Zhijian Mei, Mauajama Firdaus, Lili Mou
In this work, we address the explainability of NLI by weakly supervised logical reasoning, and propose an Explainable Phrasal Reasoning (EPR) approach.
no code implementations • 25 Mar 2021 • Xiaohong Liu, Zhihao Shi, Zijun Wu, Jun Chen
We also propose a novel intra-task knowledge transfer mechanism that can memorize and take advantage of synthetic domain knowledge to assist the learning process on the translated data.
no code implementations • 23 Jun 2018 • Thomas Weise, Zijun Wu, Markus Wagner
We propose using more advanced methods to discriminate between "good" and "bad" sample runs, with the goal of increasing the correlation between the chosen run and the a-posteriori best one.
no code implementations • 21 Dec 2016 • Zijun Wu, Rolf Moehring, Jianhui Lai
For simple instances that have a $\{1, n\}$-valued distance function and a unique optimal solution, we prove a stochastic runtime of $O(n^{6+\epsilon})$ with the vertex-based random solution generation, and a stochastic runtime of $O(n^{3+\epsilon}\ln n)$ with the edge-based random solution generation for an arbitrary $\epsilon\in (0, 1)$.