1 code implementation • 7 Mar 2024 • Boshi Wang, Hao Fang, Jason Eisner, Benjamin Van Durme, Yu Su
We find that existing LLMs, including GPT-4 and open-source LLMs specifically fine-tuned for tool use, only reach a correctness rate of 30% to 60%, far short of the reliability required for practical use.
no code implementations • 27 Feb 2024 • Yao Li, Chengpu Yu, Hao Fang, Jie Chen
A computationally efficient and numerically reliable parameter identification algorithm is proposed by equating optimal control strategies with a system of linear equations, and the associated relative error upper bound is derived in terms of data volume and signal-to-noise ratio.
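The core reduction above — identification by solving a system of linear equations — can be illustrated with a generic least-squares recovery. This is an assumption-laden sketch: the regressor matrix `A` and observation vector `b` below are synthetic stand-ins, not the quantities the paper derives from optimal control strategies.

```python
import numpy as np

# Illustrative sketch only: recover unknown parameters theta from noisy
# linear observations A @ theta ≈ b, in the least-squares sense.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0, 0.5])      # unknown parameters to recover

n_samples = 200                              # "data volume"
A = rng.standard_normal((n_samples, 3))      # synthetic regressors
noise = 0.01 * rng.standard_normal(n_samples)
b = A @ theta_true + noise                   # noisy linear observations

theta_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
rel_err = np.linalg.norm(theta_hat - theta_true) / np.linalg.norm(theta_true)
```

In this toy setting the relative error shrinks as the data volume grows or the noise level drops, mirroring the dependence of the paper's error bound on data volume and signal-to-noise ratio.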
1 code implementation • 6 Feb 2024 • Hao Fang, Yixiang Qiu, Hongyao Yu, Wenbo Yu, Jiawei Kong, Baoli Chong, Bin Chen, Xuan Wang, Shu-Tao Xia
Model Inversion (MI) attacks aim to disclose private information about the training data by exploiting access to pre-trained models.
no code implementations • 31 Jan 2024 • Hao Fang, Ajian Liu, Haocheng Yuan, Junze Zheng, Dingheng Zeng, Yanhong Liu, Jiankang Deng, Sergio Escalera, Xiaoming Liu, Jun Wan, Zhen Lei
These three modules seamlessly form a robust unified attack detection framework.
no code implementations • 2 Oct 2023 • Andrew D. Gordon, Carina Negreanu, José Cambronero, Rasika Chakravarthy, Ian Drosos, Hao Fang, Bhaskar Mitra, Hannah Richardson, Advait Sarkar, Stephanie Simmons, Jack Williams, Ben Zorn
Hence, we are seeing the emergence of tool-assisted experiences to help the user double-check a piece of AI-generated content.
1 code implementation • 20 Sep 2023 • Kumar Shridhar, Harsh Jhamtani, Hao Fang, Benjamin Van Durme, Jason Eisner, Patrick Xia
To enable exploration in this space, we present SCREWS, a modular framework for reasoning with revisions.
1 code implementation • 18 Sep 2023 • Kevin Lin, Patrick Xia, Hao Fang
We evaluate the ability of semantic parsers based on large language models (LLMs) to handle contextual utterances.
1 code implementation • ICCV 2023 • Hao Fang, Bin Chen, Xuan Wang, Zhi Wang, Shu-Tao Xia
Federated Learning (FL) has recently emerged as a promising distributed machine learning framework to preserve clients' privacy, by allowing multiple clients to upload the gradients calculated from their local data to a central server.
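The FL setup described above (the benign training loop, not the privacy attack studied in the paper) can be sketched minimally: each client computes a gradient on its local data and uploads it, and the server averages the uploads into one global update. The linear-regression task, client count, and learning rate below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])          # ground-truth model weights

clients = []
for _ in range(4):                           # 4 clients with private data
    X = rng.standard_normal((20, 3))
    y = X @ w_true + 0.1 * rng.standard_normal(20)
    clients.append((X, y))

def local_gradient(w, X, y):
    # gradient of mean squared error for the linear model y ≈ X @ w
    return 2 * X.T @ (X @ w - y) / len(y)

w = np.zeros(3)
for _ in range(100):
    uploads = [local_gradient(w, X, y) for X, y in clients]  # sent to server
    w -= 0.1 * np.mean(uploads, axis=0)      # server-side aggregation step
```

The uploaded gradients in this loop are exactly the quantity that gradient-leakage attacks inspect when attempting to reconstruct clients' private data.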
no code implementations • 15 May 2023 • Harsh Jhamtani, Hao Fang, Patrick Xia, Eran Levy, Jacob Andreas, Ben Van Durme
Designing natural language interfaces has historically required collecting supervised data to translate user requests into carefully designed intent representations.
no code implementations • 15 Apr 2023 • Hao Fang, Ajian Liu, Jun Wan, Sergio Escalera, Hugo Jair Escalante, Zhen Lei
Based on this dataset and Protocol 3 for evaluating the robustness of the algorithm under quality changes, we organized a face presentation attack detection challenge in surveillance scenarios.
no code implementations • 3 Jan 2023 • Hao Fang, Ajian Liu, Jun Wan, Sergio Escalera, Chenxu Zhao, Xu Zhang, Stan Z. Li, Zhen Lei
In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks.
1 code implementation • 11 Oct 2022 • Hao Cheng, Hao Fang, Xiaodong Liu, Jianfeng Gao
Given their effectiveness on knowledge-intensive natural language processing tasks, dense retrieval models have become increasingly popular.
no code implementations • 27 Sep 2022 • Apan Dastider, Hao Fang, Mingjie Lin
Real-time interception of fast-moving objects by robotic arms poses a formidable challenge in dynamic environments, demanding reaction times often within milliseconds while avoiding moving obstacles.
1 code implementation • 16 Sep 2022 • Hao Fang, Anusha Balakrishnan, Harsh Jhamtani, John Bufe, Jean Crawford, Jayant Krishnamurthy, Adam Pauls, Jason Eisner, Jacob Andreas, Dan Klein
Satisfying these constraints simultaneously is difficult for the two predominant paradigms in language generation: neural language modeling and rule-based generation.
1 code implementation • 24 May 2022 • Elias Stengel-Eskin, Emmanouil Antonios Platanios, Adam Pauls, Sam Thomson, Hao Fang, Benjamin Van Durme, Jason Eisner, Yu Su
Rejecting class imbalance as the sole culprit, we reveal that the trend is closely associated with an effect we call source signal dilution, where strong lexical cues for the new symbol become diluted as the training dataset grows.
1 code implementation • ICCV 2021 • Hao Fang, Daoxin Zhang, Yi Zhang, Minghao Chen, Jiawei Li, Yao Hu, Deng Cai, Xiaofei He
In this paper, we study the Salient Object Ranking (SOR) task, which aims to assign a ranking order to each detected object according to its visual saliency.
no code implementations • NAACL 2021 • Pengcheng Yin, Hao Fang, Graham Neubig, Adam Pauls, Emmanouil Antonios Platanios, Yu Su, Sam Thomson, Jacob Andreas
We describe a span-level supervised attention loss that improves compositional generalization in semantic parsers.
no code implementations • 31 May 2021 • Hao Fang, Chen Gong, Chen Zhang, Yanan Sui, Luming Li
Speech disorders often occur at the early stage of Parkinson's disease (PD).
1 code implementation • 24 Sep 2020 • Semantic Machines, Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, Hao Fang, Alan Guo, David Hall, Kristin Hayes, Kellie Hill, Diana Ho, Wendy Iwaszuk, Smriti Jha, Dan Klein, Jayant Krishnamurthy, Theo Lanman, Percy Liang, Christopher H Lin, Ilya Lintsbakh, Andy McGovern, Aleksandr Nisnevich, Adam Pauls, Dmitrij Petters, Brent Read, Dan Roth, Subhro Roy, Jesse Rusak, Beth Short, Div Slomin, Ben Snyder, Stephon Striplin, Yu Su, Zachary Tellman, Sam Thomson, Andrei Vorobev, Izabela Witoszko, Jason Wolfe, Abby Wray, Yuchen Zhang, Alexander Zotov
We describe an approach to task-oriented dialogue in which dialogue state is represented as a dataflow graph.
3 code implementations • 31 Aug 2020 • Tu Zheng, Hao Fang, Yi Zhang, Wenjian Tang, Zheng Yang, Haifeng Liu, Deng Cai
Lane detection is one of the most important tasks in autonomous driving.
Ranked #4 on Lane Detection on TuSimple
no code implementations • CVPR 2020 • Hao Fang, Florent Lafarge
Converting point clouds generated by laser scanning, multi-view stereo imagery, or depth cameras into compact polygon meshes is a challenging problem in vision.
no code implementations • 6 May 2020 • Hao Fang
Additionally, we construct a new knowledge base to power the socialbot by collecting social chat content from a variety of sources.
no code implementations • 7 Feb 2020 • Bingquan Zhu, Hao Fang, Yanan Sui, Luming Li
Data sharing for medical research has been difficult as open-sourcing clinical data may violate patient privacy.
1 code implementation • NAACL 2019 • Hao Cheng, Hao Fang, Mari Ostendorf
Characterizing these differences can be useful in human-computer interaction, as well as analysis of human-human conversations.
no code implementations • CVPR 2018 • Hao Fang, Florent Lafarge, Mathieu Desbrun
Interpreting 3D data such as point clouds or surface meshes depends heavily on the scale of observation.
no code implementations • NAACL 2018 • Hao Fang, Hao Cheng, Maarten Sap, Elizabeth Clark, Ari Holtzman, Yejin Choi, Noah A. Smith, Mari Ostendorf
We present Sounding Board, a social chatbot that won the 2017 Amazon Alexa Prize.
1 code implementation • EMNLP 2017 • Hao Cheng, Hao Fang, Mari Ostendorf
We develop a novel factored neural model that learns comment embeddings in an unsupervised way leveraging the structure of distributional context in online discussion forums.
no code implementations • 16 Aug 2016 • Hao Fang, Hao Cheng, Mari Ostendorf
Many social media platforms offer a mechanism for readers to react to comments, both positively and negatively, which in aggregate can be thought of as community endorsement.
1 code implementation • EMNLP 2016 • Hao Cheng, Hao Fang, Xiaodong He, Jianfeng Gao, Li Deng
We develop a novel bi-directional attention model for dependency parsing, which learns to agree on headword predictions from the forward and backward parsing directions.
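The idea of agreeing on headword predictions across two parsing directions can be shown with a toy combination rule. This is only an illustration under stated assumptions: the score matrices below are random, and averaging soft-maxed scores is a simple hand-coded stand-in for the agreement the model actually learns.

```python
import numpy as np

# Each direction scores candidate headwords: scores[i, j] is how
# plausible it is that token j is the head of token i. Averaging the
# two directions' soft-maxed score matrices and taking a row-wise
# argmax yields joint head predictions.
def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_tokens = 5
fwd_scores = rng.standard_normal((n_tokens, n_tokens))  # forward parser
bwd_scores = rng.standard_normal((n_tokens, n_tokens))  # backward parser

joint = (softmax(fwd_scores) + softmax(bwd_scores)) / 2
heads = joint.argmax(axis=1)                 # predicted head per token
```

Each row of `joint` remains a probability distribution over candidate heads, so disagreements between the two directions show up as flatter rows with lower-confidence argmax choices.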
Ranked #4 on Chinese Dependency Parsing on Chinese Pennbank
no code implementations • EMNLP 2015 • Aaron Jaech, Victoria Zayats, Hao Fang, Mari Ostendorf, Hannaneh Hajishirzi
This paper addresses the question of how language use affects community reaction to comments in online discussion forums, and the relative importance of the message vs. the messenger.
no code implementations • IJCNLP 2015 • Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, Margaret Mitchell
Two recent approaches have achieved state-of-the-art results in image captioning.
18 code implementations • 1 Apr 2015 • Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
In this paper we describe the Microsoft COCO Caption dataset and evaluation server.
1 code implementation • CVPR 2015 • Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig
The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage.
Ranked #1 on Image Captioning on COCO Captions test