Search Results for author: Tongtong Wu

Found 17 papers, 10 papers with code

Event Causality Identification via Derivative Prompt Joint Learning

1 code implementation • COLING 2022 • Shirong Shen, Heng Zhou, Tongtong Wu, Guilin Qi

This paper studies event causality identification, which aims at predicting the causality relation for a pair of events in a sentence.

Event Causality Identification • Language Modelling +1

Double Mixture: Towards Continual Event Detection from Speech

1 code implementation • 20 Apr 2024 • Jingqi Kang, Tongtong Wu, Jinming Zhao, Guitao Wang, Yinwei Wei, Hao Yang, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari

To address the challenges of catastrophic forgetting and effective disentanglement, we propose a novel method, 'Double Mixture.'

Continual Learning • Disentanglement +1

Counter-intuitive: Large Language Models Can Better Understand Knowledge Graphs Than We Thought

no code implementations • 18 Feb 2024 • Xinbang Dai, Yuncheng Hua, Tongtong Wu, Yang Sheng, Qiu Ji, Guilin Qi

Although enhancing large language models' (LLMs') reasoning ability and reducing their hallucinations through knowledge graphs (KGs) has received widespread attention, how to enable LLMs to integrate the structured knowledge in KGs on the fly remains under-explored.

Knowledge Graphs • Question Answering

Continual Learning for Large Language Models: A Survey

1 code implementation • 2 Feb 2024 • Tongtong Wu, Linhao Luo, Yuan-Fang Li, Shirui Pan, Thuy-Trang Vu, Gholamreza Haffari

Large language models (LLMs) are not amenable to frequent re-training, due to high training costs arising from their massive scale.

Continual Learning • Continual Pretraining +2

Towards Event Extraction from Speech with Contextual Clues

1 code implementation • 27 Jan 2024 • Jingqi Kang, Tongtong Wu, Jinming Zhao, Guitao Wang, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari

While text-based event extraction has been an active research area and has seen successful application in many domains, extracting semantic events from speech directly is an under-explored problem.

Event Extraction • speech-recognition +1

Benchmarking Large Language Models in Complex Question Answering Attribution using Knowledge Graphs

no code implementations • 26 Jan 2024 • Nan Hu, Jiaoyan Chen, Yike Wu, Guilin Qi, Sheng Bi, Tongtong Wu, Jeff Z. Pan

Attribution in question answering provides citations that support generated statements, and has attracted wide research attention.

Benchmarking • Knowledge Graphs +1

Towards Lifelong Scene Graph Generation with Knowledge-ware In-context Prompt Learning

no code implementations • 26 Jan 2024 • Tao He, Tongtong Wu, Dongyang Zhang, Guiduo Duan, Ke Qin, Yuan-Fang Li

Extensive experiments on two mainstream benchmark datasets, VG and Open Images (v6), show the superiority of our proposed model over a number of competitive SGG models in both continual learning and conventional settings.

Graph Generation • In-Context Learning +1

NormMark: A Weakly Supervised Markov Model for Socio-cultural Norm Discovery

no code implementations • 26 May 2023 • Farhad Moghimifar, Shilin Qu, Tongtong Wu, Yuan-Fang Li, Gholamreza Haffari

Norms, which are culturally accepted guidelines for behaviours, can be integrated into conversational models to generate utterances that are appropriate for the socio-cultural context.

Continual Multimodal Knowledge Graph Construction

1 code implementation • 15 May 2023 • Xiang Chen, Ningyu Zhang, Jintian Zhang, Xiaohan Wang, Tongtong Wu, Xi Chen, Yongheng Wang, Huajun Chen

Multimodal Knowledge Graph Construction (MKGC) involves creating structured representations of entities and relations using multiple modalities, such as text and images.

Continual Learning • graph construction +1

Learn from Yesterday: A Semi-Supervised Continual Learning Method for Supervision-Limited Text-to-SQL Task Streams

1 code implementation • 21 Nov 2022 • Yongrui Chen, Xinnan Guo, Tongtong Wu, Guilin Qi, Yang Li, Yang Dong

The first solution, Vanilla, performs self-training: it augments the supervised training data with pseudo-labeled instances predicted for the current task, and replaces full-volume retraining with episodic memory replay to balance training efficiency against performance on previous tasks.
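The self-training-plus-replay loop described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the model interface (`predict_with_score`, `fit`), the confidence threshold, and the memory-sampling sizes are all hypothetical.

```python
import random

def train_task(model, labeled, unlabeled, memory, confidence=0.9, replay_ratio=0.5):
    """One task in the stream: self-training plus episodic memory replay.

    `model` is assumed to expose predict_with_score() and fit();
    these names are illustrative placeholders, not the paper's API.
    """
    # Self-training: pseudo-label the unlabeled instances the model is confident about.
    pseudo = []
    for x in unlabeled:
        y, score = model.predict_with_score(x)
        if score >= confidence:
            pseudo.append((x, y))
    # Episodic memory replay: revisit a small sample of stored past examples
    # instead of retraining on the full volume of all previous tasks.
    replay = random.sample(memory, min(len(memory), int(replay_ratio * len(labeled)))) if memory else []
    model.fit(labeled + pseudo + replay)
    # Store a few labeled examples from this task for future replay.
    memory.extend(random.sample(labeled, min(len(labeled), 8)))
    return model
```

The replay sample caps memory cost per task while still exposing the model to earlier tasks, which is the efficiency/performance trade-off the snippet refers to.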

Continual Learning • Text-To-SQL

Towards Relation Extraction From Speech

1 code implementation • 17 Oct 2022 • Tongtong Wu, Guitao Wang, Jinming Zhao, Zhaoran Liu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari

We explore speech relation extraction via two approaches: a pipeline approach that performs text-based extraction on the output of a pretrained ASR module, and an end-to-end approach via a newly proposed encoder-decoder model, which we call SpeechRE.
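The pipeline approach amounts to composing two off-the-shelf components, as in this hedged sketch; the `transcribe` and `extract` method names are illustrative stand-ins for whatever ASR and relation-extraction models are plugged in, not the paper's code.

```python
def pipeline_speech_re(audio, asr_model, re_model):
    """Pipeline approach to speech relation extraction:
    transcribe speech with a pretrained ASR module, then run
    text-based relation extraction on the transcript.

    Method names here are hypothetical placeholders.
    """
    transcript = asr_model.transcribe(audio)  # speech -> text
    triples = re_model.extract(transcript)    # text -> (head, relation, tail) triples
    return triples
```

The trade-off motivating the end-to-end alternative is that ASR transcription errors in the first stage propagate into the extraction stage.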

Automatic Speech Recognition • Automatic Speech Recognition (ASR) +3

Neural Topic Modeling with Deep Mutual Information Estimation

no code implementations • 12 Mar 2022 • Kang Xu, Xiaoqiu Lu, Yuan-Fang Li, Tongtong Wu, Guilin Qi, Ning Ye, Dong Wang, Zheng Zhou

NTM-DMIE is a neural network method for topic learning which maximizes the mutual information between the input documents and their latent topic representation.
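Deep mutual information estimation of the kind the snippet mentions is typically done with a contrastive lower bound such as InfoNCE, where matched document/topic pairs score higher than mismatched ones. The sketch below shows that generic bound; it is an assumption for illustration, not NTM-DMIE's actual objective or architecture.

```python
import numpy as np

def infonce_mi_lower_bound(doc_emb, topic_emb, temperature=0.1):
    """InfoNCE-style lower bound (in nats) on the mutual information between
    documents and their latent topic representations; row i of each matrix
    is assumed to be a matched pair. A generic sketch, not NTM-DMIE's loss.
    """
    # Cosine similarities between every document and every topic representation.
    doc = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    top = topic_emb / np.linalg.norm(topic_emb, axis=1, keepdims=True)
    logits = doc @ top.T / temperature
    # Log-softmax over candidates: matched pairs sit on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(np.mean(np.diag(log_probs)) + np.log(len(doc)))
```

Maximizing this bound pushes each document's embedding toward its own topic representation and away from the others', which is one concrete way to "maximize the mutual information" in practice.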

Mutual Information Estimation • Text Clustering +1

Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation

1 code implementation • 27 Feb 2022 • Zhuang Li, Lizhen Qu, Qiongkai Xu, Tongtong Wu, Tianyang Zhan, Gholamreza Haffari

In this paper, we propose a variational autoencoder with disentanglement priors, VAE-DPRIOR, for task-specific natural language generation with none or a handful of task-specific labeled examples.

Data Augmentation • Disentanglement +3

Pretrained Language Model in Continual Learning: A Comparative Study

no code implementations • ICLR 2022 • Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan-Fang Li, Guilin Qi, Gholamreza Haffari

In this paper, we thoroughly compare continual learning performance across combinations of 5 PLMs and 4 categories of CL methods on 3 benchmarks in 2 typical incremental settings.

Continual Learning • Language Modelling

Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning

1 code implementation • EMNLP 2020 • Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Tongtong Wu

Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set.

Knowledge Base Question Answering • Meta Reinforcement Learning +3
