Search Results for author: Qiaozhu Mei

Found 52 papers, 20 papers with code

Unlocking the `Why' of Buying: Introducing a New Dataset and Benchmark for Purchase Reason and Post-Purchase Experience

no code implementations20 Feb 2024 Tao Chen, Siqi Zuo, Cheng Li, Mingyang Zhang, Qiaozhu Mei, Michael Bendersky

To this end, we introduce an LLM-based approach to generate a dataset that consists of textual explanations of why real users make certain purchase decisions.

Explanation Generation Recommendation Systems

Bridging the Preference Gap between Retrievers and LLMs

no code implementations13 Jan 2024 Zixuan Ke, Weize Kong, Cheng Li, Mingyang Zhang, Qiaozhu Mei, Michael Bendersky

Large Language Models (LLMs) have demonstrated superior results across a wide range of tasks, and Retrieval-augmented Generation (RAG) is an effective way to enhance the performance by locating relevant information and placing it into the context window of the LLM.

Question Answering Retrieval
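The retrieve-then-read pattern described in the excerpt above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration and not the paper's bridge model: a TF-IDF retriever from scikit-learn stands in for a production retriever, and the final call to the LLM is omitted.

```python
# Minimal sketch of RAG: retrieve the most relevant document for a query and
# place it into the prompt that would be sent to an LLM (the LLM call is omitted).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["Ann Arbor is home to the University of Michigan.",
        "The Plackett-Luce model describes distributions over rankings.",
        "Emojis are widely used as non-verbal cues in online communication."]
query = "Which university is located in Ann Arbor?"

vec = TfidfVectorizer().fit(docs + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
top_doc = docs[sims.argmax()]                      # retrieved context

prompt = f"Context: {top_doc}\n\nQuestion: {query}\nAnswer:"
print(prompt)                                      # this string would go to the LLM
```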

A Turing Test: Are AI Chatbots Behaviorally Similar to Humans?

no code implementations19 Nov 2023 Qiaozhu Mei, Yutong Xie, Walter Yuan, Matthew O. Jackson

When their behaviors differ from average and modal human behaviors, they tend to fall on the more altruistic and cooperative end of the distribution.

Fairness

Automated Evaluation of Personalized Text Generation using Large Language Models

no code implementations17 Oct 2023 Yaqing Wang, Jiepu Jiang, Mingyang Zhang, Cheng Li, Yi Liang, Qiaozhu Mei, Michael Bendersky

Personalized text generation presents a specialized mechanism for delivering content that is specific to a user's personal context.

Text Generation text similarity

Emoji Promotes Developer Participation and Issue Resolution on GitHub

no code implementations30 Aug 2023 YuHang Zhou, Xuan Lu, Ge Gao, Qiaozhu Mei, Wei Ai

In this paper, we study how emoji usage influences developer participation and issue resolution in virtual workspaces.

Causal Inference

Teach LLMs to Personalize -- An Approach inspired by Writing Education

no code implementations15 Aug 2023 Cheng Li, Mingyang Zhang, Qiaozhu Mei, Yaqing Wang, Spurthi Amba Hombaiah, Yi Liang, Michael Bendersky

Inspired by the practice of writing education, we develop a multistage and multitask framework to teach LLMs for personalized generation.

Retrieval Text Generation

Ranking & Reweighting Improves Group Distributional Robustness

no code implementations9 May 2023 Yachuan Liu, Bohan Zhang, Qiaozhu Mei, Paramveer Dhillon

Recent work has shown that standard training via empirical risk minimization (ERM) can produce models that achieve high accuracy on average but low accuracy on underrepresented groups due to the prevalence of spurious features.

Information Retrieval Learning-To-Rank +2
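The failure mode described above is commonly quantified with worst-group accuracy: overall accuracy can be high while the minimum per-group accuracy stays low. The sketch below illustrates that evaluation with made-up predictions and group labels; it is not code from the paper.

```python
# Compare overall accuracy with per-group and worst-group accuracy.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # group 1 plays the underrepresented group

overall = (y_true == y_pred).mean()
per_group = {g: (y_true[groups == g] == y_pred[groups == g]).mean()
             for g in np.unique(groups)}
print(overall, per_group, min(per_group.values()))  # last value: worst-group accuracy
```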

A Prompt Log Analysis of Text-to-Image Generation Systems

1 code implementation8 Mar 2023 Yutong Xie, Zhaoying Pan, Jinge Ma, Luo Jie, Qiaozhu Mei

Despite the many efforts to improve these generative models, there is limited work on understanding the information needs of their users at scale.

Text-to-Image Generation

Team Resilience under Shock: An Empirical Analysis of GitHub Repositories during Early COVID-19 Pandemic

no code implementations29 Jan 2023 Xuan Lu, Wei Ai, Yixin Wang, Qiaozhu Mei

While many organizations have shifted to working remotely during the COVID-19 pandemic, how the remote workforce and remote teams are influenced by, and would respond to, this and future shocks remains largely unknown.

counterfactual

Why is constrained neural language generation particularly challenging?

no code implementations11 Jun 2022 Cristina Garbacea, Qiaozhu Mei

Recent advances in deep neural language models, combined with the capacity of large-scale datasets, have accelerated the development of natural language generation systems that produce fluent and coherent texts (to varying degrees of success) in a multitude of tasks and application contexts.

Text Generation

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

3 code implementations9 Jun 2022 Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. 
Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. 
Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu

BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.

Common Sense Reasoning Math +1

Partition-Based Active Learning for Graph Neural Networks

1 code implementation23 Jan 2022 Jiaqi Ma, Ziqiao Ma, Joyce Chai, Qiaozhu Mei

We study the problem of semi-supervised learning with Graph Neural Networks (GNNs) in an active learning setup.

Active Learning Node Classification
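As a rough illustration of the partition-then-query idea suggested by the title, the sketch below clusters a toy graph into communities and labels the most central node in each one. The community-detection routine and the degree-based selection rule are stand-ins chosen for the example, not the paper's actual algorithm.

```python
# Hedged sketch: partition a graph, then query one representative node per partition.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()
partitions = greedy_modularity_communities(G)   # stand-in graph partitioning
labeled = set()

for part in partitions:
    # Query the highest-degree unlabeled node in each partition for labeling.
    candidates = [n for n in part if n not in labeled]
    if candidates:
        labeled.add(max(candidates, key=G.degree))

print(sorted(labeled))  # one queried node per partition
```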

Fast Learning of MNL Model from General Partial Rankings with Application to Network Formation Modeling

1 code implementation31 Dec 2021 Jiaqi Ma, Xingjian Zhang, Qiaozhu Mei

The problem of learning a mixture of MNL models from partial rankings naturally arises in such applications.

Discrete Choice Models
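As background for readers unfamiliar with the underlying model, the multinomial logit (MNL) choice probability of picking item i from a choice set S is exp(u_i) / Σ_{j∈S} exp(u_j). The sketch below computes exactly that quantity; the paper's fast estimation procedure for mixtures under general partial rankings is not reproduced here.

```python
# MNL choice probability for a single choice from a set of alternatives.
import numpy as np

def mnl_choice_prob(utilities, choice_set, chosen):
    u = np.asarray([utilities[j] for j in choice_set], dtype=float)
    p = np.exp(u - u.max())      # subtract max for numerical stability
    p /= p.sum()
    return p[list(choice_set).index(chosen)]

utilities = {"a": 1.0, "b": 0.2, "c": -0.5}
print(mnl_choice_prob(utilities, ["a", "b", "c"], chosen="a"))
```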

How Much Space Has Been Explored? Measuring the Chemical Space Covered by Databases and Machine-Generated Molecules

no code implementations22 Dec 2021 Yutong Xie, Ziqiao Xu, Jiaqi Ma, Qiaozhu Mei

We further evaluate how well the existing databases and generation models cover the chemical space in terms of #Circles.

Drug Discovery

Subgroup Generalization and Fairness of Graph Neural Networks

1 code implementation NeurIPS 2021 Jiaqi Ma, Junwei Deng, Qiaozhu Mei

Despite enormously successful applications of graph neural networks (GNNs), theoretical understanding of their generalization ability, especially for node-level tasks where data are not independent and identically distributed (IID), has been sparse.

Fairness

Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem

2 code implementations21 Jun 2021 Jiaqi Ma, Junwei Deng, Qiaozhu Mei

This connection not only enhances our understanding on the problem of adversarial attack on GNNs, but also allows us to propose a group of effective and practical attack strategies.

Adversarial Attack

Emojis predict dropouts of remote workers: An empirical study of emoji usage on GitHub

no code implementations10 Feb 2021 Xuan Lu, Wei Ai, Zhenpeng Chen, Yanbin Cao, Qiaozhu Mei

This paper studies how emojis, as non-verbal cues in online communications, can be used for such purposes and how the emotional signals in emoji usage can be used to predict future behavior of workers.

Management

Black-Box Adversarial Attacks on Graph Neural Networks as An Influence Maximization Problem

no code implementations1 Jan 2021 Jiaqi Ma, Junwei Deng, Qiaozhu Mei

This connection not only enhances our understanding on the problem of adversarial attack on GNNs, but also allows us to propose a group of effective black-box attack strategies.

Adversarial Attack

UMSIForeseer at SemEval-2020 Task 11: Propaganda Detection by Fine-Tuning BERT with Resampling and Ensemble Learning

no code implementations SEMEVAL 2020 Yunzhe Jiang, Cristina Garbacea, Qiaozhu Mei

We describe our participation in the SemEval 2020 "Detection of Propaganda Techniques in News Articles" - Techniques Classification (TC) task, designed to categorize textual fragments into one of 14 given propaganda techniques.

Ensemble Learning Propaganda detection

CopulaGNN: Towards Integrating Representational and Correlational Roles of Graphs in Graph Neural Networks

2 code implementations ICLR 2021 Jiaqi Ma, Bo Chang, Xuefei Zhang, Qiaozhu Mei

In this paper, we distinguish the representational and the correlational roles played by the graphs in node-level prediction tasks, and we investigate how Graph Neural Network (GNN) models can effectively leverage both types of information.

SODEN: A Scalable Continuous-Time Survival Model through Ordinary Differential Equation Networks

1 code implementation19 Aug 2020 Weijing Tang, Jiaqi Ma, Qiaozhu Mei, Ji Zhu

In this paper, we propose a flexible model for survival analysis using neural networks along with scalable optimization algorithms.

Survival Analysis

Predicting Individual Treatment Effects of Large-scale Team Competitions in a Ride-sharing Economy

no code implementations7 Aug 2020 Teng Ye, Wei Ai, Lingyu Zhang, Ning Luo, Lulu Zhang, Jieping Ye, Qiaozhu Mei

Through interpreting the best-performing models, we discover many novel and actionable insights regarding how to optimize the design and the execution of team competitions on ride-sharing platforms.

Neural Language Generation: Formulation, Methods, and Evaluation

no code implementations31 Jul 2020 Cristina Garbacea, Qiaozhu Mei

Nevertheless, there is no standard way to assess the quality of text produced by these generative models, which constitutes a serious bottleneck towards the progress of the field.

Text Generation

Learning-to-Rank with Partitioned Preference: Fast Estimation for the Plackett-Luce Model

no code implementations9 Jun 2020 Jiaqi Ma, Xinyang Yi, Weijing Tang, Zhe Zhao, Lichan Hong, Ed H. Chi, Qiaozhu Mei

We investigate the Plackett-Luce (PL) model based listwise learning-to-rank (LTR) on data with partitioned preference, where a set of items are sliced into ordered and disjoint partitions, but the ranking of items within a partition is unknown.

Extreme Multi-Label Classification Learning-To-Rank +1
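As background, the standard Plackett-Luce likelihood of a full ranking factorizes into a sequence of softmax choices over the items not yet placed. The sketch below computes that log-likelihood from per-item scores; the paper's fast estimation under partitioned preferences is not reproduced here.

```python
# Log-likelihood of a full ranking under the Plackett-Luce model.
import numpy as np

def plackett_luce_log_likelihood(scores, ranking):
    """scores: per-item scores; ranking: item indices ordered from best to worst."""
    s = np.asarray(scores, dtype=float)[list(ranking)]
    log_lik = 0.0
    for i in range(len(s)):
        # Log-probability of choosing item i first among the items not yet placed.
        rest = s[i:]
        log_lik += s[i] - (rest.max() + np.log(np.exp(rest - rest.max()).sum()))
    return log_lik

# Example: item 2 has the highest score and is ranked first.
print(plackett_luce_log_likelihood([0.1, 0.5, 2.0], ranking=[2, 1, 0]))
```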

Towards More Practical Adversarial Attacks on Graph Neural Networks

2 code implementations NeurIPS 2020 Jiaqi Ma, Shuangrui Ding, Qiaozhu Mei

Our theoretical and empirical analyses suggest that there is a discrepancy between the loss and the misclassification rate, as the latter presents a diminishing-return pattern when the number of attacked nodes increases.

Classification General Classification

Graph Representation Learning via Multi-task Knowledge Distillation

no code implementations11 Nov 2019 Jiaqi Ma, Qiaozhu Mei

In this work, we demonstrate that, if available, the domain expertise used for designing handcrafted graph features can improve graph-level representation learning when training labels are scarce.

Graph Representation Learning Knowledge Distillation +1

SEntiMoji: An Emoji-Powered Learning Approach for Sentiment Analysis in Software Engineering

1 code implementation4 Jul 2019 Zhenpeng Chen, Yanbin Cao, Xuan Lu, Qiaozhu Mei, Xuanzhe Liu

However, commonly used out-of-the-box sentiment analysis tools cannot obtain reliable results on SE tasks, and the misunderstanding of technical jargon has been shown to be the main reason.

Representation Learning Sentiment Analysis

A Flexible Generative Framework for Graph-based Semi-supervised Learning

1 code implementation NeurIPS 2019 Jiaqi Ma, Weijing Tang, Ji Zhu, Qiaozhu Mei

In this work, we propose a flexible generative framework for graph-based semi-supervised learning, which approaches the joint distribution of the node features, labels, and the graph structure.

Missing Labels Variational Inference

Judge the Judges: A Large-Scale Evaluation Study of Neural Language Models for Online Review Generation

1 code implementation IJCNLP 2019 Cristina Garbacea, Samuel Carton, Shiyan Yan, Qiaozhu Mei

We conduct a large-scale, systematic study to evaluate the existing evaluation methods for natural language generation in the context of generating online product reviews.

Review Generation Text Generation

Emoji-Powered Representation Learning for Cross-Lingual Sentiment Classification

1 code implementation7 Jun 2018 Zhenpeng Chen, Sheng Shen, Ziniu Hu, Xuan Lu, Qiaozhu Mei, Xuanzhe Liu

To tackle this problem, cross-lingual sentiment classification approaches aim to transfer knowledge learned from one language that has abundant labeled examples (i.e., the source language, usually English) to another language with fewer labels (i.e., the target language).

Classification Cross-Lingual Sentiment Classification +5

Find the Conversation Killers: a Predictive Study of Thread-ending Posts

no code implementations22 Dec 2017 Yunhao Jiao, Cheng Li, Fei Wu, Qiaozhu Mei

In this study, we are particularly interested in identifying a post in a multi-party conversation that is unlikely to be further replied to, which therefore kills that thread of the conversation.

End-to-end Learning for Short Text Expansion

no code implementations30 Aug 2017 Jian Tang, Yue Wang, Kai Zheng, Qiaozhu Mei

A novel deep memory network is proposed to automatically find relevant information from a collection of longer documents and reformulate the short text through a gating mechanism.

Recommendation Systems text-classification +1

Deep Memory Networks for Attitude Identification

no code implementations16 Jan 2017 Cheng Li, Xiaoxiao Guo, Qiaozhu Mei

In this way, signals produced in target detection provide clues for polarity classification, and conversely, the predicted polarity provides feedback to the identification of targets.

BIG-bench Machine Learning General Classification

Less is More: Learning Prominent and Diverse Topics for Data Summarization

no code implementations29 Nov 2016 Jian Tang, Cheng Li, Ming Zhang, Qiaozhu Mei

With this reinforced random walk as a general process embedded in classical topic models, we obtain diverse topic models that are able to extract the most prominent and diverse topics from data.

Data Summarization Topic Models

Context-aware Natural Language Generation with Recurrent Neural Networks

1 code implementation29 Nov 2016 Jian Tang, Yifan Yang, Sam Carton, Ming Zhang, Qiaozhu Mei

This paper studies the generation of natural language in particular contexts or situations.

Text Generation

Identity-sensitive Word Embedding through Heterogeneous Networks

no code implementations29 Nov 2016 Jian Tang, Meng Qu, Qiaozhu Mei

Based on an identity-labeled text corpus, a heterogeneous network of words and word identities is constructed to model different levels of word co-occurrence.

Network Embedding text-classification +3

DeepCas: an End-to-end Predictor of Information Cascades

1 code implementation16 Nov 2016 Cheng Li, Jiaqi Ma, Xiaoxiao Guo, Qiaozhu Mei

While many believe that they are inherently unpredictable, recent work has shown that some key properties of information cascades, such as size, growth, and shape, can be predicted by a machine learning algorithm that combines many features.

DeepGraph: Graph Structure Predicts Network Growth

no code implementations20 Oct 2016 Cheng Li, Xiaoxiao Guo, Qiaozhu Mei

Conventionally, a graph structure is represented using an adjacency matrix or a set of hand-crafted structural features.

Visualizing Large-scale and High-dimensional Data

5 code implementations1 Feb 2016 Jian Tang, Jingzhou Liu, Ming Zhang, Qiaozhu Mei

We propose LargeVis, a technique that first constructs an accurately approximated K-nearest neighbor graph from the data and then lays out the graph in a low-dimensional space.

graph construction Vocal Bursts Intensity Prediction
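The two-stage pipeline in the excerpt above (approximate KNN graph, then low-dimensional layout) can be mimicked with off-the-shelf tools, as in the rough sketch below. The generic spring layout used here is only a stand-in for LargeVis's own probabilistic layout objective, and scikit-learn's exact KNN graph stands in for its approximate one.

```python
# Rough two-stage sketch: build a KNN graph of the data, then lay it out in 2-D.
import numpy as np
import networkx as nx
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))                               # 500 points in 50 dimensions

knn = kneighbors_graph(X, n_neighbors=10, mode="distance")   # sparse KNN graph
G = nx.from_scipy_sparse_array(knn)

coords = nx.spring_layout(G, dim=2, seed=0)                  # 2-D embedding of the graph
print(coords[0])                                             # low-dimensional position of point 0
```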

The DARPA Twitter Bot Challenge

no code implementations20 Jan 2016 V. S. Subrahmanian, Amos Azaria, Skylar Durst, Vadim Kagan, Aram Galstyan, Kristina Lerman, Linhong Zhu, Emilio Ferrara, Alessandro Flammini, Filippo Menczer, Andrew Stevens, Alexander Dekhtyar, Shuyang Gao, Tad Hogg, Farshad Kooti, Yan Liu, Onur Varol, Prashant Shiralkar, Vinod Vydiswaran, Qiaozhu Mei, Tim Hwang

A number of organizations ranging from terrorist groups such as ISIS to politicians and nation states reportedly conduct explicit campaigns to influence opinion on social media, posing a risk to democratic processes.

PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks

1 code implementation2 Aug 2015 Jian Tang, Meng Qu, Qiaozhu Mei

One possible reason is that these text embedding methods learn the representation of text in a fully unsupervised way, without leveraging the labeled information available for the task.

Representation Learning

LINE: Large-scale Information Network Embedding

8 code implementations12 Mar 2015 Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, Qiaozhu Mei

This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction.

Graph Embedding Link Prediction +2
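A stripped-down illustration of LINE's first-order proximity objective appears below: embeddings of connected nodes are pulled together by the gradient of log sigmoid(z_u·z_v), while randomly sampled negative pairs are pushed apart. This toy SGD loop is a sketch under simplifying assumptions (uniform negative sampling, a 4-node cycle), not the paper's weighted edge-sampling implementation.

```python
# Toy SGD loop for LINE's first-order objective with negative sampling.
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]            # toy 4-node cycle graph
n_nodes, dim, lr, n_neg = 4, 8, 0.05, 2
Z = rng.normal(scale=0.1, size=(n_nodes, dim))      # node embedding matrix

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(500):
    for u, v in edges:
        # Positive pair: gradient ascent on log sigmoid(z_u . z_v).
        g = 1.0 - sigmoid(Z[u] @ Z[v])
        du, dv = g * Z[v], g * Z[u]
        Z[u] += lr * du
        Z[v] += lr * dv
        # Negative samples: gradient ascent on log sigmoid(-z_u . z_w).
        for w in rng.integers(0, n_nodes, size=n_neg):
            g = -sigmoid(Z[u] @ Z[w])
            du, dw = g * Z[w], g * Z[u]
            Z[u] += lr * du
            Z[w] += lr * dw

print(Z[0] @ Z[1], Z[0] @ Z[2])   # similarity to a neighbor vs. a non-neighbor of node 0
```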

"Look Ma, No Hands!" A Parameter-Free Topic Model

no code implementations10 Sep 2014 Jian Tang, Ming Zhang, Qiaozhu Mei

We show that the new parameter can be further eliminated by two parameter-free treatments: either by monitoring the diversity among the discovered topics or by a weak supervision from users in the form of an exemplar topic.

Model Selection Topic Models

GenDeR: A Generic Diversified Ranking Algorithm

no code implementations NeurIPS 2012 Jingrui He, Hanghang Tong, Qiaozhu Mei, Boleslaw Szymanski

In this paper, we consider a generic setting where we aim to diversify the top-k ranking list based on an arbitrary relevance function and an arbitrary similarity function among all the examples.

Information Retrieval Retrieval
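One classic instance of this generic setting is the maximal marginal relevance (MMR) heuristic, sketched below: greedily pick the item with the best trade-off between its relevance score and its similarity to items already chosen. This illustrates the problem setup only; it is not the GenDeR algorithm itself.

```python
# Greedy diversified top-k selection (MMR-style) given relevance and similarity.
import numpy as np

def diversified_top_k(relevance, similarity, k, lam=0.7):
    selected = []
    candidates = list(range(len(relevance)))
    while len(selected) < k and candidates:
        def score(i):
            max_sim = max((similarity[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * max_sim
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rel = [0.9, 0.85, 0.8, 0.4]
sim = np.array([[1, .95, .1, .2], [.95, 1, .1, .2], [.1, .1, 1, .3], [.2, .2, .3, 1]])
print(diversified_top_k(rel, sim, k=3))  # promotes item 2 above item 1, a near-duplicate of item 0
```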
