1 code implementation • BigScience (ACL) 2022 • Sameera Horawalavithana, Ellyn Ayton, Shivam Sharma, Scott Howland, Megha Subramanian, Scott Vasquez, Robin Cosbey, Maria Glenski, Svitlana Volkova
Foundation models pre-trained on large corpora demonstrate significant gains across many natural language processing tasks and domains, e.g., law, healthcare, and education.
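As a rough illustration of how a pre-trained foundation model is typically adapted to a downstream domain task, the sketch below fine-tunes a generic encoder for classification with Hugging Face transformers; the model name, dataset, and hyperparameters are placeholders, not the configuration used in the paper.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

# Illustrative choices: any pre-trained encoder and any labeled domain corpus.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")  # placeholder stand-in for a domain dataset

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"].shuffle(seed=0).select(range(2000)),
    eval_dataset=dataset["test"].select(range(500)),
)
trainer.train()
```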
no code implementations • EMNLP (sustainlp) 2021 • Maria Glenski, William I. Sealy, Kate Miller, Dustin Arendt
Traditional synonym recommendations often include suggestions that are ill-suited to a writer's specific context.
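A minimal sketch of one way a synonym recommender could take the writer's context into account, scoring candidates against both the target word and its surrounding words; the toy embeddings and weighting are hypothetical and not the paper's method.

```python
import numpy as np

# Toy pre-computed word embeddings (hypothetical values, illustration only).
embeddings = {
    "bright":    np.array([0.9, 0.1, 0.3]),
    "brilliant": np.array([0.8, 0.2, 0.4]),
    "sunny":     np.array([0.7, 0.6, 0.1]),
    "student":   np.array([0.2, 0.1, 0.9]),
    "weather":   np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_synonyms(target, candidates, context_words):
    """Rank candidate synonyms by fit with the surrounding context,
    not just by similarity to the target word in isolation."""
    context = np.mean([embeddings[w] for w in context_words], axis=0)
    scored = []
    for c in candidates:
        score = 0.5 * cosine(embeddings[c], embeddings[target]) \
              + 0.5 * cosine(embeddings[c], context)
        scored.append((c, round(score, 3)))
    return sorted(scored, key=lambda x: x[1], reverse=True)

# "bright" describing a student: "brilliant" should outrank "sunny".
print(rank_synonyms("bright", ["brilliant", "sunny"], ["student"]))
```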
1 code implementation • 14 Apr 2022 • Sameera Horawalavithana, Ellyn Ayton, Anastasiya Usenko, Shivam Sharma, Jasmine Eshun, Robin Cosbey, Maria Glenski, Svitlana Volkova
Machine learning models that learn from dynamic graphs face nontrivial challenges in learning and inference as both nodes and edges change over time.
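One common way to make a changing graph tractable is to bucket timestamped edges into discrete snapshots and learn on each one; the sketch below shows only that discretization step, with an illustrative edge list and window size rather than the paper's data or model.

```python
from collections import defaultdict
import networkx as nx

# Timestamped edges (u, v, t) for a toy dynamic graph; values are illustrative.
temporal_edges = [
    ("a", "b", 1), ("b", "c", 2), ("a", "c", 5),
    ("c", "d", 6), ("b", "d", 9), ("a", "d", 10),
]

def snapshots(edges, window):
    """Bucket timestamped edges into fixed-width windows and build one
    static graph per window: the usual discrete-time view of a dynamic graph."""
    buckets = defaultdict(list)
    for u, v, t in edges:
        buckets[t // window].append((u, v))
    graphs = {}
    for w, bucket in sorted(buckets.items()):
        g = nx.Graph()
        g.add_edges_from(bucket)
        graphs[w] = g
    return graphs

for w, g in snapshots(temporal_edges, window=5).items():
    print(f"window {w}: {g.number_of_nodes()} nodes, {g.number_of_edges()} edges")
```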
1 code implementation • 15 Mar 2022 • Rishabh Joshi, Vidhisha Balachandran, Emily Saldanha, Maria Glenski, Svitlana Volkova, Yulia Tsvetkov
Keyphrase extraction aims at automatically extracting a list of "important" phrases representing the key concepts in a document.
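To make the task concrete, here is a deliberately naive, RAKE-style keyphrase extractor: it splits text on punctuation and stopwords and ranks the remaining runs of content words by frequency and length. The stoplist and scoring are simplistic placeholders, not the method proposed in the paper.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "on", "for", "and", "to", "is",
             "are", "at", "that", "this", "with", "as", "by"}

def candidate_phrases(text):
    """RAKE-style candidates: split on punctuation and stopwords, keep the
    remaining runs of content words as phrases."""
    phrases = []
    for chunk in re.split(r"[.,;:!?()]", text.lower()):
        current = []
        for w in chunk.split():
            if w in STOPWORDS:
                if current:
                    phrases.append(" ".join(current))
                current = []
            else:
                current.append(w)
        if current:
            phrases.append(" ".join(current))
    return phrases

def extract_keyphrases(text, k=3):
    counts = Counter(candidate_phrases(text))
    # Rank by frequency, breaking ties in favor of longer phrases.
    return sorted(counts, key=lambda p: (counts[p], len(p.split())), reverse=True)[:k]

doc = ("Keyphrase extraction identifies important phrases in a document. "
       "Good keyphrase extraction supports search and summarization of a document.")
print(extract_keyphrases(doc))
```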
no code implementations • EMNLP (CINLP) 2021 • Maria Glenski, Svitlana Volkova
Drawing causal conclusions from observational, real-world data is highly desirable but remains a challenging task.
no code implementations • RDSM (COLING) 2020 • Maria Glenski, Ellyn Ayton, Robin Cosbey, Dustin Arendt, Svitlana Volkova
Our analyses reveal a significant drop in performance when neural models are tested on out-of-domain data and non-English languages, a drop that may be mitigated by using more diverse training data.
no code implementations • NAACL (SocialNLP) 2021 • Maria Glenski, Ellyn Ayton, Robin Cosbey, Dustin Arendt, Svitlana Volkova
With the increasing use of machine-learning-driven algorithmic judgements, it is critical to develop models that are robust to evolving or manipulated inputs.
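A minimal sketch of the kind of check such robustness work implies: perturb inputs with simple character-level noise and compare a model's scores on clean versus manipulated text. The perturbation, the toy keyword "model", and the gap metric are all illustrative assumptions, not the paper's evaluation.

```python
import random

def perturb(text, rate=0.1, seed=0):
    """Randomly swap adjacent characters inside words to simulate a simple
    manipulated or noisy input (a character-level perturbation)."""
    rng = random.Random(seed)
    words = []
    for w in text.split():
        chars = list(w)
        if len(chars) > 3 and rng.random() < rate * len(chars):
            i = rng.randrange(1, len(chars) - 2)
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        words.append("".join(chars))
    return " ".join(words)

def robustness_gap(model_score, texts):
    """Average score on clean inputs minus average score on perturbed inputs;
    a large gap indicates brittleness to evolving or manipulated inputs."""
    clean = sum(model_score(t) for t in texts) / len(texts)
    noisy = sum(model_score(perturb(t)) for t in texts) / len(texts)
    return clean - noisy

# Hypothetical stand-in "model": scores how many known keywords appear.
keywords = {"official", "confirmed", "report"}
score = lambda t: sum(w in keywords for w in t.lower().split()) / 3
print(robustness_gap(score, ["Official report confirmed the event",
                             "An unconfirmed rumor spread quickly"]))
```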
no code implementations • 27 Sep 2020 • Brittany Davis, Maria Glenski, William Sealy, Dustin Arendt
However, the focus on trust is too narrow and has led the research community astray from tried-and-true empirical methods that produced more defensible scientific knowledge about people and explanations.
BIG-bench Machine Learning • Explainable Artificial Intelligence (XAI)
no code implementations • 21 Sep 2020 • Galen Weld, Peter West, Maria Glenski, David Arbour, Ryan Rossi, Tim Althoff
Across 648 experiments and two datasets, we evaluate the commonly used causal inference methods and identify their strengths and weaknesses, to inform social media researchers seeking to use such methods and to guide future improvements.
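As an example of one commonly used causal inference method in this family, the sketch below applies inverse propensity weighting to synthetic observational data and compares it with a naive difference in means; the data-generating process and use of scikit-learn are illustrative, not the paper's experimental setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic observational data: a confounder x drives both treatment and outcome.
x = rng.normal(size=n)
treat = rng.binomial(1, 1 / (1 + np.exp(-x)))       # treatment depends on x
y = 2.0 * treat + 1.5 * x + rng.normal(size=n)      # true treatment effect = 2.0

# Naive difference in means is biased by the confounder.
naive = y[treat == 1].mean() - y[treat == 0].mean()

# Inverse propensity weighting: model P(treat | x), then reweight outcomes.
prop = LogisticRegression().fit(x.reshape(-1, 1), treat).predict_proba(x.reshape(-1, 1))[:, 1]
ipw = np.mean(treat * y / prop) - np.mean((1 - treat) * y / (1 - prop))

print(f"naive estimate: {naive:.2f}, IPW estimate: {ipw:.2f} (true effect 2.0)")
```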
no code implementations • 1 Jul 2019 • Maria Glenski, Tim Weninger, Svitlana Volkova
Social media signals have been successfully used to develop large-scale predictive and anticipatory analytics.
no code implementations • ACL 2018 • Maria Glenski, Tim Weninger, Svitlana Volkova
In the age of social news, it is important to understand the types of reactions that are evoked from news sources with various levels of credibility.
no code implementations • 17 Oct 2017 • Maria Glenski, Ellyn Ayton, Dustin Arendt, Svitlana Volkova
We evaluate the predictive power of models trained on varied text and image representations extracted from tweets.
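For a sense of what one simple text representation in such a setup looks like, the sketch below feeds TF-IDF features from toy, tweet-like texts into a logistic regression; the examples, labels, and pipeline are invented placeholders rather than the paper's features or models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy tweet-like texts with invented engagement labels (1 = high response).
tweets = [
    "breaking news official statement released",
    "lol this meme is hilarious",
    "new study finds surprising result",
    "good morning everyone have a nice day",
]
labels = [1, 1, 1, 0]

# One simple text representation (TF-IDF) feeding a linear predictive model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["official statement on new study"]))
```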