1 code implementation • 31 Mar 2024 • Shiwen Shan, Yintong Huo, Yuxin Su, Yichen Li, Dan Li, Zibin Zheng
Building on insights from the preliminary study, we propose an LLM-based two-stage strategy that enables end-users to localize root-cause configuration properties from logs.
no code implementations • 6 Feb 2024 • Yichen Li, Yun Peng, Yintong Huo, Michael R. Lyu
We conducted preliminary experiments to validate the performance of IDECoder and observed that this synergy is a promising direction for future exploration.
no code implementations • 8 Jun 2023 • Jinyang Liu, JunJie Huang, Yintong Huo, Zhihan Jiang, Jiazhen Gu, Zhuangbin Chen, Cong Feng, Minzhi Yan, Michael R. Lyu
System logs play a critical role in maintaining the reliability of software systems.
no code implementations • 12 Oct 2022 • Hongming Zhang, Yintong Huo, Yanai Elazar, Yangqiu Song, Yoav Goldberg, Dan Roth
We first align commonsense tasks with relevant knowledge from commonsense knowledge bases and ask human annotators to judge whether that knowledge is sufficient.
1 code implementation • 13 Dec 2020 • Hongming Zhang, Yintong Huo, Xinran Zhao, Yangqiu Song, Dan Roth
Compared with purely text-based approaches, learning causality from visual signals has the following advantages: (1) causal knowledge is a form of commonsense knowledge that is rarely stated explicitly in text but is abundant in videos; (2) most events in a video are naturally time-ordered, providing a rich resource for mining causal knowledge; (3) all objects in a video can serve as context for studying the contextual properties of causal relations.