no code implementations • 16 Feb 2024 • Hossein Rajabzadeh, Mojtaba Valipour, Tianshu Zhu, Marzieh Tahaei, Hyock Ju Kwon, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh
Finetuning large language models requires huge GPU memory, restricting the choice to smaller models.
no code implementations • 16 Sep 2023 • Hossein Rajabzadeh, Suyuchen Wang, Hyock Ju Kwon, Bang Liu
We employ a tool-interacting divide-and-conquer strategy that enables large language models (LLMs) to answer complex multimodal multi-hop questions.