no code implementations • 10 Nov 2023 • Jennifer Dodgson, Lin Nanzheng, Julian Peh, Akira Rafhael Janson Pattirane, Alfath Daryl Alhajir, Eko Ridho Dinarto, Joseph Lim, Syed Danyal Ahmad
Research into methods for improving the performance of large language models (LLMs) through fine-tuning, retrieval-augmented generation (RAG), and soft-prompting has tended to focus on highly technical or high-cost techniques, making many newly discovered approaches comparatively inaccessible to non-technical users.
no code implementations • L4DC 2020 • Karl Pertsch, Oleh Rybkin, Jingyun Yang, Shenghao Zhou, Konstantinos G. Derpanis, Kostas Daniilidis, Joseph Lim, Andrew Jaegle
We propose a model that learns to discover these important events and the times at which they occur, and uses them to represent the full sequence.
1 code implementation • ICML 2018 • Shao-Hua Sun, Hyeonwoo Noh, Sriram Somasundaram, Joseph Lim
To empower machines with this ability, we propose a neural program synthesizer that is able to explicitly synthesize underlying programs from behaviorally diverse and visually complicated demonstration videos.
no code implementations • NeurIPS 2017 • Karol Hausman, Yevgen Chebotar, Stefan Schaal, Gaurav Sukhatme, Joseph Lim
Imitation learning has traditionally been applied to learning a single task from demonstrations of that task.