no code implementations • 6 May 2024 • Xingyou Song, Yingtao Tian, Robert Tjarko Lange, Chansoo Lee, Yujin Tang, Yutian Chen
Their incorporation has been rapid and transformative, marking a significant paradigm shift in the field of machine learning research.
1 code implementation • 25 Mar 2024 • Yujin Tang, Peijie Dong, Zhenheng Tang, Xiaowen Chu, Junwei Liang
Combining CNNs or ViTs with RNNs for spatiotemporal forecasting has yielded unparalleled results in predicting temporal and spatial dynamics.
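The general recipe behind this line of work pairs a convolutional (or ViT) encoder that captures spatial structure with a recurrent module that models temporal dynamics. Below is a minimal PyTorch sketch of that recipe, assuming toy 16x16 single-channel frames and a next-frame objective; it is illustrative only, not the architecture proposed in this paper.

```python
# Minimal sketch of the CNN-encoder + RNN recipe for spatiotemporal forecasting.
# Illustrative only; not the model proposed in the paper above.
import torch
import torch.nn as nn

class ConvRNNForecaster(nn.Module):
    def __init__(self, channels=1, hidden=64):
        super().__init__()
        # CNN encoder: maps each frame to a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # RNN: models temporal dynamics over the per-frame features.
        self.rnn = nn.LSTM(64, hidden, batch_first=True)
        # Decoder: predicts the next frame from the last hidden state.
        self.decoder = nn.Linear(hidden, channels * 16 * 16)

    def forward(self, frames):                  # frames: (B, T, C, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.decoder(out[:, -1]).view(b, -1, 16, 16)

pred = ConvRNNForecaster()(torch.randn(2, 10, 1, 16, 16))  # -> (2, 1, 16, 16)
```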
1 code implementation • 19 Mar 2024 • Takuya Akiba, Makoto Shing, Yujin Tang, Qi Sun, David Ha
Surprisingly, our Japanese Math LLM achieved state-of-the-art performance on a variety of established Japanese LLM benchmarks, even surpassing models with significantly more parameters, despite not being explicitly trained for such tasks.
1 code implementation • 5 Mar 2024 • Robert Tjarko Lange, Yingtao Tian, Yujin Tang
Given a trajectory of evaluations and search distribution statistics, Evolution Transformer outputs a performance-improving update to the search distribution.
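The interface implied here takes the evaluated candidates together with the current search-distribution statistics and returns an improved distribution. The sketch below illustrates that ask-evaluate-tell interface with a hand-written rank-weighted update standing in for the learned Transformer; the function name and toy objective are hypothetical, not the paper's API.

```python
# Illustrative ask-evaluate-tell loop: given evaluated candidates and current
# search-distribution statistics, return an updated distribution. The
# rank-weighted rule is a stand-in for the learned model, not the paper's code.
import numpy as np

def distribution_update(candidates, fitnesses, mean, std):
    """candidates: (pop, dim); fitnesses: (pop,); mean, std: (dim,)."""
    order = np.argsort(fitnesses)[::-1]            # best candidates first
    weights = np.log(len(order) + 0.5) - np.log(np.arange(1, len(order) + 1))
    weights /= weights.sum()
    ranked = candidates[order]
    new_mean = weights @ ranked                    # move mean toward good samples
    new_std = np.sqrt(weights @ (ranked - mean) ** 2) + 1e-8
    return new_mean, new_std

rng = np.random.default_rng(0)
mean, std = np.zeros(2), np.ones(2)
for _ in range(50):                                # ask-evaluate-tell loop
    pop = mean + std * rng.standard_normal((16, 2))
    fit = -np.sum((pop - 3.0) ** 2, axis=1)        # toy objective: reach (3, 3)
    mean, std = distribution_update(pop, fit, mean, std)
print(mean)                                        # approaches [3., 3.]
```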
no code implementations • 28 Feb 2024 • Robert Tjarko Lange, Yingtao Tian, Yujin Tang
Large Transformer models are capable of implementing a plethora of so-called in-context learning algorithms.
no code implementations • 7 Feb 2024 • Yuji Roh, Qingyun Liu, Huan Gui, Zhe Yuan, Yujin Tang, Steven Euijong Whang, Liang Liu, Shuchao Bi, Lichan Hong, Ed H. Chi, Zhe Zhao
By combining two complementing models, LEVI effectively suppresses problematic features in both the fine-tuning data and pre-trained model and preserves useful features for new tasks.
1 code implementation • NeurIPS 2023 • Robert Tjarko Lange, Yujin Tang, Yingtao Tian
Recently, the Deep Learning community has become interested in evolutionary optimization (EO) as a means to address hard optimization problems, e.g., meta-learning through long inner-loop unrolls or optimizing non-differentiable operators.
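One reason EO is attractive for such problems is that it only needs fitness evaluations, not gradients. A minimal sketch, assuming a toy 0/1-accuracy objective (piecewise constant, so gradient-based training gets no useful signal), shows a simple mutation-and-selection loop making steady progress; it is not one of the optimizers benchmarked in the paper.

```python
# Why EO handles non-differentiable objectives: a mutation-and-selection loop
# maximizing 0/1 accuracy, which has zero gradient almost everywhere.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X @ np.array([1., -2., 0.5, 0., 3.]) > 0).astype(int)   # toy labels

def accuracy(w):                       # piecewise-constant, non-differentiable
    return np.mean(((X @ w) > 0).astype(int) == y)

parent, best = np.zeros(5), 0.0
for _ in range(300):
    children = parent + 0.1 * rng.standard_normal((32, 5))   # mutate
    scores = np.array([accuracy(c) for c in children])
    if scores.max() >= best:                                  # select
        best, parent = scores.max(), children[scores.argmax()]
print(best)                            # approaches 1.0 despite no gradient signal
```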
1 code implementation • 4 Oct 2023 • Yujin Tang, Jiaming Zhou, Xiang Pan, Zeying Gong, Junwei Liang
To address these limitations, we introduce PostRainBench, a comprehensive multi-variable NWP post-processing benchmark consisting of three datasets for precipitation forecasting based on NWP post-processing.
2 code implementations • 1 Oct 2023 • Zeying Gong, Yujin Tang, Junwei Liang
Although the Transformer has been the dominant architecture for time series forecasting tasks in recent years, a fundamental challenge remains: the permutation-invariant self-attention mechanism within Transformers leads to a loss of temporal information.
Ranked #1 on Time Series Forecasting on ETTh2 (336) Multivariate
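The permutation-invariance issue noted in the abstract above is easy to verify numerically: without positional information, pooling self-attention outputs yields the same representation for a series and for any shuffling of its time steps. The snippet below is an illustration of that property, not code from the paper.

```python
# Self-attention without positional information is permutation-invariant, so a
# pooled attention representation of a time series ignores temporal order.
import numpy as np

def self_attention(x):                       # x: (T, d)
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

rng = np.random.default_rng(0)
series = rng.standard_normal((8, 4))         # 8 time steps, 4 features
perm = rng.permutation(8)

pooled = self_attention(series).mean(axis=0)
pooled_shuffled = self_attention(series[perm]).mean(axis=0)
print(np.allclose(pooled, pooled_shuffled))  # True: temporal order is lost

pos = np.arange(8)[:, None] * 0.1            # crude positional encoding
pooled_pos = self_attention(series + pos).mean(axis=0)
pooled_pos_shuffled = self_attention(series[perm] + pos).mean(axis=0)
print(np.allclose(pooled_pos, pooled_pos_shuffled))  # False: order matters again
```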
1 code implementation • 21 Apr 2023 • Shanchuan Wan, Yujin Tang, Yingtao Tian, Tomoyuki Kaneko
Exploration is a fundamental aspect of reinforcement learning (RL), and its effectiveness is a deciding factor in the performance of RL algorithms, especially when facing sparse extrinsic rewards.
1 code implementation • 28 Nov 2022 • So Kuroki, Tatsuya Matsushima, Jumpei Arima, Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu, Yujin Tang
While natural systems often exhibit collective intelligence that allows them to self-organize and adapt to changes, the equivalent is missing in most artificial systems.
1 code implementation • 5 Aug 2022 • Aleksandar Stanić, Yujin Tang, David Ha, Jürgen Schmidhuber
We show that current agents struggle to generalize, and introduce novel object-centric agents that improve over strong baselines.
2 code implementations • 13 Apr 2022 • Federico Pigozzi, Yujin Tang, Eric Medvet, David Ha
We show experimentally that the evolved robots are effective in the task of locomotion: thanks to self-attention, instances of the same controller embodied in the same robot can focus on different inputs.
1 code implementation • 10 Feb 2022 • Yujin Tang, Yingtao Tian, David Ha
Evolutionary computation has been shown to be a highly effective method for training neural networks, particularly when employed at scale on CPU clusters.
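The core pattern in this line of work is a population of parameter vectors whose fitness is evaluated in parallel. A minimal JAX sketch of that idea, with a toy policy and fitness function (hypothetical names, not the API of the paper's library), evaluates an entire population in one batched call.

```python
# Sketch of vectorized-population neuroevolution: evaluate a whole population
# of small policy networks in one batched call via jax.vmap. Illustrative only.
import jax
import jax.numpy as jnp

def policy(params, obs):                       # tiny 2-layer tanh network
    h = jnp.tanh(obs @ params["w1"])
    return jnp.tanh(h @ params["w2"])

def fitness(params):                           # toy task: match a target action
    obs = jnp.ones(4)
    return -jnp.sum((policy(params, obs) - 0.5) ** 2)

def sample_population(key, pop_size=64):
    k1, k2 = jax.random.split(key)
    return {"w1": 0.1 * jax.random.normal(k1, (pop_size, 4, 8)),
            "w2": 0.1 * jax.random.normal(k2, (pop_size, 8, 2))}

pop = sample_population(jax.random.PRNGKey(0))
scores = jax.jit(jax.vmap(fitness))(pop)       # one batched evaluation
print(scores.shape, float(scores.max()))       # (64,), best fitness in population
```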
no code implementations • 29 Nov 2021 • David Ha, Yujin Tang
In this review, we will provide a historical context of neural network research's involvement with complex systems, and highlight several active areas in modern deep learning research that incorporate the principles of collective intelligence to advance its current capabilities.
3 code implementations • NeurIPS 2021 • Yujin Tang, David Ha
In complex systems, we often observe complex global behavior emerge from a collection of agents interacting with each other in their environment, with each individual agent acting only on locally available information, without knowing the full picture.
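One concrete way to realize such "local information only" agents is to process every observation component with the same shared module and aggregate the results with attention, so that no single unit ever sees the full picture and the aggregate is invariant to how the sensors are ordered. The sketch below is a hedged illustration of that idea with toy weights, not the exact architecture from the paper.

```python
# Each sensor sees only its own scalar; a shared module plus attention pooling
# produces a permutation-invariant summary. Illustrative toy example only.
import numpy as np

rng = np.random.default_rng(0)
W_msg, W_key = rng.standard_normal((1, 8)), rng.standard_normal((1, 8))
query = rng.standard_normal(8)

def act(observation):
    msgs = np.tanh(observation[:, None] @ W_msg)    # each sensor: local input only
    keys = np.tanh(observation[:, None] @ W_key)
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    pooled = weights @ msgs                          # permutation-invariant summary
    return np.tanh(pooled.sum())                     # toy scalar action

obs = rng.standard_normal(12)
print(act(obs), act(obs[rng.permutation(12)]))       # same action either way
```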
no code implementations • 3 Aug 2020 • Yujin Tang, Jie Tan, Tatsuya Harada
In contrast to prior works that used only one adversary, we find that training an ensemble of adversaries, each of which specializes in a different escaping strategy, is essential for the protagonist to master agility.
3 code implementations • 18 Mar 2020 • Yujin Tang, Duong Nguyen, David Ha
Inattentional blindness is the psychological phenomenon that causes one to miss things in plain sight.