1 code implementation • 19 Mar 2024 • Takuya Akiba, Makoto Shing, Yujin Tang, Qi Sun, David Ha
Surprisingly, our Japanese Math LLM achieved state-of-the-art performance on a variety of established Japanese LLM benchmarks, even surpassing models with significantly more parameters, despite not being explicitly trained for such tasks.
no code implementations • 25 Oct 2019 • Yusuke Niitani, Toru Ogawa, Shuji Suzuki, Takuya Akiba, Tommi Kerola, Kohei Ozaki, Shotaro Sano
Using this method, team PFDet took 3rd and 4th place in the instance segmentation and object detection tracks, respectively.
no code implementations • 1 Aug 2019 • Seiya Tokui, Ryosuke Okuta, Takuya Akiba, Yusuke Niitani, Toru Ogawa, Shunta Saito, Shuji Suzuki, Kota Uenishi, Brian Vogel, Hiroyuki Yamazaki Vincent
Software frameworks for neural networks play a key role in the development and application of deep learning methods.
10 code implementations • 25 Jul 2019 • Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, Masanori Koyama
We present the design techniques that became necessary in developing software that meets the above criteria, and demonstrate the power of our new design through experimental results and real-world applications.
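This entry describes hyperparameter-optimization software built around a define-by-run design, in which the search space is constructed dynamically while the objective function executes rather than declared up front. The following is an illustrative sketch of that idea only, with hypothetical class and method names and plain random search standing in for the framework's actual samplers:

```python
import random

class Trial:
    """Minimal define-by-run trial: parameters are declared inside the
    objective function as it runs, not in a static search space."""
    def __init__(self):
        self.params = {}

    def suggest_float(self, name, low, high):
        value = random.uniform(low, high)
        self.params[name] = value
        return value

def optimize(objective, n_trials):
    """Random-search study loop; returns the best (value, params) pair."""
    best_value, best_params = float("inf"), None
    for _ in range(n_trials):
        trial = Trial()
        value = objective(trial)
        if value < best_value:
            best_value, best_params = value, trial.params
    return best_value, best_params

def objective(trial):
    # The search space is constructed dynamically: "y" is only sampled
    # when the branch that needs it actually executes.
    x = trial.suggest_float("x", -10, 10)
    if x > 0:
        y = trial.suggest_float("y", 0, 1)
        return (x - 2) ** 2 + y
    return x ** 2 + 5

random.seed(0)
best_value, best_params = optimize(objective, 500)
print(best_value, best_params)
```

The point of the conditional inside `objective` is that a static search-space declaration could not express it naturally; under define-by-run, each trial simply records whichever parameters it happened to sample.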
3 code implementations • NeurIPS 2019 • Mitsuru Kusumoto, Takuya Inoue, Gentaro Watanabe, Takuya Akiba, Masanori Koyama
Recomputation algorithms collectively refer to a family of methods that reduce the memory consumption of backpropagation by selectively discarding intermediate results of the forward propagation and recomputing the discarded results as needed.
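The basic trade-off can be shown with a toy checkpointing scheme: store only every k-th activation during the forward pass, then recompute each discarded segment from its nearest checkpoint during the backward pass. This is a minimal sketch of the general idea on a scalar chain of layers, not the paper's graph-theoretic algorithm:

```python
import math

# A toy chain of layers: h_{i+1} = tanh(w_i * h_i).
weights = [0.9, 1.1, 0.8, 1.2, 0.7, 1.3]

def forward(h, i):
    return math.tanh(weights[i] * h)

def forward_grad(h, i):
    """d forward(h, i) / dh, given the layer's input h."""
    t = math.tanh(weights[i] * h)
    return weights[i] * (1 - t * t)

def backprop_checkpointed(x, every=3):
    """Backpropagate d(output)/d(input), storing activations only at
    checkpoints and recomputing the rest segment by segment."""
    n = len(weights)
    # Forward pass: keep only every `every`-th input activation.
    checkpoints = {}
    h = x
    for i in range(n):
        if i % every == 0:
            checkpoints[i] = h
        h = forward(h, i)
    # Backward pass: recompute each segment from its checkpoint.
    grad = 1.0
    for start in sorted(checkpoints, reverse=True):
        end = min(start + every, n)
        # Recompute the activations inside this segment.
        seg = [checkpoints[start]]
        for i in range(start, end - 1):
            seg.append(forward(seg[-1], i))
        # Accumulate gradients through the segment (scalar chain rule).
        for i in reversed(range(start, end)):
            grad *= forward_grad(seg[i - start], i)
    return grad

def backprop_full(x):
    """Reference implementation: store every activation."""
    acts = [x]
    for i in range(len(weights)):
        acts.append(forward(acts[-1], i))
    grad = 1.0
    for i in reversed(range(len(weights))):
        grad *= forward_grad(acts[i], i)
    return grad

print(backprop_checkpointed(0.5), backprop_full(0.5))
```

With n layers and checkpoints every k layers, peak activation storage drops from O(n) to O(n/k + k) at the cost of one extra forward pass over each segment.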
no code implementations • CVPR 2019 • Yusuke Niitani, Takuya Akiba, Tommi Kerola, Toru Ogawa, Shotaro Sano, Shuji Suzuki
However, large datasets such as Open Images Dataset v4 (OID) are sparsely annotated, and measures must be taken to train a reliable detector.
no code implementations • 4 Sep 2018 • Takuya Akiba, Tommi Kerola, Yusuke Niitani, Toru Ogawa, Shotaro Sano, Shuji Suzuki
We present a large-scale object detection system by team PFDet.
1 code implementation • 31 Mar 2018 • Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jian-Yu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe
To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them.
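As an illustration of what "generating adversarial examples" means concretely, here is the canonical baseline attack, the Fast Gradient Sign Method (FGSM) of Goodfellow et al., sketched on a toy logistic classifier with an analytic input gradient (the model, weights, and epsilon are illustrative, not from the competition):

```python
import math

# Toy logistic "classifier": p(y=1 | x) = sigmoid(w . x + b).
w = [2.0, -1.0, 0.5]
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def loss_grad_x(x, y):
    """Gradient of the logistic loss w.r.t. the input x (analytic for
    this model): dL/dx = (p - y) * w."""
    p = predict(x)
    return [(p - y) * wi for wi in w]

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: take one step of size eps in the
    sign of the input gradient to increase the loss."""
    g = loss_grad_x(x, y)
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, g)]

x = [0.4, -0.2, 0.3]     # a correctly classified positive example
y = 1
x_adv = fgsm(x, y, eps=0.5)
print(predict(x), predict(x_adv))
```

A small, bounded perturbation (at most eps per coordinate) is enough to flip the prediction, which is exactly the failure mode the competition's defense track targeted.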
no code implementations • ICLR 2018 • Yusuke Tsuzuku, Hiroto Imachi, Takuya Akiba
We also analyze the efficiency using computation and communication cost models, providing evidence that this method enables distributed deep learning in many scenarios on commodity environments.
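A common form of such a communication cost model is the α-β model for ring all-reduce, the collective typically used to average gradients. This sketch is a generic textbook model with illustrative link parameters, not necessarily the exact model used in the paper:

```python
def ring_allreduce_time(p, n_bytes, alpha, beta):
    """Alpha-beta cost model for ring all-reduce over p workers:
    2(p-1) communication steps, each sending n/p bytes.
    alpha = per-message latency (s), beta = per-byte time (s/byte)."""
    return 2 * (p - 1) * (alpha + (n_bytes / p) * beta)

# Example: 100 MiB of gradients on a 10 GiB/s link with 10 us latency.
n = 100 * 1024 * 1024
beta = 1 / (10 * 1024**3)
for p in (8, 64, 1024):
    t = ring_allreduce_time(p, n, alpha=10e-6, beta=beta)
    print(p, round(t * 1000, 2), "ms")
```

The model makes the key scaling property visible: the bandwidth term 2(p-1)/p · nβ stays below 2nβ regardless of worker count, while the latency term 2(p-1)α grows linearly with p, which is why latency dominates at large scale.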
5 code implementations • 7 Feb 2018 • Yoshihiro Yamada, Masakazu Iwamura, Takuya Akiba, Koichi Kise
In this paper, to mitigate the overfitting of ResNet and its improvements (i.e., Wide ResNet, PyramidNet, and ResNeXt), we propose a new regularization method called ShakeDrop regularization.
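Roughly, ShakeDrop's forward pass replaces a residual block's output x + F(x) with x + (b + α − bα)·F(x), where b ~ Bernoulli(p) and α ~ U[−1, 1]: with probability p the branch is kept intact, and otherwise it is multiplied by a random, possibly negative coefficient. A sketch of just this forward rule (on plain lists, with a simple expectation-scaled inference path; the backward pass, which uses a separate random coefficient β, is omitted):

```python
import random

def shakedrop_forward(x, Fx, p, training=True):
    """ShakeDrop residual-branch forward (sketch).  x is the shortcut,
    Fx the residual branch output F(x), p the keep probability.
    At test time the branch is scaled by the coefficient's expectation
    E[b + alpha - b*alpha] = p (since E[alpha] = 0)."""
    if not training:
        return [xi + p * fi for xi, fi in zip(x, Fx)]
    b = 1 if random.random() < p else 0
    alpha = random.uniform(-1.0, 1.0)
    coeff = b + alpha - b * alpha   # = 1 if b == 1, = alpha if b == 0
    return [xi + coeff * fi for xi, fi in zip(x, Fx)]
```

When b = 1 the block behaves like an ordinary residual block; when b = 0 the branch contribution is randomly scaled or even sign-flipped, which is the source of the regularization effect.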
no code implementations • 12 Nov 2017 • Takuya Akiba, Shuji Suzuki, Keisuke Fukuda
We demonstrate that training ResNet-50 on ImageNet for 90 epochs can be achieved in 15 minutes with 1024 Tesla P100 GPUs.
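The headline number implies a concrete sustained throughput, which a quick back-of-envelope check makes explicit (using the ImageNet-1k training set size of 1,281,167 images; a sanity check, not a figure from the paper):

```python
# Back-of-envelope throughput for the 15-minute, 90-epoch ResNet-50 run.
images = 1_281_167          # ImageNet-1k training set size
epochs = 90
seconds = 15 * 60

total_images = images * epochs
throughput = total_images / seconds          # images/second overall
per_gpu = throughput / 1024                  # across 1024 Tesla P100 GPUs

print(round(throughput), round(per_gpu, 1))  # ~128,000 img/s, ~125 img/s/GPU
```

Roughly 125 images per second per P100 is a plausible per-device ResNet-50 training rate, so the system's challenge is keeping 1024 such GPUs fed and synchronized rather than raw per-device speed.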
1 code implementation • 31 Oct 2017 • Takuya Akiba, Keisuke Fukuda, Shuji Suzuki
One of the keys to deep learning's breakthroughs in various fields has been the use of high computational power, centered on GPUs.