no code implementations • 4 Mar 2024 • Xiaoliang Luo, Akilles Rechardt, Guangzhi Sun, Kevin K. Nejad, Felipe Yáñez, Bati Yilmaz, Kangjoo Lee, Alexandra O. Cohen, Valentina Borghesani, Anton Pashkov, Daniele Marinazzo, Jonathan Nicholas, Alessandro Salatiello, Ilia Sucholutsky, Pasquale Minervini, Sepehr Razavi, Roberta Rocca, Elkhan Yusifov, Tereza Okalova, Nianlong Gu, Martin Ferianc, Mikail Khona, Kaustubh R. Patil, Pui-Shee Lee, Rui Mata, Nicholas E. Myers, Jennifer K Bizley, Sebastian Musslick, Isil Poyraz Bilgin, Guiomar Niso, Justin M. Ales, Michael Gaebler, N Apurva Ratan Murty, Leyla Loued-Khenissi, Anna Behler, Chloe M. Hall, Jessica Dafflon, Sherry Dongqi Bao, Bradley C. Love
LLMs trained on the vast scientific literature could potentially integrate noisy yet interrelated findings to forecast novel results better than human experts.
no code implementations • 9 Feb 2024 • Martin Ferianc, Hongxiang Fan, Miguel Rodrigues
Ensembles of separate neural networks (NNs) have shown superior accuracy and confidence calibration over a single NN across tasks.
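The ensembling idea referenced above can be sketched in a few lines: average the predictive distributions of several independently trained members. This is a minimal numpy illustration with made-up logits, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits from three independently trained networks
# for a single input over 3 classes (illustrative values only).
logits = np.array([
    [2.0, 0.5, 0.1],
    [1.5, 1.0, 0.2],
    [2.2, 0.3, 0.4],
])
member_probs = softmax(logits)              # each row: one member's prediction
ensemble_probs = member_probs.mean(axis=0)  # average the predictive distributions
print(int(ensemble_probs.argmax()))         # class chosen by the ensemble
```

Averaging probabilities (rather than logits) is what typically improves calibration: disagreement among members flattens the combined distribution.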
1 code implementation • 9 Feb 2024 • Martin Ferianc, Miguel Rodrigues
YAMLE: Yet Another Machine Learning Environment is an open-source framework that facilitates rapid prototyping and experimentation with machine learning (ML) models and methods.
1 code implementation • 25 Aug 2023 • Reem I. Masoud, Ziquan Liu, Martin Ferianc, Philip Treleaven, Miguel Rodrigues
Our results quantify the cultural alignment of LLMs and reveal differences between LLMs along explanatory cultural dimensions.
1 code implementation • 30 Jun 2023 • Martin Ferianc, Ondrej Bohdal, Timothy Hospedales, Miguel Rodrigues
Enhancing the generalisation abilities of neural networks (NNs) through integrating noise such as MixUp or Dropout during training has emerged as a powerful and adaptable technique.
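The abstract above names MixUp as one such training-time noise technique. A minimal numpy sketch of the MixUp blend follows; the toy inputs, labels, and `alpha` value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend a pair of examples and their one-hot labels with a Beta-sampled weight."""
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

# Two toy "images" with one-hot labels for a 2-class problem.
x1, y1 = np.ones((4, 4)), np.array([1.0, 0.0])
x2, y2 = np.zeros((4, 4)), np.array([0.0, 1.0])
x, y = mixup(x1, y1, x2, y2)
print(round(float(y.sum()), 6))  # mixed label weights always sum to 1
```

Training on such convex combinations of inputs and labels acts as a regulariser, which is the generalisation effect the entry refers to.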
1 code implementation • 24 Apr 2023 • Martin Wistuba, Martin Ferianc, Lukas Balles, Cedric Archambeau, Giovanni Zappella
We discuss requirements for the use of continual learning algorithms in practice, from which we derive design principles for Renate.
no code implementations • 19 May 2022 • Martin Ferianc, Miguel Rodrigues
We demonstrate the generality of the approach across toy data, SVHN/CIFAR-10, NN architectures ranging from simple to complex, and different tasks.
no code implementations • 19 Dec 2021 • Martin Ferianc, Anush Sankaran, Olivier Mastropietro, Ehsan Saboori, Quentin Cappart
Neural networks (NNs) are making a large impact on both research and industry.
no code implementations • 24 Nov 2021 • Hongxiang Fan, Martin Ferianc, Zhiqiang Que, He Li, Shuanglong Liu, Xinyu Niu, Wayne Luk
Recent advances in algorithm-hardware co-design for deep neural networks (DNNs) have demonstrated their potential in automatically designing neural architectures and hardware designs.
no code implementations • 4 Jun 2021 • Martin Ferianc, Zhiqiang Que, Hongxiang Fan, Wayne Luk, Miguel Rodrigues
To further improve the overall algorithmic-hardware performance, a co-design framework is proposed to explore the most fitting algorithmic-hardware configurations for Bayesian RNNs.
no code implementations • 12 May 2021 • Hongxiang Fan, Martin Ferianc, Miguel Rodrigues, HongYu Zhou, Xinyu Niu, Wayne Luk
Neural networks (NNs) have demonstrated their potential in a wide range of applications such as image recognition, decision making or recommendation systems.
1 code implementation • 14 Apr 2021 • Martin Ferianc, Divyansh Manocha, Hongxiang Fan, Miguel Rodrigues
Fully convolutional U-shaped neural networks have largely been the dominant approach for pixel-wise image segmentation.
1 code implementation • 22 Feb 2021 • Martin Ferianc, Partha Maji, Matthew Mattina, Miguel Rodrigues
Bayesian neural networks (BNNs) are making significant progress in many research areas where decision-making needs to be accompanied by uncertainty estimation.
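One inexpensive way to obtain uncertainty estimates of the kind mentioned above is Monte Carlo dropout, where dropout stays active at test time and disagreement across stochastic forward passes serves as the uncertainty signal. This is a generic toy sketch with random weights, not the method of the paper listed here.

```python
import numpy as np

rng = np.random.default_rng(42)

def forward(x, w, drop_p=0.5):
    """One stochastic forward pass: inverted dropout applied at test time."""
    mask = rng.random(w.shape[1]) >= drop_p          # random unit mask
    h = np.maximum(x @ w * mask / (1.0 - drop_p), 0.0)  # masked linear layer + ReLU
    return h.mean()

w = rng.normal(size=(3, 8))          # hypothetical weights of a tiny layer
x = np.array([0.2, -0.1, 0.5])       # a single toy input
samples = np.array([forward(x, w) for _ in range(100)])
mean, std = samples.mean(), samples.std()
print(std > 0)  # spread across passes approximates predictive uncertainty
```

The sample mean plays the role of the prediction and the sample spread the role of its uncertainty, which is the decision-making use case the entry describes.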
no code implementations • 12 Jul 2020 • Martin Ferianc, Hongxiang Fan, Miguel Rodrigues
In recent years, neural architecture search (NAS) has received intensive scientific and industrial interest due to its capability of finding a neural architecture with high accuracy for various artificial intelligence tasks such as image classification or object detection.