1 code implementation • 4 Aug 2021 • Jonathan Carlton, Andy Brown, Caroline Jay, John Keane
The results demonstrate that interaction data can be used to infer users' engagement both during and after an experience, and that the proposed techniques can help to better understand audience preferences and responses.
no code implementations • 16 Feb 2021 • Elizabeth Fons, Paula Dawson, Xiao-jun Zeng, John Keane, Alexandros Iosifidis
Data augmentation has been shown to be a fundamental technique for improving generalization in tasks such as image, text and audio classification.
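As a point of reference, below is a minimal sketch of the kind of standard augmentation pipeline the abstract alludes to for image classification; the specific transforms and parameter values are illustrative assumptions, not the paper's configuration.

```python
# Illustrative image-augmentation pipeline (assumed, not the paper's setup).
import torchvision.transforms as T

train_transform = T.Compose([
    T.RandomHorizontalFlip(p=0.5),   # mirror the image half the time
    T.RandomCrop(32, padding=4),     # random translation via a padded crop
    T.ColorJitter(brightness=0.2),   # mild photometric perturbation
    T.ToTensor(),
])
```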
2 code implementations • 13 Feb 2021 • Cameron Shand, Richard Allmendinger, Julia Handl, Andrew Webb, John Keane
Here, we argue that synthetic datasets must continue to play an important role in the evaluation of clustering algorithms, but that this necessitates constructing benchmarks that appropriately cover the diverse set of properties that impact clustering algorithm performance.
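A minimal sketch of the underlying idea, assuming Gaussian clusters and overlap (via the per-cluster standard deviation) as the varied property; this is not the benchmark generator proposed in the paper, only an illustration of how a controlled property changes measured algorithm performance.

```python
# Vary one dataset property (cluster overlap) and observe its effect on
# a clustering algorithm's performance. Parameters are illustrative.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

for cluster_std in (0.5, 1.5, 3.0):  # increasing cluster overlap
    X, y_true = make_blobs(n_samples=500, centers=4,
                           cluster_std=cluster_std, random_state=0)
    y_pred = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print(f"std={cluster_std}: ARI={adjusted_rand_score(y_true, y_pred):.2f}")
```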
no code implementations • 28 Oct 2020 • Elizabeth Fons, Paula Dawson, Xiao-jun Zeng, John Keane, Alexandros Iosifidis
In this paper, we show that transfer learning can help with this task, by pre-training a model to extract universal features on the full universe of stocks of the S&P 500 index and then transferring it to another model to directly learn a trading rule.
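A minimal PyTorch sketch of this transfer setup: an encoder pre-trained across many stocks is frozen, and a small head is trained to output a trading decision. The layer sizes, the 60-step input window, and the three-class (buy/hold/sell) head are illustrative assumptions, not the paper's architecture.

```python
# Transfer-learning skeleton: frozen pre-trained encoder + trainable head.
import torch
import torch.nn as nn

encoder = nn.Sequential(                 # stands in for the feature
    nn.Linear(60, 128), nn.ReLU(),       # extractor pre-trained on the
    nn.Linear(128, 64), nn.ReLU(),       # full universe of stocks
)
for p in encoder.parameters():           # freeze the transferred features
    p.requires_grad = False

head = nn.Linear(64, 3)                  # trading rule: buy / hold / sell
model = nn.Sequential(encoder, head)

x = torch.randn(8, 60)                   # batch of 60-step return windows
logits = model(x)                        # only `head` receives gradients
```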
1 code implementation • 28 Oct 2020 • Elizabeth Fons, Paula Dawson, Xiao-jun Zeng, John Keane, Alexandros Iosifidis
Data augmentation methods in combination with deep neural networks have been used extensively in computer vision on classification tasks, achieving great success; however, their use in time series classification is still at an early stage.
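Below is a minimal sketch of common time-series augmentations of the kind studied in this line of work (jittering, scaling, window slicing); the noise scales and window ratio are illustrative assumptions.

```python
# Simple time-series augmentations; parameters are illustrative.
import numpy as np

def jitter(x, sigma=0.03):
    """Add element-wise Gaussian noise."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    """Multiply the whole series by one random factor."""
    return x * np.random.normal(1.0, sigma)

def window_slice(x, ratio=0.9):
    """Crop a random window and stretch it back to the original length."""
    n = len(x)
    w = int(n * ratio)
    start = np.random.randint(0, n - w + 1)
    return np.interp(np.linspace(0, w - 1, n), np.arange(w), x[start:start + w])

series = np.sin(np.linspace(0, 10, 100))
augmented = window_slice(scale(jitter(series)))
```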
no code implementations • 28 Feb 2019 • Elizabeth Fons, Paula Dawson, Jeffrey Yau, Xiao-jun Zeng, John Keane
The financial crisis of 2008 generated interest in more transparent, rules-based strategies for portfolio construction, with smart beta strategies emerging as a trend among institutional investors.
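For intuition, here is a minimal sketch of one transparent, rules-based weighting scheme in the smart beta family (inverse-volatility weighting); the lookback window and the simulated returns are illustrative assumptions, and the paper does not prescribe this particular rule.

```python
# Inverse-volatility weighting: an example of a rules-based "smart beta"
# allocation. Data and lookback are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(252, 5))  # one year, 5 assets

vol = returns.std(axis=0)                   # per-asset volatility
weights = (1.0 / vol) / (1.0 / vol).sum()   # risk-balanced, sums to one
print(weights)
```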
1 code implementation • NeurIPS 2018 • Sebastian Flennerhag, Hujun Yin, John Keane, Mark Elliot
Standard neural network architectures are non-linear only by virtue of a simple element-wise activation function, making them both brittle and excessively large.
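To make the premise concrete, a minimal sketch of what "non-linear only by virtue of a simple element-wise activation" means in a standard network; the layer sizes are arbitrary, and this illustrates the limitation being described rather than the paper's proposed remedy.

```python
# In a standard MLP, the only non-linearity is an element-wise activation
# applied between affine maps.
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),          # the sole source of non-linearity, applied
    nn.Linear(32, 1),   # independently to each element
)
y = mlp(torch.randn(4, 16))
```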