no code implementations • 23 Mar 2022 • Tiffany Tuor, Joshua Lockhart, Daniele Magazzeni
Our proposed approach enhances conventional federated learning techniques to make them suitable for asynchronous training in this intra-organisation, cross-silo setting.
no code implementations • 1 Jan 2021 • Tiffany Tuor, Shiqiang Wang, Kin Leung
Due to the catastrophic forgetting phenomenon in deep neural networks (DNNs), models trained in standard ways tend to forget what they have learned from previous tasks, especially when a new task differs substantially from the previous ones.
no code implementations • 22 Jan 2020 • Tiffany Tuor, Shiqiang Wang, Bong Jun Ko, Changchang Liu, Kin K. Leung
A challenge is that, among the large variety of data collected at each client, it is likely that only a subset is relevant for a given learning task, while the rest of the data has a negative impact on model training.
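One way to picture this idea is to score each client's local samples with a small benchmark model and keep only the low-loss (likely relevant) examples before training. The sketch below is an illustrative assumption, not the paper's actual method: the linear benchmark model, loss function, and `keep_frac` threshold are all hypothetical.

```python
import numpy as np

def sample_losses(w, X, y):
    """Per-sample squared-error loss under a linear benchmark model (illustrative)."""
    return 0.5 * (X @ w - y) ** 2

def select_relevant(w_benchmark, X, y, keep_frac=0.8):
    """Keep the keep_frac fraction of samples with the lowest benchmark loss."""
    losses = sample_losses(w_benchmark, X, y)
    k = int(keep_frac * len(y))
    idx = np.argsort(losses)[:k]
    return X[idx], y[idx]

# Toy data: 80 clean samples plus 20 with irrelevant (corrupted) labels.
rng = np.random.default_rng(1)
w_bench = np.array([1.0, 1.0])
X = rng.normal(size=(100, 2))
y = X @ w_bench + 0.01 * rng.normal(size=100)
y[:20] = 10.0 * rng.normal(size=20)  # irrelevant labels

X_sel, y_sel = select_relevant(w_bench, X, y, keep_frac=0.8)
```

The filtered set `(X_sel, y_sel)` would then feed into the usual federated training loop at that client; the corrupted samples incur large loss under the benchmark model and are mostly excluded.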
no code implementations • 22 May 2019 • Tiffany Tuor, Shiqiang Wang, Kin K. Leung, Bong Jun Ko
Monitoring the conditions of these nodes is important for system management purposes, which, however, can be extremely resource-demanding, as it requires collecting local measurements from each individual node and constantly sending those measurements to a central controller.
1 code implementation • 14 Apr 2018 • Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, Kevin Chan
Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches.
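A minimal sketch of this class of training, in the FedAvg style: each client runs a few local gradient-descent steps on its own data, and a server averages the resulting models, weighted by local dataset size. The quadratic toy objective, learning rate, and round counts below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=5):
    """A few local gradient steps on a least-squares objective (toy example)."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w = w - lr * grad
    return w

def federated_round(w_global, client_data):
    """One round: broadcast the model, train locally, average by data size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_sgd(w_global.copy(), X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy setup: four clients with noisy linear-regression data.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, clients)
```

The ratio of local steps to communication rounds is the key resource trade-off in this setting: more local computation per round reduces communication at the cost of client drift.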