no code implementations • 15 Apr 2024 • YuXuan Jiang, Chen Feng, Fan Zhang, David Bull
Knowledge distillation (KD) has emerged as a promising technique in deep learning, typically employed to enhance a compact student network by learning from its higher-performance but more complex teacher variant.
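The student/teacher objective described above is commonly implemented as a weighted sum of a hard-label loss and a temperature-softened teacher-matching term. A minimal NumPy sketch follows; the temperature `T`, weight `alpha`, and function names are illustrative choices in the style of Hinton-style distillation, not details from this paper:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.5):
    """Weighted sum of the hard-label cross-entropy and the
    KL divergence from softened teacher to softened student outputs."""
    p_s = softmax(student_logits)           # student probabilities (T=1) for the hard loss
    hard = -np.log(p_s[true_label] + 1e-12)
    p_t = softmax(teacher_logits, T)        # softened teacher targets
    q_s = softmax(student_logits, T)        # softened student predictions
    soft = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(q_s + 1e-12)))
    # T**2 rescales the soft-target term so its gradients match the hard loss.
    return alpha * hard + (1 - alpha) * (T ** 2) * soft
```

When student and teacher logits agree, the soft term vanishes and only the hard-label loss remains.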
no code implementations • 4 Mar 2024 • Ruirui Lin, Nantheera Anantrasirichai, Alexandra Malyugina, David Bull
Distortions caused by low-light conditions are not only visually unpleasant but also degrade the performance of computer vision tasks.
no code implementations • 28 Feb 2024 • Joanne Lin, Nantheera Anantrasirichai, David Bull
Instance segmentation for low-light imagery remains largely unexplored due to the challenges imposed by such conditions, for example shot noise due to low photon count, color distortions and reduced contrast.
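The shot noise mentioned here is signal-dependent: photon arrivals follow a Poisson distribution, so the noise variance equals the mean and dominates the signal at low counts. A small illustrative simulation (the photon-rate parameter is a made-up stand-in, not a value from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_low_light(img, photons_per_unit=5.0):
    """Poisson shot noise: sample photon counts whose mean scales with
    intensity, then rescale back. At low photon counts the relative
    noise is large, which is why low-light frames look grainy."""
    counts = rng.poisson(img * photons_per_unit)
    return counts / photons_per_unit
```

Raising `photons_per_unit` (a brighter scene or longer exposure) shrinks the relative noise by roughly the square root of the count.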
no code implementations • 10 Feb 2024 • Angeliki Katsenou, Xinyi Wang, Daniel Schien, David Bull
Adaptive video streaming is a key enabler for optimising the delivery of offline-encoded video content.
no code implementations • 3 Feb 2024 • Nantheera Anantrasirichai, Ruirui Lin, Alexandra Malyugina, David Bull
Low-light videos often exhibit spatiotemporal incoherent noise, leading to poor visibility and compromised performance across various computer vision applications.
1 code implementation • 2 Feb 2024 • Ho Man Kwan, Fan Zhang, Andrew Gower, David Bull
In this paper, we extend their application to immersive (multi-view) videos for the first time, proposing MV-HiNeRV, a new INR-based immersive video codec.
no code implementations • 31 Dec 2023 • YuXuan Jiang, Jakub Nawala, Fan Zhang, David Bull
Deep learning techniques have been applied in the context of image super-resolution (SR), achieving remarkable advances in terms of reconstruction performance.
no code implementations • 19 Dec 2023 • Angeliki Katsenou, Xinyi Wang, Daniel Schien, David Bull
The environmental impact of video streaming services has been discussed as part of the strategies towards sustainable information and communication technologies.
no code implementations • 19 Dec 2023 • Zihao Qi, Chen Feng, Duolikun Danier, Fan Zhang, Xiaozhong Xu, Shan Liu, David Bull
In this work, we observe that existing full-/no-reference quality metrics fail to accurately predict the perceptual quality difference between transcoded UGC content and the corresponding unpristine references.
no code implementations • 14 Dec 2023 • Chen Feng, Duolikun Danier, Haoran Wang, Fan Zhang, Benoit Vallade, Alex Mackin, David Bull
Deep learning-based video quality assessment (deep VQA) has demonstrated significant potential in surpassing conventional metrics, with promising improvements in terms of correlation with human perception.
no code implementations • 14 Dec 2023 • Chen Feng, Duolikun Danier, Fan Zhang, Alex Mackin, Andy Collins, David Bull
Professionally generated content (PGC) streamed online can contain visual artefacts that degrade the quality of user experience.
no code implementations • 5 Dec 2023 • Tianhao Peng, Ge Gao, Heming Sun, Fan Zhang, David Bull
In recent years, end-to-end learnt video codecs have demonstrated their potential to compete with conventional coding algorithms in terms of compression efficiency.
no code implementations • 16 Sep 2023 • Alexandra Malyugina, Nantheera Anantrasirichai, David Bull
Despite extensive research conducted in the field of image denoising, many algorithms still heavily depend on supervised learning and their effectiveness primarily relies on the quality and diversity of training data.
no code implementations • 13 Aug 2023 • Xinyi Wang, Angeliki Katsenou, David Bull
Preliminary results indicate that high correlations are achieved using deep features alone, while adding saliency does not consistently improve performance.
1 code implementation • NeurIPS 2023 • Ho Man Kwan, Ge Gao, Fan Zhang, Andrew Gower, David Bull
Learning-based video compression is currently a popular research topic, offering the potential to compete with conventional standard video codecs.
Ranked #1 on Video Reconstruction on UVG
2 code implementations • 16 Mar 2023 • Duolikun Danier, Fan Zhang, David Bull
Existing works on video frame interpolation (VFI) mostly employ deep neural networks that are trained by minimizing the L1, L2, or deep feature space distance (e.g. VGG loss) between their outputs and ground-truth frames.
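The three training objectives named here can be sketched as follows; `extract_features` stands in for a pretrained network's activations (e.g. VGG) and is purely a placeholder:

```python
import numpy as np

def l1_loss(pred, gt):
    # Mean absolute error between interpolated and ground-truth frames.
    return np.abs(pred - gt).mean()

def l2_loss(pred, gt):
    # Mean squared error; penalises large deviations more heavily.
    return ((pred - gt) ** 2).mean()

def feature_space_loss(pred, gt, extract_features):
    # Distance measured between deep feature maps rather than pixels;
    # extract_features is a hypothetical stand-in for a pretrained network.
    return l2_loss(extract_features(pred), extract_features(gt))
```

Pixel-wise L1/L2 tend to produce blurry interpolations, which is the motivation for feature-space and perceptually oriented alternatives.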
2 code implementations • 3 Oct 2022 • Duolikun Danier, Fan Zhang, David Bull
In order to narrow this research gap, we have developed a new video quality database named BVI-VFI, which contains 540 distorted sequences generated by applying five commonly used VFI algorithms to 36 diverse source videos with various spatial resolutions and frame rates.
no code implementations • 9 Aug 2022 • Alexandra Malyugina, Nantheera Anantrasirichai, David Bull
The loss function is a combination of $\ell_1$ or $\ell_2$ losses with the new persistence-based topological loss.
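As a rough illustration of a persistence-based term of the kind described, the sketch below computes 0-dimensional sublevel-set persistence of a 1D signal with a union-find sweep and compares total persistence between two signals. This is a simplified stand-in for intuition only, not the paper's loss:

```python
import numpy as np

def persistence_pairs(signal):
    """0-dimensional sublevel-set persistence of a 1D signal: (birth, death)
    pairs of connected components of {x : f(x) <= t} as t increases."""
    n = len(signal)
    order = np.argsort(signal, kind="stable")
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for idx in order:                      # sweep values from low to high
        parent[idx] = idx
        birth[idx] = signal[idx]
        for nb in (idx - 1, idx + 1):
            if 0 <= nb < n and nb in parent:
                ra, rb = find(idx), find(nb)
                if ra != rb:
                    # Elder rule: the younger component (higher birth) dies.
                    if birth[ra] > birth[rb]:
                        ra, rb = rb, ra
                    pairs.append((birth[rb], signal[idx]))
                    parent[rb] = ra
    # The global minimum's component never dies; the essential pair is omitted.
    return pairs

def topo_loss(pred, gt):
    # Compare total persistence (sum of bar lengths) of the two signals.
    tp = lambda s: sum(d - b for b, d in persistence_pairs(np.asarray(s, float)))
    return abs(tp(pred) - tp(gt))
```

A combined objective would then add this term, with some weight, to the $\ell_1$ or $\ell_2$ pixel loss.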
1 code implementation • 18 Jul 2022 • Chen Feng, Zihao Qi, Duolikun Danier, Fan Zhang, Xiaozhong Xu, Shan Liu, David Bull
In this work, we modify the MFRNet network architecture to enable multiple frame processing, and the new network, multi-frame MFRNet, has been integrated into the EBDA framework using two Versatile Video Coding (VVC) host codecs: VTM 16.2 and the Fraunhofer Versatile Video Encoder (VVenC 1.4.0).
no code implementations • 17 Jul 2022 • Duolikun Danier, Fan Zhang, David Bull
Video frame interpolation (VFI) serves as a useful tool for many video processing applications.
no code implementations • 19 May 2022 • Duolikun Danier, Chen Feng, Fan Zhang, David Bull
This paper describes a CNN-based multi-frame post-processing approach built on a perceptually inspired Generative Adversarial Network architecture, CVEGAN.
no code implementations • 4 Mar 2022 • Odysseas Pappas, Juliet Biggs, David Bull, Alin Achim, Nantheera Anantrasirichai
Monitoring of ground movement close to the rail corridor, such as that associated with landslips caused by ground subsidence and/or uplift, is of great interest for the detection and prevention of possible railway faults.
no code implementations • 25 Feb 2022 • Angeliki Katsenou, Fan Zhang, David Bull
In recent years, resolution adaptation based on deep neural networks has enabled significant performance gains for conventional (2D) video codecs.
no code implementations • 17 Feb 2022 • Chen Feng, Duolikun Danier, Fan Zhang, David Bull
In recent years, deep learning techniques have shown significant potential for improving video quality assessment (VQA), achieving higher correlation with subjective opinions compared to conventional approaches.
no code implementations • 15 Feb 2022 • Duolikun Danier, Fan Zhang, David Bull
Video frame interpolation (VFI) is one of the fundamental research areas in video processing and there has been extensive research on novel and enhanced interpolation algorithms.
no code implementations • 15 Feb 2022 • Duolikun Danier, Fan Zhang, David Bull
This paper presents a new deformable convolution-based video frame interpolation (VFI) method, using a coarse-to-fine 3D CNN to enhance the multi-flow prediction.
3 code implementations • CVPR 2022 • Duolikun Danier, Fan Zhang, David Bull
Video frame interpolation (VFI) is currently a very active research topic, with applications spanning computer vision, post production and video encoding.
Ranked #1 on Video Frame Interpolation on SNU-FILM (easy)
no code implementations • 30 Nov 2021 • Chen Feng, Duolikun Danier, Charlie Tan, Fan Zhang, David Bull
This paper presents a deep learning-based video compression framework (ViSTRA3).
no code implementations • 1 Jun 2021 • Annika Wong, Nantheera Anantrasirichai, Thanarat H. Chalidabhongse, Duangdao Palasuwan, Attakorn Palasuwan, David Bull
This paper presents an automated process that exploits the advantages of machine learning to increase the capacity and standardisation of cell abnormality detection, and analyses its performance.
no code implementations • 18 Mar 2021 • Alex Mackin, Di Ma, Fan Zhang, David Bull
Bit depth adaptation, where the bit depth of a video sequence is reduced before transmission and up-sampled during display, can potentially reduce data rates with limited impact on perceptual quality.
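The round trip described (reduce bit depth before transmission, up-sample at display) can be sketched with integer shifts; this is a generic down/up conversion for intuition, not the paper's adaptation scheme:

```python
import numpy as np

def reduce_bit_depth(frame, bits_in=10, bits_out=8):
    # Down-shift with rounding: e.g. 10-bit -> 8-bit before transmission.
    shift = bits_in - bits_out
    rounded = (frame.astype(np.int32) + (1 << (shift - 1))) >> shift
    return np.clip(rounded, 0, (1 << bits_out) - 1).astype(np.uint16)

def upsample_bit_depth(frame, bits_in=8, bits_out=10):
    # Bit replication on display: e.g. 8-bit -> 10-bit, mapping
    # full range to full range (255 -> 1023).
    shift = bits_out - bits_in
    f = frame.astype(np.uint16)
    return (f << shift) | (f >> (bits_in - shift))
```

Bit replication maps the extremes exactly (0 to 0, 255 to 1023), which plain zero-padding would not.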
no code implementations • 10 Mar 2021 • Fan Zhang, Angeliki Katsenou, Christos Bampis, Lukas Krasula, Zhi Li, David Bull
VMAF is a machine learning-based video quality assessment method, originally designed for streaming applications, which combines multiple quality metrics and video features through SVM regression.
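The fusion step can be illustrated with a least-squares regressor standing in for VMAF's actual SVM regression; the feature values and MOS scores below are toy numbers, purely illustrative:

```python
import numpy as np

# Each row: a per-clip feature vector (in VMAF's case, metrics such as
# VIF and DLM plus a motion feature); target: subjective MOS. Toy data.
features = np.array([[0.9, 0.8, 0.1],
                     [0.7, 0.6, 0.3],
                     [0.4, 0.3, 0.6],
                     [0.2, 0.1, 0.8]])
mos = np.array([90.0, 70.0, 45.0, 20.0])

# Fit linear fusion weights by least squares (a simplified stand-in for
# the trained SVM regressor that VMAF uses).
X = np.hstack([features, np.ones((len(features), 1))])   # add intercept
w, *_ = np.linalg.lstsq(X, mos, rcond=None)

def predict_quality(feat):
    # Fused quality score for a new clip's feature vector.
    return float(np.append(feat, 1.0) @ w)
```

The point of the fusion is that no single metric tracks subjective opinion everywhere; the regressor learns how to weight them from subjective data.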
no code implementations • 26 Feb 2021 • Duolikun Danier, David Bull
Our study shows that video texture has a significant impact on the performance of frame interpolation models and that it is beneficial to have separate models specifically adapted to these texture classes, instead of training a single model that tries to learn generic motion.
no code implementations • 5 Jan 2021 • N. Anantrasirichai, David Bull
Experimental results show that our method outperforms existing approaches in terms of subjective quality and that it is robust to variations in brightness levels and noise.
no code implementations • 3 Oct 2020 • Fan Zhang, David Hall, Tao Xu, Stephen Boyle, David Bull
Methods for environmental image capture, 3D reconstruction (photogrammetry) and the creation of foreground assets are presented along with a flexible and user-friendly simulation interface.
no code implementations • 24 Jul 2020 • Nantheera Anantrasirichai, David Bull
We therefore conclude that, in the context of creative industries, maximum benefit from AI will be derived where its focus is human centric -- where it is designed to augment, rather than replace, human creativity.
no code implementations • 7 May 2020 • Nantheera Anantrasirichai, Juliet Biggs, Krisztina Kelevitz, Zahra Sadeghi, Tim Wright, James Thompson, Alin Achim, David Bull
The large volumes of Sentinel-1 data produced over Europe are being used to develop pan-national ground motion services.
no code implementations • 22 Dec 2019 • Jing Gao, N. Anantrasirichai, David Bull
This paper describes a novel deep learning-based method for mitigating the effects of atmospheric distortion.
1 code implementation • 17 May 2019 • Nantheera Anantrasirichai, Juliet Biggs, Fabien Albino, David Bull
As only a small proportion of volcanoes are deforming and atmospheric noise is ubiquitous, the use of machine learning for detecting volcanic unrest is more challenging.
1 code implementation • 1 Apr 2019 • N. Anantrasirichai, David Bull
As a data-driven method, the performance of deep convolutional neural networks (CNN) relies heavily on training data.
no code implementations • 10 Aug 2018 • N. Anantrasirichai, Alin Achim, David Bull
This paper describes a new method for mitigating the effects of atmospheric distortion on observed sequences that include large moving objects.