no code implementations • 13 Mar 2024 • Mohammad Rahman, Manzur Murshed, Shyh Wei Teng, Manoranjan Paul
Conventional feature selection algorithms applied to Pseudo Time-Series (PTS) data, i.e., observations arranged in sequential order without adhering to a conventional temporal dimension, often exhibit impractical computational complexity on high-dimensional data.
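As a rough illustration of why such methods scale poorly (this is a generic redundancy-aware filter, not the paper's algorithm), consider correlation-based selection: it must compare every pair of features, so its cost grows as O(d²) in the dimensionality d.

```python
import numpy as np

def correlation_filter(X, threshold=0.95):
    """Keep a feature only if it is not highly correlated with an
    already-kept feature; requires the full d x d correlation matrix."""
    d = X.shape[1]
    corr = np.abs(np.corrcoef(X, rowvar=False))  # O(d^2) pairwise work
    keep = []
    for j in range(d):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=200)  # near-duplicate feature
selected = correlation_filter(X)
```

For 50 features this is instant, but the quadratic pairwise matrix is exactly what becomes impractical when d reaches the tens of thousands typical of high-dimensional PTS data.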
no code implementations • 28 Aug 2022 • Priyabrata Karmakar, Manzur Murshed, Manoranjan Paul, David Taubman
Specifically, we have constructed a motion-compensated current frame using the cuboidal partitioning information of the anchor frame in a group of pictures (GOP).
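The idea of predicting the current frame from an anchor frame can be sketched with plain translational block matching; note this simplification uses uniform square blocks and exhaustive search, whereas the paper's method partitions the anchor frame into cuboids:

```python
import numpy as np

def motion_compensate(anchor, current, block=8, search=4):
    """Predict `current` from `anchor` by per-block translational search
    using the sum-of-absolute-differences (SAD) criterion."""
    h, w = anchor.shape
    pred = np.zeros_like(current)
    for y in range(0, h, block):
        for x in range(0, w, block):
            target = current[y:y + block, x:x + block]
            best, best_err = None, np.inf
            # Exhaustive search over a small displacement window.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    sy, sx = y + dy, x + dx
                    if sy < 0 or sx < 0 or sy + block > h or sx + block > w:
                        continue
                    cand = anchor[sy:sy + block, sx:sx + block]
                    err = np.abs(cand - target).sum()
                    if err < best_err:
                        best, best_err = cand, err
            pred[y:y + block, x:x + block] = best
    return pred

rng = np.random.default_rng(1)
anchor = rng.random((32, 32))
current = np.roll(anchor, 2, axis=1)  # pure 2-pixel horizontal motion
pred = motion_compensate(anchor, current)
```

Away from the wrap-around columns introduced by `np.roll`, every block finds an exact match two pixels to the left, so the prediction residual there is zero.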
no code implementations • 2 Feb 2021 • Ashek Ahmmed, Manoranjan Paul, Manzur Murshed, David Taubman
This is because video coding targets human perception, while feature coding aims for machine vision tasks.
no code implementations • 31 Dec 2020 • Tasfia Shermin, Shyh Wei Teng, Ferdous Sohel, Manzur Murshed, Guojun Lu
In this paper, we propose to explore global and direct attribute-supervised local visual features for both EL and FS categories in an integrated manner for fine-grained GZSL.
no code implementations • 30 Dec 2020 • Tasfia Shermin, Shyh Wei Teng, Ferdous Sohel, Manzur Murshed, Guojun Lu
Bidirectional mapping-based generalized zero-shot learning (GZSL) methods rely on the quality of synthesized features to recognize seen and unseen data.
no code implementations • 1 Jul 2020 • Tasfia Shermin, Guojun Lu, Shyh Wei Teng, Manzur Murshed, Ferdous Sohel
The proposed multi-classifier structure introduces a weighting module that evaluates distinctive domain characteristics to assign each target sample a weight reflecting how likely it is to belong to the known or unknown classes. These weights encourage positive transfer during adversarial training while simultaneously reducing the domain gap between the shared classes of the source and target domains.
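One common way to realize such per-sample weighting (the names and the exact rule below are illustrative assumptions, not the paper's formulation) is to score each target sample by the confidence of the source classifier: peaked, low-entropy predictions suggest a shared (known) class and get weight near 1, while near-uniform predictions suggest an unknown class and get weight near 0.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def known_class_weights(logits):
    """Weight in [0, 1]: one minus the normalized prediction entropy.
    High weight = confident prediction = likely a shared/known class."""
    p = softmax(logits)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return 1.0 - entropy / np.log(p.shape[1])  # normalize by max entropy

confident = np.array([[8.0, 0.0, 0.0]])  # peaked: likely a shared class
ambiguous = np.array([[1.0, 1.0, 1.0]])  # uniform: likely unknown
w = known_class_weights(np.vstack([confident, ambiguous]))
```

During adversarial training these weights would scale each target sample's contribution to the domain-alignment loss, so unknown-class samples do not drag the shared classes out of alignment.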
no code implementations • 25 Mar 2019 • Tasfia Shermin, Shyh Wei Teng, Manzur Murshed, Guojun Lu, Ferdous Sohel, Manoranjan Paul
Thus, we hypothesize that the presence of this layer is crucial for growing network depth to adapt better to a new task.
no code implementations • 19 Nov 2018 • Tasfia Shermin, Manzur Murshed, Guojun Lu, Shyh Wei Teng
Although CNNs have gained the ability to transfer learned knowledge from a source task to a target task by virtue of large annotated datasets, fine-tuning them without a GPU consumes substantial processing time.