Search Results for author: Luis Herranz

Found 44 papers, 21 papers with code

Trust your Good Friends: Source-free Domain Adaptation by Reciprocal Neighborhood Clustering

no code implementations 1 Sep 2023 Shiqi Yang, Yaxing Wang, Joost Van de Weijer, Luis Herranz, Shangling Jui, Jian Yang

We capture this intrinsic structure by defining local affinity of the target data, and encourage label consistency among data with high local affinity.
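A minimal sketch of that neighborhood-consistency idea (not the authors' released code; the feature/score bank names, the cosine affinity, and the value of k are assumptions for illustration):

```python
import torch

def neighborhood_consistency_loss(features, predictions, feature_bank, score_bank, k=5):
    """Pull each target sample's prediction toward those of its k nearest
    neighbours in feature space (high local affinity -> consistent labels).

    features:     (B, D) L2-normalized features of the current batch
    predictions:  (B, C) softmax outputs of the current batch
    feature_bank: (N, D) L2-normalized features stored for all target samples
    score_bank:   (N, C) softmax outputs stored for all target samples
    """
    affinity = features @ feature_bank.t()           # (B, N) cosine local affinity
    _, idx = torch.topk(affinity, k=k, dim=1)        # k nearest neighbours per sample
    neighbor_scores = score_bank[idx]                # (B, k, C)
    # Inner product between a prediction and its neighbours' predictions;
    # minimizing the negative value enforces label consistency among neighbours.
    consistency = (predictions.unsqueeze(1) * neighbor_scores).sum(-1)
    return -consistency.mean()
```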

Clustering · Source-Free Domain Adaptation

SlimSeg: Slimmable Semantic Segmentation with Boundary Supervision

no code implementations 13 Jul 2022 Danna Xue, Fei Yang, Pei Wang, Luis Herranz, Jinqiu Sun, Yu Zhu, Yanning Zhang

Accurate semantic segmentation models typically require significant computational resources, inhibiting their use in practical applications.

Knowledge Distillation · Segmentation +1

Slimmable Video Codec

no code implementations 13 May 2022 Zhaocheng Liu, Luis Herranz, Fei Yang, Saiping Zhang, Shuai Wan, Marta Mrak, Marc Górriz Blanch

Neural video compression has emerged as a novel paradigm combining trainable multilayer neural networks and machine learning, achieving competitive rate-distortion (RD) performance, but it remains impractical due to heavy neural architectures with large memory and computational demands.

Video Compression

Main Product Detection with Graph Networks for Fashion

no code implementations 25 Jan 2022 Vacit Oguz Yazici, LongLong Yu, Arnau Ramisa, Luis Herranz, Joost Van de Weijer

Computer vision has established a foothold in the online fashion retail industry.

DCNGAN: A Deformable Convolutional-Based GAN with QP Adaptation for Perceptual Quality Enhancement of Compressed Video

no code implementations 22 Jan 2022 Saiping Zhang, Luis Herranz, Marta Mrak, Marc Gorriz Blanch, Shuai Wan, Fuzheng Yang

Deformable convolutions can operate on multiple frames, thus leveraging more temporal information, which is beneficial for enhancing the perceptual quality of compressed videos.
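As a rough sketch (illustrative module and sizes, not the DCNGAN architecture itself), a deformable convolution over a stack of neighbouring decoded frames can look like this in PyTorch, assuming torchvision's DeformConv2d:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class MultiFrameDeformFusion(nn.Module):
    """Fuse a stack of neighbouring decoded frames with a deformable convolution,
    so the sampling locations can follow motion across frames."""
    def __init__(self, num_frames=3, channels=3, features=64):
        super().__init__()
        in_ch = num_frames * channels
        # A plain conv predicts 2D offsets for every position of the 3x3 kernel.
        self.offset_pred = nn.Conv2d(in_ch, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(in_ch, features, kernel_size=3, padding=1)

    def forward(self, frames):            # frames: (B, num_frames, C, H, W)
        x = frames.flatten(1, 2)           # stack frames along channels: (B, F*C, H, W)
        offsets = self.offset_pred(x)      # (B, 18, H, W)
        return self.deform(x, offsets)     # motion-aware fused features
```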

Generative Adversarial Network · Quantization

A Novel Framework for Image-to-image Translation and Image Compression

no code implementations 25 Nov 2021 Fei Yang, Yaxing Wang, Luis Herranz, Yongmei Cheng, Mikhail Mozerov

Thus, we further propose a unified framework that allows both translation and autoencoding capabilities in a single codec.

Image Compression · Image Restoration +4

Incremental Meta-Learning via Episodic Replay Distillation for Few-Shot Image Recognition

1 code implementation 9 Nov 2021 Kai Wang, Xialei Liu, Andy Bagdanov, Luis Herranz, Shangling Jui, Joost Van de Weijer

We propose an approach to IML, which we call Episodic Replay Distillation (ERD), that mixes classes from the current task with class exemplars from previous tasks when sampling episodes for meta-learning.
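A small sketch of the episode-mixing idea described above (names and the half-and-half split are assumptions, and the distillation part of ERD is omitted; class sets are assumed disjoint across tasks):

```python
import random

def sample_mixed_episode(current_data, exemplar_data, n_way=5, k_shot=5, n_query=15):
    """Build a few-shot episode whose classes mix the current task with class
    exemplars kept from previous tasks (roughly half and half here).

    current_data / exemplar_data: dict mapping class id -> list of samples.
    Assumes every selected class holds at least k_shot + n_query samples.
    """
    n_old = min(n_way // 2, len(exemplar_data))
    old_classes = random.sample(sorted(exemplar_data), n_old)
    new_classes = random.sample(sorted(current_data), n_way - n_old)

    episode = {}
    for c in old_classes + new_classes:
        pool = exemplar_data[c] if c in old_classes else current_data[c]
        chosen = random.sample(pool, k_shot + n_query)
        episode[c] = {"support": chosen[:k_shot], "query": chosen[k_shot:]}
    return episode
```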

Continual Learning · Knowledge Distillation +1

HCV: Hierarchy-Consistency Verification for Incremental Implicitly-Refined Classification

1 code implementation 21 Oct 2021 Kai Wang, Xialei Liu, Luis Herranz, Joost Van de Weijer

To overcome forgetting in this benchmark, we propose Hierarchy-Consistency Verification (HCV) as an enhancement to existing continual learning methods.
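A hedged sketch of what such a consistency check can look like at inference time, assuming a coarse head, a fine head, and a known fine-to-coarse mapping (all names illustrative, not the authors' implementation):

```python
import torch

def hierarchy_consistent_prediction(coarse_logits, fine_logits, fine_to_coarse):
    """Keep the fine prediction only when it agrees with the coarse prediction;
    otherwise pick the best fine class inside the predicted coarse class.

    fine_to_coarse: LongTensor of shape (num_fine,) mapping fine ids to coarse ids.
    """
    coarse_pred = coarse_logits.argmax(dim=1)                          # (B,)
    fine_pred = fine_logits.argmax(dim=1)                              # (B,)

    consistent = fine_to_coarse[fine_pred] == coarse_pred              # (B,) bool
    # Mask out fine classes that fall outside the predicted coarse class...
    allowed = fine_to_coarse.unsqueeze(0) == coarse_pred.unsqueeze(1)  # (B, num_fine)
    repaired = fine_logits.masked_fill(~allowed, float("-inf")).argmax(dim=1)
    # ...and use the repaired label wherever the original pair was inconsistent.
    return torch.where(consistent, fine_pred, repaired)
```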

Classification · Continual Learning +1

Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation

2 code implementations NeurIPS 2021 Shiqi Yang, Yaxing Wang, Joost Van de Weijer, Luis Herranz, Shangling Jui

In this paper, we address the challenging source-free domain adaptation (SFDA) problem, where the source pretrained model is adapted to the target domain in the absence of source data.

Source-Free Domain Adaptation

DVC-P: Deep Video Compression with Perceptual Optimizations

1 code implementation 22 Sep 2021 Saiping Zhang, Marta Mrak, Luis Herranz, Marc Górriz, Shuai Wan, Fuzheng Yang

In this paper, we introduce deep video compression with perceptual optimizations (DVC-P), which aims at increasing perceptual quality of decoded videos.

Video Compression

Generalized Source-free Domain Adaptation

1 code implementation ICCV 2021 Shiqi Yang, Yaxing Wang, Joost Van de Weijer, Luis Herranz, Shangling Jui

In this paper, we propose a new domain adaptation paradigm called Generalized Source-free Domain Adaptation (G-SFDA), where the learned model needs to perform well on both the target and source domains, with only access to current unlabeled target data during adaptation.

Source-Free Domain Adaptation

ACAE-REMIND for Online Continual Learning with Compressed Feature Replay

no code implementations 18 May 2021 Kai Wang, Luis Herranz, Joost Van de Weijer

Methods are typically allowed to use a limited buffer to store some of the images in the stream.

Continual Learning

MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains

1 code implementation 28 Apr 2021 Yaxing Wang, Abel Gonzalez-Garcia, Chenshen Wu, Luis Herranz, Fahad Shahbaz Khan, Shangling Jui, Joost Van de Weijer

Therefore, we propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, either from a single or multiple pretrained GANs.
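A rough sketch of the mining idea: a small trainable "miner" network placed in front of a frozen pretrained generator reshapes the latent distribution toward the target domain (sizes, names, and the loss outline are illustrative, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class Miner(nn.Module):
    """Maps input noise to latents of a frozen pretrained GAN so that the
    generated samples drift toward a (small) target domain."""
    def __init__(self, z_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, z_dim),
        )

    def forward(self, z):
        return self.net(z)

# Sketch of the adversarial update: only the miner (and a target-domain
# discriminator) is trained at first, while the pretrained generator stays frozen:
#   z = torch.randn(batch_size, 128)
#   x_fake = frozen_generator(miner(z))
#   loss = adversarial_loss(discriminator(x_fake))
```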

Transfer Learning

DANICE: Domain adaptation without forgetting in neural image compression

no code implementations 19 Apr 2021 Sudeep Katakol, Luis Herranz, Fei Yang, Marta Mrak

Neural image compression (NIC) is a new coding paradigm where coding capabilities are captured by deep models learned from data.

Domain Adaptation · Image Compression

Continual learning in cross-modal retrieval

no code implementations 14 Apr 2021 Kai Wang, Luis Herranz, Joost Van de Weijer

We found that the indexing stage plays an important role and that simply avoiding reindexing the database with updated embedding networks can lead to significant gains.

Continual Learning · Cross-Modal Retrieval +2

Slimmable Compressive Autoencoders for Practical Neural Image Compression

1 code implementation CVPR 2021 Fei Yang, Luis Herranz, Yongmei Cheng, Mikhail G. Mozerov

Neural image compression leverages deep neural networks to outperform traditional image codecs in rate-distortion performance.

Image Compression

On Implicit Attribute Localization for Generalized Zero-Shot Learning

no code implementations 8 Mar 2021 Shiqi Yang, Kai Wang, Luis Herranz, Joost Van de Weijer

Zero-shot learning (ZSL) aims to discriminate images from unseen classes by exploiting relations to seen classes via their attribute-based descriptions.

Attribute · Generalized Zero-Shot Learning

Casting a BAIT for Offline and Online Source-free Domain Adaptation

2 code implementations 23 Oct 2020 Shiqi Yang, Yaxing Wang, Joost Van de Weijer, Luis Herranz, Shangling Jui

When adapting to the target domain, the additional classifier, initialized from the source classifier, is expected to find misclassified features.
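A coarse sketch of that two-classifier idea (illustrative only, not the exact BAIT objectives): clone the source classifier as a second "bait" head and measure how much the two heads disagree on target features.

```python
import copy
import torch.nn.functional as F

def make_bait_head(source_head):
    """Clone the source classifier: the frozen original anchors source knowledge,
    while the copy ('bait') is free to move and expose ambiguous target features."""
    bait_head = copy.deepcopy(source_head)
    for p in source_head.parameters():
        p.requires_grad_(False)
    return bait_head

def head_discrepancy(features, source_head, bait_head):
    """L1 gap between the two heads' predictions on the same target features:
    increase it when training the bait head (to find misclassified samples),
    decrease it when training the feature extractor (to resolve them)."""
    p_src = F.softmax(source_head(features), dim=1)
    p_bait = F.softmax(bait_head(features), dim=1)
    return (p_src - p_bait).abs().sum(dim=1).mean()
```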

Source-Free Domain Adaptation · Unsupervised Domain Adaptation

Bookworm continual learning: beyond zero-shot learning and continual learning

no code implementations 26 Jun 2020 Kai Wang, Luis Herranz, Anjan Dutta, Joost Van de Weijer

We propose bookworm continual learning (BCL), a flexible setting where unseen classes can be inferred via a semantic model, and the visual model can be updated continually.

Attribute · Continual Learning +1

Simple and effective localized attribute representations for zero-shot learning

no code implementations 10 Jun 2020 Shiqi Yang, Kai Wang, Luis Herranz, Joost Van de Weijer

Zero-shot learning (ZSL) aims to discriminate images from unseen classes by exploiting relations to seen classes via their semantic descriptions.

Attribute · Zero-Shot Learning

Distributed Learning and Inference with Compressed Images

no code implementations 22 Apr 2020 Sudeep Katakol, Basem Elbarashy, Luis Herranz, Joost Van de Weijer, Antonio M. Lopez

Moreover, we may only have compressed images at training time but are able to use original images at inference time, or vice versa, and in such a case, the downstream model suffers from covariate shift.

Autonomous Driving · Cloud Computing +3

Semantic Drift Compensation for Class-Incremental Learning

2 code implementations CVPR 2020 Lu Yu, Bartłomiej Twardowski, Xialei Liu, Luis Herranz, Kai Wang, Yongmei Cheng, Shangling Jui, Joost Van de Weijer

The vast majority of methods have studied this scenario for classification networks, where for each new task the classification layer of the network must be augmented with additional weights to make room for the newly added classes.
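The sentence above describes the usual class-incremental setup; as a generic illustration (not part of the SDC method itself), growing a linear classification head to make room for newly added classes can be done like this:

```python
import torch
import torch.nn as nn

def expand_classifier(old_head: nn.Linear, num_new_classes: int) -> nn.Linear:
    """Grow a linear classification head for newly added classes, copying the
    old class weights so earlier tasks keep their original outputs."""
    in_features, old_out = old_head.in_features, old_head.out_features
    new_head = nn.Linear(in_features, old_out + num_new_classes)
    with torch.no_grad():
        new_head.weight[:old_out] = old_head.weight
        new_head.bias[:old_out] = old_head.bias
    return new_head
```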

Class Incremental Learning · General Classification +1

MineGAN: effective knowledge transfer from GANs to target domains with few images

2 code implementations CVPR 2020 Yaxing Wang, Abel Gonzalez-Garcia, David Berga, Luis Herranz, Fahad Shahbaz Khan, Joost Van de Weijer

We propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, either from a single or multiple pretrained GANs.

Transfer Learning

Variable Rate Deep Image Compression with Modulated Autoencoder

1 code implementation 11 Dec 2019 Fei Yang, Luis Herranz, Joost Van de Weijer, José A. Iglesias Guitián, Antonio López, Mikhail Mozerov

Addressing these limitations, we formulate the problem of variable rate-distortion optimization for deep image compression, and propose modulated autoencoders (MAEs), where the representations of a shared autoencoder are adapted to the specific rate-distortion tradeoff via a modulation network.
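A bare-bones sketch of the modulation idea, assuming a learned channel-wise scaling per rate-distortion point (the paper's modulating and demodulating networks are richer; all names here are illustrative):

```python
import torch
import torch.nn as nn

class ModulatedLatent(nn.Module):
    """Channel-wise modulation of a shared autoencoder's latent, conditioned on
    which rate-distortion tradeoff (lambda index) is requested."""
    def __init__(self, num_tradeoffs: int, latent_channels: int):
        super().__init__()
        # One learned scaling vector per supported rate-distortion point.
        self.scales = nn.Embedding(num_tradeoffs, latent_channels)
        nn.init.ones_(self.scales.weight)

    def forward(self, latent, tradeoff_idx):
        # latent: (B, C, H, W); tradeoff_idx: (B,) long tensor selecting the tradeoff
        s = self.scales(tradeoff_idx).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        return latent * s        # scaled latent goes on to quantization / decoding
```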

Image Compression · Navigate +1

Controlling biases and diversity in diverse image-to-image translation

no code implementations 23 Jul 2019 Yaxing Wang, Abel Gonzalez-Garcia, Joost Van de Weijer, Luis Herranz

The task of unpaired image-to-image translation is highly challenging due to the lack of explicit cross-domain pairs of instances.

Image-to-Image Translation · Translation

Multifaceted Analysis of Fine-Tuning in Deep Model for Visual Recognition

no code implementations 11 Jul 2019 Xiang-Yang Li, Luis Herranz, Shuqiang Jiang

In this paper, we introduce and systematically investigate several factors that influence the performance of fine-tuning for visual recognition.

Mix and match networks: cross-modal alignment for zero-pair image-to-image translation

no code implementations 8 Mar 2019 Yaxing Wang, Luis Herranz, Joost Van de Weijer

This paper addresses the problem of inferring unseen cross-modal image-to-image translations between multiple modalities.

Image-to-Image Translation · Segmentation +2

Memory Replay GANs: Learning to Generate New Categories without Forgetting

1 code implementation NeurIPS 2018 Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu

In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion.

Cross-Modulation Networks for Few-Shot Learning

no code implementations 1 Dec 2018 Hugo Prol, Vincent Dumoulin, Luis Herranz

A family of recent successful approaches to few-shot learning relies on learning an embedding space in which predictions are made by computing similarities between examples.
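For context, the sketch below shows the kind of embedding-space classifier this family builds on, i.e. prototype-style nearest-mean classification (a generic example, not the cross-modulation mechanism proposed in the paper):

```python
import torch

def prototype_logits(support, support_labels, query, n_way):
    """Classify query embeddings by negative squared distance to class
    prototypes, i.e. the mean support embedding of each class.

    support: (N_s, D), support_labels: (N_s,) with values in [0, n_way), query: (N_q, D)
    """
    prototypes = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )                                               # (n_way, D)
    return -torch.cdist(query, prototypes) ** 2      # (N_q, n_way) similarity logits
```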

Few-Shot Learning

Learning Effective RGB-D Representations for Scene Recognition

no code implementations 17 Sep 2018 Xinhang Song, Shuqiang Jiang, Luis Herranz, Chengpeng Chen

We show that this limitation can be addressed by using RGB-D videos, where more comprehensive depth information is accumulated as the camera travels across the scene.

Scene Recognition · Video Recognition

Memory Replay GANs: learning to generate images from new categories without forgetting

1 code implementation 6 Sep 2018 Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost Van de Weijer, Bogdan Raducanu

In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion.

LIUM-CVC Submissions for WMT18 Multimodal Translation Task

no code implementations WS 2018 Ozan Caglayan, Adrien Bardet, Fethi Bougares, Loïc Barrault, Kai Wang, Marc Masana, Luis Herranz, Joost Van de Weijer

This paper describes the multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT18 Shared Task on Multimodal Translation.

Machine Translation · Translation

Transferring GANs: generating images from limited data

1 code implementation ECCV 2018 Yaxing Wang, Chenshen Wu, Luis Herranz, Joost Van de Weijer, Abel Gonzalez-Garcia, Bogdan Raducanu

Transferring the knowledge of pretrained networks to new domains by means of finetuning is a widely used practice for applications based on discriminative models.

10-shot image generation · Domain Adaptation +1

Mix and match networks: encoder-decoder alignment for zero-pair image translation

1 code implementation CVPR 2018 Yaxing Wang, Joost Van de Weijer, Luis Herranz

We address the problem of image translation between domains or modalities for which no direct paired data is available (i.e. zero-pair translation).
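A structural sketch of the encoder-decoder alignment idea: keep one encoder and one decoder per domain or modality and compose pairs that were never trained together (module names are placeholders, not the paper's code):

```python
import torch.nn as nn

class MixAndMatch(nn.Module):
    """One encoder and one decoder per domain/modality; any translation,
    including pairs never seen jointly during training, is a new composition
    of an aligned encoder with an aligned decoder."""
    def __init__(self, encoders: dict, decoders: dict):
        super().__init__()
        self.encoders = nn.ModuleDict(encoders)   # e.g. {"rgb": ..., "depth": ...}
        self.decoders = nn.ModuleDict(decoders)   # e.g. {"rgb": ..., "semseg": ...}

    def translate(self, x, src: str, dst: str):
        # Zero-pair translation: the (src, dst) pair may never have been trained
        # together; it works because all latents share one aligned representation.
        return self.decoders[dst](self.encoders[src](x))
```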

Colorization · Segmentation +3

Rotate your Networks: Better Weight Consolidation and Less Catastrophic Forgetting

2 code implementations 8 Feb 2018 Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M. Lopez, Andrew D. Bagdanov

In this paper we propose an approach to avoiding catastrophic forgetting in sequential task learning scenarios.

Food recognition and recipe analysis: integrating visual content, context and external knowledge

no code implementations 22 Jan 2018 Luis Herranz, Weiqing Min, Shuqiang Jiang

The central role of food in our individual and social life, combined with recent technological advances, has motivated a growing interest in applications that help to better monitor dietary habits as well as the exploration and retrieval of food-related information.

Food Recognition · Food recommendation +1

Scene recognition with CNNs: objects, scales and dataset bias

no code implementations CVPR 2016 Luis Herranz, Shuqiang Jiang, Xiang-Yang Li

Thus, adapting the feature extractor to each particular scale (i.e. scale-specific CNNs) is crucial to improve recognition, since the objects in the scenes have their specific range of scales.

Scene Recognition

Depth CNNs for RGB-D scene recognition: learning from scratch better than transferring from RGB-CNNs

1 code implementation 21 Jan 2018 Xinhang Song, Luis Herranz, Shuqiang Jiang

However, we show that this approach is limited in that fine-tuning hardly reaches the bottom layers, which are key to learning modality-specific features.

Scene Recognition

Domain-adaptive deep network compression

2 code implementations ICCV 2017 Marc Masana, Joost Van de Weijer, Luis Herranz, Andrew D. Bagdanov, Jose M. Alvarez

We show that domain transfer leads to large shifts in network activations and that it is desirable to take this into account when compressing.
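A hedged sketch in that spirit: compress a layer so that its responses on target-domain activations, rather than the weights themselves, are preserved (this is the general domain-aware low-rank idea, not necessarily the paper's exact algorithm; all names are illustrative):

```python
import torch

def domain_aware_low_rank(W, activations, rank):
    """Factorize a layer weight W (out x in) into two thin matrices while
    preserving the responses the layer produces on target-domain activations.

    activations: (N, in) inputs to the layer collected on the target domain.
    Returns A (out x rank), B (rank x in) with W approximately A @ B.
    """
    responses = activations @ W.t()                      # (N, out) actual layer outputs
    # Principal directions of the responses on this domain (Eckart-Young on responses).
    _, _, Vh = torch.linalg.svd(responses, full_matrices=False)
    V_r = Vh[:rank].t()                                  # (out, rank) top response directions
    A = V_r                                              # project outputs onto those directions
    B = V_r.t() @ W                                      # (rank, in)
    return A, B
```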

Low-rank compression

LIUM-CVC Submissions for WMT17 Multimodal Translation Task

no code implementations WS 2017 Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes García-Martínez, Fethi Bougares, Loïc Barrault, Marc Masana, Luis Herranz, Joost Van de Weijer

This paper describes the monomodal and multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT17 Shared Task on Multimodal Translation.

Machine Translation · Translation
