1 code implementation • 17 Feb 2024 • Anxhelo Diko, Danilo Avola, Marco Cascio, Luigi Cinque
The self-attention mechanism of the Vision Transformer (ViT) is characterized by feature collapse in deeper layers, resulting in the vanishing of low-level visual features.
Ranked #491 on Image Classification on ImageNet
no code implementations • 18 Mar 2022 • Danilo Avola, Marco Cascio, Luigi Cinque, Alessio Fagioli, Gian Luca Foresti, Marco Raoul Marini, Daniele Pannone
Nowadays, machine and deep learning techniques are widely used in different areas, ranging from economics to biology.
no code implementations • 11 Mar 2022 • Danilo Avola, Marco Cascio, Luigi Cinque, Alessio Fagioli, Gian Luca Foresti
The latter conditions signal-based features in the visual domain, allowing them to completely replace visual data.
no code implementations • 12 Nov 2019 • Danilo Avola, Marco Cascio, Luigi Cinque, Daniele Pannone
With the increasing need for wireless data transfer, Wi-Fi networks have grown rapidly in recent years, providing high throughput and easy deployment.
Signal Processing • Networking and Internet Architecture