no code implementations • NeurIPS 2019 • Karim Ahmed, Lorenzo Torresani
Capsule networks have been shown to be powerful models for image classification, thanks to their ability to represent and capture viewpoint variations of an object.
no code implementations • ECCV 2018 • Karim Ahmed, Lorenzo Torresani
Such simple connectivity rules are unlikely to yield the optimal architecture for the given problem.
5 code implementations • ICLR 2018 • Karim Ahmed, Nitish Shirish Keskar, Richard Socher
State-of-the-art results on neural machine translation often use attentional sequence-to-sequence models with some form of convolution or recursion.
Ranked #23 on Machine Translation on WMT2014 English-French
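The entry above refers to attentional sequence-to-sequence models. As a generic illustration (not this paper's specific branched-attention architecture), a minimal scaled dot-product attention step can be sketched in NumPy; all shapes and names here are hypothetical:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Weight value vectors by softmax-normalized query-key similarity."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ v                             # (n_q, d_v) attended output

rng = np.random.default_rng(0)
q = rng.standard_normal((2, 4))  # 2 queries, dim 4
k = rng.standard_normal((3, 4))  # 3 keys
v = rng.standard_normal((3, 4))  # 3 values
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (2, 4)
```

Each output row is a convex combination of the value rows, which is what lets the decoder attend to arbitrary source positions without convolution or recurrence.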
no code implementations • ICLR 2018 • Karim Ahmed, Lorenzo Torresani
While much of the work on convolutional network design over the last five years has revolved around the empirical investigation of depth, filter sizes, and the number of feature channels, recent studies have shown that branching, i.e., splitting the computation along parallel but distinct threads and then aggregating their outputs, represents a promising new dimension for significant improvements in performance.
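The branching idea described above, splitting computation into parallel threads and aggregating their outputs, can be sketched minimally in NumPy. The branch count, shapes, and summation-based aggregation below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(8)  # input feature vector (hypothetical size)

# Three parallel "branches": each a small linear transform with ReLU.
branches = [rng.standard_normal((8, 8)) for _ in range(3)]

# Each branch processes the same input independently...
outputs = [np.maximum(0.0, W @ x) for W in branches]

# ...and the branch outputs are aggregated (here by summation).
y = np.sum(outputs, axis=0)
print(y.shape)  # (8,)
```

Summation is one common aggregation choice; concatenation followed by a projection is another, and which works best is exactly the kind of design question these studies investigate.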
no code implementations • 20 Apr 2017 • Karim Ahmed, Lorenzo Torresani
We introduce an architecture for large-scale image categorization that enables the end-to-end learning of separate visual features for the different classes to be distinguished.
no code implementations • 20 Apr 2016 • Karim Ahmed, Mohammad Haris Baig, Lorenzo Torresani
The training of our "network of experts" is completely end-to-end: the partition of categories into disjoint subsets is learned simultaneously with the parameters of the network trunk, and the experts are trained jointly by minimizing a single learning objective over all classes.
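The end-to-end objective described above can be sketched as follows: a shared trunk produces a feature, each expert scores only the classes in its subset, and the scattered scores form one logit vector over all classes so a single cross-entropy loss trains everything jointly. The hard class-to-expert assignment, shapes, and names below are hypothetical simplifications (the paper learns the partition):

```python
import numpy as np

rng = np.random.default_rng(2)
num_classes, num_experts = 10, 2

# Hypothetical fixed partition: each class belongs to exactly one expert.
assignment = np.array([0, 0, 0, 1, 1, 0, 1, 1, 0, 1])

trunk_feat = rng.standard_normal(16)  # shared trunk output
experts = [rng.standard_normal((num_classes, 16)) for _ in range(num_experts)]

# Scatter each expert's scores for its own classes into one logit vector,
# so a single softmax / cross-entropy covers all classes at once.
logits = np.empty(num_classes)
for e, W in enumerate(experts):
    idx = np.where(assignment == e)[0]
    logits[idx] = W[idx] @ trunk_feat

probs = np.exp(logits - logits.max())
probs /= probs.sum()
label = 3
loss = -np.log(probs[label])  # one learning objective over all classes
```

Because the loss is computed over the full class vocabulary rather than per subset, gradients flow into the trunk, the experts, and (in the actual method) the partition itself.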