Big-Little Modules are building blocks for image models, proposed as part of the BigLittle-Net architecture. Each module has two branches, each representing a separate block of the network at a different depth: the Big-Branch uses more layers and channels at low resolution, while the Little-Branch uses fewer layers and channels at high resolution. The two branches are fused with a linear combination using unit weights, i.e., an elementwise sum.
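The two-branch design can be illustrated with a minimal sketch. This is not the paper's implementation: the `np.tanh` calls are hypothetical stand-ins for convolutional layers, and pooling/upsampling choices are assumptions; the point is the structure of running more layers at half resolution, fewer at full resolution, and fusing with a unit-weight sum.

```python
import numpy as np

def avg_pool2(x):
    """Downsample a 2-D feature map by 2 with average pooling."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Upsample a 2-D feature map by 2 with nearest-neighbour repetition."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def big_little_module(x, big_layers=2, little_layers=1):
    # Big-Branch: more layers, run at low (halved) resolution
    big = avg_pool2(x)
    for _ in range(big_layers):
        big = np.tanh(big)      # hypothetical stand-in for a conv layer
    big = upsample2(big)        # restore the input resolution before fusion

    # Little-Branch: fewer layers, run at the full (high) resolution
    little = x
    for _ in range(little_layers):
        little = np.tanh(little)

    # Fusion: linear combination with unit weights, i.e. an elementwise sum
    return big + little
```

Because the Big-Branch operates on a feature map with a quarter of the pixels, its extra layers cost comparatively little compute, which is the efficiency argument behind the design.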
Source: Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition
Task | Papers | Share |
---|---|---|
Image Classification | 2 | 18.18% |
Speech Synthesis | 1 | 9.09% |
Text-To-Speech Synthesis | 1 | 9.09% |
Fine-Grained Image Classification | 1 | 9.09% |
Fine-Grained Visual Recognition | 1 | 9.09% |
General Classification | 1 | 9.09% |
Image Retrieval | 1 | 9.09% |
Retrieval | 1 | 9.09% |
Object Recognition | 1 | 9.09% |
Component | Type |
---|---|
1x1 Convolution | Convolutions |
Convolution | Convolutions |
Linear Layer | Feedforward Networks |
Residual Connection | Skip Connections |