Transformer for Polyp Detection
In recent years, as Transformers have performed increasingly well on NLP tasks, many researchers have ported the Transformer architecture to vision tasks, bridging the gap between NLP and CV. In this work, we evaluate several deep learning networks for the detection track. Because the ground truth is provided as segmentation masks, both current detection and segmentation methods can be applied. Through experiments, we select DETR as our baseline, and we further modify the training strategy to fit the dataset.
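Since the ground truth is given as masks while a detection model such as DETR expects bounding-box targets, the masks must first be converted to boxes. A minimal sketch of that conversion (assuming binary NumPy masks; this is a generic illustration, not necessarily the paper's exact preprocessing):

```python
import numpy as np

def mask_to_bbox(mask: np.ndarray):
    """Convert a binary mask of shape (H, W) to an (x_min, y_min, x_max, y_max) box.

    Returns None for an empty mask. Illustrative helper, not the
    paper's actual pipeline.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))

# Example: a small mask with one foreground (polyp) region
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:7] = 1
print(mask_to_bbox(mask))  # (3, 2, 6, 4)
```

Boxes derived this way can serve as targets for the detection track, while the original masks remain usable for segmentation methods.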
Methods
- Absolute Position Encodings
- Adam
- BPE
- Convolution
- Dense Connections
- DETR
- Dropout
- Feedforward Network
- Label Smoothing
- Layer Normalization
- Linear Layer
- Multi-Head Attention
- Position-Wise Feed-Forward Layer
- Residual Connection
- Scaled Dot-Product Attention
- Softmax
- Transformer