Recent advances in language-image pre-training have given rise to transferable models that adapt readily to a wide range of computer vision and multimodal tasks in the wild. Evaluating the transferability of these models remains difficult, however, owing to the lack of easy-to-use evaluation toolkits and public benchmarks. The "Segmentation in the Wild (SegInW)" challenge, introduced as part of X-Decoder, proposes a new benchmark for evaluating the transferability of pre-trained vision models. It presents a diverse set of downstream segmentation datasets and measures pre-trained models on both segmentation accuracy and transfer efficiency on a new task, the latter in terms of training examples and trainable parameters. The SegInW challenge comprises 25 free public segmentation datasets, crowd-sourced on roboflow.com. For more details about the challenge submission format, please refer to X-Decoder for SegInW.
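Transfer efficiency in terms of trainable parameters can be reported as the fraction of a model's weights that are actually fine-tuned on the downstream task. The function below is a minimal, illustrative sketch of that bookkeeping; `count_params` and the name-to-array model representation are assumptions for this example, not part of the official SegInW evaluation code.

```python
import numpy as np

def count_params(params: dict[str, np.ndarray],
                 trainable: set[str]) -> tuple[int, int]:
    """Return (trainable, total) parameter counts for a model stored as
    a name -> weight-array mapping; `trainable` names the fine-tuned tensors."""
    total = sum(p.size for p in params.values())
    tuned = sum(p.size for name, p in params.items() if name in trainable)
    return tuned, total

# Toy model: a frozen backbone and a small task head being fine-tuned.
params = {"backbone": np.zeros((10, 10)), "head": np.zeros(5)}
tuned, total = count_params(params, {"head"})
```

Here only the 5 head parameters out of 105 are trainable, the kind of ratio a SegInW submission would report alongside its accuracy.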
14 PAPERS • 1 BENCHMARK
MatSeg Dataset for Zero-Shot Material States Segmentation: The dataset pairs large-scale synthetic training images with a highly diverse real-world benchmark for testing. It focuses on zero-shot, class-agnostic segmentation of materials and their states, i.e., finding the regions of material states without pre-training on the specific material classes or states. The benchmark spans a wide range of real-world materials and states, for example: wet regions of a surface, scattered dust, minerals in rocks, sediment in soils, rotten parts of fruits, degraded and corroded surface regions, food and liquid states, and many others. The focus is on scattered and fragmented materials, as well as soft boundaries, partial transitions, and partial similarity between regions. The dataset includes both hard segmentation maps and soft, partial-similarity annotations for similar but not identical materials.
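Class-agnostic segmentation is typically scored by overlap between predicted and ground-truth regions rather than by class labels. The sketch below shows one common flavor of such a metric, mean best-match IoU over binary masks; the function names and the matching scheme are illustrative assumptions, not the MatSeg benchmark's official evaluation protocol.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union else 0.0

def class_agnostic_score(pred_masks, gt_masks) -> float:
    """Mean best-match IoU: each ground-truth region is matched to the
    predicted mask with the highest overlap, ignoring class labels."""
    if not gt_masks:
        return 0.0
    return float(np.mean([max(mask_iou(g, p) for p in pred_masks)
                          for g in gt_masks]))
```

For the soft, partial-similarity annotations, the hard intersection/union counts would be replaced by sums over per-pixel similarity weights; the binary version above covers the hard segmentation maps.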
1 PAPER • NO BENCHMARKS YET
A dataset of 3D image volumes and their precomputed embeddings for testing TomoSAM.