3D confocal stacks with corresponding 2D Light-field microscope images
A synthetic dataset for evaluating non-rigid 3D human reconstruction from conventional RGB-D cameras. The dataset consists of seven motion sequences of a single human model.
A dataset of paired thermal and RGB images comprising ten diverse scenes (six indoor, four outdoor) for 3D scene reconstruction and novel view synthesis (e.g., with NeRF).
Please see our website and code repository for a detailed description.
The CVGL Camera Calibration Dataset consists of 49 camera configurations: Town 1 has 25 configurations and Town 2 has 24. The parameters varied to generate the configurations are fov, x, y, z, pitch, yaw, and roll, where fov is the field of view, (x, y, z) is the translation, and (pitch, yaw, roll) is the rotation between the cameras. The dataset contains 79,320 image pairs in total, of which 18,083 belong to Town 1 and 61,237 to Town 2; the difference in counts is due to the lengths of the tracks.
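A configuration like the one above (fov plus a translation and a pitch/yaw/roll rotation) can be turned into a rotation matrix and a pixel focal length. The sketch below assumes degrees and a yaw-pitch-roll (Z-Y-X) axis order, which is common in simulator-generated datasets but is not confirmed by the description; the function names are illustrative.

```python
import math

def rotation_matrix(pitch, yaw, roll):
    """Rotation matrix from (pitch, yaw, roll) in degrees.
    The Z-Y-X (yaw, then pitch, then roll) composition used here
    is an assumed convention, not taken from the dataset docs."""
    p, y, r = (math.radians(a) for a in (pitch, yaw, roll))
    cy, sy = math.cos(y), math.sin(y)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    # R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def focal_from_fov(fov_deg, image_width):
    """Focal length in pixels from a horizontal field of view."""
    return (image_width / 2) / math.tan(math.radians(fov_deg) / 2)
```

For example, a 90-degree fov on an 800-pixel-wide image gives a focal length of 400 pixels, and zero angles give the identity rotation.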
We propose a new light field image database called “PINet”, which inherits its hierarchical structure from WordNet. It consists of 7,549 light field images (LIs) captured by a Lytro Illum camera, which is much larger than existing databases. The images are manually annotated with 178 categories from WordNet, such as cat, camel, bottle, and fan. Registered depth maps are also provided. Each image is generated by processing the raw LI from the camera with Light Field Toolbox v0.4 for demosaicing and devignetting.
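Because the categories inherit WordNet's hierarchy, each leaf label can be grouped under broader parent synsets. The sketch below illustrates that idea with a hypothetical parent map; the category names and parent links are invented for illustration and are not PINet's actual taxonomy.

```python
# Hypothetical parent links: child category -> broader WordNet-style parent.
# These entries are illustrative only, not the real PINet annotation tree.
hierarchy = {
    "cat": "animal",
    "camel": "animal",
    "bottle": "container",
}

def ancestors(category, tree):
    """Walk up the parent links from a leaf category to the root."""
    chain = []
    while category in tree:
        category = tree[category]
        chain.append(category)
    return chain
```

With this map, `ancestors("cat", hierarchy)` returns `["animal"]`, so images labeled "cat" and "camel" can be evaluated together at the "animal" level.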