---------------------------------------------

# Dataset for "Array Camera Image Fusion using Physics-Aware Transformers"

Preferred citation (DataCite format):

Huang, Qian (2022). Dataset for "Array Camera Image Fusion using Physics-Aware Transformers". University of Arizona Research Data Repository. Dataset. https://doi.org/10.25422/azu.data.20217140

Corresponding Author: David J Brady, Wyant College of Optical Sciences, djbrady@arizona.edu

License: CC BY 4.0

DOI: https://doi.org/10.25422/azu.data.20217140

---------------------------------------------

## Summary

We demonstrate a physics-aware transformer for feature-based data fusion from cameras with diverse resolutions, color spaces, focal planes, focal lengths, and exposures. We also demonstrate a scalable solution for generating synthetic training data for the transformer using open-source computer graphics software. We demonstrate image synthesis on arrays with diverse spectral responses, instantaneous fields of view, and frame rates.

Paper: https://arxiv.org/abs/2207.02250

GitHub repo: https://github.com/djbradyatopticalsciencesarizona/physicsawaretransformer

---------------------------------------------

## Files and Folders

- blender_dataset.zip: Contains the training and validation data used by the Physics-Aware Transformer. Within those folders, the whitex1 and whitex2 subfolders contain .png images rendered from virtual cameras 1 and 2. A minimal loading sketch follows this list.
- preview_DC-whitex2-754.jpg: A preview image generated from the file /BLENDER2K_train_DUAL_COLORED/whitex2/754.png.
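
The sketch below shows one possible way to pair the camera-1 and camera-2 renderings after extracting blender_dataset.zip. It is illustrative only: the helper name `load_camera_pairs` is hypothetical, the folder name `BLENDER2K_train_DUAL_COLORED` is taken from the preview image path above, and the assumption that the two subfolders use matching filenames (e.g. 754.png) may not hold for every folder in the archive.

```python
# Minimal sketch (not part of the dataset) for pairing whitex1/whitex2 images.
# Assumes the extracted folder layout described above and matching filenames
# in the two subfolders; adjust paths to the actual contents of the zip.
from pathlib import Path
from PIL import Image

def load_camera_pairs(root):
    """Yield (camera-1 image, camera-2 image) pairs matched by filename."""
    root = Path(root)
    cam1_dir, cam2_dir = root / "whitex1", root / "whitex2"
    for cam1_path in sorted(cam1_dir.glob("*.png")):
        cam2_path = cam2_dir / cam1_path.name  # e.g. whitex2/754.png
        if cam2_path.exists():
            yield Image.open(cam1_path), Image.open(cam2_path)

# Example usage:
# for img1, img2 in load_camera_pairs("BLENDER2K_train_DUAL_COLORED"):
#     print(img1.size, img2.size)
```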