Upon transforming all images to a planar map, we found that the variation in altitude and SZA from one EXI image to another produces easily noticeable differences in brightness and feature definition, making it evident where data from one image ends and the next begins. To correct for these variations across images, the following two methods were used.

Method 1: Machine learning
We calculated the percentage overlap between two images covering the same geographic location. Using this overlap region, we trained a neural network to generate a correction model that predicts corrected pixel values for the non-overlapping pixels, based on the training performed on the overlap region (a sketch of this approach is given below).

Method 2: Fine Pixel Stitching
We divided our map into sets of 2x2 pixel squares in which all four pixels come from the same image. We then ran an iterative process in which pixels from neighboring square sets are compared to determine a multiplying factor that adjusts the brightness of square two to match that of square one without losing fine details (a second sketch follows below).
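The following is a minimal sketch of the overlap-trained correction of Method 1, assuming the two EXI images have already been projected onto the same planar grid as floating-point NumPy arrays with NaN outside their footprints. The use of scikit-learn's MLPRegressor, the single brightness feature, and the function name correct_to_reference are illustrative assumptions, not the network or code used in the actual pipeline.

import numpy as np
from sklearn.neural_network import MLPRegressor

def correct_to_reference(image_a, image_b):
    """Train on the overlap of image_b with reference image_a, then correct all of image_b."""
    # Pixels covered by both images define the training (overlap) region.
    overlap = ~np.isnan(image_a) & ~np.isnan(image_b)
    print(f"Overlap: {100.0 * overlap.sum() / overlap.size:.1f}% of the map")

    # Features: raw brightness of image_b (could be extended with SZA, altitude, ...).
    X_train = image_b[overlap].reshape(-1, 1)
    y_train = image_a[overlap]                      # target: reference brightness

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)

    # Predict corrected values for every valid pixel of image_b,
    # including those outside the overlap region.
    corrected = np.full_like(image_b, np.nan)
    valid_b = ~np.isnan(image_b)
    corrected[valid_b] = model.predict(image_b[valid_b].reshape(-1, 1))
    return corrected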
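The next sketch illustrates the 2x2-square adjustment of Method 2, assuming a mosaic array and a same-shape array of source-image IDs. The single left-to-right pass shown here is a simplification of the iterative process described above, and the function and variable names are illustrative.

import numpy as np

def stitch_blocks(mosaic, source_id, block=2):
    """Scale each 2x2 square to match its left neighbor when the two squares
    come from different source images."""
    out = mosaic.copy()
    rows, cols = mosaic.shape
    for i in range(0, rows - block + 1, block):
        for j in range(block, cols - block + 1, block):
            left = out[i:i + block, j - block:j]       # reference square (square one)
            right = out[i:i + block, j:j + block]      # square to adjust (square two)
            ids_left = source_id[i:i + block, j - block:j]
            ids_right = source_id[i:i + block, j:j + block]
            # Only adjust across a seam: each square uniform, but from different images.
            if (np.unique(ids_left).size == 1 and np.unique(ids_right).size == 1
                    and ids_left[0, 0] != ids_right[0, 0]):
                mean_right = right.mean()
                if mean_right > 0:
                    factor = left.mean() / mean_right          # multiplying factor
                    out[i:i + block, j:j + block] = right * factor  # uniform scaling keeps fine detail
    return out

Because every pixel in a square is multiplied by the same factor, relative contrasts within the square are preserved, which is how the adjustment avoids losing fine details.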
ACKNOWLEDGEMENTS
We thank the entire team of the Emirates Mars Mission for providing us with these observations. All data were obtained from the EMM Science Data Center (SDC). This work was supported by the New York University Abu Dhabi (NYUAD) Institute Research Grant G1502 and the ASPIRE Award for Research Excellence (AARE) Grant S1560 by the Advanced Technology Research Council (ATRC). Image processing was done on the High Performance Computing (HPC) resources of NYUAD. We thank Professor K. R. Sreenivasan for his constant encouragement and support for the project.