While wide-area motion imagery provides short-timescale temporal information, e.g., individual vehicle tracking, it lacks broader contextual information on the ambient distribution of populations within the imaged area. We present a fusion approach that augments Iris video with broader-scale population data. Spectral, geometric, and geospatial limitations preclude the direct use of the Iris video; we overcome this by photogrammetrically registering it to robust Deimos-2 imagery and ancillary processed products using a high-performance, sensor-agnostic, multi-temporal registration workflow. We assess the accuracy and precision of the proposed workflow (∼15 m; Euclidean) and demonstrate the potential of fusing these data for rapid, global-scale population distribution modeling. This has important implications for effective emergency response, especially in urban environments, where population density is driven largely by building heights and a complementary, multi-scale understanding of the distribution and dynamics of people within a geographic area is required.
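The Euclidean accuracy assessment mentioned above can be illustrated with a minimal sketch: given matched control points in a projected (metric) coordinate system, per-point Euclidean residuals between the reference product and the registered imagery yield the mean error (accuracy), its spread (precision), and the RMSE. The coordinates below are purely hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical control-point coordinates in metres (projected CRS):
# "reference" from the base product, "registered" from the warped video frame.
reference = np.array([[500010.0, 4100020.0],
                      [500250.0, 4100310.0],
                      [500480.0, 4100075.0]])
registered = np.array([[500022.0, 4100012.0],
                       [500241.0, 4100321.0],
                       [500469.0, 4100081.0]])

# Per-point Euclidean registration error (metres).
residuals = np.linalg.norm(registered - reference, axis=1)

# Accuracy (mean error), precision (spread), and overall RMSE.
print(f"mean error: {residuals.mean():.1f} m")
print(f"std dev:    {residuals.std():.1f} m")
print(f"RMSE:       {np.sqrt((residuals**2).mean()):.1f} m")
```

In practice the residuals would be computed over many independent check points withheld from the registration fit, so the statistics characterize the workflow rather than the fitted transform.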