Designing and testing algorithms for processing hyperspectral imagery is difficult due to the sheer volume of data that must be analyzed. The work is not only time-consuming and memory-intensive; it also consumes large amounts of disk space and makes tracking experimental results difficult. We present a system that addresses these issues by storing all information in a centralized database, routing data processing to compute servers, and providing an intuitive interface for running experiments on multiple images with varying parameters.