Progress in deep learning methods has bolstered the development of automated vehicles over the last decade. However, deploying deep learning methods in safety-critical applications has raised questions about their safety. As with other vehicle components, a testing process must demonstrate the reliability of perception systems. Validating perception algorithms with real-world data raises scalability issues due to the immense amount of sensor data that must be tested. Simulation tools can complement this testing process, as they can generate synthetic data according to variable test case specifications. While simulation tools can produce a vast amount of test data, the testing process is ultimately limited by the available computing resources. Identifying the test specifications that pose risks to a perception algorithm is therefore crucial for using these computing resources efficiently and for estimating functional reliability. We present a pipeline for adaptive test case selection that exposes the faults of a deep learning system using synthetic image data generated by a simulation framework. We apply our concept to a state-of-the-art object detector and implement multiple adaptive sampling strategies to demonstrate their ability to detect faults early. Our experiments show that our pipeline can cover 95% of system faults while reducing the number of test executions by 90%.
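The core idea of adaptive test case selection can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not the paper's actual setup: the two-dimensional scenario parameter space, the stubbed failure oracle `system_fails`, and the simple explore/exploit heuristic that biases sampling toward previously observed failures.

```python
import random

random.seed(0)

def system_fails(params):
    # Stub oracle (an assumption for this sketch): the perception
    # system "fails" in one corner of the scenario parameter space.
    return params[0] > 0.8 and params[1] > 0.8

def adaptive_selection(budget, pool_size=1000, explore_ratio=0.2):
    """Spend a limited execution budget, steering new test cases
    toward the neighborhood of already-discovered failures."""
    pool = [(random.random(), random.random()) for _ in range(pool_size)]
    failures = []
    executed = 0
    while pool and executed < budget:
        if failures and random.random() > explore_ratio:
            # Exploit: pick the untested candidate closest to a
            # known failing parameter vector.
            candidate = min(
                pool,
                key=lambda p: min(
                    (p[0] - f[0]) ** 2 + (p[1] - f[1]) ** 2
                    for f in failures
                ),
            )
        else:
            # Explore: pick a random untested candidate.
            candidate = random.choice(pool)
        pool.remove(candidate)
        executed += 1
        if system_fails(candidate):
            failures.append(candidate)
    return failures, executed

failures, executed = adaptive_selection(budget=100)
print(f"executed {executed} tests, found {len(failures)} failures")
```

The sketch captures the trade-off the abstract describes: instead of exhaustively executing every candidate scenario, the budget is concentrated on regions of the specification space where faults have already surfaced, so a large share of the failure region can be covered with a fraction of the test executions.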