With the growing interest in attacks on and defenses of deep neural networks, researchers have increasingly focused on the robustness of networks deployed on memory-limited devices, which must withstand blind adversarial attacks (attacks whose perturbation budget is unknown) at different compression ratios. We analyze existing pruning methods and find that the robustness of the pruned models varies drastically across pruning processes, and that the robustness of adversarially trained pruned models is highly sensitive to the perturbation budget of the adversarial examples. Consequently, these methods cannot produce models that are comprehensively robust against blind adversarial attacks at different compression ratios. To address this problem, we propose blind adversarial pruning (BAP), which incorporates blind adversarial training into the gradual pruning process to obtain pruned models with comprehensive robustness across compression ratios. Experimental results from pruning classification models on several benchmarks demonstrate the competitive performance of BAP: the robustness of BAP models is more stable across different pruning processes, and BAP achieves better comprehensive robustness against blind adversarial attacks.
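The two ingredients named above, a gradual pruning schedule and a per-step "blind" (randomly sampled) adversarial budget, can be illustrated with a minimal sketch. This is not the authors' implementation; the polynomial sparsity schedule, uniform budget sampling, and magnitude criterion are assumptions chosen for illustration.

```python
# Illustrative sketch (not the BAP authors' code): gradual magnitude
# pruning combined with a "blind" adversarial budget sampled per step.
import numpy as np

def sparsity_schedule(step, total_steps, final_sparsity):
    """Polynomial gradual-pruning schedule: sparsity rises from 0 to
    final_sparsity as training proceeds (Zhu-and-Gupta style)."""
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)

def magnitude_mask(weights, sparsity):
    """Return a 0/1 mask that zeroes out the smallest-magnitude
    fraction `sparsity` of the weights."""
    k = int(sparsity * weights.size)
    if k == 0:
        return np.ones_like(weights)
    thresh = np.sort(np.abs(weights).ravel())[k - 1]
    return (np.abs(weights) > thresh).astype(weights.dtype)

def blind_budget(rng, eps_max):
    """Sample the attack budget uniformly: the 'blind' part, since the
    training loop does not assume one fixed epsilon."""
    return rng.uniform(0.0, eps_max)

rng = np.random.default_rng(0)
w = rng.normal(size=(100,))            # stand-in for one weight tensor
for step in range(1, 11):
    s = sparsity_schedule(step, 10, 0.9)
    mask = magnitude_mask(w, s)
    eps = blind_budget(rng, 8 / 255)
    # ... in a real pipeline, craft adversarial examples with budget
    # `eps`, then update the surviving weights `w * mask` ...
```

The key design point the abstract argues for is visible in the loop: because `eps` changes every step, the pruned model is never tuned to a single attack budget, which is what makes its robustness less sensitive to the (unknown) budget at test time.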