Large-character-set CAPTCHAs have long been widely used on the Internet to defend against automated attack programs. However, with the development of deep learning, several attacks on large-character-set CAPTCHAs have been proposed, showing that they are no longer secure. To defend these CAPTCHAs against black-box attacks, we propose a novel defense method based on transferable adversarial examples. On the one hand, we defend against character recognition attacks by adding adversarial perturbations to the characters of CAPTCHAs, combining three strategies: gradient-based attacks, input transformations, and an attention mechanism. On the other hand, we defend against character detection attacks by using an ensemble method to generate adversarial perturbations on the background of CAPTCHAs. To the best of our knowledge, this is the first study to improve the security of large-character-set CAPTCHAs against black-box attacks using transferable adversarial example techniques. Taking the eight most popular Chinese CAPTCHA schemes as examples, we conduct comprehensive experiments. The results show that our method significantly improves the security of large-character-set CAPTCHAs, reducing the average success rate of black-box attacks from 53.33% to 3.49%. Overall, our method can inform the design of more secure large-character-set CAPTCHAs.
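To make the gradient-based perturbation strategy concrete, the following is a minimal FGSM-style sketch, not the paper's actual method: a single signed-gradient step against a toy logistic-regression stand-in for a character recognizer. The model, weights, and epsilon value here are all illustrative assumptions; a real defense would perturb CAPTCHA images against surrogate deep recognition models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps=0.1):
    """One FGSM-style step on a toy logistic-regression 'recognizer'.

    For binary cross-entropy loss, the gradient w.r.t. the input x is
    (sigmoid(w . x) - y) * w; moving x by eps in the sign of that
    gradient increases the recognizer's loss (to first order).
    """
    grad = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad)

# Toy usage: x stands in for a flattened character image.
rng = np.random.default_rng(0)
x = rng.normal(size=8)   # stand-in input
w = rng.normal(size=8)   # toy surrogate-model weights
y = 1.0                  # true character label (binary toy case)
x_adv = fgsm_perturb(x, y, w, eps=0.1)
```

In the transfer setting sketched here, the perturbation is computed on a local surrogate model and relied upon to also degrade the unknown black-box recognizer; the per-pixel bound `eps` keeps the perturbed characters readable to humans.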