Recently, a number of studies have been proposed to protect the intellectual property (IP) of Deep Neural Network (DNN) models. However, most existing works are passive protection methods: they attempt to extract a watermark from a pirated model only after piracy has occurred. In this paper, we propose an active IP protection method for DNNs that uses a variant of sample-specific backdoor attacks to implement active authorization control for DNN models. During training, we mislabel all clean images while keeping the ground-truth labels for backdoor instances. Unlike general backdoor triggers, our triggers are generated by a U-Net model and are therefore sample-specific and invisible; each trigger serves as the secret key for its image and is hard to detect. Moreover, unlike existing active DNN IP protection methods, the proposed method can be applied in the black-box scenario. Experimental results on the ImageNet and YouTube Aligned Face datasets demonstrate the effectiveness and robustness of the proposed method.
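The label-assignment scheme above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `generate_trigger` function below is a hypothetical stand-in for the trained U-Net generator, using a small image-dependent perturbation only to show that the trigger is sample-specific and imperceptible; the wrong-label rule is likewise assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 10

def generate_trigger(image):
    # Hypothetical stand-in for the U-Net trigger generator: a tiny,
    # image-dependent perturbation (sample-specific and invisible).
    perturbation = 0.01 * np.tanh(image - image.mean())
    return np.clip(image + perturbation, 0.0, 1.0)

def build_training_pair(image, true_label):
    # Clean image: deliberately mislabeled (here, shifted to the next
    # class), so the model is useless without the secret-key trigger.
    wrong_label = (true_label + 1) % NUM_CLASSES
    clean_sample = (image, wrong_label)
    # Backdoor instance: keeps its ground-truth label, so only inputs
    # carrying the trigger are classified correctly.
    keyed_sample = (generate_trigger(image), true_label)
    return clean_sample, keyed_sample

image = rng.random((32, 32, 3))
(clean_x, clean_y), (keyed_x, keyed_y) = build_training_pair(image, true_label=3)
```

A model trained on such pairs performs well only for users who can apply the trigger generator to their inputs, which is what makes the protection active rather than a post-hoc watermark check.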