The outstanding performance of modern deep learning systems has led to their widespread adoption across application domains, including security-critical ones. However, recent work has shown that these systems are vulnerable to backdoor attacks. This paper proposes a novel approach to latent backdoor attacks. Instead of crafting an exogenetic backdoor trigger in pixel space, as existing works do, this paper explores the connection between latent-space manipulation and endogenic backdoor trigger generation, utilising deep generative models to synthesise the trigger directly in the latent space. The effectiveness of the proposed attack is demonstrated on several neural network architectures trained on three well-known datasets: MNIST, CIFAR-10 and GTSRB. Rather than creating new exogenetic misclassification behaviours, as existing backdoor attacks do, this study offers a new viewpoint for understanding the endogenic vulnerability of deep neural networks that arises from insufficient training and test data.
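To make the contrast with pixel-space triggers concrete, the following is a minimal sketch of latent-space trigger embedding. It is not the paper's method: a toy linear autoencoder with random weights stands in for a trained deep generative model, and the trigger is assumed to be a fixed shift along one latent direction; all names and parameters (`W_enc`, `W_dec`, `alpha`, `trigger_dir`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d_pixel, d_latent = 784, 32  # e.g. a flattened 28x28 MNIST image

# Toy linear autoencoder standing in for a trained generative model.
W_enc = rng.normal(0, 0.01, (d_latent, d_pixel))  # encoder weights
W_dec = rng.normal(0, 0.01, (d_pixel, d_latent))  # decoder weights

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

# A fixed, unit-norm latent direction serves as the endogenic trigger.
trigger_dir = rng.normal(size=d_latent)
trigger_dir /= np.linalg.norm(trigger_dir)
alpha = 3.0  # assumed trigger strength

def poison(x):
    """Embed the trigger by shifting the latent code, then decoding."""
    z = encode(x)
    return decode(z + alpha * trigger_dir)

x = rng.normal(size=d_pixel)  # stand-in for a clean input image
x_poisoned = poison(x)

# The perturbation lives in latent space rather than in a fixed
# pixel patch, so it spreads over the whole reconstructed image.
print(x_poisoned.shape)  # (784,)
```

The point of the sketch is that the trigger is expressed as a semantic direction in the model's latent space, so the resulting pixel-level change is input-dependent rather than a fixed stamp.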