Vertical Federated Learning (VFL) is a promising approach to privacy-preserving collaborative machine learning, allowing multiple entities to jointly train models on vertically partitioned datasets without revealing their private data. While recent years have seen substantial research on privacy vulnerabilities and defenses for VFL, the focus has primarily been on passive scenarios in which attackers adhere to the training protocol. This perspective underestimates the practical threat, since attackers can deviate from the protocol to improve their inference capabilities. To address this gap, we introduce two data reconstruction attacks that compromise data privacy in an active setting. Both attacks modify the gradients computed during the training phase of VFL to breach privacy. Our first attack, the Active Inversion Network, exploits a small set of known samples in the training set to coerce the passive participants into training an auto-encoder that reconstructs their private data. The second attack, the Active Generative Network, uses knowledge of the training data distribution to steer the system into training a conditional generative adversarial network (cGAN) for feature inference. Our experiments confirm the efficacy of both attacks in inferring private features from real-world datasets.
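To illustrate the gradient-modification idea behind the first attack, the following is a minimal sketch, not the paper's implementation: it assumes a toy split-learning setup with linear models, where the active attacker knows a small batch of the passive party's features and replaces the legitimate task gradient with the gradient of a reconstruction loss, so the passive party's bottom model is unwittingly trained as the encoder half of the attacker's auto-encoder. All names, dimensions, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vertically partitioned setup: the passive party holds private
# features X and a bottom model W producing embeddings H = X @ W.T, which it
# sends to the active party. The attacker holds a decoder D and a small set
# of known samples X_known.
d_in, d_emb, n_known = 8, 4, 32
X_known = rng.normal(size=(n_known, d_in))

W = rng.normal(scale=0.1, size=(d_emb, d_in))  # passive party's bottom model
D = rng.normal(scale=0.1, size=(d_in, d_emb))  # attacker's decoder

initial_mse = float(np.mean((X_known @ W.T @ D.T - X_known) ** 2))

lr = 0.1
for _ in range(2000):
    H = X_known @ W.T        # embeddings received from the passive party
    X_hat = H @ D.T          # attacker's reconstruction attempt
    err = X_hat - X_known    # residual of the reconstruction loss

    # Malicious gradient w.r.t. the embeddings, sent back to the passive
    # party in place of the legitimate task gradient.
    grad_H = 2.0 * err @ D / n_known

    # Attacker updates its decoder locally on the known samples.
    D -= lr * 2.0 * err.T @ H / n_known

    # The passive party, following the protocol, applies the received
    # gradient to its own model -- thereby training the encoder.
    W -= lr * grad_H.T @ X_known

mse = float(np.mean((X_known @ W.T @ D.T - X_known) ** 2))
print(initial_mse, mse)
```

After training, the encoder-decoder pair reconstructs the known samples far better than at initialization; in the actual attack, the same decoder is then applied to embeddings of unseen private samples.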