Deep reinforcement learning (DRL) has been widely applied to the design of computation offloading policies in mobile edge computing (MEC) systems. This paper investigates computation offloading optimization for delay-sensitive mobile applications with interdependent tasks in MEC-enabled wireless networks. Specifically, a constrained Markov decision process (CMDP) problem is formulated with the goal of minimizing the overall energy consumption of mobile users while ensuring that mobile applications complete within their deadlines. A graph neural network (GNN) is introduced to capture the diverse interdependencies between tasks in a scalable and expressive manner. Furthermore, the original constrained problem is converted into an unconstrained one via Lagrangian relaxation and then solved by the primal-dual method: the Lagrangian multipliers are updated in the dual space by the subgradient method, while the policy network parameters are updated in the primal space by the policy gradient method. Finally, a task-level offloading algorithm is proposed, and simulation results demonstrate that it significantly outperforms the baselines: it keeps application timeouts within the constraint at the cost of only a slight increase in energy consumption.
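The primal-dual scheme summarized above can be illustrated on a toy convex problem. This is only a minimal sketch: the paper updates neural policy parameters with policy gradients on a CMDP, whereas here the quadratic objective (standing in for energy), the linear constraint (standing in for the deadline), and the learning rates are illustrative assumptions chosen so the saddle point is known in closed form.

```python
# Toy stand-in for the constrained problem:
#   minimize   f(theta) = theta**2        (stand-in for energy consumption)
#   subject to g(theta) = 1 - theta <= 0  (stand-in for the deadline constraint)
# Lagrangian: L(theta, lam) = f(theta) + lam * g(theta)
# The known saddle point is theta* = 1, lam* = 2.

def primal_dual(steps=5000, lr_theta=0.01, lr_lam=0.01):
    theta, lam = 0.0, 0.0
    for _ in range(steps):
        # Primal update: gradient descent on the Lagrangian w.r.t. theta
        # (the paper uses a policy gradient step here instead).
        grad_theta = 2.0 * theta - lam  # d/dtheta [theta^2 + lam*(1 - theta)]
        theta -= lr_theta * grad_theta
        # Dual update: projected subgradient ascent on the multiplier,
        # clipped at zero to keep it dual-feasible.
        lam = max(0.0, lam + lr_lam * (1.0 - theta))
    return theta, lam

theta, lam = primal_dual()
print(theta, lam)  # both approach the saddle point (1, 2)
```

The dual ascent step raises the multiplier whenever the constraint is violated, making violations progressively more expensive in the primal objective, which mirrors how the algorithm trades a slight energy increase for satisfying the deadline constraint.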