Multi-task deep learning has gradually evolved from sharing parameters across entire layers to connecting and switching between specific layers. However, because task relationships and hierarchies are only partially understood, it is difficult to place shared modules in the network sensibly, which limits model accuracy and generalization. To address this problem, we propose a multi-task learning method that combines causal learning with an adaptive sharing strategy: at the feature level, causal discovery and causal feature reflux extract the stable task features required for training, improving the model's accuracy and generalization; an adaptive learning strategy then dynamically learns the shared network modules between tasks, avoiding the inaccuracy of manual configuration. Extensive experiments on the NYU_V2 dataset validate the effectiveness of the approach.
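To make the adaptive sharing idea concrete, the sketch below shows one way a layer could learn, rather than have manually fixed, how much two tasks share. A scalar gate per task mixes a shared branch with a task-specific branch; training the gate logits lets the network decide the degree of sharing at each depth. This is a minimal, hypothetical parameterization for illustration (the class and parameter names are ours, not from the paper), written in plain Python for clarity.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

class AdaptiveSharingLayer:
    """One layer with a shared branch and per-task branches.

    A learnable scalar gate per task (a simplified stand-in for the
    paper's adaptive strategy) mixes the two branches, so optimization,
    not a manual design choice, decides how much each task shares here.
    """
    def __init__(self, dim, n_tasks):
        rand_mat = lambda: [[random.gauss(0, 0.1) for _ in range(dim)]
                            for _ in range(dim)]
        self.W_shared = rand_mat()                      # shared across tasks
        self.W_task = [rand_mat() for _ in range(n_tasks)]  # task-specific
        self.gate_logits = [0.0] * n_tasks              # sigmoid(0)=0.5: balanced start

    def forward(self, x, task):
        g = sigmoid(self.gate_logits[task])             # sharing degree in (0, 1)
        shared = matvec(self.W_shared, x)
        specific = matvec(self.W_task[task], x)
        return [g * s + (1.0 - g) * t for s, t in zip(shared, specific)]

layer = AdaptiveSharingLayer(dim=8, n_tasks=2)
x = [random.gauss(0, 1) for _ in range(8)]
y0 = layer.forward(x, task=0)   # task 0's mixed features
y1 = layer.forward(x, task=1)   # task 1's mixed features
```

In a full model, the gate logits would receive gradients alongside the weights, so a task whose loss benefits from shared features drives its gate toward the shared branch and away from it otherwise.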