Multi-task learning (MTL), a learning paradigm that leverages the relationships among different tasks to concurrently improve their prediction performance, has been widely applied in medical image processing. To tackle the inconsistent task-specific predictions of early MTL methods, recent works primarily incorporate consistency constraints into model training for specific MTL settings, an approach that is difficult to generalize to heterogeneous medical image processing tasks. Meanwhile, the synergy of different consistency objectives has not been well explored. In this paper, we address these gaps by proposing a novel multi-task consistency regularization scheme. It consists of two complementary consistency constraints: Task-level Output Consistency (TOC), which constrains different tasks to give consistent predictions, and Feature-level Representation Consistency (FRC), which enforces feature representation consistency between target and auxiliary tasks. Furthermore, we present an optimal strategy for selecting appropriate constraints for any given set of tasks. Extensive experiments on three popular benchmark datasets were conducted in both the few-shot and semi-supervised settings. The results demonstrate that our proposed multi-task consistency regularization scheme improves MTL performance by 13.7% in Dice score on average. Additionally, we show that FRC is effective under all MTL settings, achieving an average 6.3% gain in accuracy.
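The two constraints can be viewed as regularization terms added to the supervised multi-task objective. The following minimal Python sketch is illustrative only: the specific loss forms (MSE for TOC, cosine distance for FRC), the weights `lam_toc`/`lam_frc`, and all function names are assumptions for exposition, not the implementation described in this paper.

```python
import math

def toc_loss(pred_a, pred_b):
    # Task-level Output Consistency (illustrative): penalize disagreement
    # between the predictions of two tasks on the same input, here via MSE.
    return sum((a - b) ** 2 for a, b in zip(pred_a, pred_b)) / len(pred_a)

def frc_loss(feat_target, feat_aux):
    # Feature-level Representation Consistency (illustrative): encourage the
    # target and auxiliary tasks to produce aligned feature representations,
    # here via 1 - cosine similarity.
    dot = sum(t * a for t, a in zip(feat_target, feat_aux))
    norm = (math.sqrt(sum(t * t for t in feat_target))
            * math.sqrt(sum(a * a for a in feat_aux)))
    return 1.0 - dot / norm

def total_loss(task_losses, pred_a, pred_b, feat_t, feat_a,
               lam_toc=0.1, lam_frc=0.1):
    # Hypothetical combined objective: supervised task losses plus the two
    # weighted consistency terms.
    return (sum(task_losses)
            + lam_toc * toc_loss(pred_a, pred_b)
            + lam_frc * frc_loss(feat_t, feat_a))
```

Under this sketch, identical task outputs drive the TOC term to zero, and parallel feature vectors drive the FRC term to zero, so both regularizers vanish exactly when the tasks agree.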