Although deep convolutional neural networks (DCNNs) can achieve remarkable success on medical image segmentation, their performance may deteriorate significantly when confronted with test data from a new distribution. Recent studies suggest that one major cause of this issue is the strong inductive bias of DCNNs towards image styles (e.g., superficial texture), which are sensitive to change, rather than towards invariant content (e.g., object shapes). Inspired by this, we propose a novel method, named Invariant Content Synergistic Learning (ICSL), to improve the generalization ability of DCNNs on unseen data by controlling this inductive bias. Specifically, ICSL first mixes the styles of training instances to perturb the training distribution, so that more diverse domains or styles become available for training DCNNs. Then, based on the perturbed distribution, we carefully design a dual-branch invariant content synergistic learning strategy to prevent style-biased predictions and preserve the invariant content. Extensive experimental results demonstrate the superior performance of the proposed method over state-of-the-art domain generalization methods on two typical medical segmentation tasks.
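The abstract does not specify the style-mixing mechanism; a common realization of such style perturbation is to mix the channel-wise feature statistics (mean and standard deviation, which encode "style" under instance normalization) between randomly paired training instances, in the spirit of MixStyle. The sketch below is a minimal NumPy illustration under that assumption; the function name `mix_style` and all parameters are hypothetical, not from the paper.

```python
import numpy as np

def mix_style(x, alpha=0.1, eps=1e-6, rng=None):
    """Hypothetical sketch of style perturbation via statistic mixing.

    x: feature batch of shape (B, C, H, W). Each instance's channel-wise
    mean/std is treated as its "style"; content is the normalized map.
    """
    rng = rng or np.random.default_rng()
    B = x.shape[0]
    # Per-instance, per-channel style statistics.
    mu = x.mean(axis=(2, 3), keepdims=True)
    sig = np.sqrt(x.var(axis=(2, 3), keepdims=True) + eps)
    # Strip each instance's own style (instance normalization).
    x_norm = (x - mu) / sig
    # Mix statistics with those of a randomly paired instance.
    lam = rng.beta(alpha, alpha, size=(B, 1, 1, 1))
    perm = rng.permutation(B)
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    # Re-style the content with the mixed statistics.
    return x_norm * sig_mix + mu_mix
```

Applying this at shallow layers of a segmentation network perturbs only superficial style statistics while leaving spatial content (and hence the segmentation target) intact.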