Significant progress has been made in developing accurate automatic segmentation systems for various biomedical applications using convolutional neural networks (CNNs). However, these systems often lose effectiveness when faced with a domain shift caused by variations in imaging protocols. Supervised transfer learning is not an ideal remedy, because manually annotating new data for every test domain is impractical, while unsupervised domain adaptation remains a challenging problem in biomedical image analysis. In this paper, we introduce Shared Encoder and Feature Adaptation (SEFA), a new framework for unsupervised domain adaptation. SEFA targets cross-sequence adaptation, is resilient to disparities in the input data, and requires no annotations on the test domain. Specifically, our segmenter is a compact fully convolutional network designed for breast mask prediction. First, a shared encoder is designed to minimize the distribution disparity between the source and target domains. Then, a domain discriminator is trained to distinguish the feature distributions of the two domains, encouraging the encoder to produce domain-invariant features. The model is optimized on unpaired TSE/DCE sequences in an unsupervised manner, eliminating the need to label additional medical datasets. Despite being unsupervised, our approach achieves segmentation accuracy comparable to that of supervised training, as measured by Case/Global Dice score and average symmetric surface distance (ASD), approaching the supervised upper bound.
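The core mechanism described above, a shared encoder trained adversarially against a domain discriminator so that source and target features become indistinguishable, can be sketched in a few lines. Everything below is an illustrative assumption rather than the paper's actual architecture: the encoder is a single linear map instead of a fully convolutional network, the two "sequences" are toy Gaussians with a mean shift standing in for the TSE/DCE gap, and the dimensions, learning rate, and step counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for features from the two unpaired MRI sequences
# (hypothetical data; a mean shift mimics the TSE -> DCE domain gap).
Xs = rng.normal(0.0, 1.0, size=(256, 8))   # "source" (annotated) domain
Xt = rng.normal(2.0, 1.0, size=(256, 8))   # "target" (unlabeled) domain

We = rng.normal(scale=0.1, size=(8, 4))    # shared linear encoder (sketch)
wd = rng.normal(scale=0.1, size=4)         # domain discriminator weights
bd = 0.0                                   # discriminator bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gap(W):
    """L2 distance between the mean encoded features of the two domains."""
    return float(np.linalg.norm((Xs @ W).mean(0) - (Xt @ W).mean(0)))

gap_before = gap(We)
lr = 0.05
for _ in range(500):
    Fs, Ft = Xs @ We, Xt @ We                       # shared-encoder features
    ps, pt = sigmoid(Fs @ wd + bd), sigmoid(Ft @ wd + bd)

    # Discriminator step: binary cross-entropy, source labeled 1, target 0.
    gs, gt = ps - 1.0, pt                           # dL/dlogit per sample
    wd -= lr * (Fs.T @ gs + Ft.T @ gt) / 512
    bd -= lr * float(gs.sum() + gt.sum()) / 512

    # Adversarial encoder step: update the shared encoder so that *target*
    # features are scored as source-like, shrinking the feature-space gap.
    ge = pt - 1.0                                   # BCE gradient, flipped label
    We -= lr * (Xt.T @ (ge[:, None] * wd[None, :])) / 256

gap_after = gap(We)
print(gap_before, gap_after)  # adversarial training narrows the domain gap
```

The alternating updates mirror the usual adversarial scheme: the discriminator learns to separate the domains in feature space, and the encoder is pushed in the opposite direction until the discriminator can no longer tell the domains apart, at which point downstream segmentation weights learned on the source domain transfer to the target domain.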