Extractive Question Answering (EQA) is one of the fundamental problems in Natural Language Understanding. This paper addresses the problem of transferring an EQA model trained on a single (typically large) labeled dataset, called the source, to multiple new, unlabeled datasets, called the targets. Specifically, a novel single-source, multiple-target domain adaptation method is proposed for the cross-domain EQA task. The method forms a shared feature space across domains by minimizing the training loss on the source and a feature discrimination loss between source and target samples; importantly, a syntax alignment loss is also introduced to regularize sample representations from the source and target domains. Experimental results on several highly competitive EQA datasets demonstrate that the proposed method outperforms state-of-the-art models by a large margin. Extensive ablation studies are also presented to examine the impact of integrating source and target domains, analyze the contribution of each model component, and visualize the intermediate shared latent subspace.
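The overall objective sketched above combines three terms: the supervised training loss on the source, a feature discrimination loss between source and target samples, and a syntax alignment loss. A minimal sketch of such a combined objective is given below; the weighting coefficients, function names, and default values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a combined training objective for
# single-source to multiple-target domain adaptation:
#   L_total = L_source + lambda_d * L_disc + lambda_s * L_syntax
# lambda_d and lambda_s (the trade-off weights) are assumed, not
# specified in the abstract.

def total_loss(source_qa_loss: float,
               domain_disc_loss: float,
               syntax_align_loss: float,
               lambda_d: float = 0.1,
               lambda_s: float = 0.1) -> float:
    """Combine the three objectives into one scalar to minimize:
    the supervised EQA loss on the source, the source-target feature
    discrimination loss, and the syntax alignment regularizer."""
    return (source_qa_loss
            + lambda_d * domain_disc_loss
            + lambda_s * syntax_align_loss)

# Toy example with scalar loss values:
print(total_loss(2.0, 0.5, 0.3))  # 2.0 + 0.05 + 0.03 = 2.08
```

In practice each term would be computed on mini-batches and backpropagated jointly, with the two regularization weights tuned on held-out data.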