Federated learning is inherently vulnerable to poisoning attacks, since no training samples are ever released to, or inspected by, a trusted authority. Poisoning attacks have been widely investigated in the centralized learning paradigm; distributed poisoning attacks, in which multiple colluding attackers each inject malicious training samples into their own local models, can intuitively be expected to cause far greater damage in federated learning. In this paper, by implementing a real federated learning system and launching distributed poisoning attacks against it, we obtain several observations on how the numbers of poisoned training samples and attackers relate to the attack success rate. Moreover, we propose Sniper, a scheme that eliminates poisoned local models submitted by malicious participants during training. Sniper identifies benign local models by solving a maximum clique problem, and suspected (poisoned) local models are ignored during global model updating. Experimental results demonstrate the efficacy of Sniper: the attack success rate drops to around 2% even when a third of the participants are attackers.
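The clique-based filtering idea can be sketched as follows. This is a minimal illustration only, not the paper's actual implementation: the Euclidean distance metric, the fixed `threshold`, and the brute-force clique search are all assumptions made for clarity.

```python
from itertools import combinations
import math

def euclidean(u, v):
    # plain Euclidean distance between two flattened parameter vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def max_clique_benign(updates, threshold):
    """Sketch of Sniper-style filtering (illustrative, not the paper's code).

    Build a graph whose vertices are the local model updates and whose
    edges join pairs within `threshold` distance of each other, then take
    a maximum clique as the benign set. Brute force is acceptable for the
    small participant counts typical of a federated learning round.
    """
    n = len(updates)
    adj = [[euclidean(updates[i], updates[j]) <= threshold
            for j in range(n)] for i in range(n)]
    # search from largest subset size downward; first fully connected
    # subset found is a maximum clique
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(adj[i][j] for i, j in combinations(subset, 2)):
                return list(subset)
    return []

# benign updates cluster together; poisoned ones sit far away
benign = [[0.0, 0.1], [0.1, 0.0], [0.05, 0.05]]
poisoned = [[5.0, 5.0], [5.1, 4.9]]
print(max_clique_benign(benign + poisoned, threshold=1.0))  # -> [0, 1, 2]
```

Only the updates in the returned clique would then be averaged into the global model; the rest are treated as suspect and skipped for that round.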