In this paper, we address the challenge of data poisoning attacks on federated learning. We consider a particularly challenging scenario in which a single poisoning attack is coordinated across a set of clients to evade detection. In response, the federated learning server assigns a weight to each client's model update with the aim of mitigating the effect of the poisoning on the global model. To this end, we first design a trust mechanism that enables the server to assess the trustworthiness of each client based on the client's adherence to the federated learning protocol and the quality of the data it contributes. Building on this trust mechanism, we model the interactions between the attacker and the federated learning server as a max-min security game. The outcome of the game guides the server toward an optimal weight assignment over the clients' model updates that minimizes the effect of the data poisoning on the global model. Simulations on the MNIST and CIFAR-10 datasets suggest that our proposed solution decreases the success rate of the coordinated attack, as well as the false positive and false negative rates, compared with two baseline solutions.
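To make the aggregation step concrete, the following is a minimal illustrative sketch (not the paper's exact method) of a server combining client updates as a trust-weighted average; the trust scores and update vectors are hypothetical placeholders, and the game-theoretic derivation of the weights is omitted.

```python
import numpy as np

def trust_weighted_aggregate(updates, trust_scores):
    """Combine client model updates, weighting each by its trust score.

    updates: list of 1-D numpy arrays (one model update per client)
    trust_scores: list of non-negative floats (one per client)
    Returns the trust-weighted average update applied to the global model.
    """
    weights = np.asarray(trust_scores, dtype=float)
    if weights.sum() == 0:
        raise ValueError("at least one client must have positive trust")
    weights = weights / weights.sum()  # normalize so weights sum to 1
    stacked = np.stack(updates)        # shape: (num_clients, num_params)
    return weights @ stacked           # trust-weighted average

# Hypothetical example: a suspected poisoned client receives low trust,
# so its outlying update contributes little to the aggregate.
updates = [np.array([1.0, 1.0]),      # honest client
           np.array([1.0, 1.0]),      # honest client
           np.array([10.0, -10.0])]   # suspected poisoned update
trust = [1.0, 1.0, 0.1]
agg = trust_weighted_aggregate(updates, trust)
```

With these placeholder values, the poisoned update's influence is scaled down by its low trust weight, so the aggregate stays close to the honest clients' updates.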