Federated learning (FL) is an efficient, scalable, and privacy-preserving paradigm in which clients collaboratively train a machine learning or deep learning model. However, malicious clients can submit poisoned model updates to the central server without being identified, leaving FL vulnerable to backdoor attacks. In this work, we propose FLSec, a novel defence approach that mitigates backdoor attacks mounted through adversarial local model updates. FLSec relies on an original measurement, GradScore, computed from the norm of the loss gradient at the final layer of each local model. Through both analysis and experiments, we show that GradScore identifies malicious model updates efficiently and robustly. Our extensive evaluation further demonstrates that FLSec effectively mitigates three state-of-the-art backdoor attacks on the well-known MNIST, LOAN, and CIFAR-10 datasets: accuracy on the benign test set remains nearly unchanged, while accuracy on the backdoor test set drops to 0%. In addition, our experiments show that FLSec significantly outperforms existing backdoor defences against multi-round backdoor attacks.
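The abstract does not give the exact GradScore formula. As an illustration only, here is a minimal NumPy sketch of one plausible reading, in which a client's score is the L2 norm of the cross-entropy loss gradient with respect to the final (softmax) layer parameters of its local model; the function name and signature are hypothetical, not the paper's API.

```python
import numpy as np

def grad_score(final_w, final_b, feats, labels):
    """Hypothetical GradScore: L2 norm of the cross-entropy loss
    gradient w.r.t. the final (softmax) layer parameters, evaluated
    on a batch of penultimate-layer features and their labels."""
    logits = feats @ final_w + final_b              # (n, num_classes)
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    onehot = np.eye(final_w.shape[1])[labels]
    delta = (probs - onehot) / len(labels)          # dL/dlogits (mean CE)
    grad_w = feats.T @ delta                        # dL/dW
    grad_b = delta.sum(axis=0)                      # dL/db
    return np.sqrt((grad_w ** 2).sum() + (grad_b ** 2).sum())
```

Under this reading, the server would compute such a score for each submitted local model and flag updates whose scores deviate sharply from the rest; a model that fits its claimed data well yields a near-zero final-layer gradient, while a mismatched (e.g. poisoned) update yields a larger one.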