Federated Learning (FL) is a method that allows multiple entities to jointly train a machine learning model on data held in different locations. Instead of gathering private data from distributed locations at a central site, as in the conventional approach, federated learning exchanges and aggregates only the machine learning models. Each party shares a model trained locally on its private data, so the sensitive data itself never leaves the respective silos. However, the shared models may still leak sensitive information about the training data, for example through membership disclosure. Mitigating these residual privacy risks in federated learning requires additional defence techniques such as Differential Privacy (DP), which introduces calibrated noise into the training process or the model. Differential Privacy provides a mathematical definition of privacy and can be applied to machine learning via different perturbation mechanisms. This work analyses Differential Privacy in federated learning through (i) output perturbation of the trained machine learning models and (ii) a differentially-private form of stochastic gradient descent (DP-SGD). We study these two approaches in various settings and evaluate their performance in terms of model utility and achieved privacy. To quantify a model's privacy risk, we empirically measure the success rate of a membership inference attack. We observe that DP-SGD achieves a better trade-off between privacy and utility in most of the considered settings. In some settings, however, output perturbation provides a similar or better privacy-utility trade-off while also offering better communication and computational efficiency.
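
To make the two perturbation mechanisms concrete, the sketch below illustrates them for a simple logistic-regression model; it is not the paper's implementation, and the function names, model choice, and hyperparameter values (`clip_norm`, `noise_multiplier`, `sensitivity`) are illustrative assumptions. DP-SGD clips each per-example gradient and adds calibrated Gaussian noise at every update, whereas output perturbation trains non-privately and adds noise once to the released weights.

```python
# Minimal sketch of the two DP mechanisms discussed above (assumed
# hyperparameters; logistic regression chosen only for concreteness).
import numpy as np

def dp_sgd_step(w, X, y, clip_norm=1.0, noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD step: clip per-example gradients to L2 norm <= clip_norm,
    add Gaussian noise scaled to the clipping bound, average, and update."""
    rng = np.random.default_rng() if rng is None else rng
    # Per-example logistic-loss gradients: (sigmoid(x.w) - y) * x
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X            # shape (n, d)
    # Clip each example's gradient to bound its influence (sensitivity)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    per_example_grads *= np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Noise the summed gradient, then average over the batch
    noisy_sum = per_example_grads.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X)

def output_perturbation(w_trained, sensitivity, noise_multiplier=1.1, rng=None):
    """Output perturbation: train non-privately, then release the weights
    with Gaussian noise calibrated to the training procedure's sensitivity."""
    rng = np.random.default_rng() if rng is None else rng
    return w_trained + rng.normal(scale=noise_multiplier * sensitivity,
                                  size=w_trained.shape)
```

In a federated setting this difference matters for efficiency: output perturbation noises the model once before it is shared, while DP-SGD incurs per-example clipping and noise at every local update, which is one source of the communication and computational gap noted above.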