Federated learning is a machine learning paradigm in which many clients collaboratively train a shared model without disclosing their local data sets. Existing federated learning methods optimize over the same model structure, which makes parameter aggregation straightforward. However, sharing the same structure across clients and the server poses privacy risks, since parameters from one party's model can be loaded directly into another's. In this paper, we propose a heterogeneous federated learning method to preserve privacy. Each client utilizes neural architecture search to determine a distinct model from its local data and updates the server model via a federated learning framework with knowledge distillation. In addition, we develop a privacy-preserving binary low-rank matrix decomposition method (Blow), i.e., decomposing the output matrix into two low-rank binary matrices, to further ensure the secrecy of the distilled information. A simple but efficient alternating optimization method is proposed to address a key subproblem arising from the binary low-rank matrix decomposition, which is an instance of NP-hard bipartite Boolean quadratic programming. Extensive experiments on image classification tasks show that our algorithm achieves satisfactory accuracy and outperforms baseline algorithms in both privacy protection and communication efficiency.
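The abstract does not specify the Blow decomposition or the alternating optimization in detail. As an illustrative sketch only (the function name, the exhaustive candidate enumeration, and the squared-error objective are our own assumptions, not the paper's algorithm), alternating minimization for a binary low-rank factorization Y ≈ B1 B2 with B1, B2 ∈ {0, 1} might look like:

```python
import numpy as np
from itertools import product

def alt_binary_factorization(Y, r=2, iters=20, seed=0):
    """Approximate Y with B1 @ B2, where B1 (n x r) and B2 (r x m) are binary.

    Illustrative alternating scheme: each step enumerates all 2^r binary
    vectors (feasible only for small rank r) and picks the exact per-row /
    per-column minimizer, so the squared reconstruction error never increases.
    """
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    B2 = rng.integers(0, 2, size=(r, m))
    cand = np.array(list(product([0, 1], repeat=r)))   # all 2^r binary rows
    B1 = np.zeros((n, r), dtype=int)
    for _ in range(iters):
        # Update B1: for each row i, pick b minimizing ||Y[i] - b @ B2||^2.
        recon = cand @ B2                               # (2^r, m)
        errs = ((Y[:, None, :] - recon[None, :, :]) ** 2).sum(-1)  # (n, 2^r)
        B1 = cand[errs.argmin(1)]
        # Update B2: for each column j, pick b minimizing ||Y[:, j] - B1 @ b||^2.
        recon = B1 @ cand.T                             # (n, 2^r)
        errs = ((Y[:, :, None] - recon[:, None, :]) ** 2).sum(0)   # (m, 2^r)
        B2 = cand[errs.argmin(1)].T
    return B1, B2
```

Because each alternating step is an exact minimizer over one factor, the objective is monotonically non-increasing; the paper's actual subproblem (bipartite Boolean quadratic programming) would replace the brute-force enumeration used here for clarity.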