Recently, user privacy in distributed computing has received increasing attention. Matrix multiplication is one of the fundamental high-frequency operations in distributed machine learning (e.g., gradient descent, linear regression). This paper studies the batch Fully Private distributed Matrix Multiplication (FPMM) problem, in which the user aims to compute multiple desired matrix multiplications from two public libraries stored across all workers. Batch FPMM requires that colluding workers cannot learn the indices of the desired matrices in the two public libraries (i.e., the privacy constraint). In this paper, we first propose the Batch fullY privatE schemE (BYEE), which jointly encodes multiple desired matrix multiplications based on the construction of bilinear complexity. We then provide a rigorous theoretical analysis proving the privacy and decodability of BYEE. Multi-aspect analysis and extensive simulation results show that BYEE achieves significantly lower computation latency than state-of-the-art schemes that do not jointly encode multiple desired matrix multiplications, especially when the number of desired matrix multiplications and the number of colluding workers are large.
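To make the privacy constraint concrete, fully private schemes of this kind are commonly built on Shamir-style polynomial sharing over a finite field: the secret is hidden behind random polynomial coefficients so that any T colluding workers see only uniformly random shares, while enough honest responses allow decoding by interpolation. The sketch below illustrates this generic building block only; it is not the BYEE construction itself, and the field size, matrix dimensions, and evaluation points are illustrative assumptions.

```python
import numpy as np

P = 65537  # assumed prime field size, chosen small for illustration
rng = np.random.default_rng(0)

def share_matrix(A, T, xs):
    """T-private Shamir-style sharing of matrix A: worker i receives the
    evaluation at point xs[i] of f(x) = A + R_1 x + ... + R_T x^T (mod P),
    where R_1..R_T are uniformly random masking matrices."""
    coeffs = [A] + [rng.integers(0, P, A.shape) for _ in range(T)]
    return [sum(pow(int(x), k, P) * C for k, C in enumerate(coeffs)) % P
            for x in xs]

def reconstruct(shares, xs):
    """Recover the secret A = f(0) from T+1 shares by Lagrange
    interpolation at 0 (all arithmetic mod P)."""
    out = np.zeros_like(shares[0])
    for i, x_i in enumerate(xs):
        num, den = 1, 1
        for j, x_j in enumerate(xs):
            if i != j:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        lam = num * pow(den, -1, P) % P  # Lagrange coefficient at x = 0
        out = (out + lam * shares[i]) % P
    return out

# Usage: with T = 2, any 2 shares look uniformly random, yet 3 shares decode A.
A = rng.integers(0, P, (2, 3))
shares = share_matrix(A, T=2, xs=[1, 2, 3])
assert np.array_equal(reconstruct(shares, [1, 2, 3]), A)
```

In a full FPMM scheme the same idea is applied to the query/encoding polynomials rather than to a single matrix, and the batch setting amortizes the decoding work across all desired products.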