Text matching is a core task in natural language processing that measures the semantic similarity between two texts. A significant portion of online text is labeled with coarse topic categories, and these supervised topic indicators can provide prior, structured, and interpretable semantics for textual similarity modeling. However, most existing state-of-the-art neural network methods cannot benefit from such complementary topic signals. We therefore propose a novel Topic Supervision BERT-based model (TSB) for text matching. TSB provides a multi-task joint training framework with two types of topic supervision: explicit and implicit. To enforce consistent topic correspondence between texts, we introduce a supervised auxiliary learning task that incorporates explicit, pre-defined topic supervision. Furthermore, to adapt to latent topic structures so that text representations and the multiple tasks benefit each other, we integrate a topic model into the contextual text representation model BERT to mine and incorporate implicit, self-learnable topic supervision. Experimental results show that TSB supplements explicit and implicit topic information through multi-task learning, significantly improving text matching performance on two public datasets, especially on challenging short-text matching.
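The abstract does not specify the training objective, but a multi-task framework of this kind is typically trained with a weighted sum of the main matching loss and an auxiliary topic-classification loss. The sketch below illustrates that idea only; the function names and the weight `lam` are assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # mean negative log-likelihood of the gold labels
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def joint_loss(match_logits, match_labels, topic_logits, topic_labels, lam=0.5):
    # main text-matching loss plus a weighted auxiliary topic-supervision loss;
    # lam is a hypothetical trade-off hyperparameter
    return (cross_entropy(match_logits, match_labels)
            + lam * cross_entropy(topic_logits, topic_labels))

# toy usage: 2 sentence pairs, binary match labels, 3 topic classes
match_logits = np.array([[2.0, 0.1], [0.2, 1.5]])
match_labels = np.array([0, 1])
topic_logits = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
topic_labels = np.array([0, 1])
loss = joint_loss(match_logits, match_labels, topic_logits, topic_labels)
```

Setting `lam=0` recovers the plain matching objective, which makes the auxiliary topic signal easy to ablate.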