This paper analyzes the fundamental limit of semantic communications over the infinite discrete memoryless channel, for which separate source-channel coding (SSCC) is optimal. We consider the scenario of sending a semantic source consisting of an observation state and its corresponding semantic state, both of which are recovered at the receiver. To characterize the performance of this system, we adopt the semantic rate-distortion function (SRDF) to study the relationship among the minimum compression rate, the observation distortion, the semantic distortion, and the channel capacity for generally distributed semantic sources. Specifically, we propose a neural-network-based estimation method for the SRDF: first, we show that the SRDF can be rewritten as an inf-sup problem by using its dual form; then, by leveraging generative networks to learn the semantic source distribution, we solve this problem and propose a neural estimator, called NESRD, to estimate the SRDF. Finally, experimental results validate the proposed method on jointly Gaussian semantic sources and several typical datasets.
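While NESRD targets general semantic sources via generative networks, the dual (inf-sup) representation it builds on generalizes the classical parametric form of the rate-distortion function, R(D) = max over λ ≥ 0 of [min over Q_Y of E_X(-log E_{Y~Q_Y} e^(-λ d(X,Y))) - λD]. As an illustrative sketch only (not the paper's estimator; all function names here are ours), the inner minimization can be solved for a finite-alphabet source with Blahut-Arimoto, tracing one (D, R) point per slope λ:

```python
import numpy as np

def blahut_arimoto(p_x, dist, lam, n_iter=500):
    """Inner minimization of the dual form of the rate-distortion function.

    For a fixed Lagrange multiplier `lam` (the slope of the R-D curve),
    Blahut-Arimoto alternates between the optimal test channel q(y|x)
    and the output marginal q(y), converging to a point (D, R) on the curve.
    """
    n_x, n_y = dist.shape
    q_y = np.full(n_y, 1.0 / n_y)            # initial output distribution
    A = np.exp(-lam * dist)                  # e^{-lam * d(x, y)}
    for _ in range(n_iter):
        # optimal test channel for the current output marginal
        q_y_given_x = q_y * A
        q_y_given_x /= q_y_given_x.sum(axis=1, keepdims=True)
        # updated output marginal induced by the test channel
        q_y = p_x @ q_y_given_x
    D = np.sum(p_x[:, None] * q_y_given_x * dist)
    R = np.sum(p_x[:, None] * q_y_given_x *
               np.log(q_y_given_x / q_y[None, :] + 1e-300))
    return D, R

# Sanity check: uniform binary source with Hamming distortion,
# where R(D) = ln 2 - H_b(D) nats in closed form.
p_x = np.array([0.5, 0.5])
dist = np.array([[0.0, 1.0], [1.0, 0.0]])
D, R = blahut_arimoto(p_x, dist, lam=2.0)
Hb = -D * np.log(D) - (1 - D) * np.log(1 - D)
print(D, R, np.log(2) - Hb)   # R should match ln 2 - H_b(D)
```

NESRD replaces this finite-alphabet inner step with neural networks so that continuous, generally distributed semantic sources can be handled, but the sketch shows the dual structure being estimated.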