Trajectory prediction is essential to improving the safety of automated vehicles (AVs). However, most learning-based models aim only to improve trajectory prediction accuracy and are evaluated offline. When additional data arrive from a new environment, offline models must be re-trained on both the new and the old data to avoid catastrophic forgetting of previously learned knowledge. Moreover, all data from a new environment are assumed to be available simultaneously, which conflicts with the online data collection of AVs in the real world. Considering these problems, this paper rethinks the research orientation of trajectory prediction. First, a novel learning paradigm named online task-free continual learning (OTFCL) is proposed, highlighting new goals: learning efficiently from online data in new environments and avoiding catastrophic forgetting without re-training. Then, according to the goals of OTFCL, a testing methodology is designed for a comprehensive evaluation of trajectory prediction. Finally, a state-of-the-art model is evaluated by applying the proposed testing methodology to the INTERACTION dataset. Experimental results reveal the limitations of the state-of-the-art model in real-world applications, and potential solutions based on OTFCL to overcome these limitations are discussed.
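To make the OTFCL setting concrete, the following is a minimal sketch (not the paper's method or model): a learner receives a data stream one sample at a time with no task boundaries, updates immediately without any re-training pass, and uses a small replay buffer to mitigate catastrophic forgetting. The 1-D linear model, the buffer size, the learning rate, and the reservoir-sampling policy are all illustrative assumptions.

```python
import random

class OnlineReplayLearner:
    """Toy online task-free continual learner: 1-D linear model
    y = w*x + b trained by SGD with experience replay."""

    def __init__(self, buffer_size=64, lr=0.01, replay_k=4, seed=0):
        self.w, self.b = 0.0, 0.0
        self.buffer = []                    # stored (x, y) pairs
        self.buffer_size = buffer_size
        self.lr = lr
        self.replay_k = replay_k
        self.seen = 0                       # samples observed so far
        self.rng = random.Random(seed)

    def _sgd_step(self, x, y):
        err = (self.w * x + self.b) - y     # prediction error
        self.w -= self.lr * err * x         # gradient of squared loss
        self.b -= self.lr * err

    def observe(self, x, y):
        self._sgd_step(x, y)                # learn from the new sample
        k = min(self.replay_k, len(self.buffer))
        for xr, yr in self.rng.sample(self.buffer, k):
            self._sgd_step(xr, yr)          # replay stored samples
        # reservoir sampling keeps a uniform sample of the whole stream
        self.seen += 1
        if len(self.buffer) < self.buffer_size:
            self.buffer.append((x, y))
        else:
            j = self.rng.randrange(self.seen)
            if j < self.buffer_size:
                self.buffer[j] = (x, y)

# Stream from "environment A" (y = 2x), then "environment B" (y = -x);
# no boundary signal is given to the learner (task-free).
learner = OnlineReplayLearner()
stream = random.Random(1)
for _ in range(500):
    x = stream.uniform(-1.0, 1.0)
    learner.observe(x, 2.0 * x)
for _ in range(500):
    x = stream.uniform(-1.0, 1.0)
    learner.observe(x, -1.0 * x)
```

Because old samples from environment A remain in the buffer and keep being replayed, the final weight settles between the two environments' targets instead of collapsing entirely onto the most recent one, which is the forgetting-mitigation effect OTFCL evaluation is meant to measure.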