Retrieving event videos from textual descriptions is a promising research topic in the era of fast-growing data. As traffic data grows daily, intelligent traffic management systems must work alongside humans to speed up the search. We propose a multi-module system that delivers accurate results while satisfying the objectives of explainability and scalability at the same time. Our solution uses rule-based processing to consider neighboring entities related to the mentioned object, so that an event can be represented by the relationships among multiple objects. In our proposed retrieval method, we combine a modified version of the Alibaba solution with post-processing techniques from the HCMUS method in the AI City Challenge 2021 to boost the explainability of the obtained results. Because traffic data is vehicle-centric, we apply language and image modules to analyze the input data and extract both the global properties of the context and the internal attributes of the vehicle. We introduce a one-on-one dual training strategy for each representation vector to optimize the interior features for the query. Finally, a refinement module aggregates the previous results to enhance the final retrieval result. We benchmarked our approach on the data of the AI City Challenge 2022 and obtained competitive results with an MRR of 0.3611, ranking in the top 4 on 50% of the test set and in the top 5 on the full test set.
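As an illustrative sketch only (not the authors' implementation, whose modules and training strategy are described in the paper body), cross-modal retrieval of this kind typically ranks candidate vehicle tracks by the similarity between a text-query embedding and visual embeddings. The embeddings and dimensions below are hypothetical:

```python
import numpy as np

def rank_by_similarity(query_emb, track_embs):
    """Rank candidate tracks by cosine similarity to the text query.

    query_emb:  (d,) embedding of the natural-language query
    track_embs: (n, d) embeddings of candidate vehicle tracks
    Returns track indices sorted from best to worst match.
    """
    q = query_emb / np.linalg.norm(query_emb)
    t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    sims = t @ q                # cosine similarities, shape (n,)
    return np.argsort(-sims)    # indices in descending similarity

# Toy example with hypothetical 4-dimensional embeddings
query = np.array([1.0, 0.0, 0.0, 0.0])
tracks = np.array([
    [0.9, 0.1, 0.0, 0.0],  # closely aligned with the query
    [0.0, 1.0, 0.0, 0.0],  # orthogonal to the query
    [0.5, 0.5, 0.0, 0.0],  # partially aligned
])
print(rank_by_similarity(query, tracks).tolist())  # → [0, 2, 1]
```

The reciprocal of the rank at which the ground-truth track appears, averaged over queries, gives the MRR metric the challenge reports.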