This paper proposes a non-model-based prediction and replanning algorithm for vehicle interactions in unstructured environments, adjusting and optimizing conventional model-based adaptable prediction and event-triggered replanning. The project optimizes the model-based approach in three respects. First, the adaptable prediction model is replaced with a trained neural network to improve prediction performance. Second, the event-triggered replanning algorithm is trained as a reinforcement learning system, so that the ego vehicle is expected to activate safe control fewer times and to construct a smoother path. Third, parallel computing and GPU acceleration are applied during training to improve training efficiency. All obtained results are analyzed and compared with the model-based results, and the limitations of each model are also discussed.
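To make the event-triggered replanning idea concrete, the following is a minimal sketch, not the paper's implementation: the learned neural predictor is stood in for by a constant-velocity rollout, and the state layout and deviation threshold are illustrative assumptions.

```python
import numpy as np

def predict_trajectory(state, horizon=10, dt=0.1):
    """Stand-in for the learned predictor: a constant-velocity rollout
    over the horizon (the real model would be a trained neural network)."""
    x, y, vx, vy = state
    t = np.arange(1, horizon + 1) * dt
    # Shape (horizon, 2): predicted (x, y) positions at each future step.
    return np.stack([x + vx * t, y + vy * t], axis=1)

def replanning_triggered(predicted_pos, observed_pos, threshold=0.5):
    """Event trigger: request a new plan only when the observed position
    deviates from the prediction by more than the threshold (meters)."""
    return float(np.linalg.norm(predicted_pos - observed_pos)) > threshold

# Example: predict from an initial state, then compare against observations.
state = np.array([0.0, 0.0, 1.0, 0.0])       # x, y, vx, vy (assumed layout)
pred = predict_trajectory(state)

on_track = pred[0] + np.array([0.05, 0.0])   # small deviation: keep the plan
cut_in   = pred[0] + np.array([0.0, 2.0])    # large deviation: replan

print(replanning_triggered(pred[0], on_track))  # False
print(replanning_triggered(pred[0], cut_in))    # True
```

Because replanning is invoked only on trigger events rather than at every time step, the planner runs less often; the reinforcement-learning objective described above aims to tune this triggering so safe control activates less frequently and the resulting path is smoother.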