Object detection based on LiDAR and camera fusion has proven successful in deep learning research in recent years. However, LiDAR is affected by sensor noise in severe weather conditions such as rain, snow, and fog, and camera-based detection quality degrades under poor lighting conditions. Considering that radar remains relatively reliable in most harsh environments, we propose an attention-based deep learning framework for camera and radar fusion, which significantly improves object detection accuracy in severe weather conditions. Specifically, our framework is composed of three parts: 1) a radar branch that predicts object locations in the radar plane, 2) a projected attention module, and 3) a camera branch trained on the attention-fused features produced by the projected attention module. In contrast to previous approaches, our approach directs learning attention to object locations derived from radar data projections, mitigating the effect of image blur in rainy and foggy weather conditions. We evaluate our model on the RADIATE dataset [1], focusing specifically on rainy days when the camera image is obscured by raindrops. The experimental results demonstrate that our proposed method achieves significantly improved performance in rainy weather compared to other methods.
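To make the fusion idea concrete, below is a minimal sketch of a projected-attention fusion block, assuming a PyTorch-style implementation; the class and layer names (ProjectedAttentionFusion, attn, radar_feat_proj) are illustrative assumptions and not taken from the paper. It shows one plausible way to turn radar features, projected onto the camera feature-map grid, into a spatial attention map that re-weights the camera features toward likely object locations.

```python
# Illustrative sketch only: names and layer choices are assumptions, not the
# paper's exact architecture.
import torch
import torch.nn as nn


class ProjectedAttentionFusion(nn.Module):
    """Fuses camera features with radar features projected onto the image plane."""

    def __init__(self, cam_channels: int, radar_channels: int):
        super().__init__()
        # 1x1 convolutions map projected radar features to a single-channel
        # spatial attention map in [0, 1].
        self.attn = nn.Sequential(
            nn.Conv2d(radar_channels, cam_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(cam_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, cam_feat: torch.Tensor, radar_feat_proj: torch.Tensor) -> torch.Tensor:
        # radar_feat_proj: radar-branch features already projected/resampled to
        # the camera feature-map grid, shape (B, C_r, H, W).
        attention = self.attn(radar_feat_proj)  # (B, 1, H, W)
        # Residual re-weighting keeps the original camera features and adds a
        # radar-guided emphasis on object regions.
        return cam_feat * (1.0 + attention)


if __name__ == "__main__":
    fusion = ProjectedAttentionFusion(cam_channels=256, radar_channels=64)
    cam = torch.randn(2, 256, 40, 40)
    radar = torch.randn(2, 64, 40, 40)
    print(fusion(cam, radar).shape)  # torch.Size([2, 256, 40, 40])
```

The residual form (multiplying by 1 + attention rather than attention alone) is one common design choice that prevents the radar cue from suppressing camera evidence entirely when radar returns are sparse.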