A CT-based attenuation map (μmap) is used for attenuation correction in PET/CT studies. However, motion-induced mismatch between PET and CT, as well as artifacts in the CT itself, can produce artifacts in the PET image. Synthesizing the μmap from PET data using AI has therefore drawn increasing attention. Compared with using MLAA or non-attenuation-corrected (NonAC) images as the neural network input, histo-images obtained from state-of-the-art TOF PET systems, e.g., the uMI Panorama (sub-200 ps TOF), provide sufficient structural information with 18F-FDG and are fast to compute. In this study, we propose a deep neural network that takes angular-view-grouped histo-images as input to synthesize the μmap. A four-channel angularly grouped histo-image and a one-channel (full-view-angle) histo-image were used as inputs independently. Depending on whether the line-integral projection (LIP) loss direction is aligned with the mean angle of the grouped LORs, two variants of the angular-view-grouped histo-images, LIP-matched and LIP-mismatched, were investigated. In addition, a network taking the NonAC image as input was trained as a reference. Quantitative evaluation and visual inspection demonstrate that, by preserving directional information, the network using the four-channel LIP-matched histo-images yielded more accurate and robust synthesized μmaps than the one-channel (full-view-angle) histo-image, and also led to lower reconstruction bias with respect to the reference reconstruction using the CT μmap. In contrast, the mismatch between the LIP loss and the histo-image view angles (the four-channel LIP-mismatched network) resulted in the worst prediction and reconstruction performance. Moreover, the four-channel LIP-matched histo-images acquired with ~200 ps TOF resolution showed no significant difference compared with the NonAC method. These results demonstrate that the proposed method is a fast and capable approach to producing high-quality μmaps for subsequent PET image reconstruction.
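The angular grouping step described above can be illustrated with a minimal sketch. Assuming the histo-image is stored as a stack of per-view-angle images (shapes, array names, and the contiguous-wedge grouping are illustrative assumptions, not details from the study), the four input channels can be formed by summing contiguous wedges of view angles:

```python
import numpy as np

def group_histo_views(histo, n_groups=4):
    """Group per-view-angle histo-images into angular channels.

    histo: array of shape (n_views, H, W), one image per LOR view angle
           (hypothetical layout for illustration).
    Returns an array of shape (n_groups, H, W), where each channel is the
    sum over one contiguous wedge of view angles.
    """
    n_views = histo.shape[0]
    # Split the view-angle indices into n_groups contiguous wedges;
    # each wedge has a well-defined mean angle that the LIP loss
    # direction can be matched to.
    wedges = np.array_split(np.arange(n_views), n_groups)
    return np.stack([histo[idx].sum(axis=0) for idx in wedges])

# Example: 120 view angles of 64x64 histo-images -> 4-channel network input.
histo = np.random.rand(120, 64, 64)
channels = group_histo_views(histo)          # shape (4, 64, 64)
# Summing all channels recovers the one-channel (full-view-angle) histo-image.
full_view = channels.sum(axis=0)
```

This makes explicit why the four-channel input retains directional information that the one-channel input discards: each channel corresponds to a distinct wedge of LOR directions, whereas the full-view image averages them all together.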