Most current neural networks for reconstructing surfaces from point clouds ignore sensor poses and operate only on point locations. Sensor visibility, however, carries meaningful information about space occupancy and surface orientation. In this paper, we present two simple ways to augment point clouds with visibility information, so that it can be directly leveraged by surface reconstruction networks with minimal adaptation. Our proposed modifications consistently improve the accuracy of the generated surfaces as well as the generalization of the networks to unseen domains. Our code, data and pretrained models are available online: https://github.com/raphaelsulzer/dsrv-data.
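To make the idea concrete, one natural form of visibility augmentation is to attach to each point the unit direction toward the sensor that observed it. The sketch below is illustrative only, not the paper's exact encoding; the function name `augment_with_visibility` and the `[x, y, z, dx, dy, dz]` layout are assumptions for demonstration.

```python
import numpy as np

def augment_with_visibility(points, sensor_positions):
    """Concatenate each point with the unit vector pointing from the
    point toward the sensor that observed it.

    NOTE: this is a hypothetical encoding for illustration, not the
    paper's exact feature layout.

    points:           (N, 3) array of 3D point locations
    sensor_positions: (N, 3) array, sensor position per point
    Returns an (N, 6) array with rows [x, y, z, dx, dy, dz].
    """
    rays = sensor_positions - points                  # point-to-sensor rays
    norms = np.linalg.norm(rays, axis=1, keepdims=True)
    directions = rays / np.clip(norms, 1e-8, None)    # normalized view directions
    return np.concatenate([points, directions], axis=1)
```

Because the augmented input is still just a per-point feature vector, an existing point-based reconstruction network only needs its input channel count changed from 3 to 6 to consume it, which matches the "minimal adaptation" claim above.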