Gaussian boson sampling is a promising candidate for demonstrating experimental quantum advantage. While there is evidence that noiseless Gaussian boson sampling is hard to simulate efficiently on a classical computer, current Gaussian boson sampling experiments inevitably suffer from photon loss and other noise. Despite the high photon loss rate and the presence of noise, these experiments are currently claimed to be hard to simulate classically with the best-known classical algorithms. In this work, we present a classical tensor-network algorithm that simulates Gaussian boson sampling and whose complexity can be significantly reduced when the photon loss rate is high. By generalizing the existing thermal-state approximation algorithm for lossy Gaussian boson sampling, the proposed algorithm achieves increasing accuracy as its running time grows, in contrast to the thermal-state sampler, which attains only a fixed accuracy. This generalization enables us to simulate the largest-scale Gaussian boson sampling experiments so far using relatively modest computational resources, even though the output states of these experiments are not believed to be close to thermal states. By demonstrating that our classical algorithm outperforms the large-scale experiments on the benchmarks used as evidence for quantum advantage, we provide evidence that our classical sampler reproduces the ground-truth distribution better than the experiments do, which disputes the experimental quantum advantage claims.
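The following is a minimal, hypothetical sketch, not the paper's implementation, of the generic tensor-network trade-off the abstract alludes to: compressing a state into a matrix-product form whose bond dimension `chi` (an assumed illustrative parameter) controls both runtime and accuracy, so that spending more time (larger `chi`) yields a smaller truncation error.

```python
# Hypothetical illustration only: sequential SVD truncation of a state vector
# into matrix-product-state (MPS) tensors. Larger bond dimension chi costs
# more runtime but discards less weight (i.e., gives higher accuracy).
import numpy as np

def mps_truncate(psi, dims, chi):
    """Compress state vector `psi` over modes with local dimensions `dims`
    into MPS tensors with bond dimension at most `chi`.
    Returns the tensors and the total discarded (squared) singular weight."""
    tensors = []
    discarded = 0.0
    remainder = psi.reshape(1, -1)  # (left bond) x (remaining modes)
    for d in dims[:-1]:
        left, rest = remainder.shape[0], remainder.shape[1] // d
        mat = remainder.reshape(left * d, rest)
        u, s, vh = np.linalg.svd(mat, full_matrices=False)
        keep = min(chi, len(s))
        discarded += float(np.sum(s[keep:] ** 2))      # truncation error
        tensors.append(u[:, :keep].reshape(left, d, keep))
        remainder = s[:keep, None] * vh[:keep, :]
    tensors.append(remainder.reshape(remainder.shape[0], dims[-1], 1))
    return tensors, discarded

# Toy usage: a random 6-mode state with local dimension 3.
rng = np.random.default_rng(0)
dims = [3] * 6
psi = rng.normal(size=int(np.prod(dims)))
psi /= np.linalg.norm(psi)
for chi in (2, 4, 8, 16):
    _, err = mps_truncate(psi, dims, chi)
    print(f"chi={chi:2d}  discarded weight={err:.3e}")  # typically shrinks as chi grows
```

This only illustrates the general accuracy-versus-runtime mechanism of tensor-network truncation; the paper's algorithm for lossy Gaussian boson sampling is more involved and is described in the main text.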
Comment: 23 pages, 13 figures