In this paper, we propose a method for 3D reconstruction of indoor scenes that removes the traditional reliance on known intrinsic and extrinsic camera parameters. Conventional methods achieve impressive reconstructions by splitting the task into two phases, depth estimation and depth fusion, but this pipeline usually depends on prior knowledge, in particular on precise intrinsic and extrinsic camera parameters. To remove this dependency, we employ a classification model together with a Gaussian Fourier feature mapping to predict the camera's intrinsic and extrinsic parameters. In the Truncated Signed Distance Function (TSDF) fusion stage, we exploit depth information through cascade operations and cast TSDF fusion in an end-to-end form, which reduces both noise and processing complexity. These innovations allow our method to achieve accurate 3D reconstruction despite the absence of prior camera knowledge. Experiments on the ScanNetv2 and 7-Scenes datasets show that our method achieves results comparable to or better than the current state of the art in 3D reconstruction. This confirms the effectiveness of the method in both theoretical and practical settings, and opens a new avenue for future research on 3D reconstruction under unknown camera parameters.
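To make the Gaussian Fourier mapping mentioned above concrete, the sketch below shows the standard form of such a mapping: inputs are projected through a random Gaussian matrix and embedded with sines and cosines. The function name, feature count, and scale `sigma` are illustrative assumptions, not values or code from this paper.

```python
import numpy as np

def gaussian_fourier_features(v, num_features=256, sigma=10.0, seed=0):
    """Map low-dimensional inputs v of shape (N, d) to Fourier features
    of shape (N, 2 * num_features). Hyperparameters are illustrative."""
    rng = np.random.default_rng(seed)
    # Random projection matrix B with entries drawn from N(0, sigma^2).
    B = rng.normal(0.0, sigma, size=(v.shape[1], num_features))
    proj = 2.0 * np.pi * v @ B
    # Concatenate cosine and sine embeddings along the feature axis.
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)

# Example: embed four 2-D inputs into a 512-D feature space.
feats = gaussian_fourier_features(np.random.rand(4, 2))
print(feats.shape)  # (4, 512)
```

Such a mapping lets a downstream network (here, the camera-parameter predictor) fit higher-frequency functions of its low-dimensional inputs than a raw coordinate input would allow.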