Cameras and global navigation satellite system (GNSS)/inertial measurement unit (IMU) sensors are widely used in autonomous vehicles and robots due to their light weight and low cost; in such systems, accurate extrinsic calibration is essential for sensor fusion. Traditionally, sophisticated calibration procedures require either specific vehicle maneuvers or scenes with known fiducial markers. However, the former struggles to eliminate the drift introduced by the techniques used to estimate each sensor's motion, while the latter is poorly automated and demands specialist human expertise. To tackle these problems, this article proposes a target-free stereo camera-GNSS/IMU self-calibration method based on iterative refinement. Initial calibration parameters are computed from relative pose constraints derived from visual odometry. To eliminate the impact of visual odometry drift, the parameters are further refined by iteratively applying paired relative pose constraints and absolute pose constraints, the latter derived from scan-to-global-map matching. Quantitative calibration results on simulated data demonstrate the robustness and accuracy of the approach, with root mean squared errors (RMSEs) on the order of $10^{-3}$ deg for rotation and $10^{-3}$ m for translation. Further real-world experiments confirm the superiority of our method in natural environments.
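The initialization step described above, recovering the camera-to-IMU extrinsic rotation from paired relative poses, is an instance of the classical hand-eye problem $AX = XB$. The abstract does not specify the solver used, so the sketch below shows one common closed-form approach on synthetic data: each constraint pair is turned into a linear system on the extrinsic quaternion, solved by an eigendecomposition. All function names and the synthetic-data setup are illustrative, not taken from the paper.

```python
import numpy as np

def quat_to_rot(q):
    # Unit quaternion (w, x, y, z) -> 3x3 rotation matrix.
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def rot_to_quat(R):
    # Rotation matrix -> unit quaternion (w, x, y, z); assumes the
    # rotation angle is well below pi so that w stays away from zero.
    w = 0.5 * np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2]))
    return np.array([w,
                     (R[2, 1] - R[1, 2]) / (4 * w),
                     (R[0, 2] - R[2, 0]) / (4 * w),
                     (R[1, 0] - R[0, 1]) / (4 * w)])

def left_mat(q):
    # L(q) such that L(q) @ p equals the quaternion product q * p.
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def right_mat(q):
    # R(q) such that R(q) @ p equals the quaternion product p * q.
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def solve_handeye_rotation(rel_a, rel_b):
    # Solve A_i X = X B_i for the rotation X: each pair gives the linear
    # constraint (L(q_a) - R(q_b)) q_x = 0; stack the normal equations and
    # take the eigenvector of the smallest eigenvalue.
    M = np.zeros((4, 4))
    for Ra, Rb in zip(rel_a, rel_b):
        C = left_mat(rot_to_quat(Ra)) - right_mat(rot_to_quat(Rb))
        M += C.T @ C
    _, V = np.linalg.eigh(M)
    q = V[:, 0]
    return quat_to_rot(q / np.linalg.norm(q))

def rand_rot(rng):
    # Random rotation (Rodrigues formula), angle limited to 1 rad.
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = rng.uniform(0.1, 1.0)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

# Synthetic check: relative camera rotations are conjugates of relative
# IMU rotations by the true (hidden) extrinsic rotation.
rng = np.random.default_rng(0)
X_true = rand_rot(rng)
rel_imu = [rand_rot(rng) for _ in range(10)]
rel_cam = [X_true @ Rb @ X_true.T for Rb in rel_imu]
X_est = solve_handeye_rotation(rel_cam, rel_imu)
```

With noise-free synthetic constraints `X_est` matches `X_true` to numerical precision; in practice the visual odometry drift the paper targets perturbs `rel_cam`, which is why the estimate is only an initialization to be refined with absolute pose constraints.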