This paper proposes a novel expensive global optimization method, namely Stacked Ensemble of Metamodels for Expensive Global Optimization (SEMGO), which aims to improve the accuracy and robustness of the surrogate. Because existing metamodel ensemble methods rely on fixed linear weighting strategies, they are prone to bias when facing diverse problems. SEMGO instead employs a learning-based second-layer model to combine the predictions of the first-layer metamodels adaptively. The proposed SEMGO is compared with three state-of-the-art metamodel ensemble methods on seventeen widely used benchmark problems, and the experimental results show that SEMGO performs the best. In addition, the proposed method is applied to a practical chip packaging problem, where it improves on the previous optimization result by a large margin.
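The two-layer idea can be illustrated with a minimal sketch. This is not the paper's actual implementation: the choice of first-layer metamodels (an RBF interpolant and a polynomial fit), the least-squares second-layer combiner, and all function names are illustrative assumptions standing in for the learning-based combiner the abstract describes.

```python
import numpy as np

def rbf_surrogate(X_train, y_train, gamma=1.0):
    # First-layer metamodel 1: radial-basis-function interpolant (1-D inputs).
    K = np.exp(-gamma * (X_train[:, None] - X_train[None, :]) ** 2)
    w = np.linalg.solve(K + 1e-8 * np.eye(len(X_train)), y_train)
    return lambda X: np.exp(-gamma * (X[:, None] - X_train[None, :]) ** 2) @ w

def poly_surrogate(X_train, y_train, degree=2):
    # First-layer metamodel 2: low-order polynomial regression.
    coeffs = np.polyfit(X_train, y_train, degree)
    return lambda X: np.polyval(coeffs, X)

def stacked_ensemble(X_train, y_train):
    # Second layer: learn a combiner over the first-layer predictions.
    # A least-squares linear combiner is used here as a stand-in; a real
    # stacking setup would fit it on held-out (cross-validated) predictions.
    models = [rbf_surrogate(X_train, y_train), poly_surrogate(X_train, y_train)]
    Z = np.column_stack([m(X_train) for m in models])
    beta, *_ = np.linalg.lstsq(Z, y_train, rcond=None)
    return lambda X: np.column_stack([m(X) for m in models]) @ beta

# Toy 1-D demonstration on f(x) = sin(3x) + 0.5x.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-2.0, 2.0, 20))
y = np.sin(3 * X) + 0.5 * X
ensemble = stacked_ensemble(X, y)
print(float(np.max(np.abs(ensemble(X) - y))))  # small training residual
```

The combiner's weights depend on how well each metamodel fits the data at hand, which is the adaptive behavior that a fixed linear weighting scheme cannot provide.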