Hypervolume subset selection (HSS) has been an active research topic in the evolutionary multi-objective optimization (EMO) community since hypervolume is the most widely used performance indicator. Most HSS algorithms in the literature were designed for small-scale HSS (e.g., environmental selection: selecting $N$ solutions from $2N$ candidates, where $N$ is the population size). Only a few studies have addressed large-scale HSS as a post-processing procedure in an unbounded external archive framework (i.e., subset selection from all examined solutions). In this paper, we propose a two-stage lazy greedy inclusion HSS (TGI-HSS) algorithm for large-scale HSS. In the first stage of TGI-HSS, a small solution set is selected from a large-scale candidate set using an efficient subset selection method that does not rely on exact hypervolume calculation. In the second stage, the final subset is selected from this small solution set using an existing efficient HSS algorithm. Experimental results show that, compared with other state-of-the-art HSS algorithms, the proposed algorithm significantly reduces computational time at the cost of only a small deterioration in the quality of the selected subset.
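The two-stage idea can be sketched in a minimal 2D-minimization form. Note the assumptions: the max-min-distance pre-selection used for stage 1 below is a hypothetical stand-in (the abstract only says the first stage avoids exact hypervolume calculation), and `greedy_hss` is a plain greedy inclusion without the lazy-evaluation speed-up; the function names are illustrative, not from the paper.

```python
import math

def hv_2d(points, ref):
    """Exact 2D hypervolume (minimization) of `points` w.r.t. reference point `ref`."""
    # Keep only points that strictly dominate the reference point.
    pts = [p for p in points if p[0] < ref[0] and p[1] < ref[1]]
    pts.sort()  # by f1 ascending, then f2 ascending
    # Keep only nondominated points: f2 must strictly decrease along sorted f1.
    front, best_f2 = [], math.inf
    for p in pts:
        if p[1] < best_f2:
            front.append(p)
            best_f2 = p[1]
    # Sum the axis-aligned slabs between consecutive front points and `ref`.
    hv = 0.0
    for i, (f1, f2) in enumerate(front):
        next_f1 = front[i + 1][0] if i + 1 < len(front) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)
    return hv

def maxmin_preselect(points, m):
    """Stage 1 stand-in: cheap greedy max-min-distance pre-selection
    (no hypervolume computation; NOT the paper's actual first-stage method)."""
    chosen = [min(points)]
    rest = [p for p in points if p != chosen[0]]
    while len(chosen) < m and rest:
        p = max(rest, key=lambda q: min(math.dist(q, c) for c in chosen))
        chosen.append(p)
        rest.remove(p)
    return chosen

def greedy_hss(points, k, ref):
    """Stage 2: greedy inclusion by exact hypervolume contribution
    (plain greedy; the paper's lazy-evaluation trick is omitted here)."""
    subset, rest = [], list(points)
    while len(subset) < k and rest:
        p = max(rest, key=lambda q: hv_2d(subset + [q], ref))
        subset.append(p)
        rest.remove(p)
    return subset

# Two-stage usage: shrink a large candidate set first, then run exact greedy HSS.
candidates = [(i / 100, 1 - i / 100) for i in range(101)]
small_set = maxmin_preselect(candidates, 20)        # stage 1: cheap reduction
final_subset = greedy_hss(small_set, 5, (1.1, 1.1))  # stage 2: exact greedy HSS
```

The point of the split is that the expensive hypervolume-based selection in stage 2 runs on 20 points instead of 101, which is where the reported time savings would come from in this sketch.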