The recent California Consumer Privacy Act (CCPA) requires that the collection of personal data be limited to what is necessary for business purposes. Businesses shall “implement technical safeguards that prohibit re-identification of the consumer to whom the information may pertain”. For recommender systems, we argue that the legal concepts of limitation and technical safeguard are not specific enough to operationalize in practice. This study maps these legislative requirements to the practical task of reducing personal data. More importantly, we borrow the notion of uncertainty from the machine learning community and treat it as a second aspect of recommendation utility, alongside recommendation accuracy, to guide the data reduction process. Incorporating uncertainty allows a more comprehensive assessment of utility while personal data is being reduced. Furthermore, the two major types of uncertainty in machine learning models, aleatoric and epistemic, motivate two groups of data reduction strategies: within-user and between-user. We conduct a series of analyses of the uncertainty change and accuracy loss caused by different data reduction strategies. We find that, at the aggregate level, data reduction is feasible under certain strategies. At the individual level, the loss of recommendation utility (both uncertainty and accuracy) incurred by data reduction disparately impacts different users, a finding with implications for the fairness and transparency of AI models. Our results reveal the difficulty and intricacy of the data reduction problem in the context of recommender systems.