Distributed data processing by clients that report to a central server is an important component of contemporary discovery systems, e.g., federated learning. Although such client-side processing is generally considered privacy-enhancing, the client reports may still reveal attributes of the client to an adversary. We examine optimal randomization methods for obfuscating reports so as to preserve the privacy of client attributes while maintaining utility at the central server. Using total variation to bound the adversary's ability to break privacy, we examine in detail how attribute information may leak in the federated learning scenario. We demonstrate the difficulty of enforcing zero leakage of attribute values without significant utility loss, and then consider optimal obfuscation under a bounded privacy-leakage constraint. Numerical results demonstrate the privacy-utility trade-off and validate the utility approximations employed.