Processing-in-Memory (PIM) devices integrated into general-purpose systems require virtual memory support. With it, these devices can be seamlessly coupled to the software stack while preserving the compatibility and security provided by OS-managed address translation, without disruptive programming efforts. PIM typically accesses large volumes of data via vector operations, and can therefore suffer severe penalties from the high cost of misses in the Translation Lookaside Buffer (TLB). Our study demonstrates how critical such penalties are to system performance, and shows that PIM must resort to large page sizes. The presented results exploit the native large pages available on the host, and they show substantial performance improvements ($84\times$) for wide-vector PIM operations with large pages.
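To make the TLB pressure concrete, a minimal back-of-envelope sketch of how page size determines the number of TLB entries a vector working set spans (the 1 GiB working set and the x86-64 page sizes are illustrative assumptions, not figures from the study):

```python
# Illustrative arithmetic: TLB entries needed to cover a PIM working set.
WORKING_SET = 1 << 30   # 1 GiB of vector data (assumed, for illustration)
PAGE_4K = 4 << 10       # x86-64 base page size
PAGE_2M = 2 << 20       # x86-64 large ("huge") page size

def pages_needed(working_set: int, page_size: int) -> int:
    """Distinct pages (hence TLB entries) the working set touches."""
    return (working_set + page_size - 1) // page_size

small = pages_needed(WORKING_SET, PAGE_4K)   # 262144 entries with 4 KiB pages
large = pages_needed(WORKING_SET, PAGE_2M)   # 512 entries with 2 MiB pages
print(small, large, small // large)          # 512x fewer entries with large pages
```

Since hardware TLBs hold at most a few thousand entries, a wide-vector sweep over such a working set with base pages misses almost continuously, whereas large pages shrink the translation footprint by the page-size ratio.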