Digital Processing-in-Memory (PIM) architectures have recently demonstrated significant potential for Deep Neural Network (DNN) acceleration, not only by alleviating the memory-wall bottleneck but also by delivering substantial performance improvements over conventional von Neumann architectures. A variety of DNN ASIC accelerators have likewise been developed and fabricated, achieving remarkable performance and energy efficiency. This paper conducts a comparative study of PIM and Gemmini-generated accelerators for low-bit-width DNN inference and underscores their key architectural constraints, opportunities, and security challenges. To this end, we compare multiple low-power accelerators against our recently taped-out PIM macro to provide a guideline for the community.