The various forms of Deep Neural Networks (DNNs), such as CNNs, RNNs, and GNNs, find applications in many modern devices. Two primitive operations, multiply and accumulate (MAC), dominate the computation in neural network workloads. Existing computation engines such as CPUs and GPUs fail to deliver the desired performance and energy efficiency for these AI workloads. This opens an opportunity for domain-specific architectures, giving rise to a new class of computation engines: Neural Network Accelerators. Processing-In-Memory (PIM) is an in-situ analog computing architecture that accelerates MAC operations by mitigating the memory wall. However, the fabrication process limits the size of PIM circuits, making large monolithic ICs commercially impractical. The solution comes in the form of 2.5D integration, also known as Chiplet-based architecture. In this paper, we study a Chiplet-based PIM accelerator and identify several of its architectural parameters that impact the accelerator's area, its energy consumption, and the running time of different networks on the device. We vary these parameters and compare their effects on the accelerator when deploying various DNN models. We conduct the study using a simulator that models Chiplet-based in-memory-computing accelerators.