After the COVID-induced lockdowns, augmented/virtual reality evolved from a leisure technology into a widely desired reality. Real-time 3D audio is a crucial enabler for these technologies. Nevertheless, existing systems offering object spatialization in 3D audio fall into two limited categories: they either require long-running pre-renders or demand powerful computing platforms. Furthermore, they mainly focus on active audio sources, whereas humans also rely on the sound's interactions with passive obstructions to sense their environment. We propose a hardware co-processor for real-time 3D audio spatialization that supports passive obstructions. Our solution achieves latency comparable to that of workstations while consuming a tenth of the power, making it suitable for embedded applications.