Automatic object recoloring, or swatch generation, aims to change the color of an object or an entire scene without disturbing structural consistency. The challenge is exacerbated for retail items such as clothing, shoes, and accessories, owing to complex patterns, folds and shadows, deformation caused by the human model, and small components such as frills, buttons, and belts. It grows further in multi-colored item and multi-apparel settings. In this work, we address these problems with SwatchNet, a novel architecture based on a generative adversarial network (GAN). At its core is the proposed apparel-component-aware feature extraction module, which creates rich feature embeddings that guide the proposed dual-attention U-Net in synthesizing the recolored product image. For seamless information flow, we also propose a dual attention module at the bottleneck between the encoder and decoder. Finally, we unify a diverse set of recoloring applications under a single training and inference pipeline. Experimental results on fashion-item recoloring across several test setups and four large-scale datasets demonstrate the effectiveness of the proposed approach.
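The abstract does not specify the internals of the bottleneck dual attention module. As a minimal illustrative sketch only, the following NumPy code shows one common realization of dual attention, a channel-wise gate followed by a spatial gate applied to a feature map; the function names, the learned projection matrix `w`, and the sequential ordering are all assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w):
    # feat: (C, H, W); w: (C, C) hypothetical learned projection.
    pooled = feat.mean(axis=(1, 2))            # global average pool -> (C,)
    weights = sigmoid(w @ pooled)              # per-channel gate in (0, 1)
    return feat * weights[:, None, None]       # rescale each channel

def spatial_attention(feat):
    # Collapse channels to a single map and gate every spatial location.
    smap = sigmoid(feat.mean(axis=0, keepdims=True))  # (1, H, W)
    return feat * smap

def dual_attention(feat, w):
    # Channel-then-spatial gating, one plausible "dual attention" design.
    return spatial_attention(channel_attention(feat, w))

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))   # toy bottleneck feature map
w = np.eye(8)                           # placeholder for a learned matrix
out = dual_attention(feat, w)
print(out.shape)
```

Because both gates lie in (0, 1), the module can only attenuate activations; a trained network would learn `w` (and typically a spatial projection as well) so that informative channels and regions are attenuated least.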