This work presents an approach to enhance the quality of high-resolution images obtained with systems relying on synthetic aperture radar (SAR). For this purpose, a deep learning method called conditional generative adversarial networks (cGANs) is applied to the imager output when it is prone to artifacts. This is especially the case for novel systems that push the limits of SAR (e.g., irregular sampling and multilayered media), resulting in highly chaotic clutter and image artifacts that cannot be easily removed with conventional approaches. The cGAN can be trained to detect high-level characteristic features in the image (e.g., parts of a scissor blade) so that a new output can be tailored from these detected features. In other words, it can translate features contaminated by artifacts into clean features, effectively improving the quality of SAR images. Unlike other deep learning approaches, the training of the involved neural networks tends to be stable thanks to the structure based on two competing subsystems. The proposed approach is illustrated using simulated and measured data in the context of two advanced near-field SAR systems considering: 1) cylindrical multilayered media and 2) freehand acquisitions. Results show that cGANs clearly outperform conventional approaches, removing most of the artifacts and producing a clean output image.
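The two competing subsystems mentioned in the abstract (a generator that translates artifact-contaminated images into clean ones, and a discriminator that judges input/output pairs) can be sketched as a minimal pix2pix-style cGAN training step. The network sizes, image shapes, and hyperparameters below are illustrative assumptions for a toy example, not the architecture used in the paper:

```python
# Minimal, hypothetical sketch of an image-to-image cGAN (pix2pix style):
# G maps an artifact-contaminated image x to a clean estimate; D scores
# (input, candidate) pairs as real or generated. Sizes are illustrative.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # Conditioned on the input image: 2 channels = (input, candidate).
        self.net = nn.Sequential(
            nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

# Toy batch: artifact-contaminated inputs x and clean targets y.
x = torch.randn(4, 1, 16, 16)
y = torch.randn(4, 1, 16, 16)

# Discriminator step: real pairs (x, y) vs. generated pairs (x, G(x)).
fake = G(x).detach()
d_real = D(x, y)
d_fake = D(x, fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + \
         bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D, plus an L1 term pulling output toward the clean target.
fake = G(x)
d_out = D(x, fake)
loss_g = bce(d_out, torch.ones_like(d_out)) + 100.0 * l1(fake, y)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The adversarial term encourages realistic, artifact-free structure, while the L1 term keeps the output anchored to the clean reference; the alternating generator/discriminator updates are the "two competing subsystems" credited with stabilizing training.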