Compared with traditional focused transmissions, plane wave (PW) ultrasound imaging can achieve much higher frame rates, which is clinically relevant for real-time applications and ultrafast imaging. However, reducing the number of PW transmissions to shorten image formation times degrades image quality, introducing acoustic clutter and speckle noise. To address this challenge, we present a deep learning-based method that analyzes raw radiofrequency (RF) channel data acquired by the ultrasound probe and converts this signal directly into the final B-mode image, bypassing the traditional beamforming procedure. The architecture relies on a conditional generative adversarial network (cGAN), in which a generator and a discriminator are trained jointly so that the generator learns to produce outputs indistinguishable from the ground truth. The cGAN was trained to predict B-mode images that resemble beamformed PW results compounded from multiple insonifications. The network was trained and tested on the publicly available PICMUS database, composed of in vivo and ex vivo ultrasound inclusions with randomly distributed scatterers in various combinations. The proposed method yields signal-to-noise ratio (SNR) improvements of 1.112 to 1.540 relative to conventional delay-and-sum (DAS) beamforming of a single PW insonification. The cross-correlation coefficient between a 75-PW image and the cGAN-predicted image was 0.976, an improvement over the 0.641 obtained when the 75-PW image was cross-correlated with a DAS image formed from a single insonification. These results demonstrate the potential of generative adversarial networks to replace traditional DAS beamforming in future applications.
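
The cross-correlation coefficient reported above can be illustrated with a minimal sketch. The function below is a hypothetical helper, not taken from the paper: it computes the zero-mean normalized cross-correlation between two equally sized B-mode images, which is one common way such a similarity score is defined.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation coefficient between two
    equally sized images (hypothetical helper; the paper's exact metric
    definition may differ). Returns a value in [-1, 1], where values
    near 1 indicate nearly identical images."""
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    # Subtract each image's mean so brightness offsets do not inflate the score.
    a = a - a.mean()
    b = b - b.mean()
    # Normalized dot product of the demeaned, flattened images.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Under this definition, an image compared with itself (or with any positively scaled and offset copy) scores 1.0, so a score of 0.976 between the cGAN output and the 75-PW reference indicates close agreement.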