Previous research suggests that biomusic, a form of biosignal sharing, is effective at promoting empathy and closeness between individuals. However, it is unclear whether these effects stem from the physiological information the music encodes or from other affective qualities of the music itself. To explore this question, we developed a Generative Adversarial Network (GAN) to create synthetic biomusic that approximates real biomusic, and employed deception to evaluate its effects on 24 pairs of participants engaged in real-time emotional disclosure. Participants reported that both real and synthetic biomusic conveyed as much information about their conversational partner as observing body language, facial expressions, or vocal tone. Further, both conditions increased participants’ ratings of closeness and empathy with each other compared to listening to no music. However, we found no statistically significant differences between the two biomusic conditions across any of our metrics. We discuss the implications of these results for the design of future biomusic systems.