Detrimental online behavior such as harassment and cyberbullying is becoming a serious, large-scale problem damaging people’s lives. This phenomenon is creating a need for automated, data-driven techniques for analyzing and detecting such behaviors. We propose a machine learning method for simultaneously inferring user roles in harassment-based bullying and new vocabulary indicators of bullying. The learning algorithm considers social structure and infers which users tend to bully and which tend to be victimized. To address the elusive nature of cyberbullying, the learning algorithm requires only weak supervision. Experts provide a small seed vocabulary of bullying indicators, and the algorithm uses a large, unlabeled corpus of social media interactions to extract bullying roles of users and additional vocabulary indicators of bullying. The model estimates whether each social interaction is bullying based on who participates and what language is used, and it seeks to maximize the agreement between these estimates, i.e., participant-vocabulary consistency (PVC). We evaluate PVC on three social media data sets, demonstrating quantitatively and qualitatively its effectiveness in cyberbullying detection.
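As a rough illustration of the consistency idea, the sketch below jointly fits per-user bully/victim scores and per-word indicator scores by shrinking the squared disagreement between a message's participant-based estimate and its vocabulary-based estimate. This is a simplified reconstruction under assumed modeling choices (squared loss, averaged word scores, plain gradient descent with L2 regularization), not the paper's exact PVC objective; all names and hyperparameters here are illustrative.

```python
from collections import defaultdict

def train_pvc(messages, seed_words, n_iters=200, lr=0.1, reg=0.01):
    """Jointly learn user bully/victim scores and word bullying scores.

    messages: list of (sender, receiver, word_list) tuples.
    seed_words: expert-provided bullying indicators, clamped to score 1.
    """
    bully = defaultdict(float)   # per-user tendency to bully
    victim = defaultdict(float)  # per-user tendency to be victimized
    word = defaultdict(float)    # per-word bullying indicator score
    for w in seed_words:
        word[w] = 1.0            # seed vocabulary stays fixed

    for _ in range(n_iters):
        gb, gv, gw = defaultdict(float), defaultdict(float), defaultdict(float)
        for s, r, ws in messages:
            part = bully[s] + victim[r]                 # participant estimate
            vocab = sum(word[w] for w in ws) / len(ws)  # vocabulary estimate
            resid = part - vocab                        # disagreement to shrink
            gb[s] += resid
            gv[r] += resid
            for w in ws:
                gw[w] -= resid / len(ws)
        # Gradient step with L2 regularization; seed word scores stay clamped.
        for u, g in gb.items():
            bully[u] -= lr * (g + reg * bully[u])
        for u, g in gv.items():
            victim[u] -= lr * (g + reg * victim[u])
        for w, g in gw.items():
            if w not in seed_words:
                word[w] -= lr * (g + reg * word[w])
    return bully, victim, word
```

On toy data, a word that appears in messages sent by a user whose bully score was raised by seed words elsewhere inherits a positive indicator score, which mimics the vocabulary-expansion behavior described above.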