Social media has become pervasive across many aspects of our lives, and we depend on these platforms more than ever before. This dependence has created demand not only for a higher quality of service, but for that quality to extend into new areas. As the social media experience and the “real world” experience merge, there is a growing expectation that societal norms will also apply in social media settings. Users increasingly expect platforms to provide tools for reporting hate speech and other forms of dangerous content, and to manage those reports to a high standard of service. Social media companies have largely avoided scrutiny on this point by withholding the data needed to assess the relevant quality of service, but third-party solutions are beginning to overcome this barrier. This paper discusses one such solution, currently under development, along with some of the challenges to improving quality of service in this area.