Determining the relatedness of publications by detecting similarities and connections between researchers and their outputs can help science stakeholders worldwide find areas of common interest and potential collaboration. To this end, many studies have explored authorship attribution and research similarity detection through automatic approaches. Nonetheless, inferring author research relatedness from imperfect data that contains errors and multiple references to the same entities remains a long-standing challenge. In a previous study, we conducted an experiment in which a homogeneous crowd of volunteers completed a set of author name disambiguation tasks. The results showed an overall accuracy above 75%, and we also found notable effects tied to the confidence level that participants reported for correct answers. However, that study left open questions about the comparative accuracy of a large, heterogeneous crowd working for monetary rewards. This paper addresses some of those questions by repeating the experiment with a crowd of 140 paid online workers recruited via the MTurk microtask crowdsourcing platform. Our replication study shows high accuracy for name disambiguation tasks based on authorship-level information and content features. These findings are of additional informative value because they also examine crowd behavior in terms of task duration and the mean proportion of clicks per worker, with implications for interface and interaction design.