Web spamming, the practice of introducing artificial text and links into web pages to affect the results of searches, has been recognized as a major problem for search engines. It is also a serious problem for web users, because they tend to conflate trusting the search engine with trusting the results of a search. In this paper, we propose "backwards propagation of distrust" as an approach to finding spamming, untrustworthy sites. Our approach is inspired by the social behavior associated with distrust. In society, recognition of an untrustworthy entity (a person, an institution, an idea, etc.) is a reason to question the trustworthiness of those who recommended that entity. People who are found to strongly support untrustworthy entities become untrustworthy themselves; in society, distrust is propagated backwards. Our algorithm simulates this social behavior on the web graph with considerable success. Moreover, by respecting the user's perception of trust throughout the web graph, our algorithm resolves the moral question of who should decide what counts as web spam in favor of the user, rather than the search engine or some higher authority. Our approach can lead to browser-level or personalized server-side web spam filters that work in synergy with powerful search engines to deliver personalized, trusted web results.
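The core idea can be sketched as follows. This is a hypothetical illustration, not the paper's exact formulation: the graph representation, the damping parameter, and the max-based update rule are all assumptions made for the sake of a minimal, runnable example. Distrust assigned to known spam pages flows backwards along hyperlinks, so pages that link to distrusted pages accumulate (damped) distrust themselves.

```python
def propagate_distrust(graph, seed_spam, damping=0.8, iterations=3):
    """Sketch of backwards propagation of distrust on a web graph.

    graph: dict mapping each page to the list of pages it links to.
    seed_spam: set of pages already known to be spam (distrust = 1.0).
    damping: illustrative decay factor applied at each backward hop.
    """
    # Build the reverse adjacency list: who links *to* each page.
    in_links = {node: [] for node in graph}
    for src, targets in graph.items():
        for dst in targets:
            in_links.setdefault(dst, []).append(src)

    distrust = {node: 0.0 for node in graph}
    for s in seed_spam:
        distrust[s] = 1.0

    for _ in range(iterations):
        updated = dict(distrust)
        for node, score in distrust.items():
            if score > 0:
                # Push damped distrust backwards to every linking page.
                for linker in in_links.get(node, []):
                    updated[linker] = max(updated[linker], damping * score)
        distrust = updated
    return distrust
```

On a toy graph where page "a" links to "b" and "b" links to a known spam page, "b" receives distrust 0.8 and "a" receives 0.64, while pages with no path to the spam page stay at 0.0; this mirrors the social intuition that strong supporters of untrustworthy entities become untrustworthy themselves.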