Word sense disambiguation (WSD), the task of determining a target word's meaning from its context, has become an increasingly significant task. However, no existing system has achieved an F1-score above 85% when WordNet is used as the sense inventory. The primary cause of this problem is most likely the overly fine granularity of the lexical sense inventory, which makes certain senses of a word difficult to distinguish even for skilled human annotators. Furthermore, some senses lack sufficient annotated data. To address these issues, we build a more practical dataset based on current English-English dictionaries and propose a prompt-based contextual word representation (PCWR) that better exploits the semantic information embedded in pre-trained models and improves WSD performance in low-resource scenarios. Additionally, we validate the performance of our method on the FEWS dataset in both the zero-shot and few-shot settings.
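The abstract does not specify the PCWR architecture, but the general idea of prompt-based sense matching can be loosely illustrated as follows. This is a toy sketch under stated assumptions, not the paper's method: the prompt template, the sense labels, and the 3-dimensional vectors (which stand in for encoder outputs and gloss embeddings) are all hypothetical.

```python
import math

def build_prompt(sentence, target):
    # Hypothetical cloze-style prompt appended to the context; a real system
    # would feed this to a pre-trained masked language model.
    return f'{sentence} In this sentence, "{target}" means [MASK].'

def cosine(u, v):
    # Standard cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def disambiguate(context_vec, sense_vecs):
    # Pick the dictionary sense whose gloss embedding is closest to the
    # prompt-conditioned contextual embedding of the target word.
    return max(sense_vecs, key=lambda sense: cosine(context_vec, sense_vecs[sense]))

# Toy example with made-up 3-d embeddings standing in for model outputs.
context_vec = [0.9, 0.1, 0.2]  # embedding of "bank" in "sat on the river bank"
sense_vecs = {
    "bank (financial institution)": [0.1, 0.9, 0.3],
    "bank (sloping land beside water)": [0.8, 0.2, 0.1],
}
print(build_prompt("She sat on the river bank.", "bank"))
print(disambiguate(context_vec, sense_vecs))
```

In this sketch, disambiguation reduces to nearest-neighbor search over sense embeddings, which is one common way prompt-derived representations are compared against a sense inventory in zero-shot and few-shot settings.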