Melanoma is one of the deadliest skin cancers. Deep learning has achieved excellent performance in natural image classification and is increasingly applied in the medical field. Melanoma recognition remains challenging due to imbalanced classes, insufficient labeled skin-disease data, and noisy data collected from heterogeneous sources. This paper designs the Swin-SimAM network, which exploits the powerful feature extraction capability of the Swin Transformer and incorporates the parameter-free attention module SimAM so that the network attends to the most informative regions of skin lesions. Focal loss is adopted to address class imbalance, so that the loss contribution of non-melanoma samples to the entire network remains small even though they are numerous. We conduct experiments on the ISIC-2017 skin lesion dataset. The results indicate that the designed Swin-SimAM network adaptively extracts features from the discriminative parts of skin lesions and thus improves melanoma detection performance.
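The two ingredients named above can be sketched in plain Python. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the binary focal loss follows the standard formulation of Lin et al., and the SimAM weights use the closed-form minimal-energy expression from the SimAM paper applied to a single flattened channel, with the paper's default regularizer λ = 1e-4.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy, well-classified examples.

    p: predicted probability of melanoma; y: 1 = melanoma, 0 = non-melanoma.
    The (1 - p_t)^gamma factor shrinks the loss of confident correct
    predictions, so abundant easy non-melanoma samples contribute little.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

def simam_weights(x, lam=1e-4):
    """Parameter-free SimAM attention over a flattened channel x.

    Each neuron's weight is sigmoid(1 / e_t*), where
    e_t* = 4(var + lam) / ((t - mu)^2 + 2*var + 2*lam)
    is the closed-form minimal energy; neurons that deviate most from
    the channel mean (the salient ones) receive the largest weights.
    """
    n = len(x) - 1
    mu = sum(x) / len(x)
    var = sum((t - mu) ** 2 for t in x) / n
    return [sigmoid(((t - mu) ** 2 + 2.0 * var + 2.0 * lam)
                    / (4.0 * (var + lam)))
            for t in x]
```

For a confidently correct non-melanoma prediction (p = 0.1, y = 0), the focal loss is far smaller than the plain cross-entropy -log(0.9), which is how the abundant easy negatives are kept from dominating training; in the SimAM sketch, the neuron that stands out from its channel mean receives the highest attention weight.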