Person re-identification (Re-ID) is a challenging task because a person's appearance can vary greatly across disjoint camera views, driven by factors such as illumination changes, pose variations, and viewpoint differences. It is therefore imperative to learn more discriminative feature representations for Re-ID. Existing deep learning models build feature representations at multiple scales to characterize persons better, but these tend to produce interfering and redundant features. In this paper, we develop a dynamic and adaptive selective feature fusion module (SFFM) for feature learning, which is conditioned entirely on the input image and facilitates the extraction of more discriminative features for Re-ID. Moreover, we further improve the discriminative capability of the proposed deep network by combining an island loss (IL) with the triplet loss and the softmax loss. The combined loss function increases the distance between samples belonging to different classes. Validation experiments on two standard benchmarks, the Market-1501 and DukeMTMC-reID datasets, demonstrate the effectiveness of the proposed method.
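To make the loss combination concrete, the following is a minimal NumPy sketch of the three terms the abstract names: softmax cross-entropy, a margin-based triplet loss, and an island-style term that penalizes cosine similarity between class centers so that centers of different identities are pushed apart. The weighting coefficients `w_tri` and `w_il`, the margin value, and all function names are illustrative assumptions, not the paper's actual formulation or hyperparameters.

```python
import numpy as np

def softmax_ce(logits, labels):
    # numerically stable softmax cross-entropy, averaged over the batch
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def triplet_loss(anchor, positive, negative, margin=0.3):
    # hinge on (anchor-positive distance) - (anchor-negative distance);
    # margin=0.3 is a common choice in Re-ID, assumed here for illustration
    d_ap = np.linalg.norm(anchor - positive, axis=1)
    d_an = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(d_ap - d_an + margin, 0.0).mean()

def island_term(centers):
    # sum over distinct pairs of class centers of (cosine similarity + 1):
    # each pair contributes 0 only when the two centers point in opposite
    # directions, so minimizing this spreads the class "islands" apart
    n = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    sim = n @ n.T
    off_diag = sim[~np.eye(len(centers), dtype=bool)]
    return (off_diag + 1.0).sum()

def total_loss(logits, labels, anchor, pos, neg, centers,
               w_tri=1.0, w_il=0.01):
    # weighted sum of the three terms; w_tri and w_il are hypothetical
    return (softmax_ce(logits, labels)
            + w_tri * triplet_loss(anchor, pos, neg)
            + w_il * island_term(centers))
```

Note that the island term acts on class centers rather than individual samples, which is what distinguishes it from the triplet loss: the triplet loss separates individual embeddings, while the island term separates the identity clusters themselves.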