This work proposes a framework to train and deploy neural network-based gesture recognition algorithms in wearable devices. The approach is demonstrated on a high-density electromyography (HD-EMG) gesture recognition system, where a Siamese convolutional neural network (SCNN) learns to associate and dissociate muscle activity patterns from the same or distinct gesture classes. This improves learning in low-data settings such as gesture recognition and myoelectric control, where training data must be provided by the end user. Then, using a cosine similarity-based few-shot classifier and inter-session-intra-user transfer of the SCNN's learning, the proposed model aims to achieve state-of-the-art results in a framework that is realistically applicable to wearable devices. For an experienced myoelectric interface user, the proposed model achieved 86.49 % accuracy in 6-class gesture recognition using inter-session-intra-user transfer learning by the SCNN. With the cosine similarity classifier, few-shot learning converges to this accuracy within 20 shots. Across a group of 13 able-bodied participants, mean and median accuracies of 90.66 % and 97.34 % were obtained with the 6-way 20-shot learning approach and a majority vote (100 votes) over time-distributed inferences.
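The few-shot classification stage described above can be sketched as follows: class prototypes are formed from a handful of embeddings per gesture, a query window is assigned to the class whose prototype is most cosine-similar, and a majority vote aggregates time-distributed inferences. This is a minimal NumPy sketch under stated assumptions: the SCNN embedding step is omitted, and the function names and the use of mean prototypes are illustrative, not taken from the paper.

```python
import numpy as np

def cosine_similarity(query, prototypes):
    # Cosine similarity between one embedding and each class prototype (row).
    return (prototypes @ query) / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(query) + 1e-12
    )

def class_prototypes(support_embeddings, support_labels, n_classes):
    # Mean embedding per gesture class from the few-shot support set
    # (e.g. 20 shots per class for 6-way 20-shot learning).
    return np.stack([
        support_embeddings[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])

def classify(query_embedding, prototypes):
    # Nearest-prototype decision by cosine similarity.
    return int(np.argmax(cosine_similarity(query_embedding, prototypes)))

def majority_vote(query_embeddings, prototypes):
    # Aggregate predictions over a stream of time-distributed inferences.
    votes = [classify(q, prototypes) for q in query_embeddings]
    return int(np.bincount(votes).argmax())
```

In deployment, `query_embeddings` would be SCNN outputs for consecutive HD-EMG windows, so the vote trades a short decision delay for robustness to transient misclassifications.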