The use of deep learning techniques for generating music has gained popularity in recent years, owing to the increased availability of compute power and the evolution of deep learning architectures well suited to learning patterns from sequential data. In this paper, our goal is to generate musical notes that follow a given input primer sequence such that the entire generated musical sequence sounds continuous and melodious. To this end, we leverage Google Magenta's built-in models and propose new methods for data ingestion into Long Short-Term Memory (LSTM) based models. In the most basic Magenta model, the current note is passed as an input and the distribution of the next note is predicted as the output; this distribution is then used to generate the next note. We propose variants of this off-the-shelf Magenta model. One proposed variant feeds extra information about the targets into the model to aid the prediction process. Another variant is based on the observation that a melody is relatively independent of its starting note: the differences in pitch between each note and the start note preserve the characteristics of a song in a compact form that is amenable to machine learning modeling. Along with introducing new models, this work explores the built-in Magenta model under different hyperparameter settings, presenting quantitative results that serve as a benchmark.
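The pitch-interval representation mentioned above can be illustrated with a minimal sketch. The function names and the use of MIDI pitch numbers here are illustrative assumptions, not part of the paper's implementation: each note is encoded as its offset from the melody's first note, so the same melody transposed to a different key maps to the same interval sequence.

```python
def to_intervals(pitches):
    """Encode a melody as pitch offsets from its first note.

    pitches: list of MIDI pitch numbers, e.g. [60, 62, 64] (C4, D4, E4).
    Returns a transposition-invariant interval sequence starting at 0.
    """
    base = pitches[0]
    return [p - base for p in pitches]


def from_intervals(intervals, start_pitch):
    """Decode an interval sequence back to absolute pitches,
    anchored at an arbitrary starting note."""
    return [start_pitch + d for d in intervals]


# The same melody in C major and in G major yields identical intervals,
# which is what makes this representation compact for sequence models.
c_major_run = [60, 62, 64, 65, 67]
g_major_run = [67, 69, 71, 72, 74]
assert to_intervals(c_major_run) == to_intervals(g_major_run)

# Decoding with a chosen start note recovers an absolute melody.
print(from_intervals(to_intervals(c_major_run), 67))
```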