This paper presents a framework for creating touch in virtual reality (VR) by processing emotions captured from multiple sensors and using them to drive touch and sensation generation. We use Gradient Boosting (GB) and Long Short-Term Memory (LSTM) models to process aggregated emotion data from brain waves, physiological changes, pupillometry, facial recognition, speech, and background effects to recreate a person's emotional state in VR. In everyday life we connect with our environment, and whether that environment is real or virtual, the connection matters. Emotion creates the connection that enhances in-person communication; by bringing emotion and touch into the virtual world, we pull more of the real world into the experience and enrich it. Touch is the first sense we experience at birth. The contribution of this paper is Thelxinoë, a multi-sensory collection framework that gathers emotion data from sensors covering brain waves, facial expression, pupillometry, speech, and body movement. These data are aggregated within a black box that outputs the emotional state at the time of collection. The recognized emotions are then used to mediate touch between the parties by generating sensations that match the initiated touch.
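The aggregation step described above can be sketched as a simple late-fusion combiner. The modality names, weights, and scores below are illustrative assumptions only, standing in as a minimal placeholder for the paper's GB/LSTM black box:

```python
from collections import defaultdict

# Hypothetical per-modality weights (not from the paper): each sensor
# modality reports a score distribution over emotion labels, and a
# weighted average selects the overall emotional state.
MODALITY_WEIGHTS = {
    "eeg": 0.30,       # brain waves
    "face": 0.25,      # facial expression
    "pupil": 0.15,     # pupillometry
    "speech": 0.20,
    "movement": 0.10,  # body movement
}

def fuse_emotions(readings):
    """readings: {modality: {emotion: score}} -> (top_emotion, fused_scores)."""
    fused = defaultdict(float)
    for modality, scores in readings.items():
        weight = MODALITY_WEIGHTS.get(modality, 0.0)
        for emotion, score in scores.items():
            fused[emotion] += weight * score
    top = max(fused, key=fused.get)
    return top, dict(fused)

# Example readings from three of the modalities (values are made up).
readings = {
    "eeg":    {"calm": 0.7, "joy": 0.3},
    "face":   {"joy": 0.8, "calm": 0.2},
    "speech": {"joy": 0.6, "calm": 0.4},
}
emotion, scores = fuse_emotions(readings)
print(emotion)  # → joy (weighted: joy 0.41 vs calm 0.34)
```

A learned model such as the LSTM described in the paper would replace the fixed weights with parameters trained on time-series sensor data; this sketch only shows the shape of the fusion interface.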