The world's population is aging, a process that will significantly raise its average age over the coming decades. This phenomenon, more acute in developed countries but present everywhere, will make the elderly care sector highly relevant from every point of view, including the economic one. To date, the elderly care sector has automated only a small number of tasks; many more will need to be automated if universal elderly care at reasonable cost is to be achieved. One candidate for such automation is elderly surveillance and, within that field, fall detection.

Although built on different technologies, all automatic fall detection systems can be assigned to one of three groups: wearable, ambient, or vision-based. Vision-based fall detection systems have developed rapidly in recent years, propelled mainly by the incorporation of artificial neural networks able to characterize human activities in general and falls in particular. All fall detection systems based on artificial vision developed so far extract static or kinematic descriptors from visible or near-infrared images to assess fall probability.

Within the framework of this PhD, a proof of concept is developed, for the very first time, of a fall detection system able to infer dynamic descriptors from far-infrared (FIR) images. In this way, the main deficiencies identified in today's systems, namely the scarcity of real-world human fall data, privacy concerns, and degraded performance under poor illumination, can be overcome.
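To make the notion of a kinematic descriptor concrete, the toy sketch below estimates the vertical velocity of a warm body region across a short sequence of thermal frames. Everything here is illustrative and not part of the system developed in this thesis: the temperature threshold, frame rate, blob geometry, and the choice of centroid velocity as the descriptor are all assumptions made for the example.

```python
import numpy as np

def centroid_row(frame, temp_threshold=30.0):
    """Row coordinate of the warm (body) pixel centroid in one thermal frame.

    temp_threshold is an illustrative value; a real FIR pipeline would
    calibrate it against the sensor's temperature scale."""
    rows = np.nonzero(frame > temp_threshold)[0]
    return rows.mean() if rows.size else np.nan

def vertical_velocity(frames, fps=10.0):
    """Frame-to-frame vertical velocity (rows/second) of the body centroid,
    a simple kinematic descriptor: large downward values suggest a fall."""
    centroids = np.array([centroid_row(f) for f in frames])
    return np.diff(centroids) * fps

# Synthetic 3-frame sequence: a warm blob moving down the image
# ever faster, mimicking the onset of a fall.
H, W = 32, 32
frames = []
for top in (5, 10, 20):           # blob drops further each frame
    f = np.full((H, W), 20.0)     # 20 degree C background
    f[top:top + 5, 10:15] = 36.0  # warm "body" region
    frames.append(f)

v = vertical_velocity(frames)
print(v)  # -> [ 50. 100.]  (accelerating downward motion)
```

A threshold on such a velocity signal (possibly combined with other descriptors) is one simple way a classifier could flag a fall; the thesis itself goes further by inferring dynamic, not merely kinematic, descriptors.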