Evaluación y análisis de una aproximación a la fusión sensorial neuronal mediante el uso de sensores pulsantes de visión / audio y redes neuronales de convolución

Author:
  1. Ríos Navarro, José Antonio
Supervisors:
  1. Alejandro Linares Barranco
  2. Ángel Jiménez Fernández
  3. Gabriel Jiménez Moreno

Defense university: Universidad de Sevilla

Defense date: 19 July 2017

Examination committee:
  1. Julio Abascal González (President)
  2. Saturnino Vicente Díaz (Secretary)
  3. Antonio Abad Civit Balcells (Member)
  4. Arturo Morgado Estévez (Member)
  5. Enrique Cabello Pardos (Member)

Type: Thesis

Teseo: 484101 · DIALNET · Idus

Abstract

This work aims to advance the knowledge and possible hardware implementations of Deep Learning mechanisms, as well as their efficient use for sensory fusion. It begins with an analysis and study of current parallel programming and of Deep Learning mechanisms for audiovisual sensory fusion using neuromorphic sensors on FPGA platforms. Based on these studies, a first solution is proposed, implemented both in OpenCL and as dedicated hardware described in SystemVerilog, for the acceleration of Deep Learning algorithms, initially using a vision sensor as input. The results are analysed and compared. Next, an audio sensor is added and classic statistical mechanisms are proposed which, although they provide no learning capacity, allow the information from both sensors to be integrated; the results obtained are analysed along with their limitations. Finally, to give the system learning capacity, Deep Learning mechanisms, in particular CNNs, are used to fuse the audiovisual information and train the model for a specific task. The performance and efficiency of these mechanisms are then evaluated, drawing conclusions and proposing improvements to be implemented as future work.
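
To make the two fusion stages described in the abstract more concrete, the following Python sketches illustrate the general ideas; they are not code from the thesis, and all function names, weights, and tensor shapes are assumptions made for the example. The first snippet pictures the non-learning stage as a fixed, weighted combination of per-class evidence (e.g. spike-count histograms) from each sensor:

```python
import numpy as np

def fuse_and_classify(vision_scores, audio_scores, w_vision=0.6):
    """Hypothetical statistical fusion: normalize per-sensor class
    evidence and combine it with a fixed, hand-tuned weight. No
    learning is involved, mirroring the 'classic statistical
    mechanisms' stage described in the abstract."""
    v = np.asarray(vision_scores, dtype=float)
    a = np.asarray(audio_scores, dtype=float)
    v = v / v.sum()                      # normalize vision evidence
    a = a / a.sum()                      # normalize audio evidence
    fused = w_vision * v + (1.0 - w_vision) * a
    return int(np.argmax(fused)), fused

# Example: three classes, vision and audio partially disagree
label, fused = fuse_and_classify([10, 40, 5], [20, 25, 30])
print(label, fused)
```

The second snippet sketches the learning-based stage as a minimal two-stream CNN (using PyTorch, an assumption; the thesis targets FPGA hardware) in which vision events and audio spikes are each rendered as small 2D maps, processed by separate convolutional branches, and fused by concatenation before a shared classifier:

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Hypothetical two-stream CNN: one branch per sensor, features
    concatenated before a linear classifier. Input shapes (1x28x28)
    and layer sizes are illustrative, not the thesis architecture."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.vision = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.audio = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(2 * 8 * 14 * 14, n_classes)

    def forward(self, v_frame, a_frame):
        fv = self.vision(v_frame).flatten(1)   # vision features
        fa = self.audio(a_frame).flatten(1)    # audio features
        return self.head(torch.cat([fv, fa], dim=1))

model = FusionCNN()
logits = model(torch.randn(4, 1, 28, 28), torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```

The contrast between the two sketches reflects the progression in the abstract: the statistical rule depends on a hand-tuned weight and cannot adapt, which is precisely the limitation the CNN stage addresses by learning the fusion from data.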