Development of real-time systems using graphics and application-specific accelerators applied to human visual optics

  1. Mompean Esteban, Juan
Supervised by:
  1. Pablo Artal Soriano (Director)
  2. Juan Luis Aragón Alcaraz (Director)

Defence university: Universidad de Murcia

Date of defence: 15 January 2021

Committee:
  1. Antonio González Colás (Chair)
  2. Antonio Cuenda (Secretary)
  3. Jesús Lancis (Member)
Subject areas:
  1. Physics

Type: Doctoral thesis

Teseo: 154373 DIALNET

Abstract

This Thesis is motivated by the need to process large amounts of data in a short time. The structure of the data to be processed is, in most cases, uniform and repetitive, although not always. The parallelization of the processing tasks has therefore been carried out mainly by means of graphics processors (GPUs), which achieve very high performance for this kind of algorithm, and in some cases by implementing the image processing as a specific accelerator on an FPGA. The work accomplished in this Thesis focuses on the parallelization of different compute-intensive processing algorithms used in Optics. In particular, we target the following applications: high-performance pupil tracking, automatic real-time presbyopia-correction glasses, real-time processing of Hartmann-Shack images, and the parallelization of OCT image processing.

Regarding pupil tracking, several algorithms have been implemented and parallelized to obtain a high-performance version. Furthermore, a set of images has been characterized and used to perform an accuracy and performance analysis of the different implemented algorithms. Multiple optimizations have been applied to the GPGPU implementation to improve its performance. As a result, pupil tracking at speeds higher than 1000 images/second has been achieved in some configurations.

Using the tools developed for pupil tracking, automatic real-time presbyopia-correction glasses have been developed. This wearable device is completely autonomous and is controlled by a smartphone. The smartphone processes the images of the eyes and, from the obtained data, calculates the correction to be applied by the optoelectronic lenses. The whole device is powered by the smartphone battery plus a small additional battery for the infrared illumination. To validate the results, a test with 8 subjects has been carried out, obtaining an average improvement of 0.3 in visual acuity (decimal scale) with a test placed at 35 cm. Furthermore, a specific OpenCL implementation for FPGAs has been created and optimized in order to test the suitability of such platforms for controlling this wearable device. To do so, their power consumption and performance have been measured and compared with those obtained on the smartphone.

Regarding the real-time processing of Hartmann-Shack images, three algorithms have been parallelized and optimized for execution on graphics processors. A novel pupil tracking algorithm for Hartmann-Shack images has been developed, parallelized and optimized. Both accuracy and performance have been characterized using simulated H-S images, obtaining good results for both metrics. A wide variety of configurations and resolutions have been tested, some of them yielding significant speedups of up to 100x for the parallel GPGPU implementation over the sequential one.

Finally, the processing of OCT (Optical Coherence Tomography) images has been parallelized and optimized. OCT systems generate a large amount of data on each measurement, easily exceeding 2 GB per volume, as is the case for the OCT system we have employed. This huge amount of data requires highly compute-intensive processing to generate the final 3D images of the eye. Our parallel implementation of the processing algorithms by means of graphics processors has achieved speedups of up to 8x.
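The abstract does not detail the pupil-tracking algorithms themselves. As a minimal sketch of the kind of GPU parallelization involved, the following CUDA program estimates a pupil centre as the centroid of dark pixels in an infrared eye image, mapping one thread per pixel and reducing with atomics. The kernel name, the fixed threshold, and the synthetic frame are assumptions for illustration, not the thesis implementation.

```cuda
// Illustrative sketch only: thresholded centroid of dark (pupil) pixels on the GPU.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void pupilCentroid(const unsigned char* img, int width, int height,
                              unsigned char threshold,
                              unsigned long long* sumX,
                              unsigned long long* sumY,
                              unsigned long long* count)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Under infrared illumination the pupil appears as the darkest region.
    if (img[y * width + x] < threshold) {
        atomicAdd(sumX, (unsigned long long)x);
        atomicAdd(sumY, (unsigned long long)y);
        atomicAdd(count, 1ULL);
    }
}

int main()
{
    const int W = 640, H = 480;
    std::vector<unsigned char> host(W * H, 200);   // bright background
    for (int y = 200; y < 280; ++y)                // synthetic dark "pupil"
        for (int x = 300; x < 380; ++x)
            host[y * W + x] = 20;

    unsigned char* dImg;
    unsigned long long *dSumX, *dSumY, *dCount;
    cudaMalloc((void**)&dImg, W * H);
    cudaMalloc((void**)&dSumX, sizeof(unsigned long long));
    cudaMalloc((void**)&dSumY, sizeof(unsigned long long));
    cudaMalloc((void**)&dCount, sizeof(unsigned long long));
    cudaMemcpy(dImg, host.data(), W * H, cudaMemcpyHostToDevice);
    cudaMemset(dSumX, 0, sizeof(unsigned long long));
    cudaMemset(dSumY, 0, sizeof(unsigned long long));
    cudaMemset(dCount, 0, sizeof(unsigned long long));

    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    pupilCentroid<<<grid, block>>>(dImg, W, H, 60, dSumX, dSumY, dCount);

    unsigned long long sx, sy, n;
    cudaMemcpy(&sx, dSumX, sizeof(sx), cudaMemcpyDeviceToHost);
    cudaMemcpy(&sy, dSumY, sizeof(sy), cudaMemcpyDeviceToHost);
    cudaMemcpy(&n, dCount, sizeof(n), cudaMemcpyDeviceToHost);
    if (n > 0)
        printf("Estimated pupil centre: (%.1f, %.1f)\n",
               (double)sx / n, (double)sy / n);

    cudaFree(dImg); cudaFree(dSumX); cudaFree(dSumY); cudaFree(dCount);
    return 0;
}
```

The per-pixel mapping with atomic reduction is one straightforward way to keep the whole frame on the GPU; the thesis describes further optimizations beyond this basic pattern.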
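Hartmann-Shack processing is likewise dominated by per-lenslet centroid computation, since the displacement of each spot from its reference position is proportional to the local wavefront slope. The sketch below assigns one GPU thread per sub-aperture and computes an intensity-weighted centroid over its window; the window size, grid layout, and names are assumptions for illustration rather than the algorithms developed in the thesis.

```cuda
// Illustrative sketch: intensity-weighted centroids of Hartmann-Shack spots.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

struct Centroid { float x; float y; };

__global__ void hsCentroids(const float* img, int width, int height,
                            int apSize, int apsPerRow, int apsPerCol,
                            Centroid* out)
{
    int ax = blockIdx.x * blockDim.x + threadIdx.x;  // sub-aperture column
    int ay = blockIdx.y * blockDim.y + threadIdx.y;  // sub-aperture row
    if (ax >= apsPerRow || ay >= apsPerCol) return;

    int x0 = ax * apSize, y0 = ay * apSize;
    float sum = 0.0f, sumX = 0.0f, sumY = 0.0f;

    // Intensity-weighted centroid over this sub-aperture window.
    for (int dy = 0; dy < apSize && y0 + dy < height; ++dy)
        for (int dx = 0; dx < apSize && x0 + dx < width; ++dx) {
            float v = img[(y0 + dy) * width + (x0 + dx)];
            sum  += v;
            sumX += v * (x0 + dx);
            sumY += v * (y0 + dy);
        }

    Centroid c;
    c.x = (sum > 0.0f) ? sumX / sum : x0 + apSize * 0.5f;
    c.y = (sum > 0.0f) ? sumY / sum : y0 + apSize * 0.5f;
    out[ay * apsPerRow + ax] = c;
}

int main()
{
    const int W = 512, H = 512, AP = 32;
    const int NX = W / AP, NY = H / AP;

    // Synthetic frame: one bright spot near the centre of every sub-aperture.
    std::vector<float> host(W * H, 0.0f);
    for (int ay = 0; ay < NY; ++ay)
        for (int ax = 0; ax < NX; ++ax)
            host[(ay * AP + AP / 2) * W + (ax * AP + AP / 2)] = 255.0f;

    float* dImg; Centroid* dOut;
    cudaMalloc((void**)&dImg, W * H * sizeof(float));
    cudaMalloc((void**)&dOut, NX * NY * sizeof(Centroid));
    cudaMemcpy(dImg, host.data(), W * H * sizeof(float), cudaMemcpyHostToDevice);

    dim3 block(8, 8);
    dim3 grid((NX + block.x - 1) / block.x, (NY + block.y - 1) / block.y);
    hsCentroids<<<grid, block>>>(dImg, W, H, AP, NX, NY, dOut);

    std::vector<Centroid> result(NX * NY);
    cudaMemcpy(result.data(), dOut, NX * NY * sizeof(Centroid),
               cudaMemcpyDeviceToHost);
    printf("First centroid: (%.1f, %.1f)\n", result[0].x, result[0].y);

    cudaFree(dImg); cudaFree(dOut);
    return 0;
}
```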
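For the OCT work, the abstract reports only the data volume and the overall speedup. Spectral-domain OCT reconstruction is typically dominated by a batched FFT of the acquired spectra along the depth axis, which maps naturally to the GPU. The following sketch uses cuFFT for a batch of synthetic A-scans and a small kernel for the log-magnitude depth profile; the sizes, names, and omitted steps (background subtraction, k-space resampling, dispersion compensation) are assumptions, not the thesis pipeline.

```cuda
// Illustrative sketch: batched FFT-based reconstruction of OCT A-scans.
// Build with: nvcc oct_sketch.cu -lcufft
#include <cstdio>
#include <vector>
#include <cmath>
#include <cuda_runtime.h>
#include <cufft.h>

// Convert complex FFT output to a log-magnitude depth profile.
__global__ void logMagnitude(const cufftComplex* spec, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float re = spec[i].x, im = spec[i].y;
    out[i] = 20.0f * log10f(sqrtf(re * re + im * im) + 1e-9f);
}

int main()
{
    const int N = 2048;      // samples per spectral A-scan (assumed)
    const int BATCH = 512;   // A-scans processed per FFT batch (assumed)
    const int HALF = N / 2 + 1;

    // Synthetic interferograms: a single cosine fringe per A-scan,
    // which reconstructs to one bright reflector at a fixed depth.
    std::vector<float> host(N * BATCH);
    for (int b = 0; b < BATCH; ++b)
        for (int i = 0; i < N; ++i)
            host[b * N + i] = std::cos(2.0f * 3.14159265f * 64.0f * i / N);

    cufftReal* dIn;
    cufftComplex* dSpec;
    float* dProfile;
    cudaMalloc((void**)&dIn, N * BATCH * sizeof(cufftReal));
    cudaMalloc((void**)&dSpec, HALF * BATCH * sizeof(cufftComplex));
    cudaMalloc((void**)&dProfile, HALF * BATCH * sizeof(float));
    cudaMemcpy(dIn, host.data(), N * BATCH * sizeof(cufftReal),
               cudaMemcpyHostToDevice);

    // One batched plan transforms all A-scans in a single call.
    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_R2C, BATCH);
    cufftExecR2C(plan, dIn, dSpec);

    int total = HALF * BATCH;
    logMagnitude<<<(total + 255) / 256, 256>>>(dSpec, dProfile, total);

    std::vector<float> profile(HALF);
    cudaMemcpy(profile.data(), dProfile, HALF * sizeof(float),
               cudaMemcpyDeviceToHost);
    printf("First A-scan, value at bin 64: %.1f dB\n", profile[64]);

    cufftDestroy(plan);
    cudaFree(dIn); cudaFree(dSpec); cudaFree(dProfile);
    return 0;
}
```

Batching the transforms and keeping the intermediate spectra on the device avoids host-device transfers between stages, which is the general reason such pipelines benefit from GPU execution.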