Half‑time seminar with Erwan Leria

Mon 18 Dec 09.00–11.00
Campus Sundsvall - L401, Tampere University - TC210 or via Zoom

Welcome to the half-time seminar where PhD student Erwan Leria will present his work on Photorealistic Real-Time Rendering for Light Field Display devices.
Title: Photorealistic Real-Time Rendering for Light Field Display devices

Respondent: Erwan Leria

Opponent: Prof. Ulf Assarsson (Chalmers University of Technology)

Supervisors: Prof. Tingting Zhang (MIUN), Assoc. Prof. Pekka Jääskeläinen (TAU), Prof. Mårten Sjöström (MIUN), Assistant Prof. Robert Bregovic (TAU), Dr. Markku Mäkitalo (TAU)

Place:
Mid Sweden University, Campus Sundsvall: L401
Tampere University: TC210

Join the seminar on Zoom


Erwan Leria is an Early Stage Researcher in the Plenoptima project, a European H2020 Marie Skłodowska-Curie Innovative Training Network and a cross-disciplinary collaboration on plenoptic imaging. As part of a double doctoral degree between Tampere University and Mid Sweden University within the Plenoptima project, Erwan's research focuses on path tracing multiple viewpoints in real time on multiple GPUs.

Abstract

A light field can be captured by sampling the incoming flow of rays from different positions in space. Light field rendering is a recent topic in computer graphics that draws the attention of the research community because of its significance to virtual/augmented reality and perception. Light field technology makes it possible to visualize images on a 3D display, thanks to the multiple views providing angular information. A light field can be acquired by capturing a scene with a camera or by using physically based rendering techniques such as path tracing. Since a light field is composed of multiple views, rendering all of them in real time must satisfy a timing constraint: successive light fields must be rendered at 50–90 Hz to match the human critical flicker frequency (CFF), the minimum frequency at which the eye perceives neither a sporadic sequence of consecutive images nor flickering light intensity. Accurately simulating the transport of light for even a single view is computationally expensive on today's GPUs; achieving real-time rendering of multiple views is therefore a major computational hurdle. To tackle this problem, one might use multiple computers or multiple graphics processors.

The research topics Erwan is working on relate to parallel rendering, multiview rendering optimization, and adaptive tuning for rendering. The major gap in the research literature is the lack of work connecting light field path tracing with multi-GPU nodes: because the light field computation is not properly mapped to the computing platform, performance is lost, owing to the absence of an adequate model describing the relations between views and GPUs.
In his project, Erwan proposes to scale the multiview rendering workload to multiple GPUs in a single node through a configurable abstract multiview layer. This layer provides enough degrees of freedom to structure the workload mapping algorithm, to optimize the dependencies between views and GPUs, and to configure rendering parameters for command buffers, in order to get closer to the target CFF interval. This doctoral project paves the way for innovative research in scalable real-time stereo and light field rendering technologies, opening new research horizons.
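To illustrate the kind of constraint described above, the following minimal sketch distributes light field views across GPUs and checks whether each GPU's share fits within one CFF period. All names, the round-robin mapping, and the timing numbers are illustrative assumptions for this announcement, not Erwan's actual method or implementation.

```python
# Hypothetical sketch: mapping light field views to GPUs and checking
# the per-frame budget against the critical flicker frequency (CFF).
# All names and numbers are illustrative, not from the actual project.

def assign_views(num_views: int, num_gpus: int) -> list[list[int]]:
    """Round-robin mapping of view indices to GPUs (a simple static
    mapping; a configurable multiview layer could allow richer ones)."""
    buckets = [[] for _ in range(num_gpus)]
    for view in range(num_views):
        buckets[view % num_gpus].append(view)
    return buckets

def meets_cff(views_per_gpu: int, ms_per_view: float,
              cff_hz: float = 50.0) -> bool:
    """True if one GPU can render its share of views within one CFF period."""
    frame_budget_ms = 1000.0 / cff_hz  # e.g. 20 ms at 50 Hz
    return views_per_gpu * ms_per_view <= frame_budget_ms

buckets = assign_views(num_views=96, num_gpus=4)
print(len(buckets[0]))                  # 24 views per GPU
print(meets_cff(24, ms_per_view=0.5))   # 12 ms <= 20 ms -> True
```

Even this toy model shows why the mapping matters: halving the per-view cost or doubling the GPU count directly determines whether the 50–90 Hz CFF target is reachable.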

The page was updated 11/20/2023