Dr. Simone Milani from Italy visits STC

Thu 17 May 2018 16:10

This week Dr. Simone Milani from the University of Padova, Italy, has been our guest lecturer through the Erasmus+ programme.

Simone Milani, Assistant Professor, University of Padova

In addition to a number of lectures, Dr. Milani has held two seminars for Master's and Ph.D. students and researchers: Recent Advances in Augmented and Virtual Reality, and Reconstruction of 3D Models from Heterogeneous Sources. He is an Assistant Professor at the Department of Information Engineering, teaching Source Coding, Computer Vision and 3D Graphics, 3D Augmented Reality, and Digital Forensics. Thank you, Simone, for your week here at Mid Sweden University!

About Dr. Simone Milani

Simone Milani received his Laurea Degree (a five-year course) from the University of Padova, Italy, in December 2002. In March 2007 he received his Ph.D. in Electronic and Telecommunications Engineering from the same institution. From January 2007 until April 2014, he was a postdoctoral researcher at the University of Padova, the University of Udine, and Politecnico di Milano. Since May 2014 he has been an Assistant Professor at the Department of Information Engineering of the University of Padova, teaching Source Coding, Computer Vision and 3D Graphics, 3D Augmented Reality, and Digital Forensics.

His main research topics are digital signal processing, image and video compression, 3D acquisition and coding, and virtual and augmented reality applications. He is also active in the fields of multimedia forensics and security.
He is an IEEE member of the Information Theory and Signal Processing Societies and has been a reviewer for many international magazines and conferences. He has served as an international scientific expert for the Agence Nationale de la Recherche (ANR). Since 2018, he has been a member of the IEEE SPS Regional Committee (Region 8).
Home page: www.dei.unipd.it/~sim1mil

Seminar abstract: Reconstruction of 3D Models from Heterogeneous Sources

3D data acquisition has become ubiquitous, fast, and relatively cheap over the last decade. This evolution has been possible thanks to the development of sensing strategies, the falling cost of platforms and processing hardware, and the availability of innovative reconstruction algorithms that merge multiple heterogeneous data. Among these, multi-view reconstruction strategies (spanning from standard single-camera Structure-from-Motion to multiple-camera Simultaneous Localization and Mapping) have been applied in different scenarios (camera arrays, patrols or swarms of unmanned aerial vehicles, etc.).

In recent years, these solutions have been applied to large sets of heterogeneous web-harvested images shared by different users. Meanwhile, the need to generate detailed and accurate 3D models (to enable correct semantic classification as well as precise localization, e.g., in automotive applications) has led researchers to investigate new sensor fusion strategies that combine different kinds of sensed data (standard images, depth maps, point clouds, thermal data, etc.) into an enriched representation of the surrounding reality.

Seminar abstract: When Virtual Meets Real: Recent Advances in Augmented and Virtual Reality

Recent years have witnessed the widespread growth of Augmented Reality (AR) applications in different ICT fields, ranging from entertainment and industrial design to health care. This diffusion has been fostered by the availability of wearable AR devices, which integrate effective 3D acquisition sensors with engaging displays and highly accurate classification algorithms. These have enabled a seamless integration of virtual and real objects, allowing the user to relate to them in a natural way and access a vast set of additional information.

Indeed, recent augmented reality systems have come to be referred to as "mixed" reality systems. The talk will present some of the latest technologies employed in portable and wearable AR devices. Specifically, the focus will be on 1) modeling the surrounding reality; 2) enabling accurate human-computer interaction via gesture and voice interfaces; and 3) providing reliable visual and aural feedback to the user.


The page was updated 5/18/2018