Half‑time seminar with Meng Jiang

Tue 28 May 08.00–10.00
C312 or Zoom

Welcome to a half-time seminar with Meng Jiang.

Woman in a blue shirt, wearing headphones, leaning over a setup of microphones.

Meng Jiang is a PhD student in electronics working with sound source localization and classification. Her field of interest is designing tailored mixed methodologies to study “what is sounding where and when,” with a focus on real-world experimentation, measurement, and application.

Title: Array Systems for Sound Source Localization

Respondent: Meng Jiang

Opponent: Assistant Prof Archontis Politis, Faculty of Information Technology and Communication Sciences (ITC), Computing Sciences Unit, Tampere University, Finland.

Supervisors: Associate Prof Göran Thungström, Prof Mårten Sjöström, and Dr Chibuzo Nnonyelu, STC Research Centre, Mid Sweden University, Sweden.

The half-time seminar will be held in English. It is possible to attend the seminar in room C312, Campus Sundsvall, or via Zoom.

Attend via Zoom



This half-time seminar report presents the progress and future direction of research on sound source localization using microphone arrays. The first part summarizes results from three studies comprising five projects, addressing diverse aspects: a novel approach that uses ambient noise to localize silent objects, a comparison of the effectiveness of omnidirectional and cardioid microphones for indoor localization, the application of a modified Array Manifold Interpolation method for computational efficiency, array-assisted heart murmur diagnosis in cardiac auscultation, and the implications of improved measurement quality for the performance of pre-trained neural networks in sound classification. The second part proposes the continuation of this PhD study toward more accurate and efficient sound source localization, focusing on integrating MUSIC beamforming and Channel State Information (CSI) enhancements with machine learning to develop a real-time, machine-learning-enhanced fingerprinting method. These synergies between hardware, signal-processing algorithms, and machine learning offer promising directions for applications in room acoustics and beyond, advancing the field toward more precise and adaptive sound localization.
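For readers unfamiliar with the MUSIC method mentioned above, the following is a minimal sketch of MUSIC direction-of-arrival estimation for a uniform linear array. All parameters here (eight microphones, half-wavelength spacing, a single source at 30 degrees) are illustrative assumptions for the sketch, not details of the seminar's studies.

```python
import numpy as np

# Minimal MUSIC direction-of-arrival sketch for a uniform linear array (ULA).
# The scenario (8 mics, half-wavelength spacing, one source at 30 deg) is a
# hypothetical example, not taken from the research described above.
rng = np.random.default_rng(0)
M, N = 8, 200            # microphones, snapshots
d = 0.5                  # element spacing in wavelengths
theta_true = 30.0        # assumed source direction (degrees)

def steering(theta_deg):
    # ULA array response toward angle theta_deg.
    phase = 2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta_deg))
    return np.exp(phase)

# Simulated narrowband snapshots: one source plus white noise.
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.outer(steering(theta_true), s)
X += 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                 # sample spatial covariance
w, V = np.linalg.eigh(R)               # eigenvalues in ascending order
En = V[:, :-1]                         # noise subspace (one source assumed)

angles = np.arange(-90.0, 90.5, 0.5)
spectrum = []
for th in angles:
    a = steering(th)
    # MUSIC pseudospectrum: large where a(theta) is orthogonal to the
    # noise subspace, i.e. at the true source direction.
    spectrum.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
est = angles[int(np.argmax(spectrum))]
print(f"estimated DOA: {est:.1f} deg")   # close to 30.0
```

The key idea is that the steering vector of a true source lies in the signal subspace of the covariance matrix, so projecting candidate steering vectors onto the noise subspace yields a sharp peak at the source direction.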

The page was updated 5/8/2024