Multi-modal analysis of human behaviour: 2 students

The healthcare system demands effective autonomous solutions to improve service and provide individualized care. Most of these solutions require a multidisciplinary approach that combines healthcare expertise with computational methods. This project explores wearable and non-intrusive multimodal sensor networks and evaluates their suitability for specific healthcare applications. The project aims to analyze the strengths and limitations of each sensor (learning how to represent and summarize multimodal data in a way that exploits its complementarity and redundancy), evaluate different approaches to fusing these modalities (e.g. decision-level or feature-level integration), and identify direct relations between (sub)elements of two or more modalities.
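As a rough illustration of the two fusion strategies mentioned above, the sketch below contrasts feature-level (early) fusion with decision-level (late) fusion on synthetic data standing in for two hypothetical modalities (e.g. PPG and accelerometer features). It is a minimal example using scikit-learn, not the project's actual pipeline.

```python
# Minimal sketch (synthetic data, hypothetical modalities) of feature-level
# vs decision-level fusion for a binary behaviour label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, size=n)                       # synthetic behaviour label
ppg = y[:, None] * 0.8 + rng.normal(size=(n, 16))    # stand-in PPG features
acc = y[:, None] * 0.5 + rng.normal(size=(n, 8))     # stand-in accelerometer features

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.25, random_state=0)

# Feature-level (early) fusion: concatenate modality features, train one model.
X_early = np.hstack([ppg, acc])
early = LogisticRegression(max_iter=1000).fit(X_early[idx_tr], y[idx_tr])
acc_early = accuracy_score(y[idx_te], early.predict(X_early[idx_te]))

# Decision-level (late) fusion: one model per modality, average the probabilities.
m_ppg = LogisticRegression(max_iter=1000).fit(ppg[idx_tr], y[idx_tr])
m_acc = LogisticRegression(max_iter=1000).fit(acc[idx_tr], y[idx_tr])
p_late = (m_ppg.predict_proba(ppg[idx_te])[:, 1]
          + m_acc.predict_proba(acc[idx_te])[:, 1]) / 2
acc_late = accuracy_score(y[idx_te], (p_late >= 0.5).astype(int))

print(f"feature-level fusion accuracy: {acc_early:.2f}")
print(f"decision-level fusion accuracy: {acc_late:.2f}")
```

Which strategy works better depends on how complementary and how noisy the modalities are, which is exactly the trade-off the project sets out to study.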

Multimodal data: visible-light images (body and faces), infrared cameras, temperature, accelerometers, electroencephalography (EEG), arterial oxygen saturation (SpO2), photoplethysmography (PPG), electrodermal activity (EDA), polysomnography (PSG).

Datasets: public and in-house datasets (e.g. PhysioNet, MASS, SSC, DREAMER, Data61HR)

Contact: Ahmedt Aristizabal, David (Data61, Black Mountain)
