Computer Vision News - November 2021

Natasha Jaques

Understanding human affective signals: Learning from humans' affective cues requires recognizing them first. Throughout my PhD, I developed machine learning methods for automatically interpreting human data and recognizing affective and social signals such as stress, happiness, and conversational rapport (e.g. [6, 7, 8, 9]). We developed novel techniques for analyzing physiological sensor data such as Electrodermal Activity (EDA), accelerometry, and temperature [10, 11, 12], and built a popular open-source tool which deployed them (https://eda-explorer.media.mit.edu/). However, I noticed that the accuracy of our affect detection models appeared to be severely limited by the degree of inter-individual variability in emotion and wellbeing. What makes one person stressed can have the exact opposite effect on someone else. Therefore, my collaborators and I designed methods for personalizing machine learning models using multi-task learning, enabling the predictions for one person to gain statistical strength from the data of others, to the degree that it is relevant. In a series of papers [13, 14, 15], we showed that personalization via multi-task learning achieves large performance gains and state-of-the-art accuracy in predicting outcomes like happiness, stress, and health (see Figure 2).

Figure 1: Samples of cat, crab, and rhinoceros drawings produced by (a) the original Sketch RNN, and (b) our model trained on a small amount of facial expression feedback.
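To make the personalization idea concrete, here is a minimal sketch of multi-task learning with hard parameter sharing: each person is treated as a separate task with their own output head, while a shared representation layer lets every head borrow statistical strength from all users' data. The synthetic data, network sizes, and training loop are illustrative assumptions, not the actual models from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_per_person, n_features, n_hidden = 5, 40, 8, 4

# Synthetic data: a structure shared across people, plus a
# person-specific response (mimicking inter-individual variability).
W_true = rng.normal(size=(n_features, n_hidden))
data = []
for p in range(n_people):
    X = rng.normal(size=(n_per_person, n_features))
    true_head = rng.normal(size=n_hidden)      # person-specific mapping
    y = np.tanh(X @ W_true) @ true_head + 0.1 * rng.normal(size=n_per_person)
    data.append((X, y))

# Model: one shared layer W (all tasks) + one linear head per person.
W = rng.normal(size=(n_features, n_hidden)) * 0.1
heads = [np.zeros(n_hidden) for _ in range(n_people)]
lr = 0.05

def mean_mse():
    return float(np.mean([np.mean((np.tanh(X @ W) @ heads[p] - y) ** 2)
                          for p, (X, y) in enumerate(data)]))

mse_before = mean_mse()
for step in range(500):
    for p, (X, y) in enumerate(data):
        H = np.tanh(X @ W)                     # shared representation
        err = H @ heads[p] - y
        # Gradient for this person's head, and for the shared layer.
        g_head = H.T @ err / len(y)
        g_W = X.T @ (np.outer(err, heads[p]) * (1 - H ** 2)) / len(y)
        heads[p] -= lr * g_head
        W -= lr * g_W

mse_after = mean_mse()
print(f"MSE before: {mse_before:.3f}, after: {mse_after:.3f}")
```

Because the shared layer is updated on every person's data while each head sees only its own, a person with little data still benefits from the representation learned across the whole group, which is the mechanism behind the performance gains described above.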
