Oral Presentation DAILY

Seeing Motion in the Dark

Qifeng Chen is an assistant professor at HKUST. He speaks to us ahead of his oral today, which he is presenting on behalf of the main author, Chen Chen, who was unfortunately unable to attend due to visa issues.

The work is about generating high-quality video in a low-light environment. Their model is trained on low-light raw sensor data from static environments, but it generalizes well to dynamic environments where some objects may be moving. Qifeng says that the model produces much better video than popular cameras such as Sony DSLRs and those on iPhones.

The key idea in their approach is to start from the raw video sensor data. The raw sensor data is not the RGB images usually used in computer vision tasks; it is what the camera sensor actually sees, before the camera processes it into RGB images or video.

Qifeng explains: “We start with the raw video data and then train a convolutional neural network to reconstruct a high-quality video out of it. We created a dataset specifically for this task with ground truth, and our model can be generalized to generate a nice video. The algorithmic part of this project is that we are proposing a new type of loss. The loss we use in the model is that we randomly pick two frames and we want to make sure these two frames will be consistent. We also make sure all the frames will be consistent with the ground truth, so we can actually achieve multi-frame consistency in our generated video.”

“In the real world there are multiple applications for night-time computer vision tasks.”
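The loss Qifeng describes — sampling two frames, penalizing disagreement between them, and penalizing disagreement with the ground truth — can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the function name, the choice of an L1 penalty, and the weighting factor `alpha` are all assumptions.

```python
import numpy as np

def multi_frame_consistency_loss(out_a, out_b, gt, alpha=1.0):
    """Hypothetical sketch of a multi-frame consistency loss.

    out_a, out_b: two randomly sampled output frames from the network;
                  since the training scenes are static, they should match.
    gt:           the ground-truth frame for the scene.
    alpha:        assumed weight balancing the two terms.
    """
    # Self-consistency: the two sampled outputs should agree with each other.
    self_consistency = np.abs(out_a - out_b).mean()
    # Fidelity: each sampled output should also agree with the ground truth.
    fidelity = np.abs(out_a - gt).mean() + np.abs(out_b - gt).mean()
    return fidelity + alpha * self_consistency
```

Because the training data comes from static scenes, every frame shares the same ground truth, so both terms can be computed from any random pair of frames.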