Computer Vision News - May 2020

Illumination Estimation Challenge

…problems, the data set has to be much larger: up to 5,000 images or more. "Secondly, we will have not only the easier version of the problem with one illuminant, but we'll have two illuminants. For example, we'll have the sun and a blue sky that illuminates our image."

Simon, who spends most of his time working in industry as a machine learning researcher and programmer, agrees that the bigger data set is very important, and that two illuminants in particular is more practical, because a scene with only one illuminant is very rare.

Egor adds that a larger data set will help algorithm developers find improvements they wouldn't otherwise see. He says they are also working on image annotation. "For each image, we need to answer questions like: is it an image from nature or the city? Is it indoors or outdoors? These interesting details and others are not only useful for learning, but for evaluating the quality of algorithms too. One algorithm could be good enough for a usual outdoor case, while a second one could be good enough for indoor cases. This is an important part." He thinks future challenges may consider applications under different categories, such as the best indoor algorithm, best training algorithm, and best night-time algorithm.

What was the team's winning formula last year? Alex thinks it was their method of parameter tuning, which corresponded to the metric they wanted to achieve. "We had a good algorithm for improving the median answer," he says. "That was the goal of the challenge. Also, we had many experiments. We tried different neural network architectures and loss functions, but it was not as big an improvement." Simon tells us that rather than using a neural network with a specific architecture as others did, they used a simple convolutional neural network.
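The "median answer" Alex mentions refers to how illumination estimation is typically scored: the angular error between the estimated and ground-truth illuminant RGB vectors, with the median taken over all test images. The sketch below illustrates that kind of metric; the exact scoring formula used by the challenge may differ.

```python
import numpy as np

def angular_error(estimate, truth):
    """Angle in degrees between an estimated and a ground-truth illuminant (RGB)."""
    e = np.asarray(estimate, dtype=float)
    t = np.asarray(truth, dtype=float)
    cos = np.dot(e, t) / (np.linalg.norm(e) * np.linalg.norm(t))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def median_angular_error(estimates, truths):
    """Median angular error over a set of images -- the statistic the team tuned for."""
    return float(np.median([angular_error(e, t) for e, t in zip(estimates, truths)]))
```

Because the angular error is scale-invariant, only the direction of the illuminant vector matters, which is why estimates are often normalized before comparison.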
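A simple convolutional estimator of the kind Simon describes might look roughly like the following. This is a hypothetical sketch in PyTorch; the layer sizes and structure are assumptions for illustration, not the team's actual network.

```python
import torch
import torch.nn as nn

class SimpleIlluminantCNN(nn.Module):
    """Minimal sketch: a few conv layers make local illuminant 'votes',
    which are averaged globally into one RGB illuminant estimate.
    This exploits the locality bias of convolutions the text describes."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=1),  # per-location 3-channel vote
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # aggregate local estimates

    def forward(self, x):
        v = self.pool(self.features(x)).flatten(1)  # shape (N, 3)
        # Normalize to unit length: only the illuminant's direction matters.
        return v / v.norm(dim=1, keepdim=True)
```

A network like this can be trained end-to-end against an angular-error-style loss on image/illuminant pairs; with only a small data set, the limited receptive field acts as a useful inductive bias rather than a limitation.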
He explains: "I understood that the convolutional neural network has a very strong locality bias, which would help us in this local problem. Having such a small data set, it wouldn't understand everything, but we could try using the simple convolutional architecture and hope that this locality bias could be used as a baseline. We started out with that, and then my hope was that we could beat that baseline using some more sophisticated, more interesting architecture, or some additional data. However, nothing helped."

Do the team have any advice for people planning to participate in this year's contest? "Have many experiments," Alex suggests. Egor hopes it will be fun and that scientists will be keen to participate. As for what he wants to see, he draws