Computer Vision News - August 2020

Deep Learning Tool

Unwrap your deep learning with fast.AI! by Ioannis Valasakis, King’s College London (@wizofe)

Welcome back! This article is part of a series introducing the fast.AI library; this month we begin to explore data augmentation for training deep learning networks. A common problem when training deep learning (medical) networks is the limited amount of data. Techniques such as data augmentation have positive results only if the augmentation enhances the current dataset and makes sense in its context.

Invariance is the ability of convolutional neural networks (CNNs) to classify objects even when they are placed in different orientations. Data augmentation is a way of creating new ‘data’ with different orientations: it generates ‘more data’ from a limited dataset and, secondly, it prevents overfitting. Most deep learning libraries apply augmentations one step at a time, whereas “fastai2 utilizes methods that combine various augmentation parameters to reduce the number of computations and reduce the number of lossy operations”.

Transforms are composed using Pipelines. A Pipeline is created by passing it a list of Transforms, which it then composes, sorting them by their internal order attribute (discussed further below) with a default order of 0. This month’s article will explore how those transforms are conducted and their effect on image quality and efficiency.

In the following example the PET dataset is used, just for a change from medical imaging datasets. All the techniques are equally applicable to many other kinds of training data, of course! Let’s include the required libraries and some useful plotting functions (later we show how to plot with in-line code for demonstrative purposes).
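To make the order-sorting idea concrete, here is a minimal pure-Python sketch of how a fastai-style Pipeline composes a list of Transforms by their `order` attribute. This is a toy model of the concept only, not fastai’s actual implementation; the class and transform names are illustrative.

```python
# Illustrative sketch (NOT fastai's code): a Pipeline that sorts its
# transforms by an `order` attribute (default 0) before composing them.

class Transform:
    order = 0  # default order, as described in the article

    def __call__(self, x):
        return self.encodes(x)

class AddOne(Transform):
    order = 0  # runs first
    def encodes(self, x):
        return x + 1

class Double(Transform):
    order = 1  # runs after all order-0 transforms
    def encodes(self, x):
        return x * 2

class Pipeline:
    def __init__(self, tfms):
        # sort by the internal `order` attribute, regardless of list order
        self.tfms = sorted(tfms, key=lambda t: t.order)

    def __call__(self, x):
        for t in self.tfms:
            x = t(x)
        return x

pipe = Pipeline([Double(), AddOne()])  # passed "out of order" on purpose
print(pipe(3))  # AddOne first (order 0), then Double: (3 + 1) * 2 = 8
```

Even though `Double` is listed first, the Pipeline reorders it after `AddOne` because of its higher `order` value, which is exactly the sorting behaviour the article describes.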

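The claim that combining augmentation parameters reduces computation and lossy operations can be sketched with plain affine geometry: applying a rotation warp and then a scaling warp to an image means two interpolation passes, but multiplying the two transform matrices first means a single warp. The pure-Python 2x2 matrices below are an assumption for illustration only; real libraries work with 3x3 homogeneous matrices on GPU batches.

```python
# Sketch of why composing augmentations before applying them is cheaper
# and less lossy: one combined matrix = one interpolation pass.
import math

def matmul2(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0],
             a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0],
             a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def rotation(deg):
    r = math.radians(deg)
    return [[math.cos(r), -math.sin(r)],
            [math.sin(r),  math.cos(r)]]

def scale(s):
    return [[s, 0.0], [0.0, s]]

def apply(m, p):
    """Apply a 2x2 matrix to a point (x, y)."""
    return (m[0][0]*p[0] + m[0][1]*p[1],
            m[1][0]*p[0] + m[1][1]*p[1])

# "Rotate 90 degrees, then scale 2x" collapsed into ONE matrix:
combined = matmul2(scale(2.0), rotation(90))

print(apply(combined, (1.0, 0.0)))  # approximately (0.0, 2.0)
```

For point coordinates the two routes give identical results, but for pixel grids each separate warp resamples (and degrades) the image, so collapsing the chain into one matrix, as fastai2 does, keeps a single lossy resampling step.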