
Step II: Transfer Modeling

Given a source task s and a target task t, where s ∈ S and t ∈ T, the transfer network learns a transfer function D_{s→t}, parameterized by θ_{s→t}, that minimizes the target loss L_t:

D_{s\to t} = \arg\min_{\theta}\; \mathbb{E}_{I}\big[\, \mathcal{L}_t\big( D_{\theta}(E_s(I)),\, f_t(I) \big) \big]

where E_s(I) is the representation produced by the encoder of s for image I, and f_t(I) is the ground truth of t for image I (a minimal training sketch of this step is shown after this section).

Step III: Ordinal Normalization using Analytic Hierarchy Process (AHP)

The affinity graph computed in Step II is then normalized. Since the tasks are completely different, their raw losses live in different ranges and units, so normalization is a methodological problem in its own right. You can read the authors’ detailed solution in the paper (an illustrative AHP-style sketch also follows below).

Step IV: Computing the Global Taxonomy

Given the normalized task affinity matrix, the task dictionary is defined as V = T ∪ S, where T is the set of tasks that need to be solved (targets) and S is the set of tasks that can be trained (sources):
● Target-only -- T − T ∩ S are the tasks that need to be solved but cannot be trained.
● Source-only -- S − T ∩ S are the tasks available to be used if they improve performance on T, but that do not need to be solved.
● T ∩ S are the tasks that need to be solved and can also be trained; thus, they can be used as sources.
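To make Step II concrete, here is a minimal PyTorch-style sketch of transfer modeling: a small readout network D_{s→t} is trained on top of a frozen source encoder to minimize the target loss. This is not the authors' code; the names source_encoder, target_loss and loader, the readout architecture, and the hyperparameters are all placeholder assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of Step II (transfer modeling), NOT the authors' implementation.
# `source_encoder` is assumed to be a frozen, pre-trained encoder for task s;
# `target_loss` stands in for the task-specific loss L_t of target task t.

class TransferNetwork(nn.Module):
    """Small readout network D_{s->t} mapping source features to target outputs."""
    def __init__(self, feat_dim: int, out_dim: int):
        super().__init__()
        self.readout = nn.Sequential(
            nn.Linear(feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, features):
        return self.readout(features)


def train_transfer(source_encoder, transfer_net, target_loss, loader, epochs=3):
    """Minimize E_I[ L_t( D_theta(E_s(I)), f_t(I) ) ] with the encoder frozen."""
    source_encoder.eval()                        # E_s stays fixed
    for p in source_encoder.parameters():
        p.requires_grad_(False)

    optimizer = torch.optim.Adam(transfer_net.parameters(), lr=1e-4)
    for _ in range(epochs):
        for image, target in loader:             # `target` plays the role of f_t(I)
            with torch.no_grad():
                feats = source_encoder(image)    # E_s(I)
            pred = transfer_net(feats)           # D_theta(E_s(I))
            loss = target_loss(pred, target)     # L_t
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return transfer_net
```

The design choice mirrored here is that only the transfer function's parameters θ_{s→t} are updated, so the resulting loss measures how useful the frozen source representation is for the target task.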
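For Step III, one way to picture AHP-style ordinal normalization is to replace raw (incomparable) loss values with pairwise win rates between candidate sources and then take the principal eigenvector of the resulting comparison matrix. The NumPy sketch below is an illustrative simplification under that assumption, not the paper's exact procedure; the toy loss matrix is made up.

```python
import numpy as np

# Illustrative sketch of AHP-style ordinal normalization for a single target task.
# `losses[i, k]` is assumed to be the transfer loss of candidate source i on test
# image k. This is a simplification, not the paper's exact code.

def ahp_normalize(losses: np.ndarray) -> np.ndarray:
    # wins[i, j]: fraction of images on which source i beats source j (lower loss).
    wins = (losses[:, None, :] < losses[None, :, :]).mean(axis=2)

    # Pairwise comparison matrix in AHP form: ratio of win rates, 1 on the diagonal.
    eps = 1e-9
    comparison = wins / (wins.T + eps)
    np.fill_diagonal(comparison, 1.0)

    # The principal eigenvector of the comparison matrix gives comparable,
    # ordinal (rank-based) affinities of all sources for this target.
    eigvals, eigvecs = np.linalg.eig(comparison)
    principal = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
    return principal / principal.sum()

# Toy example: 3 candidate sources evaluated on 5 test images.
toy_losses = np.array([[0.2, 0.3, 0.25, 0.4, 0.1],
                       [0.5, 0.6, 0.55, 0.3, 0.7],
                       [0.4, 0.2, 0.45, 0.5, 0.3]])
print(ahp_normalize(toy_losses))   # higher value = better source for this target
```

Because only the per-image ordering of losses matters, the output is insensitive to the scale and units of each task's loss, which is exactly what the normalization step needs.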
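The three groups listed in Step IV are plain set operations over the task dictionary; the tiny Python sketch below, with invented task names, makes the partition explicit.

```python
# Tiny sketch of the Step IV task partition; task names are made up for illustration.
targets = {"depth", "normals", "point_matching"}   # T: tasks that need to be solved
sources = {"depth", "normals", "autoencoding"}     # S: tasks that can be trained

both        = targets & sources   # T ∩ S: solvable targets that can also serve as sources
target_only = targets - both      # must be solved but cannot be trained
source_only = sources - both      # trainable helpers that need not be solved

print(sorted(both), sorted(target_only), sorted(source_only))
```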
