
For comparison, I implemented a simple student-teacher approach using two equally sized models. Network A is trained on the labelled images, and the network A checkpoint with the highest F-score then generates the pseudo labels used to train network B (a minimal code sketch of this self-training loop is given at the end of this section). Something I found really useful while organising the code was creating block diagrams and flowcharts, and I would recommend this to everyone if it is not already part of your workflow.

Living in London, I want to end my article a bit off-topic with some recommendations, in case you get the chance to live in this beautiful city! I can thoroughly recommend musicals like “Back to the Future”, getting street food in Camden Market, walking along Regent's Canal or through Hyde Park, and watching out for free public events like open-air cinemas and concerts.

During my time here, I was able to meet up with Chen Chen, whom I had met online at MICCAI 2020. I really enjoyed learning about her work in medical image analysis at ICL and sharing my progress on my vessel segmentation project. Last but not least, I would like to thank Sophia Bano, Francisco Vasconcelos, and the rest of the WEISS team for supporting and welcoming me!

Self-training network: (1) Initial training on labelled images. (2) Generate pseudo labels for unlabelled images. (3) Re-train on the combined dataset.
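The three steps in the caption can be sketched roughly as follows, assuming a PyTorch setup. The tiny model, the random stand-in data, and names such as make_model and train are illustrative placeholders rather than the project's actual code, and the F-score-based checkpoint selection is only indicated in a comment.

```python
# Minimal sketch of the self-training / student-teacher scheme:
# (1) train the teacher (network A) on labelled images,
# (2) let the teacher generate pseudo labels for unlabelled images,
# (3) re-train the student (network B) on the combined dataset.
import torch
import torch.nn as nn

def make_model():
    # Toy stand-in for the equally sized segmentation networks.
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))

def train(model, data, epochs=5):
    loss_fn = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.train()
    for _ in range(epochs):
        for images, masks in data:
            opt.zero_grad()
            loss_fn(model(images), masks).backward()
            opt.step()

# Random tensors as placeholders for labelled / unlabelled vessel images.
labelled = [(torch.randn(4, 1, 64, 64),
             torch.randint(0, 2, (4, 1, 64, 64)).float()) for _ in range(8)]
unlabelled_images = [torch.randn(4, 1, 64, 64) for _ in range(8)]

# (1) Teacher (network A) trained on labelled images only.
# In the real pipeline, the checkpoint with the highest F-score on a
# validation split would be kept here; omitted in this sketch.
teacher = make_model()
train(teacher, labelled)

# (2) Teacher generates hard pseudo labels for the unlabelled images.
teacher.eval()
with torch.no_grad():
    pseudo = [(imgs, (torch.sigmoid(teacher(imgs)) > 0.5).float())
              for imgs in unlabelled_images]

# (3) Student (network B) re-trained on labelled + pseudo-labelled data.
student = make_model()
train(student, labelled + pseudo)
```

In this sketch the pseudo labels are thresholded teacher predictions; the key design choice from the article is that the teacher and student are equally sized and that only the best teacher checkpoint (by F-score) is used to label the unlabelled images.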
