MICCAI 2021 Daily – Wednesday
Poster Presentation

Whilst it is challenging to evaluate counterfactual images because there is no ground truth to compare them to, these images could be used as input to another machine learning model or as decision support to improve a treatment plan. However, there are some theoretical problems that need to be worked out.

“One issue is the identifiability of the causal effect in our model,” Jacob tells us. “Because we use a variational autoencoder with latent variables, the cause-effect relationships are not identifiable, so alternatives should be explored to avoid that issue, which might improve the quality of the counterfactual images.”

Although this work has not been validated for clinical use yet, Jacob says that understanding causality in medical images will ultimately lead to improved patient outcomes. “Work like this using causality and causal inference is the next step the medical imaging community can take to generate more clinically useful models.”

To learn more about Jacob’s work [Paper ID 1013], you are invited to visit his poster during Session We-S3 Computer Aided Diagnosis today at 16:00–17:30 UTC.

[Figure] Diagram of the (conditional) variational autoencoder used to generate images conditioned on brain volume, ventricle volume, lesion volume, and slice number, as shown in the graphical model.
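The conditional VAE described in the figure caption can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the dimensions, the randomly initialised weights, and the normalised covariate values are all assumptions. The key idea it shows is that the encoder and decoder both receive the covariates (brain volume, ventricle volume, lesion volume, slice number), so a counterfactual image can be generated by re-decoding the same latent code under a modified covariate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the real image and latent sizes differ).
IMG_DIM, COND_DIM, LATENT_DIM = 64, 4, 8  # flattened slice, 4 covariates, latent size

# Randomly initialised linear maps stand in for trained networks.
W_enc = rng.normal(0, 0.1, (IMG_DIM + COND_DIM, 2 * LATENT_DIM))
W_dec = rng.normal(0, 0.1, (LATENT_DIM + COND_DIM, IMG_DIM))

def encode(x, c):
    """Encoder q(z | x, c): image plus covariates -> Gaussian parameters."""
    h = np.concatenate([x, c]) @ W_enc
    return h[:LATENT_DIM], h[LATENT_DIM:]  # mu, log-variance

def reparameterize(mu, logvar):
    """Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z, c):
    """Decoder p(x | z, c): latent code plus covariates -> image."""
    return np.concatenate([z, c]) @ W_dec

# Covariates: brain volume, ventricle volume, lesion volume, slice number
# (normalised toy values).
c_factual = np.array([0.8, 0.3, 0.5, 0.4])
x = rng.normal(size=IMG_DIM)  # stand-in for an observed brain slice

mu, logvar = encode(x, c_factual)
z = reparameterize(mu, logvar)

# Counterfactual query: keep the latent code fixed, change one covariate
# (here, a reduced lesion volume), and decode again.
c_counterfactual = c_factual.copy()
c_counterfactual[2] = 0.1
x_cf = decode(z, c_counterfactual)
print(x_cf.shape)  # (64,)
```

In a trained model the latent code captures subject-specific appearance, so decoding it under altered covariates yields the "what if" image the article discusses; the identifiability caveat Jacob raises applies exactly to this step, since the latent variables are unobserved.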