ISBI Daily - Friday

“…large fluctuations, because we are using heavy-tailed non-Gaussian distributions, or more specifically the generalised Gaussian distribution (the L2 norm raised to the power p), sub-Gaussian with p equal to 0.5. We have also designed the dictionary to be subject-invariant. For this, we have used a similarity transform. Why a similarity transform? Because the functional networks that can be observed from seed-based correlation maps of the R-fMRI data are similar across subjects, but the time series are not. This indicates that the time series may be related by a similarity transform, so we have used one to make the dictionary subject-invariant as well.”

Prachi goes on to tell us that the functional networks are a parts-based representation, which indicates that the dictionary coefficients should be sparse; to enforce sparsity, they use epsilon-regularised quasi-norms. They also apply spatial regularisation, because the correlations are similar within the functional networks and different across them. This regularisation is discontinuity-adaptive, taking the form of the Huber loss function.

For the simulated data, they generate 30 timeframes from 4 dictionary atoms and add generalised Gaussian noise to simulate the BOLD signals. The R-fMRI data come from the Human Connectome Project, which provides high-quality denoised R-fMRI data; noise is added in k-space to model acquisition noise. For the simulated data, the quantitative measures are RRMSE, i.e. relative root-mean-square error, and mSSIM, i.e. mean structural similarity. For the R-fMRI data, only mSSIM is used, because ground-truth data are not available. Prachi says that, qualitatively and quantitatively, their method works better than other priors on the signal, such as the wavelet prior or the k-t FASTER prior, giving good reconstructions of the simulated data and of the correlation maps derived from the R-fMRI data.

Prachi concludes by saying: “All of these features enable us to get better results than the other priors. Removing any of these prior components, like the generalised Gaussian distribution, the sparsity through the quasi-norm, the spatial regularisation, or the subject-invariance, worsens the reconstruction, so all these features are important in the model. Further, since these prior components enable good reconstruction, our method is able to support subsampling not only in k-space but also in time (8X k-t subsampling), thereby potentially enabling 8X higher spatial resolution.”
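For readers who want a concrete picture of the similarity-transform argument, here is a small toy sketch, not from the talk itself: it builds a second subject's dictionary time series as an exact similarity transform (rotation, uniform scaling, and translation) of the first subject's, then checks via a Procrustes alignment, a standard technique we substitute here for illustration, that the two are indeed related by such a transform. The dimensions follow the simulation in the talk; everything else is assumed.

```python
import numpy as np
from scipy.spatial import procrustes

# Toy dimensions from the talk: 30 timeframes, 4 dictionary atoms.
rng = np.random.default_rng(0)
atoms_a = rng.standard_normal((30, 4))

# Build subject B's time series as a similarity transform of A's:
# an orthogonal rotation Q, a uniform scale, and a translation.
q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
atoms_b = 1.7 * atoms_a @ q + 0.2

# Procrustes alignment finds the best similarity transform between
# the two sets; a near-zero disparity confirms they are so related.
_, _, disparity = procrustes(atoms_a, atoms_b)
print(f"disparity: {disparity:.2e}")  # ~0 up to numerical precision
```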
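The talk did not spell out formulas, so the following is only a hedged sketch of what the three prior terms described above could look like, assuming a standard reading of each: a generalised Gaussian term (L2 norm raised to the power p), an epsilon-regularised quasi-norm for sparsity, and a Huber penalty for discontinuity-adaptive spatial regularisation. The function names and the eps and delta values are ours, chosen purely for illustration.

```python
import numpy as np

def generalized_gaussian_penalty(residual, p=0.5):
    # Generalised Gaussian model: the L2 norm of the residual
    # raised to the power p; p = 0.5 is the case from the talk.
    return np.linalg.norm(residual) ** p

def eps_quasi_norm(coeffs, p=0.5, eps=1e-6):
    # Epsilon-regularised quasi-norm promoting sparse dictionary
    # coefficients: sum_i (c_i^2 + eps)^(p/2).
    return np.sum((coeffs ** 2 + eps) ** (p / 2))

def huber(d, delta=1.0):
    # Huber loss on spatial differences d: quadratic for small
    # differences (within a functional network), linear for large
    # ones (across network boundaries), i.e. discontinuity-adaptive.
    small = np.abs(d) <= delta
    return np.where(small, 0.5 * d**2, delta * (np.abs(d) - 0.5 * delta))
```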
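Finally, the two quantitative measures mentioned are standard; a minimal implementation, assuming scikit-image for the SSIM computation, might read as follows.

```python
import numpy as np
from skimage.metrics import structural_similarity

def rrmse(reconstruction, ground_truth):
    # Relative root-mean-square error: error energy normalised
    # by the energy of the ground truth.
    return (np.linalg.norm(reconstruction - ground_truth)
            / np.linalg.norm(ground_truth))

def mssim(reconstruction, ground_truth):
    # Mean structural similarity; skimage's structural_similarity
    # already averages the local SSIM map over the image.
    rng = ground_truth.max() - ground_truth.min()
    return structural_similarity(reconstruction, ground_truth,
                                 data_range=rng)
```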

RkJQdWJsaXNoZXIy NTc3NzU=