The maturity of registration methods, in combination with the increasing processing power of computers, has made multi-atlas segmentation methods practical. The problem of merging the deformed label maps from the atlases is known as label fusion. Even though label fusion has been well studied in intramodality scenarios, it remains relatively unexplored when the target data are multimodal or when their modality differs from that of the atlases. In this paper, we review the literature on label fusion methods and present an extension of our previously published algorithm to the general case in which the target data are multimodal. The method is based on a generative model that exploits the consistency of voxel intensities within the target scan, given the current estimate of the segmentation. Using brain MRI scans acquired with a multiecho FLASH sequence, we compare the method with majority voting, statistical-atlas-based segmentation, the popular package FreeSurfer, and an adaptive local multi-atlas segmentation method. The results show that our approach produces highly accurate segmentations (Dice 86.3% across 22 brain structures of interest), outperforming the competing methods.
Lecture Notes in Computer Science: Second International Workshop, MBIA 2012, held in conjunction with MICCAI 2012, Nice, France, October 1-5, 2012. Proceedings, 2012
Main Research Area: Lecture Notes in Computer Science
15th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2012): Workshop on Multimodal Brain Image Analysis (MBIA)
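The majority-voting baseline mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes each atlas has already been registered to the target and contributes an integer label map of the same shape, and the fused label at each voxel is simply the most frequent one across atlases.

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse a list of same-shape integer label maps by per-voxel majority vote.

    Ties are broken in favor of the smaller label index (numpy argmax
    convention); real pipelines often use weighted or locally adaptive votes.
    """
    stacked = np.stack(label_maps, axis=0)              # (n_atlases, *image_shape)
    n_labels = int(stacked.max()) + 1
    # Count votes for each label at every voxel, then take the winner.
    votes = np.stack([(stacked == lab).sum(axis=0) for lab in range(n_labels)],
                     axis=0)                            # (n_labels, *image_shape)
    return votes.argmax(axis=0)

# Toy example: three "atlases" labeling a 2x2 image with labels {0, 1}.
atlases = [np.array([[0, 1], [1, 1]]),
           np.array([[0, 1], [0, 1]]),
           np.array([[1, 1], [0, 0]])]
fused = majority_vote(atlases)
# fused == [[0, 1], [0, 1]]
```

The generative model of the paper improves on this baseline by additionally modeling the target scan's voxel intensities, whereas majority voting uses only the propagated labels.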