Registration and Segmentation Methodology for MR Image Analysis: Application to Cardiac and Renal Images

Dwarikanath Mahapatra
Department of Electrical and Computer Engineering
A dissertation submitted in partial fulfillment of the requirements of Doctor of Philosophy
National University of Singapore, 2011

Abstract

Magnetic resonance imaging (MRI) has emerged as a reliable tool for functional analysis of internal organs such as the kidney and the heart. Because of the considerable time taken to acquire MR images, they are affected by patient motion. In addition, MR images are characterized by low spatial resolution, noise and rapidly changing intensity. Rapid intensity change is the primary challenge that MR image registration methods need to address. In this thesis, we first investigate a saliency based method for rigid registration of renal perfusion images. A neurobiology based visual saliency model is used for this purpose. Saliency acts as a contrast invariant metric, and a mutual information framework is used. The second part of our work deals with elastic registration of cardiac perfusion images. The saliency model is modified to reflect the local similarity property at every pixel. Markov random fields (MRFs) are used to integrate saliency and gradient information for elastic registration. Apart from being a contrast invariant metric, saliency also influences the smoothness of the registration field and speeds up registration by identifying pixels relevant for registration. In the final part of our work we investigate a joint registration and segmentation (JRS) method for the perfusion images. JRS is particularly important for MR images in order to fully exploit the available temporal information from the image sequence. MRFs are used to combine the mutual dependency of registration and segmentation information by formulating the data penalty and smoothness cost as functions of registration and segmentation labels. The displacement vector and segmentation class of every pixel are obtained from multi-resolution graph cut optimization. This eliminates the need for a separate segmentation step and also increases computation speed. Experimental results show that our work improves upon current techniques that solve registration and segmentation separately. Future work will involve making our proposed JRS method robust to different datasets, and investigating the possibility of using learning techniques to solve the registration and segmentation problem.

Acknowledgements

Over the course of my PhD there have been many people who have taught me the importance of perseverance and focus in research work. I want to thank my PhD supervisor, Dr Ying Sun, for her commitment. She had the patience to deal with my many mistakes and guided me very well during my research work. I also want to thank Dr Stefan Winkler, who was my PhD supervisor in my first year. His involvement was important in cultivating my interest in various topics of computer vision. I have learnt a lot from my interactions with both of my supervisors, especially in the way to conduct research. I also express my deepest gratitude to the members of my thesis committee, Dr Ashraf Kassim and Dr Sim-Heng Ong, for their input at different stages of my PhD. Completing a PhD is not possible without the support of family and friends. I want to thank my parents and brother for their unconditional moral support at all times during my graduate student life. All my friends were extremely generous with their encouragement and motivation, especially Dr Sujoy Roy.
I enjoyed my numerous discussions with Chao Li, Ruchir Srivastava, Mukesh Kumar Saini and Ajay Kumar Mishra on miscellaneous topics, including my research. As officer-in-charge of the Embedded Video Lab, Mr Jack Ng made my stay there an enjoyable one. I also want to thank Badarinath Karri for being a good friend and example. Finally, I thank the many anonymous reviewers whose comments helped to improve my research work.

Publication List

Journals

• D. Mahapatra and Y. Sun, "Integrating Segmentation Information for Improved Elastic Registration of Perfusion Images using an MRF framework," accepted in IEEE Trans. Image Processing with minor revisions.
• D. Mahapatra and Y. Sun, "MRF Based Intensity Invariant Elastic Registration of Cardiac Perfusion Images Using Saliency Information," IEEE Trans. Biomedical Engineering, 58(4), pp. 991-1000, 2011.
• D. Mahapatra and Y. Sun, "Rigid Registration of Renal Perfusion Images using a Neurobiology based Visual Saliency model," EURASIP Journal on Image and Video Processing, 2010.

Conferences

• D. Mahapatra and Y. Sun, "Joint Registration and Segmentation of Dynamic Cardiac Perfusion Images using MRFs," in Proc. Medical Image Computing and Computer Assisted Intervention (MICCAI), Beijing, September 2010, pp. 493-501.
• D. Mahapatra and Y. Sun, "An MRF Framework for Joint Registration and Segmentation of Natural and Perfusion Images," in Proc. IEEE International Conference on Image Processing (ICIP), Hong Kong, September 2010, pp. 1709-1712.
• D. Mahapatra and Y. Sun, "Non-rigid registration of Dynamic renal MR images using a Saliency based MRF model," in Proc. MICCAI, New York City, September 2008, pp. 771-779.
• D. Mahapatra and Y. Sun, "Registration of dynamic renal MR images using neurobiological model of saliency," in Proc. IEEE International Symposium on Biomedical Imaging (ISBI), Paris, May 2008, pp. 1119-1122.
• D. Mahapatra, M.K. Saini and Y. Sun, "Illumination invariant tracking in office environments using neurobiology-saliency based particle filter," in Proc. IEEE International Conference on Multimedia and Expo (ICME), Hannover, June 2008, pp. 953-956.
• D. Mahapatra and Y. Sun, "Using Saliency Features for Graphcut Segmentation of Perfusion Kidney Images," in Proc. International Conference on Biomedical Engineering (ICBME), Singapore, December 2008, pp. 639-642.
• D. Mahapatra, S. Roy and Y. Sun, "Retrieval of MR Kidney Images by Incorporating Spatial Information in Histogram of Low Level Features," in Proc. ICBME, Singapore, December 2008, pp. 661-664.
• D. Mahapatra, S. Winkler and S.C. Yen, "Motion saliency outweighs other low-level features while watching videos," in SPIE Human Vision and Electronic Imaging (HVEI) 2008, San Jose, CA.

Contents

List of Figures
List of Tables
1 Introduction
  1.1 Motivation
    1.1.1 Our Contribution
    1.1.2 Thesis Overview
2 Background
  2.1 Anatomy of Kidney and Heart
    2.1.1 Heart Anatomy
    2.1.2 Kidney Anatomy
    2.1.3 Basics of Perfusion MR Imaging
  2.2 Saliency
    2.2.1 Itti-Koch Saliency Model
      2.2.1.1 Extraction of Early Visual Features
      2.2.1.2 The Saliency Map
      2.2.1.3 Strengths and Limitations
    2.2.2 Scale-Space Maps
  2.3 Mutual Information
  2.4 Markov Random Fields
    2.4.1 Visual Labeling
    2.4.2 Markov Random Fields
    2.4.3 Gibbs Random Fields
    2.4.4 Markov-Gibbs Equivalence
    2.4.5 Bayes Labeling of MRFs
    2.4.6 Regularization
    2.4.7 Energy Function Optimization
      2.4.7.1 Graph Cuts
      2.4.7.2 α-β Swap
      2.4.7.3 α-Expansion
  2.5 Description of Datasets
    2.5.1 Kidney Data
    2.5.2 Cardiac Data
3 Rigid Registration
  3.1 Introduction
  3.2 Theory
    3.2.1 Saliency Model
      3.2.1.1 Saliency Map in 3D
    3.2.2 Rigid Registration
      3.2.2.1 Quantitative-qualitative Mutual Information
    3.2.3 Saliency based Registration
    3.2.4 Optimization
      3.2.4.1 Derivative Based Optimizer
  3.3 Experiments
    3.3.1 Registration Procedure
  3.4 Results
    3.4.1 Saliency Maps for Pre- and Post-contrast Enhanced Images
    3.4.2 Registration Functions
    3.4.3 Robustness of Registration
    3.4.4 Registration Accuracy for Real Patient Data
    3.4.5 Computation Time
  3.5 Discussion and Conclusion
4 Non-Rigid Registration
  4.1 Introduction
    4.1.1 Elastic Registration of Dynamic Contrast Enhanced Images
    4.1.2 Saliency Based Registration
  4.2 Modified Saliency Model
    4.2.1 Saliency Maps for Cardiac MRI
    4.2.2 Saliency Map in 3D
    4.2.3 Limitations of Saliency
  4.3 Method
    4.3.1 Saliency Based Non-Rigid Registration
    4.3.2 Markov Random Fields
      4.3.2.1 Data Penalty Term
      4.3.2.2 Pairwise Interaction Term
    4.3.3 Optimization Using Modified Narrow Band Graph Cuts
    4.3.4 Extension to 3D
    4.3.5 Calculation of Registration Error
  4.4 Experiments and Results
  4.5 Conclusion
5 Joint Registration and Segmentation
  5.1 Introduction
    5.1.1 Our Contribution
  5.2 Theory
    5.2.1 Joint Registration and Segmentation
      5.2.1.1 Overview of Method
    5.2.2 Markov Random Fields
      5.2.2.1 Data Penalty Term
      5.2.2.2 Pairwise Interaction Term
      5.2.2.3 Optimization using Graph Cuts
    5.2.3 Extension to 3D
  5.3 Experiments and Results
    5.3.1 Synthetic Images
      5.3.1.1 Interdependence of Registration and Segmentation
      5.3.1.2 Accuracy of Segmentation
    5.3.2 Cine Cardiac MRI
    5.3.3 Natural Images
    5.3.4 Computation Time
  5.4 Discussion
  5.5 Conclusion
6 Experimental Validations
  6.1 Saliency Based Registration
    6.1.1 Cardiac Perfusion MRI
      6.1.1.1 Effect of Saliency Based Narrow Band Graph Cuts
    6.1.2 3D Registration Results on Liver Datasets
  6.2 Joint Registration and Segmentation
    6.2.1 Cardiac Perfusion MRI
    6.2.2 Kidney Perfusion Images
    6.2.3 3D Registration Results on Liver Data
7 Conclusion and Future Work
  7.1 Conclusion
  7.2 Future Work
Bibliography

Joint Registration and Segmentation

5.3 Experiments and Results

Results are presented for cardiac, liver and kidney perfusion images. Results are also shown for relevant experiments on synthetic images having simulated deformations, and on natural images. The use of synthetic images allows us to simulate different degrees of deformation and investigate the performance of our algorithm. We show the mutual influence of registration and segmentation information and examine their robustness to the choice of mask. We then show results for cardiac perfusion, cine cardiac, kidney, liver and natural images. Our method's registration accuracy is compared with MRF based registration where the cost function includes no segmentation information and is described as

    E(f) = \sum_{s \in P} D_1(f_s) + \sum_{(s,t) \in N} V_{st}(f_s, f_t),    (5.9)

where f_s = \mathbf{x}_s = \{x_s^1, x_s^2\} and V_{st} is given by

    V_{st}(f_s, f_t) = \begin{cases} 0.002, & |f_s - f_t| \le \sqrt{2}, \\ 2, & \text{otherwise}. \end{cases}    (5.10)

This method is henceforth referred to as MRFs. Registration performance was also compared with the demons algorithm (144). The optimal parameters for demons were 100 iterations, α = 1.5, fluid regularization with σ = 12 and no diffusion regularization. The experiments for synthetic images were carried out under different degrees of added noise.
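The following sketch is not code from the thesis; it is a minimal illustration, under assumed inputs, of how the baseline MRFs energy of Eqns. (5.9)-(5.10) could be evaluated for a candidate label field on a 4-connected pixel grid. The data penalty D_1 is assumed to have been precomputed per pixel (its actual definition is given elsewhere in the thesis).

```python
import numpy as np

def pairwise_cost(f_s, f_t):
    """Eq. (5.10): a small cost when neighbouring displacement vectors differ
    by at most sqrt(2) pixels, and a large constant cost otherwise."""
    return 0.002 if np.linalg.norm(f_s - f_t) <= np.sqrt(2) else 2.0

def mrf_energy(labels, data_penalty):
    """Eq. (5.9): sum of per-pixel data penalties and pairwise interaction
    costs over a 4-connected neighbourhood.

    labels       : (H, W, 2) array, displacement vector (x^1, x^2) per pixel
    data_penalty : (H, W) array, D_1(f_s) already evaluated for each pixel
    """
    H, W, _ = labels.shape
    energy = data_penalty.sum()
    for y in range(H):
        for x in range(W):
            if x + 1 < W:   # right neighbour
                energy += pairwise_cost(labels[y, x], labels[y, x + 1])
            if y + 1 < H:   # bottom neighbour
                energy += pairwise_cost(labels[y, x], labels[y + 1, x])
    return energy
```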
By comparing with a conventional registration method we aim to show the advantages of including segmentation information for registration. With the intention of highlighting the advantages of including registration information for segmentation, we show the comparative segmentation accuracy of our algorithm and of conventional methods that do not use registration information.

5.3.1 Synthetic Images

Synthetic images containing simple shapes like triangles, squares, rectangles, cylinders and circles were created. The number of objects in any image was between one and four. The intensity of each object in an image was different but constant. Elastic deformations of the images were simulated using B-splines (95), Gaussian noise was added, and an initial mask was defined around the OOI in the floating image (Fig. 5.1). For registering the multiple objects in an image we have two approaches. In the first approach each shape is individually registered, i.e., each shape is separately chosen as the OOI. The number of segmentation classes in this approach is two: object and background. In the second approach the mask should include all the objects to be registered. The intensity distribution of the area inside the mask will then have multiple modes (peaks), and each mode with mean value greater than zero belongs to one object (assuming that the background has zero intensity). After adding noise to the image the peaks in the intensity histogram are smoothed. We assume a GMM for the intensity distribution inside the mask and determine the parameters corresponding to each object, i.e., mean and variance. The number of Gaussians is equal to the number of objects in the mask plus one (for the background). This is determined by identifying the number of distinct peaks in the mask's intensity histogram through appropriate thresholding.

The deformed, noisy floating images are then registered to the original noise-free reference images using three registration methods: 1) our proposed method, joint registration and segmentation (JRS); 2) MRFs; and 3) the demons algorithm. Demons is used to compare the performance of our method with other popular techniques. FFDs, another popular registration method, have already been used for simulating the deformations and are therefore not used for comparison. The noisy images are represented by I_n = I + I_σ, where I is the noise-free image and I_σ is Gaussian noise of zero mean and variance σ varying from 0.01 to 0.1. The image intensities lie between 0 and 1. A regular grid of control points was used to simulate elastic deformation on images whose dimensions were in the range (70-90) × (80-110) pixels. The spacing between the control points varied, up to 13 pixels, depending upon the image dimensions. The perturbations of the control points took random but known values within ±6 pixels. From the 2D B-spline equations (95) and the new positions of the control points the image is deformed. Thereafter the actual displacement of every pixel with respect to the original image is calculated.
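As a rough illustration of the simulation protocol described above (a coarse grid of control points with random but known perturbations, spline interpolation to a dense field, and additive Gaussian noise), the hypothetical sketch below builds the dense displacement field by spline-upsampling random control-point offsets. The grid step and function names are assumptions for illustration, and the spline upsampling is a simplified stand-in for the exact B-spline FFD formulation of reference (95).

```python
import numpy as np
from scipy import ndimage

def simulate_deformation(image, grid_step=10, max_shift=6.0, noise_var=0.08, seed=0):
    """Deform `image` with a smooth random displacement field and add noise.

    Control-point offsets within +/- max_shift pixels are placed on a coarse
    grid and upsampled to every pixel with cubic spline interpolation; the
    deformed image is obtained by resampling at the displaced coordinates.
    Returns the noisy deformed image and the ground-truth displacement field.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape
    gh, gw = H // grid_step + 2, W // grid_step + 2
    # random but known control-point perturbations (y and x components)
    ctrl = rng.uniform(-max_shift, max_shift, size=(2, gh, gw))
    # cubic-spline upsampling of the coarse grid -> dense displacement field
    disp = np.stack([ndimage.zoom(ctrl[d], (H / gh, W / gw), order=3)
                     for d in range(2)])
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([yy + disp[0], xx + disp[1]])
    warped = ndimage.map_coordinates(image, coords, order=1, mode="nearest")
    # zero-mean Gaussian noise of variance noise_var (std = sqrt of variance)
    noisy = warped + rng.normal(0.0, np.sqrt(noise_var), size=warped.shape)
    return noisy, disp
```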
5.3.1.1 Interdependence of Registration and Segmentation

Figure 5.1 shows results on an example test image. The images were all in gray scale. Figure 5.1 (a) shows the reference image without noise and Fig. 5.1 (b) shows the floating image, with intensity changes, added noise and simulated deformations. Approximate masks corresponding to the different shapes are drawn manually. Figure 5.1 (c) shows the registered image using the demons method, followed by the registered images using MRFs (Fig. 5.1 (d)) and JRS (Fig. 5.1 (e)) respectively. For testing purposes, eight such images with ten sets of deformation parameters and four noise levels per image are used, giving a total of 320 image pairs. For the set of images shown in Fig. 5.1 the added noise was equivalent to σ = 0.08. Each of the objects in the image (circle, cylinder and rectangle) is separately registered using JRS. Even though we use a GMM to estimate the probability density function of the selected mask, it is found that registering the objects individually is more accurate than trying to register all of them simultaneously, because a mask encompassing a single object gives a more accurate intensity distribution. All our subsequent results are for individual registration.

Our method was robust up to a noise level of σ = 0.09. For demons the average registration error increases above 1.0 pixel when σ > 0.06, while for MRFs the error is above 1.0 pixel when σ > 0.07. The corresponding threshold for JRS is 0.085. Despite the intensity changes we are able to successfully register the image pair using gradient information in the registration penalty. From the registered images we observe that the demons algorithm does not fully register the deformations in the triangle and the circular disc. MRFs performs better than demons, but the deformations of the circular boundary within the rectangle are still improperly registered. This is attributed to the absence of segmentation information. The contribution of segmentation information in registering noisy images is evident from the registered image using JRS, where all the shortcomings of demons and MRFs are overcome. The average registration errors for different values of σ are shown in Table 5.1. The performance of the three methods is comparable up to a noise level of σ = 0.06, i.e., their average registration error is less than 1 pixel. Our method's robustness up to σ = 0.085 comes from including segmentation information.

Table 5.1: Means and standard deviations of registration errors for synthetic image datasets at different noise levels. Values are in pixels.

    σ      | JRS      | MRFs     | Demons
    0.04   | 0.8±0.5  | 0.9±0.4  | 0.9±0.2
    0.065  | 0.8±0.2  | 0.9±0.2  | 1.4±0.6
    0.075  | 0.9±0.4  | 1.4±0.7  | 1.9±0.6
    0.1    | 1.7±0.8  | 2.6±1.5  | 3.9±1.6

Figure 5.3 (a) shows the registration error in pixels for the three methods as σ changes.
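The displacement-error statistics reported in Table 5.1 can be computed once the ground-truth field from the simulation and the recovered field are available. The sketch below is an assumed evaluation (Euclidean per-pixel endpoint error, averaged over a region of interest), not the thesis's exact error definition, which is given in Section 4.3.5.

```python
import numpy as np

def registration_error(true_disp, est_disp, mask=None):
    """Mean and standard deviation of the per-pixel displacement error.

    true_disp, est_disp : (2, H, W) arrays holding y/x displacement components
    mask                : optional boolean (H, W) array restricting the
                          evaluation to a region of interest (e.g. the OOI)
    """
    err = np.sqrt(((true_disp - est_disp) ** 2).sum(axis=0))
    if mask is not None:
        err = err[mask]
    return err.mean(), err.std()
```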
Figure 5.1: Registration results for synthetic images. (a) reference image; (b) floating image; registered images using (c) demons; (d) MRFs; and (e) JRS. The added noise is equivalent to σ = 0.08. The objects in the image were individually registered by defining a mask around each of them.

5.3.1.2 Accuracy of Segmentation

Besides displacement vectors, the labels of the JRS method also give the segmentation class of each pixel. In a separate step the floating images are segmented using a conventional graph cut approach (described in (126)). This method is denoted as GC. Note that the optimization for JRS and GC is performed using the graph cut algorithm in (1). The Dice Metric (DM) (145) has been used to evaluate the similarity between manual (reference) and automatic segmentations. Let A_a denote the segmented area (using JRS or any segmentation method), A_m the area of the manual segmentation and A_am the intersection area between A_a and A_m. The DM is given as

    DM = \frac{2 A_{am}}{A_a + A_m}.    (5.11)

DM takes values between 0 and 1, with 1 indicating a perfect match and 0 denoting no match. Generally a DM higher than 0.80 indicates excellent agreement with the manual segmentation. The segmentations using active contours (AC) in (146), GC and JRS are compared with expert manual segmentations. Default parameters are used for AC except for the number of iterations, which was adjusted to give the best results. In GC, the intensity distributions from the masks were used to calculate the data penalty values. Results for different noise levels are shown in Table 5.2. After registration using JRS there is no separate segmentation step because the final labels also give the segmentation class. The segmentation accuracy degrades with increasing noise level. However, the DM value is above 80% for JRS, GC and AC for σ ≤ 0.1. When σ > 0.15, DM goes below 80% for all three methods. In Figs. 5.2 (a)-(c) we show segmentation results for noisy synthetic images using JRS, AC and GC. The outlines of the segmented masks are shown in different colors. GC successfully extracts line segments in most noisy images but falls short for circular boundaries. Note that for GC segmentation we separately identify each shape as the object because of their different intensity characteristics. In general, the performance of AC was slightly inferior to GC, but Fig. 5.2 (b) highlights the hazards of improper curve initialization. This is particularly relevant for noisy images. While AC could segment circular boundaries better than GC, it is sensitive to curve initialization and cannot handle noisy images as well as GC and JRS. The displayed images in Fig. 5.2 have σ = 0.14, and inaccurate segmentation by GC and AC is observed in the disc and in the curved area within the rectangle. By using registration information JRS gives better segmentation results. The outlines of the different objects are shown in different colors to indicate that each object was separately chosen as the OOI. Figure 5.3 (b) shows the change in DM values for JRS and graph cuts with increasing σ.

Figure 5.3: (a) Change in registration error (pixels) with increasing noise levels for the registration methods; (b) change in DM values for JRS and graph cuts. The x-axis shows the variance of the added noise and the y-axis shows (a) the average registration error in pixels and (b) the DM values.
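The Dice Metric of Eq. (5.11) is straightforward to compute from binary masks; the short sketch below is an illustrative implementation (not taken from the thesis) operating on boolean segmentation arrays.

```python
import numpy as np

def dice_metric(auto_mask, manual_mask):
    """Dice Metric of Eq. (5.11): DM = 2*A_am / (A_a + A_m).

    auto_mask, manual_mask : boolean arrays for the automatic and manual
    segmentations; A_am is the area of their intersection.
    """
    a_a = auto_mask.sum()
    a_m = manual_mask.sum()
    a_am = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * a_am / (a_a + a_m)
```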
Figure 5.2: Segmentation results for a synthetic image using (a) JRS; (b) AC; and (c) GC; (d) inaccurate initial masks shown in different colors; and (e) superimposed outlines of segmented objects using JRS from the masks in (d).

Table 5.2: Segmentation performance for synthetic images. Average A_am and DM values for JRS, GC and AC at different noise levels are shown. Values are in %.

             σ=0.04       σ=0.065      σ=0.075      σ=0.1        σ=0.15
             A_am   DM    A_am   DM    A_am   DM    A_am   DM    A_am   DM
    JRS (%)  90.7   91.8  90.1   90.4  88.5   89.1  82.1   83.1  78.5   79.4
    GC (%)   89.4   89.9  88.2   88.4  86.2   86.4  80.1   81.2  77.2   78.3
    AC (%)   89.1   90.2  88.6   89.0  85.7   85.9  79.0   78.7  76.7   77.9

Robustness to Initial Mask: For active contour based joint registration and segmentation (128) the initial curve evolved to match the object edges, as the optima of the energy function favored the boundary edges. However, accurate segmentation depends to a large extent on the position of this initial curve. As shown in Fig. 5.2 (b), noisy images make AC more sensitive to curve initialization. This problem is overcome by using MRFs because the graph cut optimization method results in either a global or a strong local minimum. In Fig. 5.2 (d) we show the initial masks in different colors, which, although inaccurate, contain part of the objects, and Fig. 5.2 (e) shows the final segmentation by JRS. Since the main purpose of the mask is to obtain segmentation information about the OOI, the subsequent registration result was not hampered by the mask as long as we were able to identify the intensity distributions of the individual objects.

5.3.2 Cine Cardiac MRI

In Fig. 5.5 we show results for registering the LV in cine cardiac MRI. Cine MRI is characterized by large deformations of the LV and nearly no intensity change. Since the intensity change can be ignored, the registration energy is a function of the intensity difference (Eqn. (5.3)). The first frame of the sequence is taken as the reference frame I_r. Since very large deformations are observed over the course of the image acquisition process, registering a frame directly to the reference frame increases the number of labels and hence the computation time. Therefore we first register a frame to the previous unregistered frame in the sequence to get an intermediate registered frame, which is then registered to the original reference frame I_r. This approach reduces the time taken to register the entire image sequence.
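A hypothetical sketch of this two-stage strategy is given below. Here `register_pair`, `warp` and `compose` are placeholders for any pairwise registration routine returning a displacement field, an image warping routine and a field composition routine; none of them are functions from the thesis.

```python
def register_sequence(frames, register_pair, warp, compose):
    """Register every frame of a cine sequence to the first (reference) frame.

    Each frame is first registered to the previous unregistered frame (small
    motion, so few displacement labels are needed); the intermediate result is
    then registered to the reference frame to remove the remaining offset.
    """
    reference = frames[0]
    fields = [None]  # no field is needed for the reference frame itself
    for k in range(1, len(frames)):
        # stage 1: frame k -> previous unregistered frame (k - 1)
        d_prev = register_pair(frames[k], frames[k - 1])
        intermediate = warp(frames[k], d_prev)
        # stage 2: intermediate result -> reference frame
        d_ref = register_pair(intermediate, reference)
        fields.append(compose(d_prev, d_ref))
    return fields
```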
Figure 5.5 shows different frames (floating images) of a typical cine cardiac dataset in which the LV is contracting. The red contours show the LV boundary of the reference frame deformed using the displacement field. The blue contours show the original LV boundary in the reference frame. This is to illustrate the degree of motion that was recovered by each method. The deformed boundary superimposed on the floating image gives an idea of the registration accuracy. The first row shows results using the proposed JRS method, the second row shows results using only MRFs (Eqn. (5.9)), and the third row shows registration results using the demons algorithm.

Figure 5.5: Registration results for cine cardiac images. The boundary of the LV in the reference image is deformed using the obtained motion field (in red) and overlaid on the floating image. The first row shows results using JRS, the second row shows results using MRFs and the third row shows results for the demons algorithm. The blue line shows the outline of the LV in the reference frame. This gives an idea of the degree of deformation recovered using JRS.

The maximum displacement error of the LV with respect to the reference frame before registration of the cine cardiac images was 11 pixels, and the average error was 7.1 ± 2.3 mm. This high value shows the large amount of motion observed in cardiac images. After registration the average registration errors are 0.7 ± 0.4 mm using JRS, 1.1 ± 0.4 mm using MRFs and 1.2 ± 0.6 mm using demons. The respective maximum errors are 1.2, 1.6 and 2.3 mm. Results for other registration criteria are summarized in Table 5.3.

Table 5.3: Quantitative performance measures for cine cardiac image registration: NMI - normalized mutual information (no units); WC - Woods criteria value in (3) (no units); and Err - displacement error in mm. The values are the average measures over all datasets.

                           WC     NMI    Err
    Before Registration    0.63   1.39   7.1±2.3
    After (Demons)         0.29   1.61   1.2±0.6
    After (MRFs)           0.24   1.63   1.1±0.4
    After (JRS)            0.13   1.72   0.7±0.3

Figure 5.5 shows that the use of segmentation information achieves better registration where MRFs and demons show less than optimal performance. JRS, MRFs and demons perform equally well where there is good contrast at the object boundaries. The advantage of JRS is seen for medical images with poor contrast at object boundaries. Noise and low contrast are common in cine cardiac MRI, and therefore it is desirable to use segmentation information to improve registration. We determined the segmentation accuracy for each image of the dataset by calculating DM values with expert manual segmentations as reference. The average DM value was 93.7% for JRS, 92.3% for GC, and 90.9% for AC. Cine cardiac images are in fact easy to segment because of the good contrast between OOI and background; even in such a situation JRS achieves better segmentation than GC. Segmentation results for the three methods are shown in Fig. 5.4. All the methods give reasonably accurate results, although GC and JRS fare better, with JRS showing the best results.

Figure 5.4: Segmentation results for cine cardiac images. Outlines of the segmented LV are shown in green using (a) AC; (b) GC; and (c) JRS.

5.3.3 Natural Images

Our method was also tested on pairs of natural images that exhibit non-rigid motion between them. Figure 5.6 shows results on a pair of images having relative motion of the hand. Figures 5.6 (a) and (b) show respectively the reference image and the floating image with the outline of the mask in blue. The difference image before registration is shown in Fig. 5.6 (c) and the difference image after registration using JRS is shown in Fig. 5.6 (d). Although only part of the hand was chosen as the OOI, most of the motion has been recovered except for minor movement of the fingers. The segmented mask of the OOI from the floating image using graph cuts is shown in Fig. 5.6 (e) and the mask obtained using JRS is shown in Fig. 5.6 (f). Although the segmentation accuracy in this case is not very important, JRS, with the help of registration information, produces better segmentation than using only graph cuts.

Figure 5.6: Registration and segmentation performance for natural images. (a) reference image; (b) floating image with mask outline; (c) difference image before registration; (d) difference image after registration using JRS; (e) segmented mask from the floating image using only graph cuts; (f) segmented mask using JRS.

5.3.4 Computation Time

All our experiments were carried out using MATLAB 7.5 on a PC with a Pentium processor. For perfusion images of size 70 × 90 pixels and 121 labels (corresponding to a displacement range of ±5 pixels along both axes), the average computation time for the entire JRS procedure was 27.4 s, and 13.4 s for MRFs. The time for demons was 51.2 s. For natural images of dimension 128 × 128 pixels and 121 labels the computation time was 37.4 s for JRS, 15.2 s for MRFs and 56.4 s for demons.
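As a check on the label counts quoted here and in the discussion below: integer displacements of up to ±5 pixels along each axis give 11 values per axis, hence 11 × 11 = 121 labels, while ±10 pixels combined with two segmentation classes gives 21 × 21 × 2 = 882 labels. The sketch below simply enumerates such a discrete label set (integer displacements are an assumption; the thesis also uses sub-pixel labels).

```python
from itertools import product

def displacement_labels(max_disp, seg_classes=1):
    """Enumerate MRF labels for integer displacements in [-max_disp, max_disp]
    along both axes, optionally paired with a segmentation class."""
    shifts = range(-max_disp, max_disp + 1)
    return [(dx, dy, c) for dx, dy in product(shifts, shifts)
            for c in range(seg_classes)]

assert len(displacement_labels(5, 1)) == 121    # Section 5.3.4
assert len(displacement_labels(10, 2)) == 882   # Section 5.4
```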
5.4 Discussion

To register multiple objects simultaneously we need their individual intensity distributions. If the mask includes parts of more than one object then the intensity histogram has multiple peaks. The number of objects plus the background will generally be equal to the number of peaks. Each peak gives the mean intensity of a different object, and by using a suitable number of Gaussians in a GMM the distribution parameters of the OOI can be determined. Different segmentation labels are then assigned to the objects, which facilitates the appropriate calculation of the data penalty. The smoothness cost formulation does not change with an increase in the number of segmentation labels.

Using an MRF based approach provides certain computational benefits. Depending upon the range of displacements the number of labels can be changed. For example, we do not expect a lot of motion along the z-axis for 3D perfusion datasets and hence the number of labels can be reduced. In such datasets, motion along the horizontal direction is also limited, allowing a further decrease in the number of labels. We can register more than one OOI by interactively defining appropriate segmentation labels. The number of Gaussians to be used in modeling the object or the background can be specified interactively. Since MRFs work on discrete labels and digital images are a collection of discrete pixels, a fast implementation of the method is possible, making it suitable for clinical environments. The 3D volumes in our datasets were a collection of equally spaced slices. As a result, using curve based approaches was difficult because of the absence of sufficiently continuous data. This being a common phenomenon for MRI data, our method could prove advantageous over active contours.

There are also other factors that influence the performance of our JRS method. The modeling of the intensity distribution is critical to the accuracy of the segmentation labels. We have opted for a multivariate Gaussian mixture model to model the intensity distributions. Generally, to model the individual parts of cardiac images (e.g., LV, RV, myocardium) a single Gaussian is sufficient, while for renal perfusion images multiple Gaussians are necessary. Such a situation arises for renal perfusion images because different types of tissue are present in the kidney, e.g., cortex and medulla. With progressive contrast enhancement, the wash-in of the contrast agent differs between the tissues, i.e., at different stages of contrast enhancement the cortex and medulla have different intensities. Radiologists make use of these varying intensity patterns to identify the cortex and medulla.
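To make the GMM based intensity modeling concrete, the hypothetical sketch below fits a mixture to the intensities inside a mask and reads off the per-component means and variances. The component count is passed in directly here, whereas the thesis derives it from the number of peaks in the mask's intensity histogram; the function name and interface are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_mask_intensity_gmm(image, mask, n_components):
    """Fit a 1D Gaussian mixture to the intensities inside `mask`.

    image        : 2D float array
    mask         : 2D boolean array marking the region of interest
    n_components : number of Gaussians (objects inside the mask + background)

    Returns a list of (mean, variance, weight) tuples, which can serve as the
    class-conditional models used in the segmentation data penalty.
    """
    samples = image[mask].reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(samples)
    return [(m[0], c[0, 0], w)
            for m, c, w in zip(gmm.means_, gmm.covariances_, gmm.weights_)]
```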
The collecting system is another distinct part within the kidney, and the intensity profile of a pixel in it is quite different from those of the cortex and medulla. Therefore a GMM with several Gaussians is a safe choice for modeling the kidney. Modeling the background is also an important aspect. In kidney images, where the background is more or less uniform, it can be modeled using a single Gaussian, while for cardiac images two Gaussians generally give the best results.

The number of labels also influences the computation time, with an increase in the number of labels increasing the optimization time. However, the increase in the number of nodes does not lead to a significant increase in computation time. For JRS of cardiac images the total computation time is less than one minute for up to 882 labels, which corresponds to a displacement range of ±10 pixels and two segmentation classes for each displacement. Adopting a multi-resolution search scheme ensures that not only is a large region searched for the optimal parameters, but the computation time also remains within a minute.

Incorporating the mutual dependency of registration and segmentation leads to improved registration and segmentation performance. The square root of the two posterior probabilities helps to truly capture the relation between the segmentation information of an image pair. Further, defining the labels based on the displacement vector and segmentation class captures their mutual dependence. The smoothness of the deformation field obtained using graph cuts can be improved, as it leads to occasional "blockiness" in the registered images. This is due to the discrete labels inherent in the MRF formulation. Even though we define labels for sub-pixel displacements (up to 0.5 pixels), other registration methods can give smoother deformation fields. However, this does not adversely affect the registration performance. In future work we aim to investigate techniques that would lead to a smoother deformation field. Prospective avenues of research include constructing irregular graph structures for different smoothness formulations and a faster algorithm for 3D data. One novelty over general MRF based registration formulations has been the formulation of smoothness constraints based on segmentation information. As we observe from the kidney perfusion images (Fig. 6.8), without segmentation based smoothness constraints there is a clear case of folding at the kidney boundaries.
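One way to read the "square root of the two posterior probabilities" mentioned above is as the geometric mean of the class posteriors evaluated in the reference image and at the displaced position in the floating image. The sketch below is only an illustrative guess at that segmentation component of the data penalty; the exact expressions are those of Section 5.2.2, and the negative log, the posterior arrays and the function name are all assumptions.

```python
import numpy as np

def segmentation_data_penalty(post_ref, post_flt, pixel, disp, seg_class):
    """Negative log of the geometric mean of the two class posteriors.

    post_ref[c][y, x] : Pr(class c | reference image intensity at (y, x))
    post_flt[c][y, x] : Pr(class c | floating image intensity at (y, x))
    pixel             : (y, x) position in the reference image
    disp              : (dy, dx) candidate displacement label
    seg_class         : candidate segmentation label c
    """
    y, x = pixel
    dy, dx = disp
    p_ref = post_ref[seg_class][y, x]
    p_flt = post_flt[seg_class][y + dy, x + dx]
    joint = np.sqrt(p_ref * p_flt)
    return -np.log(joint + 1e-12)  # small constant guards against log(0)
```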
5.5 Conclusion

We have proposed an MRF based method to incorporate segmentation information in registration, where a manually drawn initial mask of the OOI is used. There is no need for a prior trained model of the OOI. An active contour framework has the disadvantages of requiring multiple iterations, a likelihood of getting trapped in local minima and sensitivity to the initial curve. An optimization method based on max-flow techniques (graph cuts) can find a global minimum or a strong local minimum, thereby reducing the number of iterations. But it also presents us with some challenges. The mutual dependency of registration and segmentation information has to be considered because we do not know the type of mapping function, as in (128), between the discrete valued labels. We successfully overcome this by combining the probabilities of joint occurrence of the registration and segmentation labels from the reference and floating images. The results are less sensitive to the choice of the initial mask. The problem was formulated as one of finding the appropriate labels for each pixel such that they encode the registration information as well as the segmentation class. The cost function depends on the observed data (intensity or contrast invariant edge information), the probability of the labels conditioned on the pixel intensity, and the smoothness of the registration and segmentation labels. The final labels obtained by minimizing the cost function in a multi-resolution graph cut implementation give both the displacement vector and the segmentation class for each pixel. By a coarse to fine graph cut implementation, sub-pixel accuracy for registration and segmentation is achieved.

We have tested our method on synthetic images with different levels of added noise and deformations, and on natural and medical images having real deformations. Our results were compared with an MRF based registration method not using segmentation information and with the demons algorithm. In the case of synthetic images we observe that all methods perform equally well up to a particular level of added noise (σ = 0.06). Beyond that a gradual decrease in the registration accuracy of the three methods is observed, with our method, JRS, having the least registration error and remaining accurate up to a higher noise level (σ = 0.085). We first establish the importance of registration and segmentation aiding each other by calculating different error measures. JRS was found to give the most accurate registration results compared to traditional registration schemes where no segmentation information is used. The segmentation labels from JRS exhibit greater accuracy than graph cuts for noisy images. An important feature of our method is that we can obtain the labels with fewer iterations using a multi-resolution graph cut implementation. Experimental results also demonstrate the effectiveness of our method under intensity change and its robustness to the initial segmentation mask.
