DSpace Repository

3D Image Segmentation of Deformable Objects with Shape-Appearance Joint Prior Models


dc.contributor.author Yang, Jing
dc.contributor.author Duncan, James S.
dc.date.accessioned 2018-01-15T20:49:09Z
dc.date.available 2018-01-15T20:49:09Z
dc.date.issued 2003
dc.identifier.uri http://hdl.handle.net/123456789/5584
dc.description.abstract We propose a novel method for 3D image segmentation that employs a Bayesian formulation based on joint prior knowledge of the shape and the image gray levels, together with information derived from the input image. Our method is motivated by the observation that the shape of an object and the gray level variation in an image are consistently related, providing configurations and context that aid segmentation. We define a Maximum A Posteriori (MAP) estimation model that uses the joint prior information of the shape and image gray levels to perform segmentation. We introduce a representation for the joint density function of the object and the image gray level values, and define a joint probability distribution over the variations of object shape and gray levels contained in a set of training images. By estimating the MAP shape of the object, we formulate the shape-appearance model in terms of a level set function rather than landmark points of the shape. We found the algorithm to be robust to noise, able to handle multidimensional data, and free of the need for point correspondences during the training phase. Results and validation from experiments on 2D and 3D medical images are presented.
dc.format application/pdf
dc.title 3D Image Segmentation of Deformable Objects with Shape-Appearance Joint Prior Models
dc.type generic
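The joint shape-appearance prior outlined in the abstract can be illustrated with a minimal sketch. Assuming each training example is vectorized as a level-set (signed-distance) map concatenated with its gray levels, a Gaussian joint density over these vectors can be estimated via PCA; the function and variable names below (`build_joint_prior`, `project_to_prior`, `n_modes`, `k`) are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

def build_joint_prior(phis, grays, n_modes=2):
    """Estimate a Gaussian joint prior over shape-appearance vectors.

    phis, grays: arrays of shape (n_samples, n_pixels) holding, per training
    image, the flattened level-set (signed-distance) map and gray levels.
    Returns the mean vector, the top principal modes, and their variances.
    """
    X = np.hstack([phis, grays])              # joint shape-appearance vectors
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data yields the principal modes of joint variation
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    modes = Vt[:n_modes]                      # top eigenvectors (modes, dim)
    var = (S[:n_modes] ** 2) / (len(X) - 1)   # variance captured per mode
    return mean, modes, var

def project_to_prior(x, mean, modes, var, k=3.0):
    """MAP-style regularization: project a candidate joint vector onto the
    prior subspace and clamp each mode coefficient to +/- k std devs."""
    b = modes @ (x - mean)
    b = np.clip(b, -k * np.sqrt(var), k * np.sqrt(var))
    return mean + modes.T @ b

# Toy usage with synthetic data (10 training samples, 16 "pixels" each)
rng = np.random.default_rng(0)
phis = rng.normal(size=(10, 16))
grays = rng.normal(size=(10, 16))
mean, modes, var = build_joint_prior(phis, grays)
x_reg = project_to_prior(rng.normal(size=32), mean, modes, var)
print(x_reg.shape)  # (32,)
```

Clamping the mode coefficients to a few standard deviations is one common way to keep an estimate inside the learned distribution; in the paper's formulation this role is played by maximizing the full posterior rather than by a hard clip.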

