Linear Solution to Scale and Rotation Invariant Object Matching
Hao Jiang and Stella X. Yu
Computer Science Department
Boston College, Chestnut Hill, MA 02467, USA
Abstract
Images of an object undergoing ego- or camera motion
often appear to be scaled, rotated, and deformed versions
of each other. To detect and match such distorted patterns
to a single sample view of the object requires solving a hard
computational problem that has eluded most object match-
ing methods. We propose a linear formulation that simulta-
neously finds feature point correspondences and global ge-
ometrical transformations in a constrained solution space.
Further reducing the search space based on the lower con-
vex hull property of the formulation, our method scales well
with the number of candidate features. Our results on a va-
riety of images and videos demonstrate that our method is
accurate, efficient, and robust over local deformation, oc-
clusion, clutter, and large geometrical transformations.
1. Introduction
Images of a bee flapping its wings and moving around
a daisy appear to be related by global translation, rotation,
scaling, and local deformation. Our goal is to detect and
match the 2D pattern of the object with a template built from
a single sample image (Fig. 1).
Figure 1. Our goal is to find the correspondence between the model
image of a deformable object and the target image of the same
object with unknown scaling, rotation, and local deformation.
The basic idea in pattern matching is that distinctive feature points should maintain both their local appearance and their relative spatial relationships. Spatial consistency must be enforced if we need point-to-point correspondences rather than mere pattern detection. Geometrical transformations such as scaling and rotation introduce so much computational complexity that few methods have been able to deliver fast, accurate, and robust solutions.
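The appearance-plus-geometry idea above can be illustrated with a minimal sketch (this is an illustration of the general principle, not the paper's linear formulation; all function names are hypothetical): nearest-neighbor descriptor matching supplies candidate correspondences, and, since a similarity transform scales all pairwise distances by one global factor, agreement of distance ratios with a single scale serves as a spatial-consistency check.

```python
import numpy as np

def match_features(desc_model, desc_target):
    """Appearance term: nearest-neighbor matching in descriptor space.
    Returns, for each model feature, the index of its closest target feature."""
    d = np.linalg.norm(desc_model[:, None, :] - desc_target[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def spatial_consistency(pts_model, pts_target, matches, tol=0.2):
    """Geometry term: under scaling + rotation + translation, every pairwise
    distance is multiplied by the same global scale. Estimate that scale
    robustly (median of ratios) and return the fraction of point pairs
    consistent with it."""
    pm = np.asarray(pts_model, dtype=float)
    pt = np.asarray(pts_target, dtype=float)[matches]
    dm = np.linalg.norm(pm[:, None] - pm[None, :], axis=2)
    dt = np.linalg.norm(pt[:, None] - pt[None, :], axis=2)
    iu = np.triu_indices(len(pm), k=1)            # count each pair once
    ratios = dt[iu] / np.maximum(dm[iu], 1e-9)    # guard against zero distance
    scale = np.median(ratios)                     # robust global scale estimate
    return float(np.mean(np.abs(ratios - scale) <= tol * scale))
```

A correct match set related by pure scaling and rotation scores 1.0; wrong correspondences or strong local deformation lower the score, which is exactly why spatial consistency discriminates better than appearance alone.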
The Hough transform [2, 3] and RANSAC have been
widely used in shape m