
MULTI-HYPOTHESIS PROJECTION-BASED SHIFT ESTIMATION FOR SWEEPING PANORAMA RECONSTRUCTION

Tuan Q. Pham, Philip Cox

Canon Information Systems Research Australia (CiSRA)
1 Thomas Holt Drive, North Ryde, NSW 2113, Australia.

tuan.pham@cisra.canon.com.au

ABSTRACT

Global alignment is an important step in many imaging applications for hand-held cameras. We propose a fast algorithm that can handle large global translations in either x- or y-direction from a pan-tilt camera. The algorithm estimates the translations in x- and y-direction separately using 1D correlation of the absolute gradient projections along the x- and y-axis. Synthetic experiments show that the proposed multiple shift hypotheses approach is robust to translations up to 90% of the image width, whereas other projection-based alignment methods can handle up to 25% only. The proposed approach can also handle larger rotations than other methods. The robustness of the alignment to non-purely translational image motion and moving objects in the scene is demonstrated by a sweeping panorama application on live images from a Canon camera with minimal user interaction.

Index Terms— shift estimation, image projection, sweep panorama

1. INTRODUCTION

Global alignment is an important task for many imaging applications such as image quality measurement, video stabilization, and moving object detection. For applications on embedded devices, the alignment needs to be both accurate and fast. Robustness against difficult imaging conditions such as low light, camera motion blur or motion in the scene is also desirable. In this paper, we describe a low-cost global shift estimation algorithm that addresses these needs. The algorithm's robustness against difficult imaging conditions and its real-time performance is demonstrated on a sweeping panorama application using live images from a hand-held Canon camera.

In particular, our global alignment algorithm performs separable shift estimation using one-dimensional (1D) projections of the absolute gradient images along the sampling axes. For each image dimension, multiple shift hypotheses are maintained to avoid misdetection due to non-purely translational motion, independent moving objects, or distractions from the non-overlapping areas. The final shift estimate is the one that produces the highest two-dimensional (2D) Normalized Cross-Correlation (NCC) score.

Various enhancements are added to the basic alignment algorithm above to improve its performance. The input images are subsampled prior to analysis to improve speed and noise robustness. Shift estimation is performed over multiple scales to rule out incorrect shifts due to strong correlation of texture at certain frequencies. When appropriate, the images are automatically cropped to improve overlap before gradient projection.

Given the alignment between consecutive frames from a panning camera, a panoramic image can be constructed during image capture. Overlapping images are stitched along an irregular seam that avoids cutting through moving objects. This seam also minimizes the intensity mismatch of the two images on either side of the seam. Image blending is finally used to eliminate any remaining intensity mismatch after stitching.

1.1. Literature review

Numerous solutions are available for translational image alignment. Amongst them, a correlation-based method is popular for its robustness. However, 2D correlation is costly for large images. 2D phase correlation, for example, requires O(N² log N²) computations for an N×N image using the Fast Fourier Transform (FFT). The computational complexity can be reduced to O(N log N) if the correlation is performed on 1D image projections only [1]. This projection-based alignment algorithm is suitable for images with strong gradient structures along the projection axes. This assumption holds for most indoor and natural landscape scenes.
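As a concrete illustration of the projection idea, the following minimal NumPy sketch (not the implementation of [1], and using raw intensity projections rather than the gradient projections adopted later in this paper) replaces a 2D correlation with two 1D correlations of column and row sums; the function names are ours.

```python
import numpy as np

def shift_1d(p1, p2):
    """Integer lag that best aligns projection p2 to p1.

    Zero-mean, 'full' (zero-padded) cross-correlation; the peak lag is the
    estimated translation of image 2 relative to image 1 along that axis.
    """
    p1 = p1 - p1.mean()
    p2 = p2 - p2.mean()
    corr = np.correlate(p2, p1, mode='full')          # lags -(len(p1)-1) .. len(p2)-1
    return int(np.argmax(corr)) - (len(p1) - 1)

def shift_from_projections(im1, im2):
    """Separable shift estimate from intensity projections (baseline idea)."""
    tx = shift_1d(im1.sum(axis=0), im2.sum(axis=0))   # column sums -> x shift
    ty = shift_1d(im1.sum(axis=1), im2.sum(axis=1))   # row sums    -> y shift
    return tx, ty
```

Raw intensity projections are fragile under lighting changes, noise and texture, which motivates the gradient projections and multiple hypotheses of Section 2.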

Adams et al. reported a real-time projection-based alignment of a 320×240 viewfinder video stream at 30 frames per second on standard smartphone hardware [2]. Their algorithm uses projections of the image's gradient energy along four directions. The use of image gradient rather than intensity improves alignment robustness against local lighting changes.

Despite their speed advantage, previous projection-based alignment algorithms have a number of limitations. First, the images must have a substantial overlap (e.g., more than 90% of the frame area according to [2]) for the alignment to work.


Fig. 1. Simultaneous shift estimation over multiple scales.

This is because image data from non-overlapping areas corrupt the image projections, eventually breaking their correlation. Second, any deviation from a pure translation is likely to break the alignment. The viewfinder algorithm [2], for example, claims to handle a maximum of 1° rotation only. Third, previous gradient projection algorithms are not robust to low lighting conditions. The weak gradient energy of dark current noise at every pixel often overpowers the stronger but sparse gradients of the scene structures when integrated over a whole image row or column. For a similar reason, gradient projection algorithms are also not robust against highly textured scenes such as carpet or foliage.

1.2. Structure of this paper

In this paper, we present a fast global alignment algorithm with application in sweeping panorama reconstruction. Section 2 presents the new multiple hypotheses global alignment algorithm using gradient projections. Section 3 describes a software prototype for sweeping panorama stitching using the new alignment algorithm. Section 4 evaluates the alignment and panorama stitching algorithms. Section 5 concludes the paper.

2. PROJECTION-BASED IMAGE ALIGNMENT

We propose a projection-based shift estimation algorithm that is robust to large translation, small rotation and perspective change, noise and texture. The global shift is computed over multiple scales as shown in Fig. 1. The input images are first subsampled to a manageable size to reduce noise and computation. A dyadic image pyramid is then constructed for each image [3]. At each pyramid level, a shift estimate is obtained independently using the new projection-based image alignment algorithm described in Section 2.1. The shift candidate with the highest 2D NCC score is the final shift estimate.

Aligning two images at multiple subsampled resolutions and taking the best solution is more robust than alignment at a single original resolution for a number of reasons. First, noise is substantially reduced by subsampling while the gradient information of the scene is largely preserved. Second, subsampling reduces texture variation and its contribution to the gradient projections.

Too much subsampling, however, eliminates useful alignment details. To achieve an optimal gain in signal-to-noise ratio, we align the images over three successively halved pyramid levels, starting from an image size of around 256×256 pixels at the base of the pyramid.

Fig. 2. Multiple shift hypotheses from gradient projection correlation.

Block summing is used to subsample the images for efficiency. Because block summing produces slightly more aliased images compared to Gaussian subsampling, some subpixel alignment error is expected. However, the alignment error can be corrected by subpixel peak interpolation of the NCC score at the base pyramid level.
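The subsampling step can be sketched as follows, assuming NumPy; the block size, function names and the crop-to-a-multiple behaviour are illustrative choices rather than details taken from the paper.

```python
import numpy as np

def block_sum(im, f):
    """Subsample a 2D image by summing non-overlapping f x f blocks
    (cheaper than Gaussian prefiltering, at the cost of a little aliasing)."""
    h, w = im.shape
    h, w = h - h % f, w - w % f                      # crop to a multiple of f
    blocks = im[:h, :w].reshape(h // f, f, w // f, f)
    return blocks.sum(axis=(1, 3))

def dyadic_pyramid(im, levels=3):
    """Successively halved levels, e.g. ~256x256, 128x128, 64x64."""
    pyr = [im]
    for _ in range(levels - 1):
        pyr.append(block_sum(pyr[-1], 2))
    return pyr
```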

2.1. Multi-hypothesis gradient projection correlation

At each pyramid level, the translation between two input images I1 and I2 is estimated by a multi-hypothesis projection-based shift estimation algorithm described in Fig. 2. Image gradients |∂I1/∂x| and |∂I1/∂y| are estimated using finite differences. The magnitude of the x-gradient image is then integrated along image columns to obtain the x-gradient projection: p1^x = ∫|∂I1/∂x| dy. The y-gradient projection is similarly obtained from the y-gradient image: p1^y = ∫|∂I1/∂y| dx. The corresponding gradient projections from the two images are correlated to find multiple possible translations in either dimension. Cross-correlation of zero-padded, zero-mean signals is used instead of the brute-force search for a correlation peak in [2] to handle a larger range of possible motion. Multiple 2D shift hypotheses are derived from all combinations of the 1D shift hypotheses in both dimensions. A 2D NCC score is obtained for each of these 2D shift hypotheses from the overlapping area of the input images dictated by the shift. The shift hypothesis with the highest 2D NCC score is then refined to subpixel accuracy by an elliptical paraboloid fit over a 3×3 neighborhood around the 2D NCC peak.
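A minimal NumPy reading of this procedure is sketched below; it follows the text (finite differences, absolute-gradient projections, zero-mean zero-padded cross-correlation) but is not the authors' code, and the helper names are ours. Peak selection and the 2D NCC arbitration over the resulting hypotheses are sketched after the step list below.

```python
import numpy as np

def gradient_projections(im):
    """1D projections of the absolute x- and y-gradients.

    Finite differences via np.diff, then
    p_x[x] = sum_y |dI/dx|  and  p_y[y] = sum_x |dI/dy|.
    """
    im = im.astype(float)
    gx = np.abs(np.diff(im, axis=1))   # |dI/dx|, shape (h, w-1)
    gy = np.abs(np.diff(im, axis=0))   # |dI/dy|, shape (h-1, w)
    return gx.sum(axis=0), gy.sum(axis=1)

def correlate_projections(p1, p2):
    """Zero-mean, zero-padded cross-correlation of two 1D projections.

    Returns (lags, scores); the k strongest peaks of `scores` become the
    1D shift hypotheses for that axis.
    """
    scores = np.correlate(p2 - p2.mean(), p1 - p1.mean(), mode='full')
    lags = np.arange(-(len(p1) - 1), len(p2))
    return lags, scores
```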

Fig. 3. Flow chart describing the proposed projection-based shift estimation algorithm.

Fig. 3 shows a block diagram with all steps and possible execution paths of our multi-hypothesis projection-based shift estimation algorithm. The efficiency of the new algorithm comes from two improvements over [2] in steps 1 and 2:

1. The input images are subsampled to a manageable size (e.g., 256×256 pixels) before alignment;

2. The 2D translation is estimated separately in the x- and y-dimensions (rather than in four orientations as in [2]) using projections of the image's directional gradient magnitude (rather than the gradient energy as in [2]) onto the corresponding axis.

The algorithm is robust to large translations thanks to a new multiple shift hypotheses algorithm in steps 3 to 6:

3. For each pair of 1D projections, k shift hypotheses are selected from the k strongest 1D NCC peaks (e.g., k = 5) using non-maximum suppression [4];

4. Any shift candidate with a dominant 1D NCC score, i.e., higher than 1.5 times the second highest score along the same dimension, is the final shift for that dimension;

5. If only one dimension has a dominant NCC score, the two images are cropped to an overlapping area along this dimension before returning to step 2;

6. If there is no shift hypothesis with a dominant 1D NCC score, k² 2D shift hypotheses are constructed from the 1D shift hypotheses (see Fig. 2). The shift candidate with the highest 2D NCC score is the final 2D shift.

Note that our algorithm terminates at step 4 if two images have substantial overlap. Step 5 is executed if there is a large shift in only one dimension. Step 6 is the most expensive part because it requires the computation of k² 2D NCC scores. Fortunately, for a sweeping panorama application, the motion is mainly one-dimensional. As a result, most of the examples in this paper branch to step 5, which requires significantly fewer 2D NCC score computations to find the best translation.
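The hypothesis-selection logic of steps 3, 4 and 6 can be sketched as follows (step 5's crop-and-retry loop is omitted, and a simple greedy peak picker stands in for the non-maximum suppression of [4]); k = 5 and the 1.5 ratio come from the text above, while the remaining names and defaults are our illustrative choices.

```python
import numpy as np

def top_k_peaks(lags, scores, k=5, min_sep=3):
    """Step 3: keep the k strongest correlation peaks as shift hypotheses,
    greedily suppressing neighbours within +/- min_sep samples."""
    order = np.argsort(scores)[::-1]
    picked = []
    for i in order:
        if all(abs(i - j) > min_sep for j in picked):
            picked.append(i)
            if len(picked) == k:
                break
    return [(int(lags[i]), float(scores[i])) for i in picked]   # sorted by score

def dominant(hyps, ratio=1.5):
    """Step 4: a hypothesis wins outright if its score is more than
    `ratio` times the runner-up's score."""
    if not hyps:
        return None
    if len(hyps) == 1 or hyps[0][1] > ratio * hyps[1][1]:
        return hyps[0][0]
    return None

def ncc(a, b):
    """2D normalized cross-correlation score of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else -1.0

def overlap(im1, im2, tx, ty):
    """Overlapping regions of im1 and im2, modelling im2 as im1 shifted by (tx, ty)."""
    h, w = im1.shape
    x1, x2 = max(0, -tx), min(w, w - tx)
    y1, y2 = max(0, -ty), min(h, h - ty)
    if x2 <= x1 or y2 <= y1:
        return None, None
    return im1[y1:y2, x1:x2], im2[y1 + ty:y2 + ty, x1 + tx:x2 + tx]

def select_shift(im1, im2, hyps_x, hyps_y):
    """Steps 4 and 6: use a dominant 1D peak where one exists, otherwise score
    every (tx, ty) combination by 2D NCC over its overlap and keep the best."""
    dx, dy = dominant(hyps_x), dominant(hyps_y)
    cand_x = [dx] if dx is not None else [t for t, _ in hyps_x]
    cand_y = [dy] if dy is not None else [t for t, _ in hyps_y]
    best, best_score = None, -np.inf
    for tx in cand_x:
        for ty in cand_y:
            a, b = overlap(im1, im2, tx, ty)
            if a is None:
                continue
            s = ncc(a, b)
            if s > best_score:
                best, best_score = (tx, ty), s
    return best, best_score
```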

3. SEAMLESS PANORAMA STITCHING

Using the alignment algorithm described in the previous section, a panning image sequence can be combined to form a panoramic image. If the alignment is accurate to a subpixel level, frame averaging can be used for image composition [5]. However, subpixel alignment is difficult for images captured by a moving camera with moving objects in the scene.

Frame 1, Frame 11, Frame 21, Frame 31, Frame 41; panorama from 48 panning images (6 actually used)

Fig. 4. Sweeping panorama (1119×353) in the presence of moving objects and perspective image motion (seams shown in yellow).

A more robust compositing method is to segment the mosaic and use a single image per segment [6]. For sweeping panorama, the images undergo a translation mainly in one direction. Two consecutive images can therefore be joined together along a seam that minimizes the intensity mismatch between adjacent segments [7]. Laplacian pyramid fusion [3] can then be used to smooth out any remaining seam artefacts.
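To illustrate the seam step, the sketch below finds a minimum-cost vertical seam through a per-pixel cost map, for example |I1 − I2| over the overlap of two aligned frames, in the spirit of the seam-carving formulation of [7]; it is an illustrative dynamic program, not the authors' implementation.

```python
import numpy as np

def min_cost_seam(cost):
    """Minimum-cost vertical seam through a 2D cost map via dynamic programming.

    With cost = |I1 - I2| over the overlap of two aligned frames, the seam
    prefers columns where the frames already agree and steers around moving
    objects and exposure differences.
    """
    h, w = cost.shape
    acc = cost.astype(float)
    for y in range(1, h):
        left  = np.r_[np.inf, acc[y - 1, :-1]]
        up    = acc[y - 1]
        right = np.r_[acc[y - 1, 1:], np.inf]
        acc[y] += np.minimum(np.minimum(left, up), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))               # cheapest endpoint
    for y in range(h - 2, -1, -1):                   # backtrack row by row
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam                                      # seam[y] = cut column in row y
```

For a horizontal sweep, the composite keeps one frame to the left of the seam and the other to the right; Laplacian pyramid fusion [3] then hides any residual intensity step.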

To demonstrate our alignment technology on realistic scenes, we built a standalone application that stitches live images from a panning camera. The images are automatically transferred from a Canon 40D camera to a PC. A screenshot of our demo application is given in Fig. 4, where the panorama was reconstructed from six panning images in real time.

For efficiency, we do not use all captured images for panorama stitching. The images whose fields of view are covered by neighbouring frames can be skipped to reduce the seam computations. All incoming frames still need to be aligned to determine their overlapping areas. The first frame is always used in the panorama. A frame is skipped if it overlaps more than 75% with the last used frame and if the next frame also overlaps more than 25% with the last used frame. The second condition ensures no coverage gap is created by removing a frame. These overlapping parameters can be increased to encourage more frames to be used during stitching. Fig. 4 illustrates an example with this default overlapping parameter where only four out of six captured frames are needed to construct a panoramic image.
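The frame-skipping rule reduces to a short greedy loop, sketched below; overlap_fraction(i, j) is a hypothetical helper returning the fraction of frame j covered by frame i (computed from the accumulated shifts), and the 0.75/0.25 defaults are the thresholds quoted above.

```python
def select_frames(n_frames, overlap_fraction, skip_thresh=0.75, cover_thresh=0.25):
    """Greedy frame selection for stitching (illustrative sketch)."""
    used = [0]                                  # the first frame is always used
    for i in range(1, n_frames):
        last = used[-1]
        has_next = i + 1 < n_frames
        # Skip frame i only if it is largely redundant with the last used frame
        # AND the following frame still overlaps the last used frame, so that
        # dropping frame i cannot open a coverage gap.
        if has_next \
                and overlap_fraction(last, i) > skip_thresh \
                and overlap_fraction(last, i + 1) > cover_thresh:
            continue
        used.append(i)
    return used
```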

Our software prototype automatically determines the sweep direction from the alignment information. There is no need for the user to select the direction, as required in some consumer cameras. Fig. 5b shows an example of a vertical panorama constructed by our system from the ten images in Fig. 5a. The output image is a good reproduction of the scene despite few horizontal or vertical structures in the scene, lighting change due to camera auto-gain, and the texture of carpet on the floor. Another example of automatic sweep direction detection can be seen in Fig. 9, where the camera was panned from right to left instead of the traditional left-to-right motion as in Fig. 4.


(a) 10 input frames (512×340) (b) panorama (543×1330)

Fig. 5. Vertical sweeping panorama produced by our system.

Fig. 6. Shift estimation run time plotted against image size for three algorithms.


4. EVALUATION

We first present an evaluation of our projection-based shift estimation, followed by results on seamless panorama stitching.

4.1. Shift estimation

We compare our multi-hypothesis projection-based shift estimation algorithm against an FFT-based 2D correlation and the viewfinder alignment algorithm [2]. All three algorithms were implemented in Matlab version R2010b. For the viewfinder alignment algorithm, the images were subsampled to approximately 320×240 pixels to match the viewfinder resolution in [2]. Harris corner detection followed by nearest neighbour corner matching was used to correct for small rotation and scale change as described in [2].

We applied the three shift estimators to panning image pairs of different sizes and recorded the execution time in Matlab.

Fig. 7. Estimated shifts for image pairs undergoing a synthetic horizontal shift.

For each available image size, the average runtime and its standard deviation are plotted as error bars in Fig. 6. Runtime varies even for the same image size due to different content overlap. A line is fit to the data points to predict the runtime of each algorithm for an arbitrary image size. All algorithms show a linear run-time performance with respect to the number of input pixels. 2D correlation is the slowest algorithm. Its floating-point FFT operation also triggers an out-of-memory error for images larger than ten megapixels (MP). Our algorithm runs slightly faster than that of Adams et al. because ours does not have the corner detection and matching steps. The red line fit in Fig. 6 shows that it takes less than 0.05 seconds in Matlab to align a 1 MP image pair and roughly 0.1 seconds to align an 8 MP image pair. As the image size gets larger, the major part of the run time is spent on image subsampling, which can be implemented more efficiently in hardware using CCD binning.

To measure the robustness of our projection-based alignment algorithm against large translation, we performed a synthetic shift experiment. Two 512×340 images were cropped from the panoramic image in Fig. 4 such that they are related by a purely horizontal translation, which ranges from 1 to 500 pixels. The estimated shifts [tx ty] are plotted in Fig. 7 for three algorithms: 2D correlation, viewfinder alignment, and this paper's. Both 2D correlation and viewfinder alignment fail to estimate shifts larger than 128 pixels (i.e. tx > 25% of image width). Our multi-hypothesis algorithm, on the other hand, estimates both shift components correctly for a synthetic translation up to 456 pixels (i.e. 90% of image width). As suggested by the 2D correlation subplot on the top row of Fig. 7, the strongest correlation peak does not always correspond to the true shift. Large non-overlapping areas can alter the correlation surface, leading to a sudden switch of the global peak to a different location. This sudden change in the global correlation peak corresponds to the sudden jumps of the tx and ty curves in the 2D correlation subplot.


Fig. 8. Estimated shifts for image pairs undergoing a small synthetic rotation.

The average accuracy of the estimated shifts in Fig. 7 is tabulated in Table 1. We measured the Root Mean Squared Error (RMSE) of the estimated shifts within two ground-truth translation intervals. The first interval (1 ≤ tx ≤ 128) is where all three algorithms achieve subpixel accuracy. Within this interval, the viewfinder alignment algorithm is the most accurate and this paper's is the least accurate. The second interval covers a larger range of shifts (1 ≤ tx ≤ 456) and this is where all other algorithms fail. Within this larger motion range, our algorithm produces an average of 2-pixel alignment error for horizontal translations up to 90% of the image width.

We also tested the robustness of our shift estimation algorithm against small image rotation. Fig. 8 plots the shifts estimated by the same three alignment algorithms on purely rotated image pairs. The images are generated from frame 1 of the image sequence in Fig. 4 by a rotation, followed by central cropping to 276×448 pixels to remove the missing image boundary. Under zero translation, the viewfinder alignment algorithm is robust up to 3° rotation. Outside this ±3° rotation range, however, the viewfinder alignment algorithm produces unreliably large shift estimation errors. Note that the middle subplot has a 10-times larger vertical axis limit compared to the other two subplots. Our algorithm performs as well as that of Adams et al. for small rotations (|θ| < 3°). For larger rotations, the error of our alignment increases only gradually, reaching a 10-pixel misalignment at 10° rotation.

The performance of the three alignment algorithms under small image rotation is further described by the RMSEs in Table 2.

Table 1. RMSE of estimated shifts under large translation

                 Correlation   Adams et al.   This paper
1 ≤ tx ≤ 128         0.118         0.083         0.420
1 ≤ tx ≤ 456       278.444       279.549         2.281

Within a ±1° rotation range, Adams et al. is the most accurate method, closely followed by this paper. Both achieve subpixel accuracy. For any larger rotation range, our algorithm is the most accurate. We consistently produce less than 2-pixel alignment error for rotations up to 5°. Adams et al., on the other hand, fails to align images with more than 3° rotation.

4.2. Panorama stitching

We demonstrate the accuracy of our multi-hypothesis projection-based shift estimation on a sweeping panorama application. The five images on the top row of Fig. 4 come from a sequence of 48 images captured by a hand-held camera. Due to the panning motion of the camera, the input images undergo a mainly horizontal translation. The translations are calculated between consecutive image pairs using the alignment algorithm presented in Section 2. Six frames (1, 12, 22, 33, 43, 48) with sufficient content overlap are automatically selected for panorama stitching. The selected frames are stitched together along a set of irregular seams (shown as yellow lines in the panorama).

Fig. 4 demonstrates our solution's robustness to moving objects and non-purely translational motion. Because the intensity difference across the seams is minimized, the stitched image appears seamless. The seams do not cut through moving objects such as the cars on the road. However, one of these cars appears multiple times in the panorama as it moves through the scene during image acquisition. Another visible artefact is the bending of the balcony wall close to the camera. This geometric distortion is due to the approximation of a full 3D projective transformation of the images by a simple 2D translation. Despite these artefacts, the produced panorama is a plausible representation of the scene.

Our global alignment algorithm is also robust to motion blur. An example of a panning sequence with severe motion blur is shown on the top row of Fig. 9. Because multiple 1D shift hypotheses are kept, the correct 2D shifts are successfully detected, leading to a good panorama reconstruction on the bottom row of Fig. 9. Note that the output panorama could have been improved further using motion blur deconvolution. However, deconvolution is out of the scope of this paper.

More panoramas reconstructed by our system are given in Fig. 10. Our algorithm works well outdoors (Fig. 10a) because motion of distant scenes can be approximated by a translation.

Table 2. RMSE of estimated shifts under small rotation

                 Correlation   Adams et al.   This paper
−1° ≤ θ ≤ 1°         1.070         0.673         0.737
−3° ≤ θ ≤ 3°         3.212         1.684         1.310
−5° ≤ θ ≤ 5°         5.481       141.555         1.679


Frame 11, Frame 9, Frame 6, Frame 3, Frame 0; sweeping panorama from 12 panning images (9 actually used)

Fig. 9. Seamless panorama reconstruction under motion blur (output size is 3456×704).

(a) Outdoor panorama (8448×1428) from 14 images

(b) 360◦ panorama (4448×496) from a PTZ camera

(c) 180◦ panorama (4000×704) of a busy shopping centre

Fig. 10. Sweeping panoramas constructed by our system.

Projective distortions only appear when there is significant depth difference in the scene. The 360° indoor panorama in Fig. 10b, for example, shows bending of linear structures due to this perspective effect. These distortions are unavoidable for a wide-angle view because the panorama effectively lies on a cylindrical surface, whereas each input image lies on a different imaging plane. Finally, a 180° view of a busy shopping centre is presented in Fig. 10c. The reconstructed panorama captures many people in motion, none of whom are cut by the hidden seams.

For comparison purposes, we captured some panoramic images using a consumer camera available on the market. Different from our technology, which stitches as few frames as possible along irregular seams, this camera joins as many frames as it captures along straight vertical seams. This strip-based stitching algorithm is prone to motion artefacts such as the motion trail of the car in Fig. 11a. The thin-strip approach is also not robust to jittery camera motion. Fig. 11b shows some jitter artefacts on a whiteboard and a nearby window due to an uneven panning motion. The top drawer of the vertical panorama in Fig. 11c also looks distorted. Our solution does not suffer from such jitter artefacts because the images are aligned in both directions before fusion.

(a) motion trail of a moving car

(b) ripples due to unstable sweeping motion (c) over-exposed

Fig. 11. Some panoramas produced by a consumer camera.

5. CONCLUSION

We have presented a new projection-based shift estimation algorithm using multiple shift hypothesis testing. Our shift estimation algorithm is fast and can handle large image translations in either x- or y-direction. The robustness of the algorithm in real-life situations is demonstrated using a sweeping panorama stitching application. Our alignment algorithm is found to be robust against small perspective changes due to camera motion. It is also robust against motion blur and moving objects in the scene. We have presented a demo application for live panorama stitching from a Canon camera. The panorama stitching solution comprises a multi-hypothesis projection-based image alignment step, an irregular seam stitching step and an optional image blending step.

6. ACKNOWLEDGMENT

The authors would like to thank Ankit Mohan from Canon USA R&D and Edouard Francois from Canon Research France for their help in improving this paper's presentation.

7. REFERENCES

[1] S. Alliney and C. Morandi, “Digital image registration using projections,” PAMI, 8(2):222–233, 1986.

[2] A. Adams, N. Gelfand, and K. Pulli, “Viewfinder alignment,” Comput. Graph. Forum, 27(2):597–606, 2008.

[3] E. H. Adelson, C. H. Anderson, J. R. Bergen, P. J. Burt, and J. M. Ogden, “Pyramid methods in image processing,” RCA Eng., 29(6):33–41, 1984.

[4] T. Q. Pham, “Non-maximum suppression using fewer than two comparisons per pixel,” in Proc. of ACIVS, 2010, pp. 438–451.

[5] H.-Y. Shum and R. Szeliski, “Construction of panoramic mosaics with global and local alignment,” IJCV, 36(2):101–130, 2000.

[6] J. Davis, “Mosaics of scenes with moving objects,” in Proc. of CVPR, 1998, pp. 354–360.

[7] S. Avidan and A. Shamir, “Seam carving for content-aware image resizing,” in Proc. of SIGGRAPH, 2007.