Title

3D Object Transfer Between Non-Overlapping Videos

Keywords

Computer vision; Depth-driven video matting and compositing; Image-based rendering; Video-based augmented reality

Abstract

Given two video sequences of different scenes acquired with moving cameras, it is often desirable to seamlessly transfer a 3D object from one sequence to the other. In this paper, we present a video-based approach that extracts the alpha mattes of rigid or approximately rigid 3D objects from one or more source videos and then transfers them, with geometric correctness, into a target video of a different scene. Our framework builds upon techniques in camera pose estimation, 3D spatiotemporal video alignment, depth recovery, key-frame editing, natural video matting, and image-based rendering. Based on explicit camera pose estimation, the camera trajectories of the source and target videos are aligned in 3D space. Combined with the estimated dense depth information, this alignment significantly reduces the burden of key-frame editing and improves the quality of video matting. During the transfer, our approach not only correctly reproduces the geometric deformation of the 3D object caused by the differing camera trajectories, but also retains the soft shadows and environmental lighting properties of the object, ensuring that the augmenting object is in harmony with the target scene. © 2006 IEEE.
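The abstract describes a pipeline that ends with alpha-matte compositing of the re-rendered object layer into the target video. The sketch below is not the authors' implementation; it only illustrates the standard compositing equation C = alpha * F + (1 - alpha) * B, assuming the object layer has already been re-rendered to match the target camera's pose and depth. All function and parameter names are hypothetical.

```python
import numpy as np

def composite(target_frame: np.ndarray,
              warped_object: np.ndarray,
              alpha: np.ndarray) -> np.ndarray:
    """Alpha-matte compositing of a transferred object into a target frame.

    target_frame  -- H x W x 3 background frame from the target video
    warped_object -- H x W x 3 object layer, already re-rendered so its
                     geometry matches the target camera's trajectory
    alpha         -- H x W matte in [0, 1] extracted from the source video
    """
    a = alpha[..., None].astype(np.float32)
    blended = (a * warped_object.astype(np.float32)
               + (1.0 - a) * target_frame.astype(np.float32))
    return blended.astype(target_frame.dtype)
```

In the paper's setting, the geometric correction comes from aligning the two camera trajectories and using the recovered dense depth; the compositing step itself is the conventional matting equation shown above.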

Publication Date

1-1-2006

Publication Title

Proceedings - IEEE Virtual Reality

Volume

2006

Number of Pages

127-134

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/VR.2006.3

Scopus ID

33750142564 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/33750142564
