Interactive chroma keying for mixed reality

Abstract

In Mixed Reality (MR) applications, immersion of virtual objects in captured video contributes to the perceived unification of two worlds, one real, one synthetic. Since virtual actors and surroundings may appear both closer and farther than real objects, compositing must consider spatial relationships in the resulting world. Chroma keying, often called blue screening or green screening, is one common solution to this problem. The method is under-constrained and is most commonly addressed through a combination of environment preparation and commercial products. In interactive MR domains that impose restrictions on the video camera hardware, such as experiences using video see-through (VST) head-mounted displays (HMDs), chroma keying becomes even more difficult due to the relatively low camera quality, the use of multiple camera sources (one per eye), and the required processing speed. Dealing with these constraints requires a fast and affordable solution. In our approach, we precondition the chroma key using principal component analysis (PCA) to obtain usable alpha mattes from video streams in real time on commodity graphics processing units (GPUs). In addition, we demonstrate how our method compares to off-line commercial keying tools and how it performs with respect to signal noise within the video stream. Copyright (C) 2009 John Wiley & Sons, Ltd.
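The paper's GPU implementation is not reproduced here; the sketch below is a minimal CPU illustration in NumPy of the general idea of preconditioning a chroma key with PCA: fit principal axes to color samples of the backdrop, then score each pixel's distance from that key-color distribution to form an alpha matte. The function names, the `threshold` parameter, and the Mahalanobis-style scoring are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def fit_key_pca(key_samples: np.ndarray):
    """Fit a PCA model to RGB samples drawn from the green/blue backdrop.

    key_samples: (N, 3) array of RGB values.
    Returns the sample mean, principal axes (rows), and eigenvalues.
    """
    mean = key_samples.mean(axis=0)
    centered = key_samples - mean
    # Eigen-decomposition of the 3x3 color covariance matrix.
    cov = centered.T @ centered / max(len(centered) - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns ascending eigenvalues; reorder to descending.
    order = np.argsort(eigvals)[::-1]
    return mean, eigvecs[:, order].T, eigvals[order]

def alpha_matte(frame: np.ndarray, mean, components, eigvals,
                threshold: float = 3.0) -> np.ndarray:
    """Estimate a per-pixel alpha matte from distance to the key color.

    frame: (H, W, 3) RGB image. Returns (H, W) alpha in [0, 1],
    where 0 = key color (transparent) and 1 = foreground (opaque).
    """
    pixels = frame.reshape(-1, 3).astype(np.float32) - mean
    # Project pixels onto the principal axes and whiten by the eigenvalues,
    # giving a Mahalanobis-like distance to the key-color distribution.
    proj = pixels @ components.T
    dist = np.sqrt((proj ** 2 / (eigvals + 1e-6)).sum(axis=1))
    # Soft threshold: pixels near the key color fade toward alpha = 0.
    alpha = np.clip(dist / threshold, 0.0, 1.0)
    return alpha.reshape(frame.shape[:2])

# Hypothetical usage: sample the backdrop once per session, then matte
# each incoming frame. A real-time VST-HMD pipeline would instead run the
# projection and thresholding per fragment on the GPU.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    key_samples = np.array([0, 200, 30]) + rng.normal(0, 8, size=(500, 3))
    frame = np.full((4, 4, 3), [0, 200, 30], dtype=np.float32)
    frame[0, 0] = [180, 60, 60]  # one "foreground" pixel
    mean, comps, eigvals = fit_key_pca(key_samples)
    print(alpha_matte(frame, mean, comps, eigvals))
```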