Title

Defining a Visual Correlation Threshold for a Proposed Test & Evaluation Standard

Keywords

Correlation threshold; Visual correlation

Abstract

Interoperability, succinctly defined, is the ability of multiple systems to find common ground in order to work together in a coupled environment. Standardization designs for simulator interoperation have been developed, but differences in manufacturers' image generation software (e.g., rendering engines, polygonalization, thinning) make it difficult to produce a standardized "fidelity" across applications. Proprietary application information is a key factor in this issue: manufacturers allow database correlation or synthesis but disallow uniform image generation processes. Despite the advanced capabilities available to address Terrain Database (TDB) correlation challenges, such as side-by-side viewers and sophisticated software applications, visual correlation remains a problem within simulation-based training environments. A determining cue in a visual system simulation is the fidelity or "look" of the environment, yet discrepancies between visual systems are identified by a human observing multiple displays, without clear metrics for identifying and evaluating correlation levels. This paper presents a draft standard for evaluating visual correlation within networked simulation-based training systems. The work results from empirical research comparing human-in-the-loop evaluation with two automated correlation assessment methods.

Publication Date

10-22-2012

Publication Title

Fall Simulation Interoperability Workshop 2012, 2012 Fall SIW

Number of Pages

15-19

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

Scopus ID

84867535010 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/84867535010
