VisualVital: An Observation Model for Multiple Sections of Scenes
Abstract
A computational methodology is proposed for reorienting, repositioning, and merging camera positions within a region under surveillance so as to optimally cover all features of interest without overburdening human or machine analysts with an overabundance of video information. This streamlines many video monitoring applications, such as vehicular traffic and security camera monitoring, which are often hampered by the difficulty of manually identifying the few specific locations (for tracks) or frames (for videos) relevant to a particular inquiry from hundreds or thousands of hours of video. VisualVital addresses this problem by using geographic information to select ideal locations and orientations of camera positions that fully cover the span without losing key visual details. Given a target number of cameras, it merges relatively unimportant camera positions to reduce the volume of video that must be collected, maintained, and presented. Experiments apply the technique to paths chosen from maps of different cities around the world with various target camera counts. The approach finds detail-optimizing positions with a time complexity of O(n log n). ©2017 IEEE.
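The abstract does not specify how positions are merged, so the sketch below is only one way such a reduction could reach a target camera count in O(n log n) time: the least-important position is repeatedly absorbed into its more important neighbor using a min-heap and prev/next links. The importance scores, the midpoint rule, and the function name are assumptions for illustration, not the authors' actual method.

```python
import heapq

def merge_camera_positions(positions, importance, target_count):
    """Greedily merge the least-important camera positions along a path
    until only `target_count` remain.  A min-heap keyed on importance plus
    prev/next links give O(n log n) total work.

    positions    -- ordered (x, y) coordinates along the monitored path
    importance   -- per-position scores (hypothetical, e.g. feature density)
    target_count -- desired number of cameras after merging
    """
    n = len(positions)
    if target_count >= n:
        return list(positions)

    pos = list(positions)
    alive = [True] * n
    prev = list(range(-1, n - 1))   # index of the previous live position
    nxt = list(range(1, n + 1))     # index of the next live position (n = none)

    heap = [(imp, i) for i, imp in enumerate(importance)]
    heapq.heapify(heap)             # O(n)

    remaining = n
    while remaining > target_count and heap:
        _, i = heapq.heappop(heap)  # O(log n) per merge
        if not alive[i]:
            continue                # stale heap entry
        left, right = prev[i], nxt[i]
        if left < 0 and right >= n:
            break                   # nothing left to merge into
        # Absorb this position into its more important live neighbor.
        if left < 0:
            nb = right
        elif right >= n:
            nb = left
        else:
            nb = left if importance[left] >= importance[right] else right
        pos[nb] = tuple((a + b) / 2.0 for a, b in zip(pos[nb], pos[i]))
        # Unlink position i from the neighbor chain.
        if left >= 0:
            nxt[left] = right
        if right < n:
            prev[right] = left
        alive[i] = False
        remaining -= 1

    return [p for p, keep in zip(pos, alive) if keep]


if __name__ == "__main__":
    cams = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
    scores = [0.9, 0.2, 0.8, 0.1, 0.7]   # hypothetical importance values
    print(merge_camera_positions(cams, scores, target_count=3))
```

In this toy run the two lowest-scoring positions are folded into their stronger neighbors, leaving three cameras; the heap bounds the per-merge cost at O(log n), consistent with the O(n log n) figure quoted in the abstract, though the paper's actual detail-optimization criterion is not reproduced here.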