VisualVital: An Observation Model for Multiple Sections of Scenes

Publisher

Institute of Electrical and Electronics Engineers Inc.

Abstract

A computational methodology is proposed for reorienting, repositioning, and merging camera positions within a region under surveillance, so as to optimally cover all features of interest without overburdening human or machine analysts with an overabundance of video information. This streamlines many video-monitoring applications, such as vehicular traffic and security camera monitoring, which are often hampered by the difficulty of manually identifying the few specific locations (for tracks) or frames (for videos) relevant to a particular inquiry within hundreds or thousands of hours of video. VisualVital ameliorates this problem by using geographic information to select ideal locations and orientations of camera positions that fully cover the region without losing key visual details. Given a target number of cameras, it merges relatively unimportant camera positions to reduce the quantity of video information that must be collected, maintained, and presented. Experiments apply the technique to paths chosen from maps of several cities around the world with various target camera counts. The approach finds detail-optimizing positions with a time complexity of O(n log n). ©2017 IEEE.
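The merging step the abstract describes can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the `Camera` class, the per-position importance scores, and the rule of folding each discarded position into its nearest kept neighbor are all assumptions introduced here. The single sort by importance is what makes the pass O(n log n) in the number of positions (the naive nearest-neighbor search afterward is kept simple for clarity).

```python
from dataclasses import dataclass

@dataclass
class Camera:
    x: float
    y: float
    importance: float  # hypothetical relevance score for the features this position covers

def merge_cameras(cameras, target):
    """Reduce a set of camera positions to `target` positions.

    Sort once by importance (O(n log n)) and keep the top `target`
    positions; fold each discarded position's importance into its
    nearest kept camera so no coverage weight is silently dropped.
    """
    ranked = sorted(cameras, key=lambda c: c.importance, reverse=True)
    kept, rest = ranked[:target], ranked[target:]
    for victim in rest:
        # Naive nearest-kept search; the paper's method is geometry-aware.
        nearest = min(kept, key=lambda c: (c.x - victim.x) ** 2 + (c.y - victim.y) ** 2)
        nearest.importance += victim.importance
    return kept
```

For example, merging five positions down to two keeps the two most important ones and transfers the remaining importance to whichever kept camera is closest, so total importance is conserved.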

Description

Full text access from Treasures at UT Dallas is restricted to current UTD affiliates (use the provided Link to Article).

Keywords

Big data, Cameras, Computer networks--Security measures, Smart cities, Ubiquitous computing, Geographic information systems, Video surveillance

Sponsorship

NSF award #1054629 and AFOSR award FA9550-14-1-0173

Rights

©2017 IEEE
