Modeling and Visualization of Human Activities for Multicamera Networks
Multicamera networks are becoming increasingly complex, involving larger sensing areas in order to capture activities and behavior that evolve over long spatial and temporal windows. This necessitates novel methods to process the information sensed by the network and to visualize it for an end user. In this paper, we describe a system for modeling and on-demand visualization of the activities of groups of humans. Using prior knowledge of the 3D structure of the scene as well as camera calibration, the system localizes humans as they navigate the scene. Activities of interest are detected by matching models of these activities, learned a priori, against the multiview observations. The trajectories and the activity index for each individual summarize the dynamic content of the scene. These are used to render the scene with virtual 3D human models that mimic the observed activities of real humans. In particular, the rendering framework is designed to handle large displays with a cluster of GPUs and to reduce cognitive dissonance by rendering realistic weather effects and illumination. We envision the use of this system for immersive visualization as well as for the summarization of videos that capture group behavior.
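The localization step mentioned in the abstract (camera calibration plus prior knowledge of the 3D scene) is commonly realized as a ground-plane back-projection: the image point of a detected person's feet is intersected with the known ground plane z = 0. The sketch below illustrates that generic technique with made-up calibration values; it is not the paper's actual implementation, and `K`, `R`, `t` are assumed example parameters.

```python
import numpy as np

# Assumed example calibration (not from the paper):
# intrinsics K, rotation R, translation t, with projection x ~ K (R X + t).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])  # camera center at (0, 0, -5), looking along +z

def localize_on_ground(uv, K, R, t):
    """Back-project an image point (e.g., a detected person's feet)
    onto the world ground plane z = 0."""
    # Viewing ray direction in world coordinates
    ray = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    cam_center = -R.T @ t
    # Intersect cam_center + s * ray with the plane z = 0
    s = -cam_center[2] / ray[2]
    return cam_center + s * ray
```

With the assumed calibration, the principal point (320, 240) maps to the world origin, and points to its right land along the world x-axis; each camera in the network would contribute one such ground-plane estimate per detection.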
Main Authors: | Patro Robert, Varshney Amitabh, Sankaranarayanan AswinC, Turaga Pavan, Chellappa Rama |
---|---|
Format: | Article |
Language: | English |
Published: | SpringerOpen, 2009-01-01 |
Series: | EURASIP Journal on Image and Video Processing |
Online Access: | http://jivp.eurasipjournals.com/content/2009/259860 |
id: doaj-e2c2432313604754b8071c806477857e
record_format: Article
collection: DOAJ
language: English
format: Article
sources: DOAJ
author: Patro Robert, Varshney Amitabh, Sankaranarayanan AswinC, Turaga Pavan, Chellappa Rama
author_sort: Patro Robert
title: Modeling and Visualization of Human Activities for Multicamera Networks
publisher: SpringerOpen
series: EURASIP Journal on Image and Video Processing
issn: 1687-5176, 1687-5281
publishDate: 2009-01-01
description: Multicamera networks are becoming increasingly complex, involving larger sensing areas in order to capture activities and behavior that evolve over long spatial and temporal windows. This necessitates novel methods to process the information sensed by the network and to visualize it for an end user. In this paper, we describe a system for modeling and on-demand visualization of the activities of groups of humans. Using prior knowledge of the 3D structure of the scene as well as camera calibration, the system localizes humans as they navigate the scene. Activities of interest are detected by matching models of these activities, learned a priori, against the multiview observations. The trajectories and the activity index for each individual summarize the dynamic content of the scene. These are used to render the scene with virtual 3D human models that mimic the observed activities of real humans. In particular, the rendering framework is designed to handle large displays with a cluster of GPUs and to reduce cognitive dissonance by rendering realistic weather effects and illumination. We envision the use of this system for immersive visualization as well as for the summarization of videos that capture group behavior.
url: http://jivp.eurasipjournals.com/content/2009/259860