Extensibility, modularity, and quality-adaptive streaming towards collaborative video authoring

Bibliographic Details
Main Author: Légaré, Jean-Sébastien
Language: English
Published: University of British Columbia 2010
Online Access: http://hdl.handle.net/2429/21741
Description
Summary: Video capture devices and online video viewing sites are proliferating. Content can be produced more easily than ever, but the tasks required to compose and produce interesting videos remain labor-intensive. Unfortunately, very little support exists for groups of amateurs to meet and collaborate on the creation of new media. Existing video sharing sites do offer some support for collaboration, but their best-effort mode of content delivery makes it impossible to support many of the desirable features usually available in local editors, such as advanced navigation and fast startup. Quality-adaptive streaming is attractive because it allows content to be distributed and lets clients of varying capabilities view the same encoded video source, the so-called "Encode once, stream anywhere". This becomes even more important as the gap between low-end and high-end devices widens. In previous work we presented a quality-adaptive streaming system called QStream which has none of these limitations, but lacks editing features. Several media frameworks on the desktop can provide the modules and pipelines necessary to build an editor, but they too are non-adaptive and have multiple incompatibilities with QStream. This thesis presents Qinematic, a content creation framework that is quality-adaptive and that borrows concepts from popular media frameworks for extensibility and modularity.
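
To make the module-and-pipeline idea concrete, the following is a minimal, hypothetical sketch in C; it is not Qinematic's or QStream's actual API, only an illustration of the pattern the abstract refers to. Frames flow through a chain of stages, and each stage receives a quality target it may adapt to, which is the essence of quality-adaptive processing in a modular pipeline.

/* Hypothetical sketch of a modular, quality-adaptive pipeline.
 * Names (frame_t, stage_fn, the stages) are invented for illustration. */
#include <stdio.h>

typedef struct frame { int id; double quality; } frame_t;

/* A pipeline stage: receives a frame and a quality target it may adapt to. */
typedef void (*stage_fn)(frame_t *f, double target_quality);

static void decode_stage(frame_t *f, double q) { f->quality = q; /* adapt decode quality */ }
static void effect_stage(frame_t *f, double q) { (void)f; (void)q; /* e.g. overlay titles */ }
static void render_stage(frame_t *f, double q) {
    (void)q;
    printf("frame %d rendered at quality %.2f\n", f->id, f->quality);
}

int main(void) {
    /* Stages are independent modules composed into a pipeline. */
    stage_fn pipeline[] = { decode_stage, effect_stage, render_stage };
    double target_quality = 0.5;  /* lowered when bandwidth or CPU is scarce */
    for (int i = 0; i < 3; i++) {
        frame_t f = { .id = i, .quality = 1.0 };
        for (size_t s = 0; s < sizeof pipeline / sizeof *pipeline; s++)
            pipeline[s](&f, target_quality);
    }
    return 0;
}

In this sketch, new capabilities (effects, transitions, exporters) would be added by inserting further stages into the pipeline, while the shared quality target lets every stage degrade gracefully on weaker clients.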