This PDF file contains the front matter associated with SPIE-IS&T Proceedings Volume 7253, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Assessing video content transmitted over networked content infrastructures has become a fundamental requirement for service providers. Previous research has shown that there is no direct correlation between traditional network QoS metrics and user-perceived video quality. This paper presents a study investigating the impact of individual packet loss on four types of H.264 main-profile encoded video streams. Four artifact factors are defined to model the degree of artifacts in video frames. Further, the visibility of artifacts is investigated in conjunction with a user study, taking into account the video content characteristics, encoding scheme, and error concealment. The individual and joint impacts of the artifact factors on the test video sequences are explored. The results of the user tests show that the artifact factor-based assessment method outperforms PSNR-based and network QoS-based quality assessment.
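The artifact factors are the paper's contribution; the PSNR baseline they are compared against is standard and can be sketched as follows (a minimal illustration in Python, not code from the paper):

```python
import math

def psnr(ref, deg, max_val=255):
    """Peak signal-to-noise ratio between a reference and a degraded frame,
    both given as flat sequences of pixel values. Higher is better;
    identical frames yield infinity."""
    mse = sum((r - d) ** 2 for r, d in zip(ref, deg)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

ref = [100, 120, 130, 140]
deg = [101, 119, 131, 138]
print(round(psnr(ref, deg), 2))  # 45.7 -- small distortions give a high PSNR
```

The paper's point is precisely that a frame-difference metric like this ignores how visible a loss-induced artifact actually is, which the artifact factors are designed to capture.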
In this paper, we propose a novel algorithm for constructing an Unequal Error Protection (UEP) FEC code
targeted towards video streaming applications. A concatenation of a set of parallel outer block codes followed
by a packet interleaver and an inner block code is presented. The algorithm calculates on the fly the optimal
allocation of the code rates of the inner and outer codes. When applied to video streaming applications using
H.264, the discussed UEP framework achieves gains of up to 5 dB in video quality compared to equal error
protection (EEP) FEC at the same code rate.
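As a rough intuition for unequal protection, the sketch below splits a fixed parity-packet budget across video layers in proportion to assumed importance weights; the proportional rule and the weights are illustrative stand-ins for the paper's on-the-fly rate optimizer:

```python
def allocate_parity(importance, total_parity):
    """Split a parity-packet budget across layers proportionally to importance.
    (Hypothetical stand-in for the paper's optimal code-rate allocation.)"""
    total_w = sum(importance)
    alloc = [total_parity * w // total_w for w in importance]
    # hand any leftover packets to the most important layers first
    leftover = total_parity - sum(alloc)
    for i in sorted(range(len(importance)), key=lambda i: -importance[i]):
        if leftover == 0:
            break
        alloc[i] += 1
        leftover -= 1
    return alloc

print(allocate_parity([3, 2, 1], 10))  # [6, 3, 1]
```

Under equal error protection the same 10 parity packets would be spread uniformly; concentrating them on the layers whose loss hurts decoded quality most is what yields the reported gain.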
Video streaming over wireless networks is becoming increasingly popular because of the high bandwidth and quality-of-service support offered by recent wireless standards, such as IEEE 802.11e. We consider optimizing the quality of video
streaming in single-hop wireless networks that are composed of multiple wireless stations. Our optimization problem
controls parameters in different layers to optimally allocate the wireless network resources among all stations. We address
this problem in two steps. First, we formulate an abstract optimization problem for video streaming in single-hop wireless
networks in general. This formulation exposes the important interaction between parameters belonging to different layers
in the network stack. Then, we instantiate and solve the general problem for the recent IEEE 802.11e WLANs, which
support prioritized traffic classes. We show how the calculated optimal solutions can efficiently be implemented in the
distributed mode of the IEEE 802.11e standard. We evaluate our proposed solution using extensive simulations in the
OPNET simulator, which captures most features of realistic wireless networks. In addition, to show the practicability of
our solution, we have implemented it in the driver of an off-the-shelf wireless adapter that complies with the IEEE 802.11e
standard. Our experimental and simulation results show that significant quality improvement in video streams can be
achieved using our solution, without incurring any significant communication or computational overhead.
The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for
the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand
(VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number
of VOD streams is inherently difficult as VBR encoding schemes exhibit significant bandwidth variability. In this paper,
we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations
and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation
provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop
probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered
bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher
link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework
uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in
an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme
especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings,
typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
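The two-tier idea can be illustrated with hypothetical per-slot demand traces: each stream gets a per-stream base reservation, and a single shared reservation covers the worst aggregate excess over those bases. (This deterministic sketch omits the paper's statistical multiplexing and frame-drop bounds.)

```python
def two_tier_reservation(traces, base_rates):
    """traces: one list of per-slot bandwidth demands per stream (equal length).
    Reserve base_rates[i] for stream i, plus a single shared slack sized to
    the worst aggregate excess over the bases across all slots."""
    slots = len(traces[0])
    shared = max(
        sum(max(0, tr[t] - b) for tr, b in zip(traces, base_rates))
        for t in range(slots)
    )
    return sum(base_rates) + shared

traces = [[2, 5, 2], [3, 3, 6]]  # hypothetical per-slot demands (Mbps)
print(two_tier_reservation(traces, base_rates=[3, 4]))  # 9
# a deterministic per-stream peak reservation would need 5 + 6 = 11
```

Because the two streams do not peak in the same slot, the shared slack is smaller than the sum of the individual peak excesses, which is the source of the bandwidth savings.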
We address the problem of the proper choice of the thickness of pre-encoded video layers in congestion-controlled
streaming applications. While congestion control allows network resources to be distributed fairly among the different video sessions, it generally imposes an adaptation of the streaming rate when the playback
delay is constrained. This can be achieved by adding or dropping layers in scalable video along with efficient
smoothing of the video streams. The size of the video layers directly drives the convergence of the congestion
control to the stable state. In this paper, we derive bounds on the encoding rates of the video layers that depend on the prefetch delay available for stream smoothing. We then discuss the practical
scheduling aspects related to the transmission of layered video when delays are constrained. We finally describe
an implementation of the proposed scheduler and we analyze its performance in NS-2 simulations. We show
that it is possible to derive a media-friendly rate allocation for layered video in different transmission scenarios,
and that the proper choice of the layer thickness improves the average video quality when the prefetch delay is
constrained.
Distributed interactive applications tend to have stringent latency requirements, and some may have high bandwidth demands. Many of them also have very dynamic user groups for which all-to-all communication is needed. In online multiplayer games, for example, such groups are determined through region-of-interest management in the application. We have investigated a variety of group management approaches for overlay networks in earlier work and shown that several useful tree heuristics exist. However, these heuristics require full knowledge of all overlay link latencies. Since this is not scalable, we investigate the effects that latency estimation techniques have on the quality of overlay tree construction. We do this by evaluating one example of our group management approaches on PlanetLab and examining how latency estimation techniques influence its quality. Specifically, we investigate how two well-known latency estimation techniques, Vivaldi and Netvigator, affect the quality of tree building.
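Of the two techniques, Vivaldi is the easier to sketch: each node maintains a synthetic coordinate and nudges it after every RTT measurement so that coordinate distances approximate latencies. A minimal 2-D version, without the adaptive timestep of the full algorithm:

```python
import math

def vivaldi_step(xi, xj, rtt, delta=0.25):
    """Move node i's coordinate so its Euclidean distance to node j better
    matches the measured RTT. delta damps the movement."""
    dx = [a - b for a, b in zip(xi, xj)]
    dist = math.hypot(*dx) or 1e-9     # avoid dividing by zero
    error = rtt - dist                 # > 0: estimate too short, push apart
    unit = [d / dist for d in dx]
    return [a + delta * error * u for a, u in zip(xi, unit)]

xi = [1.0, 0.0]
for _ in range(20):                    # repeated measurements converge
    xi = vivaldi_step(xi, [0.0, 0.0], rtt=2.0)
print(round(math.hypot(*xi), 3))       # distance approaches the measured RTT of 2.0
```

Tree heuristics then consume the coordinate distances in place of measured link latencies, which is exactly where the estimation error studied in the paper enters.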
The quality of service for latency dependent content, such as video streaming, largely depends on the distance and available bandwidth
between the consumer and the content. Poor provision of these qualities results in reduced user experience and increased overhead. To
alleviate this, many systems operate caching and replication, utilising dedicated resources to move the content closer to the consumer.
Latency-dependent content creates particular issues for community networks, which often display the property of strong internal
connectivity yet poor external connectivity. However, unlike traditional networks, communities often cannot deploy dedicated
infrastructure for both monetary and practical reasons. To address these issues, this paper proposes Corelli, a peer-to-peer replication
infrastructure designed for use in community networks. In Corelli, high capacity peers in communities autonomously build a
distributed cache to dynamically pre-fetch content early on in its popularity lifecycle. By exploiting the natural proximity of peers in
the community, users can gain extremely low latency access to content whilst reducing egress utilisation. Through simulation, it is
shown that Corelli considerably increases accessibility and improves performance for latency dependent content. Further, Corelli is
shown to offer adaptive and resilient mechanisms that ensure that it can respond to variations in churn, demand and popularity.
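The abstract does not spell out Corelli's pre-fetch policy; as one plausible illustration, a peer could trigger pre-fetching when an item's request rate is growing quickly, i.e., early in its popularity lifecycle (the thresholds below are invented for the example):

```python
def should_prefetch(request_history, growth_factor=2.0, min_requests=5):
    """Trigger pre-fetching when requests in the newest window exceed the
    previous window by growth_factor. (An illustrative heuristic only; the
    abstract does not specify Corelli's actual policy.)"""
    if len(request_history) < 2:
        return False
    prev, curr = request_history[-2], request_history[-1]
    return curr >= min_requests and curr >= growth_factor * prev

print(should_prefetch([2, 3, 8]))   # True: demand is accelerating
print(should_prefetch([10, 11]))    # False: steady demand
```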
In this paper, we use experimental measurements to study the performance of multimedia applications over a
commercial IEEE 802.16 WiMAX network. Voice-over-IP (VoIP) and video streaming applications are tested. We
observe that the WiMAX-based network solidly supports VoIP. The voice quality degradation compared to high-speed
Ethernet is only moderate, despite higher packet loss and network delays. Despite different characteristics of the uplink
and the downlink, call quality is comparable for both directions. On-demand video streaming performs well using UDP.
Smooth playback of high-quality video/audio clips at aggregate rates exceeding 700 Kbps is achieved about 63% of the
time, with low-quality playback periods observed only 7% of the time. Our results show that WiMAX networks can
adequately support currently popular multimedia Internet applications.
Since mobile devices are battery powered, several mobile TV standards dictate using energy saving schemes to increase
the viewing time on mobile devices. The most common scheme for saving energy is to make the base station broadcast
the video data of a TV channel in bursts with a bit rate much higher than the encoding rate of the video stream, which
enables mobile devices to turn off their radio frequency circuits when not receiving bursts. While broadcasting bursts saves
energy, it increases the channel switching delay. The switching delay is an important performance metric, because long
and variable switching delays are annoying to users and may turn them away from the mobile TV service. In this paper,
we first analyze the burst broadcasting scheme currently used in many deployed mobile TV networks, and we show that it
is not efficient in terms of controlling the channel switching delay. We then propose new schemes to guarantee that a given
maximum switching delay is not exceeded and that the energy consumption of mobile devices is minimized. We prove the
correctness of the proposed schemes and derive closed-form equations for the achieved energy saving. We also implement
the proposed schemes in a mobile TV testbed to show their practicability and to validate our theoretical analysis.
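A first-order version of the energy-saving calculation (a common simplified model, not the paper's closed-form result): for a channel encoded at r bps and broadcast in bursts at R bps once per scheduling period, the radio is on for the burst payload time plus a fixed wake-up overhead per burst.

```python
def energy_saving(r, R, period, overhead):
    """Fraction of time the radio can sleep.
    r: stream encoding rate (bps); R: burst bit rate (bps);
    period: seconds between bursts; overhead: wake-up/sync seconds per burst."""
    on_time = r * period / R + overhead
    return 1.0 - on_time / period

# 512 kbps stream, 10 Mbps bursts, one burst per second, 30 ms overhead
print(round(energy_saving(512e3, 10e6, 1.0, 0.030), 3))  # 0.919
```

The model also hints at the trade-off the paper addresses: lengthening the period improves energy saving by amortizing the overhead, but a viewer switching channels must wait longer for the next burst.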
A distributed camera network allows for many compelling applications such as large-scale tracking or event
detection. In most practical systems, resources are constrained. Although one would like to probe every camera
at every time instant and store every frame, this is simply not feasible. Constraints arise from network bandwidth
restrictions, I/O and disk usage from writing images, and CPU usage needed to extract features from the images.
Assume that, due to resource constraints, only a subset of sensors can be probed at any given time unit.
This paper examines the problem of selecting the "best" subset of sensors to probe under some user-specified
objective, e.g., detecting as much motion as possible. With this objective, we would like to probe a camera
when we expect motion, but would not like to waste resources on a non-active camera. The main idea behind our
approach is the use of sensor semantics to guide the scheduling of resources. We learn a dynamic probabilistic
model of motion correlations between cameras, and use the model to guide resource allocation for our sensor
network.
Although previous work has leveraged probabilistic models for sensor-scheduling, our work is distinct in its
focus on real-time building-monitoring using a camera network. We validate our approach on a sensor network of
a dozen cameras spread throughout a university building, recording measurements of unscripted human activity
over a two-week period. We automatically learned a semantic model of typical behaviors, and show that one can significantly improve the efficiency of resource allocation by exploiting this model.
The Web is such a rich architecture that it is giving birth to new applications that were inconceivable only a few years ago. Because developing these applications differs from developing traditional applications, generalist programming languages are not well suited to the task. To address this problem, we have conceived the Hop programming language, whose syntax and semantics are specially crafted for programming Web applications. In order to demonstrate that Hop and its SDK can be used for implementing realistic applications, we have started to develop new, innovative applications that rely extensively on the infrastructure offered by the Web and that use Hop's unique features. We have initiated this effort with a focus on multimedia applications.
Using Hop we have implemented a distributed audio system. It supports a flexible architecture that allows
new devices to catch up with the application any time: a cell phone can be used to pump up the volume, a PDA
can be used to browse over the available musical resources, a laptop can be used to select the output speakers,
etc. This application is intrinsically complex to program because (i) it is distributed (several different devices access and control shared resources such as music repositories and sound card controllers), (ii) it is dynamic (new devices may join or quit the application at any time), and (iii) it involves heterogeneous devices with
different hardware architectures and different capabilities.
In this paper, we present the two main Hop programming forms that allow programmers to develop multimedia
applications more easily and we sketch the parts of the implementation of our distributed sound system that
illustrate when and why Hop helps in programming Web multimedia applications.
Software-based reactive multimedia computation systems are pervasive today in desktops but also in mobile
and ultra-portable devices. Most such systems offer a callback-based architecture to incorporate specific stream
processing. The Synchronous Data Flow (SDF) model and its variants are appropriate for many continuous stream processing problems, such as those involving video and audio. SDF allows for static scheduling of multi-rate processing graphs, thereby enabling optimal run-time efficiency. But the SDF abstraction does not adapt well to
real-time processing because it lacks the notion of time: executing non-trivial schedules of multi-rate data flows
in a time-triggered callback architecture, though possible through buffering, causes jitter, excessive latency and
run-time inefficiencies. In this paper we formally describe a new Time-Triggered SDF (TTSDF) model with a
static scheduling algorithm that produces periodic schedules that can be split among several callback activations, solving the above-mentioned problems. The model has the same expressiveness as SDF, in the sense that any
graph computable by one model will also be computable by the other. Additionally, it enables parallelization
and run time load balancing between callback activations.
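The starting point for any static SDF schedule is the repetition vector: the smallest positive integer firing counts under which every edge produces exactly as many tokens as it consumes. A minimal solver for these balance equations (illustrative; the TTSDF scheduler described in the paper additionally splits the resulting schedule across callback activations):

```python
from fractions import Fraction
from math import gcd

def repetition_vector(edges, n_actors):
    """edges: (src, dst, tokens_produced, tokens_consumed) per edge of a
    connected SDF graph. Solves q[src]*prod == q[dst]*cons and returns the
    smallest positive integer firing counts."""
    q = [None] * n_actors
    q[edges[0][0]] = Fraction(1)
    changed = True
    while changed:                       # propagate ratios along edges
        changed = False
        for s, d, p, c in edges:
            if q[s] is not None and q[d] is None:
                q[d] = q[s] * p / c
                changed = True
            elif q[d] is not None and q[s] is None:
                q[s] = q[d] * c / p
                changed = True
    lcm = 1                              # scale to the smallest integers
    for f in q:
        lcm = lcm * f.denominator // gcd(lcm, f.denominator)
    counts = [int(f * lcm) for f in q]
    g = 0
    for v in counts:
        g = gcd(g, v)
    return [v // g for v in counts]

# chain A -> B -> C: A emits 2 tokens, B consumes 3; B emits 4, C consumes 1
print(repetition_vector([(0, 1, 2, 3), (1, 2, 4, 1)], 3))  # [3, 2, 8]
```

Firing A three times, B twice, and C eight times balances every edge (3*2 = 2*3 and 2*4 = 8*1), which is the invariant any periodic SDF or TTSDF schedule must preserve.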
The popularity of social media has grown dramatically over the World Wide Web. In this paper, we analyze the video
popularity distribution of well-known social video websites (YouTube, Google Video, and the AOL Truveo Video Search
engine) and characterize their workload. We identify trends in the categories, lengths, and formats of those videos, as well
as characterize the evolution of those videos over time. We further provide an extensive analysis and comparison of video
content amongst the main regions of the world.
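Popularity distributions in such workloads are commonly summarized by a Zipf exponent; a simple least-squares fit on a log-log scale illustrates the idea (the paper's exact characterization method is not reproduced here):

```python
import math

def zipf_exponent(view_counts):
    """Least-squares slope of log(views) vs log(rank) over the ranked counts;
    returns the (positive) Zipf exponent alpha."""
    data = sorted(view_counts, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(data) + 1)]
    ys = [math.log(v) for v in data]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# synthetic catalog that follows views(rank) = 1e6 / rank exactly
views = [1e6 / r for r in range(1, 101)]
print(round(zipf_exponent(views), 2))  # 1.0
```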