As the Internet evolves into a global information infrastructure,
there is a growing need to provide quality of service (QoS)
guarantees for applications over the Internet. Multi-Protocol
Label Switching (MPLS) and Differentiated Services (Diffserv) are
two major technologies currently employed in the networking
community for this purpose. This paper proposes an approach to
traffic engineering that uses MPLS and Diffserv to enhance QoS
performance over an IP network. A novel algorithm for multiple QoS
constrained routing with imprecise link state information is
presented. Simulation is used to verify the correctness and
effectiveness of the algorithm. The service architecture and the
functionalities of various techniques of traffic engineering label
switched paths (e.g., label switched path setup, provisioning,
rerouting) are also discussed.
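The multi-constraint routing problem sketched above can be illustrated with a minimal feasibility check. This is not the paper's algorithm; it only shows the standard constraint structure such algorithms build on, with imprecise link state modelled as (low, high) intervals per metric. All names and values are illustrative.

```python
# Hypothetical feasibility check for a path under two QoS constraints:
# an additive metric (delay) and a concave metric (bandwidth), with
# imprecise link state given as (low, high) intervals.

def path_feasible(links, max_delay, min_bw):
    """links: list of dicts with 'delay' and 'bw' as (low, high) intervals."""
    # Worst-case delay: sum of the upper delay bounds (additive metric).
    worst_delay = sum(hi for (_lo, hi) in (l["delay"] for l in links))
    # Worst-case bandwidth: minimum of the lower bounds (concave metric).
    worst_bw = min(lo for (lo, _hi) in (l["bw"] for l in links))
    return worst_delay <= max_delay and worst_bw >= min_bw

path = [
    {"delay": (5, 8), "bw": (90, 110)},
    {"delay": (10, 12), "bw": (45, 60)},
]
```

A real algorithm must also search over candidate paths and trade off the two constraints; this snippet only evaluates one candidate conservatively.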
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users─please
sign in
to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on
SPIE.org.
Integrated Services (IntServ) and Differentiated Services (DiffServ) are two current approaches to providing Quality of Service (QoS) guarantees in the next-generation Internet. IntServ aims at providing guarantees to end applications (individual connections), which gives rise to scalability issues in the core of the network. DiffServ, by contrast, is designed to provide QoS to aggregates and does not suffer from scalability problems. It is therefore believed that the combination of IntServ at the edge and DiffServ in the core will be able to provide QoS guarantees to end applications. Although there have been several proposals on how to map services between IntServ and DiffServ, there has not been any study that quantitatively shows the level of QoS achievable when the two networks are connected. The aim of this paper is to quantitatively demonstrate the QoS guarantees that end applications can obtain when IntServ is run over DiffServ. We use the goodput, drop ratio, and non-conformant ratio of packets from the different services, and the queue size of the DiffServ router, to determine the QoS obtained by packets belonging to different traffic classes.
Technological communities formed on the basis of university, industry, and government partnerships are developing and deploying advanced network applications and technologies, accelerating the creation of tomorrow's Internet. Alcatel and North Carolina State University (NCSU) have jointly launched a virtual lab between Alcatel's Research & Innovation Center in Plano, TX, and the NCSU campus in Raleigh, NC, across the Internet2 national backbone network. The objective of this cooperative work is to conduct a large-scale field trial of the currently deployed state-of-the-art QoS technologies and investigate areas of improvement. Results from phase one of our work, Differentiated Services (DiffServ) experiments over this testbed involving network equipment from Alcatel and other third-party vendors, show that DiffServ is capable of delivering the premium service using its expedited forwarding (EF) per-hop behaviour (PHB) for a large class of bandwidth-hungry applications. However, we find that DiffServ needs additional mechanisms to efficiently deliver similar services to jitter- and delay-sensitive applications, especially under severe network congestion. The situation is even more complicated when one considers resource-sharing environments beyond the extreme cases of EF-only and best-effort (BE)-only traffic. The second phase of our work investigates, empirically and through simulation, a new fine-grained integrated scheduling scheme that extends DiffServ with a variety of active queue management (AQM) solutions, and studies their effect on end-to-end (e2e) DiffServ experiments involving not only EF/BE but also assured forwarding (AF) traffic.
A number of active queue management algorithms have been studied since
Random Early Detection (RED) was first introduced in 1993. While
analytical and experimental studies have debated whether
dropping/marking should be based on average or instantaneous queue
length or, alternatively, based on input and output rates (or queue
length slope), the merits and drawbacks of the proposed algorithms,
and the effect of load-based versus queue-based control have not been
adequately examined. In particular, only RED has been tested in
realistic configurations and in terms of user metrics, such as
response times and average delays. In this paper, we examine active
queue management (AQM) that uses both load and queuing delay to
determine its packet drop/mark probabilities. This class of
algorithms, which we call load/delay controllers (LDC), has the
advantage of controlling the queuing delay as well as accurately
anticipating incipient congestion. We compare LDC to a number of
well-known active queue management algorithms, including RED, BLUE,
FRED, SRED, and REM, in configurations with multiple bottlenecks, a
range of round-trip times, and bursty Web traffic. We evaluate each algorithm in terms
of Web response time, delay, packet loss, and throughput, in addition
to examining algorithm complexity and ease of configuration. Our
results demonstrate that load information, along with queue length,
can aid in making more accurate packet drop/mark decisions that reduce
the Web response time.
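The class of controllers described above combines two congestion signals. The following is an illustrative sketch of that idea only; the weights and functional form are assumptions, not the LDC algorithm from the paper.

```python
# Illustrative load/delay controller: the drop/mark probability grows
# with both measured input load (arrival rate over service rate) and
# queuing delay relative to a target.

def drop_probability(arrival_rate, service_rate, queue_delay, target_delay,
                     w_load=0.5, w_delay=0.5):
    load_term = max(0.0, arrival_rate / service_rate - 1.0)   # overload beyond capacity
    delay_term = max(0.0, queue_delay / target_delay - 1.0)   # delay beyond target
    return min(1.0, w_load * load_term + w_delay * delay_term)
```

The load term reacts before the queue fills (anticipating incipient congestion), while the delay term bounds the queuing delay, which is the complementary pair of signals the abstract argues for.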
In this work, we devise and study the performance of a new active queue
management mechanism for Web traffic that more intelligently selects packets to
drop at the onset of network congestion. The proposed mechanism specifically
targets short-lived or fragile flows (e.g., most HTTP flows) to keep link
utilization high while reducing the HTTP response time. The goal of the
proposed active queue management scheme is to protect from potential network
congestion both new TCP flows and TCP flows that have recently had packets
dropped, thus achieving better response times. Our simulation studies compare
the performance of RED and the proposed AQM for a network with only HTTP
traffic at loads below, close to, and above the network capacity. Simulations
show that a subsidy given to a flow that is in its initial stage provides
significantly better performance in terms of HTTP request-reply delays without
sacrificing link utilization. The new scheme is very simple to implement: it
adds only one control parameter to RED, and tuning this parameter is simple.
Since most HTTP flows are short-lived, state information needs to be
maintained only for a subset of active flows, and only for a very short period
of time, after which all the resources used for keeping that state can be
reclaimed.
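The "subsidy" idea can be sketched as a RED-style drop decision that scales the drop probability down for flows still in their initial stage. The single extra knob (the subsidy factor) mirrors the abstract's claim of one additional parameter over RED; everything else below is an illustrative stand-in, not the paper's scheme.

```python
import random

# RED-like drop decision with a subsidy for young/fragile flows.
def should_drop(avg_queue, min_th, max_th, max_p, flow_is_young, subsidy=0.1):
    if avg_queue < min_th:
        return False                       # below min threshold: never drop
    if avg_queue >= max_th:
        # At forced-drop levels, young flows keep a small reprieve.
        return not (flow_is_young and random.random() < subsidy)
    # RED's linear ramp between the thresholds.
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    if flow_is_young:
        p *= subsidy                       # subsidise flows in their initial stage
    return random.random() < p
```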
TCP flow control algorithms have been designed for wireline networks, where congestion is measured by packet loss due to buffer overflow. Wireless networks, however, also suffer significant packet losses due to bit errors and handoffs. TCP responds to all packet losses by invoking congestion control and avoidance algorithms, which results in degraded end-to-end performance in wireless networks. In this paper, we describe a Wireless Random Exponential Marking (WREM) scheme which effectively improves TCP performance over wireless networks by decoupling loss recovery from congestion control. Moreover, WREM is capable of handling the coexistence of both ECN-capable and non-ECN-capable routers. We present simulation results to show its effectiveness and compatibility.
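The decoupling principle can be summarised in a few lines: react to congestion only on an explicit mark, and treat an unmarked loss (likely a wireless bit error or handoff) as a recovery event that does not shrink the window. This sketch shows the general principle only, not WREM's actual algorithm.

```python
# Window update separating loss recovery from congestion control.
def on_event(cwnd, event, marked):
    if event == "mark" or (event == "loss" and marked):
        return max(1, cwnd // 2)   # congestion signalled: multiplicative decrease
    if event == "loss":
        return cwnd                # unmarked wireless loss: retransmit, keep cwnd
    return cwnd + 1                # ack: additive increase
```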
In this paper, the quality-of-service (QoS) issue is discussed for non-deterministic LANs (Ethernet). Ethernet bandwidth has grown to 10 Gbps, and the IEEE is expected to make formal specifications for 10 Gbps Ethernet available around the second half of 2002. Ethernet is based on CSMA/CD [6] and is a MAC-to-MAC protocol; the network interfaces have unique addresses, the MAC addresses, which manufacturers embed in the MAC chip. Because the Ethernet protocol has only a limited view of the path, our approach emphasizes bringing deterministic access to the Ethernet segment using a time-slice approach. Instead of a sender transmitting a packet whenever it desires, it first communicates with a master controller. The master controller is the aggregation point, such as a layer-2 Ethernet switch, a layer-3 switch/router, or any node within the shared Ethernet segment; these are the nodes that build links between islands within the network. The master controller is responsible for assigning time slices to sender nodes for sending their data packets, and during an assigned time slice only that node's packets travel over the network. In the case of an active switch, no high buffering capacity needs to be built in, because the time-slicing mechanism lets the master controller keep the rate of packet inflow at or below the rate of packet outflow; packet overflow never happens at the aggregation point. Most importantly, the deterministic nature of the link bandwidth, together with our time-based packet delivery mechanism, allows guarantees to be provided on the time taken for a packet of a given size to travel from one node to another. By summing the packet transfer times across all intermediate nodes, an absolute-time delivery system can be set up.
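A toy slot assignment illustrates the master-controller idea above: each sender transmits only in its granted slice, so the aggregation point never sees more than one node's packets at a time. The slot length and the request representation are invented for illustration; the paper's controller is not specified here.

```python
# Hypothetical round-robin time-slice assignment by a master controller.
def build_schedule(requests, slot_ms=10):
    """requests: {node: packets_pending} -> list of (node, start_ms, end_ms)."""
    schedule, t = [], 0
    for node, pending in requests.items():
        if pending == 0:
            continue                      # idle nodes get no slice
        schedule.append((node, t, t + slot_ms))
        t += slot_ms                      # slices never overlap: one sender at a time
    return schedule
```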
nd communication capabilities. Examples range from smart dust embedded in building materials to networks of appliances in the home. Embedded devices will be deployed in unprecedented numbers, will enable pervasive distributed computing, and will radically change the way people interact with the surrounding environment [EGH00a].
The paper targets embedded systems and their real-time (RT) communication requirements. RT requirements arise from the
During the transient period after a link failure, the network cannot guarantee the agreed service levels for user data, because the forwarding tables in the network are inconsistent. Moreover, link states can inadvertently be advertised incorrectly due to protocol timeouts, which may result in persistent route flaps. Reducing the probability of incorrectly advertised link states, and the time during which the forwarding tables are inconsistent, is therefore of eminent importance for providing consistent, high-level QoS to user data.
By queuing routing traffic in a queue with strict priority over all other (data) queues, i.e., assigning it the highest priority in a Differentiated Services model, we were able to reduce the probability of routing data loss to almost zero and reduce flooding times almost to their theoretical limit. The quality of service provided to user traffic was considerably higher than without the proposed modification.
The scheme is independent of the routing protocol, and can be used with most differentiated service models. It is compatible with the current OSPF standard, and can be used in conjunction with other improvements in the protocol with similar objectives.
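The modification described amounts to a strict-priority scheduler with a dedicated routing queue. The sketch below shows that scheduling rule; the queue names and interface are illustrative.

```python
from collections import deque

# Strict-priority scheduler: routing (e.g., OSPF link-state) packets
# always drain before any data packet.
class StrictPriorityScheduler:
    def __init__(self):
        self.routing = deque()   # highest priority: routing protocol traffic
        self.data = deque()      # everything else

    def enqueue(self, pkt, is_routing):
        (self.routing if is_routing else self.data).append(pkt)

    def dequeue(self):
        if self.routing:
            return self.routing.popleft()   # routing served first, always
        return self.data.popleft() if self.data else None
```

Because routing packets are never stuck behind data bursts, link-state updates flood at close to their theoretical minimum time, which is the effect reported above.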
The debates continue: is it better simply to build out the network with an over-supply of bandwidth, or to invest in delivery systems that enable IP QoS via DiffServ and related standards? In this new era of DWDM and "lambda routers", some service providers have chosen to focus on maximizing raw bandwidth for resale. But for others further up the services pyramid, IP-based QoS remains the key to profitable new service delivery and satisfied customers. This article explores the latest twists in this debate in light of the new technologies available today for both the core and the edge of the Internet and private shared IP networks.
In reliable multicast communications, data packets are disseminated to
all receivers in a multicast group. Generally, end-to-end congestion
control adjusts the sender's transmission rate to that of the node with
the lowest throughput in the group. When a multicast group includes
receivers with significantly low throughput, many receivers therefore
suffer throughput degradation because of those low-throughput nodes.
This technical problem, called intra-session fairness, is particular to
multicast communications, which in general include many receivers with
differing receiving capabilities. This paper proposes a new congestion
control scheme for reliable multicast which improves intra-session
fairness. The proposed scheme uses network support technology to let a
server in the network that has a low-throughput receiver beneath it
behave as a pseudo receiver. Such a server stores arriving data packets
and transmits them to the receivers beneath it, adjusting its
transmission rate to that of the slowest one. The server can thus hide
the existence of low-throughput nodes from the sender, which improves
intra-session fairness. Simulation results show that our proposed
congestion control scheme improves intra-session fairness as the number
of servers increases. We also show that strategic placement of servers
yields better intra-session fairness with a small number of servers.
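The pseudo-receiver role can be sketched as a store-and-forward node that paces its subtree at the slowest child's rate while presenting a faster rate upstream. The class below is an illustrative model of that behaviour, not the paper's protocol; all rates and names are invented.

```python
# Sketch of a pseudo receiver: hides slow children from the sender.
class PseudoReceiver:
    def __init__(self, child_rates):
        self.child_rates = child_rates
        self.buffer = []

    def upstream_rate(self):
        # The sender sees the server's capability, not the slow
        # children hidden behind it.
        return max(self.child_rates)

    def downstream_rate(self):
        return min(self.child_rates)   # pace forwarding to the slowest child

    def receive(self, pkt):
        self.buffer.append(pkt)        # store, then forward at downstream_rate
```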
Congestion avoidance on today's Internet is mainly provided by the combination of the TCP
protocol and Active Queue Management (AQM) schemes such as the de facto
standard RED (Random Early Detection). When used with ECN (Explicit Congestion Notification),
these algorithms can be modeled as a feedback control system in which the feedback information
is carried on a single bit. A modification of this scheme called MECN was proposed,
where the marking information
is carried using 2 bits. MECN conveys more accurate feedback about the network congestion to
the source than the current 1-bit ECN. The TCP source reaction was modified so that it
takes advantage of the extra information about congestion and adapts faster to the changing congestion
scenario leading to a smoother decrease in the sending rates of the sources upon congestion detection and
consequently resulting in an increase in the router's throughput. A linearized fluid flow model
already developed for ECN is extended to our case. Using control theoretic tools
we justify the performance obtained in using the MECN scheme and give guidelines for
optimizing its parameters. We use ns simulations to illustrate
the performance improvement in terms of higher throughput and a lower level of oscillations in the queue.
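The smoother source reaction enabled by 2-bit feedback can be illustrated as a graded back-off: the finer the congestion signal, the gentler the decrease can be. The decrease factors below are assumptions for illustration, not the values used by MECN.

```python
# Illustrative source reaction to a 2-bit (four-level) congestion mark.
def react(cwnd, mark):
    factors = {0: 1.0,    # no congestion: no decrease
               1: 0.85,   # mild congestion: gentle back-off
               2: 0.7,    # moderate congestion
               3: 0.5}    # severe congestion: classic halving
    return max(1.0, cwnd * factors[mark])
```

With 1-bit ECN only the 0 and 3 reactions exist, so every congestion event halves the rate; the intermediate levels are what produce the smoother decrease and higher throughput described above.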
Models are developed to analyze the throughput of ARQ protocols, such as Go Back N and Selective Repeat, and protocols without ARQ. Forward error correction is added to these models to study the interactions between these mechanisms.
In systems where sending FEC has a negligible effect on the channel loss probability, the goodput of streams increases. Reducing the data throughput and including error correction packets, thus keeping the data rate perceived by the channel constant, has advantages in the Go Back N protocol when the channel loss rate is above a certain range. Furthermore, these models form a starting point from which to study more complicated models such as TCP.
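Models of this kind typically start from the standard textbook efficiencies for the two ARQ schemes, given a per-packet loss probability p and window (pipe) size N: a Selective Repeat sender retransmits only the lost packet, while Go Back N wastes roughly a whole window per loss.

```python
# Classic ARQ efficiency formulas for an independent-loss channel.
def selective_repeat_efficiency(p):
    return 1.0 - p                       # only the lost packet is resent

def go_back_n_efficiency(p, N):
    # Each failure costs ~N slots, so E[slots per packet]
    # = 1 + N * p / (1 - p) = (1 - p + N*p) / (1 - p).
    return (1.0 - p) / (1.0 - p + N * p)
```

These are baselines, not the paper's full models, which add FEC and study the interaction between coding overhead and retransmission behaviour.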
It has been widely accepted that auctioning, the pricing approach with minimal information requirements, is a proper tool for managing scarce network resources. Previous work focuses on the Vickrey auction, which is incentive-compatible in classic auction theory. This paper begins by discussing the shortcomings of the most representative auction-based mechanisms. It then proposes a new method called the uniform-price auction (UPA), which has the simplest auction rule, and proves its incentive compatibility in the network environment. Finally, the basic model is extended to support applications that require minimum bandwidth guarantees for a given time period by introducing a derivative market, completing a market mechanism for network resource allocation that is predictable, riskless, and simple for end users.
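The textbook uniform-price rule for a divisible resource can be sketched as follows: serve bids in price order until capacity runs out, and charge every winner the same clearing price (here taken as the highest rejected bid). This shows the basic UPA rule only, not the paper's extended mechanism with the derivative market; the bid format is illustrative.

```python
# Uniform-price auction for a divisible bandwidth capacity.
def uniform_price_auction(bids, capacity):
    """bids: list of (bidder, quantity, unit_price)."""
    ranked = sorted(bids, key=lambda b: -b[2])     # highest unit price first
    allocations, remaining = {}, capacity
    clearing_price = 0.0                           # stays 0 if demand < supply
    for bidder, qty, price in ranked:
        if remaining <= 0:
            clearing_price = price                 # first fully rejected bid sets the price
            break
        granted = min(qty, remaining)
        allocations[bidder] = granted
        remaining -= granted
    return allocations, clearing_price
```

Because every winner pays the same market-clearing price rather than their own bid, truthful bidding is not penalised, which is the intuition behind the incentive-compatibility claim.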
In this paper, we propose a new QoS mapping mechanism for video transport in a network using the DiffServ architecture. Video packet mapping is based on the QoS index, which represents the preference of each video flow in terms of loss and delay. For a given video application, the QoS mapping is then obtained by prioritizing certain packets according to their importance in the MPEG video stream and the desired end-to-end quality. To verify the efficiency of this mechanism, end-to-end system performance was evaluated by modeling and simulating MPEG-4 video transmission over a DiffServ domain. The results show that the proposed QoS mapping mechanism can take advantage of the DiffServ architecture, making enhanced end-to-end video quality possible according to user and application needs.
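One common way to prioritize packets by their importance in an MPEG stream is to map frame types to DiffServ drop precedences: I-frames (which other frames depend on for decoding) get the lowest drop precedence, B-frames the highest. The specific AF codepoints below are an assumption for illustration, not the paper's mapping.

```python
# Illustrative frame-importance -> DiffServ codepoint mapping.
DSCP_BY_FRAME = {
    "I": "AF41",   # reference frames: protect most (lowest drop precedence)
    "P": "AF42",   # depend on I-frames; medium drop precedence
    "B": "AF43",   # discardable first under congestion
}

def mark_packet(frame_type):
    return DSCP_BY_FRAME.get(frame_type, "BE")   # unknown types -> best effort
```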
In the recent past we have witnessed the migration of the Internet from a classic computer communication infrastructure to a platform for multimedia applications. Several new approaches have appeared and many protocols have been presented to accomplish this transformation. However, many challenges remain, especially in the provision of quality of service (QoS). We present and evaluate new techniques, and combinations of other well-known QoS mechanisms, to achieve end-to-end QoS for real-time applications. We propose a priority discipline for Differentiated Services architectures and evaluate it in detail for mixed traffic, including streams that use conventional protocols such as UDP or TCP and others targeted at real-time applications, such as RTP. Next, we present new brokering concepts at the boundary of the DS domain. We present two new dynamic Service Level Agreement methods that help us accomplish our end-to-end delay minimization goal; these new approaches reduce the connection and/or re-connection time expended by the server during the agreement process with the DS entry nodes. Finally, we include a QoS routing mechanism in our system and map its features to the DS-domain characteristics.
A multi-hop, temporary, autonomous system of mobile nodes with wireless transmitters and receivers, operating without the aid of a pre-established network infrastructure, is a self-organized network. In such an environment it may be necessary for one mobile node, in order to communicate with another, to depend on a chain of intermediate nodes forwarding packets, because of the limited propagation range of each mobile node. Conventional routing protocols cannot adapt well to such an environment. This paper proposes a distributed adaptive routing protocol for finding and maintaining the routes most likely to meet QoS requirements, based on two characteristics: average bit error rate and living time.
Keywords: link state, QoS routing, self-organized network.
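A route-ranking rule over the two characteristics the abstract names (average bit error rate and the living time of a route) might look like the following sketch. The scoring formula, weights, and scales are invented for illustration; the paper's protocol is not specified here.

```python
# Hypothetical route score combining link reliability and route lifetime.
def route_score(avg_ber, living_time_s, w_ber=0.5, w_life=0.5, max_life_s=60.0):
    reliability = 1.0 - min(1.0, avg_ber * 1e4)        # penalise error-prone links
    stability = min(1.0, living_time_s / max_life_s)   # prefer long-lived routes
    return w_ber * reliability + w_life * stability

def best_route(routes):
    """routes: list of (name, avg_ber, living_time_s)."""
    return max(routes, key=lambda r: route_score(r[1], r[2]))[0]
```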
In this paper, a new Active Hierarchical Label Switching Router (AHLSR) architecture for MPLS is proposed for MPEG-4-based mobile multimedia IP networks. The proposed architecture has two important modules, the AHLSR Controller (AHLSRC) and the AHLSR Protocol (AHLSRP). The AHLSR conducts layer-2 switching and layer-3 routing independently, and it supports multi-resolution functionality. Each VOL flow entering the AHLSR is segmented into several VOL sub-flows according to resolution, and the AHLSRC manages both dedicated and unused channels. The AHLSR can achieve better performance without wasting communication bandwidth in MPEG-4-based mobile multimedia communications.