The Next Generation of IP - Flow Routing

Dr. Lawrence G. Roberts

Founder, CTO Caspian Networks

SSGRR 2003S International Conference, L'Aquila, Italy, July 29, 2003

 

Abstract: For the last 33 years IP routers have not changed: they still support only "best effort" traffic. However, the bandwidth available to people has been increasing rapidly with the advent of broadband access. The result is that many new services are now desired that require far better QoS than "best effort" IP can support. Also, with broadband, the problem of controlling total usage and carrier expense has become important. Thus, it has become critical to improve both the delay performance and the control of bandwidth for IP service, much as was accomplished in ATM. In addition, if quality is to be maintained, high-bandwidth streaming services like video require call rejection rather than random discards. All these problems can be solved, with no change to TCP/IP, by routing flows rather than packets. This requires keeping some state information for the duration of the flow, but this information can be captured on the fly as the first packet goes by. This permits an IP flow router to achieve all the capabilities of an ATM switch plus some, but without the call setup delay and at a lower cost than a conventional IP router.

 

Summary of the IP QoS Issue

 

Problem: IP routers historically have never supported Quality of Service (QoS) because they look only at individual packets and keep no state information on individual flows. A flow can be defined as the stream of IP packets between a particular source IP address and port and a unique destination IP address and port, with all packets using the same protocol. An individual flow might be a voice call, video call, file transfer, or web access. For TCP flows, the network controls the flow rate by discarding packets. For UDP flows, the user controls the rate, but the introduction of delay variance or packet loss in the network can be a major problem. The problem then is to design an IP router that can support rate guarantees, delay guarantees, and the elimination of packet loss for flows of any size, while still maintaining high line and trunk utilization.
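
To make the flow definition concrete, here is a minimal sketch in Python; the field names and packet representation are illustrative assumptions, not an implementation:

    from collections import namedtuple

    # A flow is identified by the IP five-tuple: source address and port,
    # destination address and port, and protocol. Every packet carrying
    # the same five-tuple belongs to the same flow.
    FlowKey = namedtuple("FlowKey", "src_ip src_port dst_ip dst_port protocol")

    def flow_key(pkt):
        # Extract the five-tuple from a parsed header (hypothetical dict layout).
        return FlowKey(pkt["src_ip"], pkt["src_port"],
                       pkt["dst_ip"], pkt["dst_port"], pkt["protocol"])

    # Two packets of the same TCP transfer map to the same key.
    p = {"src_ip": "10.0.0.1", "src_port": 4242,
         "dst_ip": "10.0.0.2", "dst_port": 80, "protocol": "TCP"}
    assert flow_key(p) == flow_key(dict(p))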

 

Prior Work: ATM attempted to solve the QoS problem and much was learned during its design, but due to the long, complex software flow-state setup process, ATM could not be used for short data calls. Another approach being tested is to use MPLS with class of service (CoS) queues. MPLS, however, cannot support QoS for individual small flows, cannot provide delay guarantees, and cannot reject new flows to protect current flows from packet loss. Current IP and MPLS routers attack the QoS problem by underloading the network, but this increases the cost of IP by as much as 10:1. Instead of wasting network capacity, some recent researchers have looked at improving IP routers through the use of some flow state information (1, 2, 3, 4, 5, 6). This work has been largely theoretical but has concluded that using some flow state information inside an IP router could substantially improve throughput and QoS.

 

The Caspian Flow Router Design: This paper describes the design and results actually achieved in a real IP flow router product that can scale to extremely large capacity and can support millions of flows per port. Full ATM QoS capability can be achieved at very high levels of utilization, and many other benefits have resulted from flow state information. The important conclusion is that flow state can be incorporated into an IP router without modifying the IP protocol, providing all known QoS features without increasing the cost, while permitting utilization to be increased to over 80%.

 

 

History of the Current “Best Effort” IP Routers

 

When I started the ARPANET in 1969 (7) there was so little memory available on the original IMPs (4 K bytes for both program and data) that it was impossible to consider storing any information on each flow in the network. Thus, it was clear that "flow state" could not be used due to severe memory constraints. State information had historically been used in telephone switches, yet with the advent of packet data communication, I realized that the time required to set up the state information across a circuit path was too lengthy and required far too great an amount of computer resources to support short data transfers. In most cases the call setup for a data transfer (averaging 14 packets per transfer) would be more expensive than the actual data transfer. The original packet protocols, NCP on the ARPANET and X.25 in the early carrier networks (8), saved each packet on each hop until its receipt was acknowledged so that data could be transferred without loss or error. The packet flow forged its path across the network at full line speed, and only a short time was required to retransmit a packet in the event of an error over the last link. For the high-error-rate telephone lines used then, this was the fastest way to transfer a lossless file. During the original pre-Internet MIT-SDC network experiment I conducted in 1965 (9), the packet length was carefully chosen to minimize the total file transfer time including retransmissions. The Internet started with this packet retransmission approach, although the optimal packet size grew as the transmission facilities improved so as to continue to ensure the lowest file transfer time.

 

1983 to 2003: TCP/IP was fast emerging as the new standard router-to-router protocol due to its significant efficiencies in eliminating trunk-by-trunk packet storage and retransmission. This only became possible because the new fiber-optic transmission lines were error-free enough to make end-to-end packet retransmission optimal. However, given the continuing memory constraints, IP router designers continued the practice of each router forwarding packets with no memory of the flow. The user sends his packets into the network, and each router looks only at the destination and queues the packet for the 'best' output trunk currently selected for traffic to that destination. This scenario requires strict packet ordering within each flow so as to avoid reordering at the destination. To achieve this, all packets from all flows heading for a given location are sent on the same route, even if that route is overloaded. If an overload condition occurs, the result is to drop packets randomly from the queue. For TCP flows this causes TCP backoff and thereby reduces the load. So long as TCP traffic is the dominant traffic (which it historically has been), this would sufficiently control the trunk load and not require UDP traffic discard. However, this assumption is no longer valid. Whenever UDP traffic (voice, video, etc.) peaks to require more resources than the trunk capacity provides, then in the absence of flow-state information the only possible decision is to discard random UDP packets, thus destroying the integrity of all the voice or video UDP calls in progress. The clear result of this lack of flow-state information is that with conventional IP routers today, it remains impossible to achieve the same quality of service (QoS: delay and bandwidth guarantees) that is possible on TDM networks or on ATM networks.

 

The Problems with “Best Effort” IP Routers

 

1.      QoS in the Core: The concept that over-provisioning of IP trunks can permit "reasonable" QoS is true in the core, where trunk speed is many gigabits per second and there are millions of flows on each trunk. The law of large numbers works so long as the flows are small compared to the trunk size and the total traffic in the premium traffic class never exceeds the available trunk capacity. Delay variance should be very small for the highest priority class so long as it does not exceed the trunk capacity. However, the actual experience with voice over IP in the Internet core does not seem to be as good as this would suggest, even though non-Internet voice-only IP networks do far better. There are two probable reasons for this:

 

a.       Route Changes – IP routing, as it is now applied, selects new preferred routes for all routes in the general area of a trunk failure. Even if the route that a voice flow is using has no failure, a nearby failure can cause the voice route to be changed so as to re-optimize the network. Since this new route will not be the same distance as the old route, the voice packets can arrive out of order due to the route change. This causes a gap in the received voice. Since the Internet core has many routers and trunks, the frequency of route changes is far higher than it would be for a private network, and thus this problem is greatly magnified.

 

b.      Brief Overloads – Even though most core networks run at 25% load (US average), the only measurements made today are 5-minute averages (10). Within these 5-minute samples, the load varies greatly, in a self-similar way, relative to the longer-term variation of the traffic. Since the 5-minute peak traffic on any given trunk varies by up to 40% during the peak hour, one can expect an additional large variation within any given 5-minute period. Thus, even at 25% average load, the load on a trunk can peak for a second or so to over 100%, causing a brief backup even for the premium voice traffic.

 

UDP CAC: No matter what the traffic loading, one problem will always exist for core routers using "best effort" packet-by-packet processing: UDP flows like video and voice cannot be protected from random discards if the UDP traffic momentarily exceeds the trunk capacity. When this occurs, today's IP routers discard random packets from all of the UDP flows. This causes random holes in the UDP packet streams: snow in video, noise in voice, across all flows in progress (not just misbehaving or new traffic flows). If there is compression or encryption, then the packet loss creates even worse noise and quality loss in the ongoing UDP flows. This is unavoidable, since the router has no knowledge of flows and thus cannot stop new flows from overloading the trunk.

 

2.      Routing Restrictions: Another result of the lack of memory in the early routers was that the router could not afford to keep much information on the network topology or trunk utilization. Thus, the practice was adopted of calculating and circulating only the preferred next hop, not the total network topology and load information. This required all routers in each network domain to use the same algorithm (or sharply constrained algorithms) to compute the next hop so as to ensure that no loops occurred. Of course this led to very slow progress in routing algorithms. Near-equal-cost algorithms, which could provide substantial utilization improvements, were known from the start but still have not been deployed very widely. These early memory constraints led to a router design that became a religion thwarting new routing concepts, rather than what I had originally intended, which was to provide a starting point for network research. The "net-heads" felt any move toward using state information was an attack on their religion by the circuit-switching "bell-heads". And the "bell-heads" knew that true quality of service, including delay and bandwidth guarantees and call rejection, could never evolve out of IP routers as they were originally conceived and designed.

 

3.      Failure Recovery: When the Internet was small, there was no reason to consider how to recover from trunk failures in advance, because recomputing all paths in the network was so fast. This procedure now takes minutes and has not fundamentally changed, because there is no other routing information available except the preferred path. The result is that IP today recovers much too slowly, and it has been necessary to use SONET or optical switchover to a spare trunk (a costly practice) or MPLS fast reroute (slower than desired, and it forces the use of MPLS along with its high operating cost).

 

ATM: In an attempt to develop a packet-based service that would deliver true QoS, ATM emerged in the mid-1980s. However, two major mistakes were made early on:

 

1.      Round-trip, software-intensive call setup limited the call setup rate (calls per second) to about 3% of the rate needed for TCP/IP data, thus relegating all data traffic to be trunked (not switched). This then required packet routers at each node as well as an ATM switch. This duplication of equipment became a clearly losing proposition over time.

 

2.      Fixed packet size was chosen due to the fear that ASICs would not be able to support variable-size packets in hardware. The size was chosen small enough to keep the packetizing delay for voice down to a workable minimum (6 ms). Fixing the size, however, lost about 20% efficiency for IP data, which had packet sizes averaging 350 bytes (or greater) and short bursts of packets (10-14 packets per transfer). It also meant that the processing in PC software, due to the frequent interrupts, was too expensive; thus NIC cards with outboard processors were necessary for all computer interfaces. When compared to cheap, dumb Ethernet NIC cards, ATM NIC cards cost far too much. The introduction of cheap 100 Mbps Ethernet boards was the final deathblow for ATM at the desktop.

 

The conceptual battle between the IP and telco people was demonstrated when ATM adopted a round-trip flow-path setup similar to the concept used in SS7 by the telephone network. This decision became the downfall of ATM. It was in fact voted in by those who wanted the ATM signaling to be fully compatible with SS7. As the ATM Forum first looked at this issue of call setup, I proposed a very simple forward path setup that would not delay the first packet and would thereby permit short data traffic as well as voice traffic. This concept was soundly defeated, however, by those who felt that it would be incompatible with the migration of the telephone network and SS7 to ATM. The result was that ATM could only be used for voice, video, and trunks. These trunks would need to use IP routers to support short data flows like web traffic. Thus, as the volume of data traffic exceeded voice traffic in the year 2000, the use of ATM to trunk IP became a clearly wasteful proposition. The world quickly moved to the switch design that supported the majority of the traffic (IP), and people hoped that somehow IP could be improved to support the same quality of service as ATM. With voice now a clear minority of the traffic, the hope became that trunk over-provisioning techniques combined with static pre-engineering of the IP network topology would permit reasonable-quality voice.

 

ATM had in fact advanced the state of the art, developing important new robust techniques for specifying and managing QoS, including the extremely innovative ABR feedback control (11). These techniques proved that packet technology could achieve the QoS of the voice network, and they would clearly be important if the best of IP and ATM could somehow be merged.

 

DiffServ (CoS): The response of the router industry to the growing requests for QoS has been to add a single 6-bit field (the DiffServ mark) to each IP packet. This field could in theory specify 64 classes of service, or queues, that could be serviced with different priorities at the output. Since 64 marks have proved to be far too few to specify all the necessary QoS parameters uniquely (as ATM had previously done), DiffServ marks can only be used by joint agreement across a single trunk (with the exception of a few global characteristics). Additionally, any of these queues can fill up and overflow. When this occurs, all flows in that queue (for example, video) suffer random packet loss. For voice and video this practice is a disaster, as no one is able to receive quality video. Thus DiffServ permits what is called Class of Service (CoS), but this is not what those needing 'guaranteed service' for priority or premium voice, video, gaming, remote control, or other real-time services require. This problem occurs not only at the edge; clearly the same overload can occur anywhere in the core. Over-provisioning in the core hides the problem at great cost, but the core can still overload, resulting in premium packet loss at a lower frequency.

 

MPLS: When IP arrived at the point where no clear solution for IP QoS had emerged to address the above issues, MPLS was proposed as a hybrid circuit-packet merger. MPLS paths (Label Switched Paths, or LSPs) are set up with one of two software protocols (RSVP or LDP), both as complex and slow as ATM signaling. As a result:

 

1.      Nothing was gained in the ability to use these LSPs for individual flows due to the slow setup speed.

2.      It has no QoS advantages over IP with DiffServ, since the multiple tail-dropped queues result in the same random discard behavior.

3.      Its primary benefit is to provide a simple header to permit multiple protocols to be mixed in a single stream.

4.      Another claimed benefit has been the ability to route packets to the same destination over two or more routes, originally intended to permit better manual traffic management. The actual result is that this manual management has a high operational cost and no real gain in trunk utilization.

5.      The final claimed improvement was the ability to do a "fast reroute" of an MPLS path (100 ms), slower than SONET yet faster than IP reconvergence. This, however, resulted in a new dual standard (vendors could not agree on one). The same effect could be achieved with IP at far faster speeds if near-equal-path routing information were distributed, as has been planned in the TE extensions to OSPF and IS-IS. This alternative could be totally standard, receivable by any IP router.

 

Thus all current communications protocols either have major QoS failings (IP and MPLS) or cannot support fast data calls (ATM). Therefore no conventional switches or routers have been able to support the goal of the next generation: one converged network that can support high QoS and short data calls with fast error recovery and high availability.

 

 

 

 

FLOW ROUTING

 

IP is the only technology adopted by the masses of edge users (supported by cheap Ethernet interfaces with the flexibility, resiliency, and standards backing required) and the only technology that can support short data calls like Web calls at full line rate; it is therefore clearly the protocol that can continue to take us forward. The protocol itself does not limit the QoS, the speed of error recovery, the availability, or the scalability of the switch. Thus there is no reason to change TCP/IP in order to obtain the goal of a converged network. In fact, given the worldwide acceptance of IP, it would be unreasonable not to build a new network around IP. The problem has always been the actual routers' design, whose history of constrained memory shaped the initial design and implementation. For 30 years no one except Caspian Networks (12) has attempted to change the basic design. A sort of religion developed that it was heresy to have any state information in an IP router. This religion was fed by the design required for a conventional router, where, to keep packets in order, the only acceptable routing method was to pipeline the routing lookups using the fastest-access memory available, SRAM. This ensured the packets stayed in order, but since SRAM was vastly too expensive for state memory, the obvious conclusion in the past was that maintaining state was neither feasible nor economic.

 

However, maintaining state information is essential if one wants to use Connection Admission Control (CAC) on new UDP flows, guarantee the rate of a flow, guarantee the delay variance of a flow, route flows independently for load-balancing purposes, or recover immediately from trunk failures. All of these are essential if we are to achieve QoS similar to ATM or TDM over IP. There are then two problems to solve (a code sketch of both steps follows the list):

 

1.      How can the state information be created? The SS7, ATM, and RSVP-MPLS solution is a round-trip call setup in software that is clearly too slow for the new real-time networks. The alternative is to capture the state on the fly as the first packet traverses the network. This is a forward path setup and can be done in hardware at full line rate.

 

2.      How can the state be stored economically? First, one must switch from expensive SRAM to the new PC memory, DDRAM, which is 75 times less expensive. DDRAM has a sufficiently high data throughput, yet to maintain that throughput rate a DDRAM controller must be used, which introduces a 40-60 ns delay variance, far too high for a pipeline type of system for packet processing. However, once one accepts that flow state is to be collected, the basic flow ID of an IP packet (the five-tuple SA, DA, SP, DP, PR) can be hashed and the flow uniquely identified. Now parallel processing can be used, dramatically increasing the processing power available in a chip. Since the flows are identified, different flows can be assigned to different parallel processors, where no order needs to be maintained between divergent flows. Thus the full DDRAM memory capacity can be used for the route table, the state table, packet storage, and a large calendar table for achieving weighted fair queuing per flow. Thus, once one accepts using state, the memory cost drops so far that state is less expensive to process, not more expensive. Additionally, packet processing occurs on the first packet (header) only for a given flow (not all 14 packets as in conventional routers), saving processing resources on a flow-by-flow basis as well.
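
Both steps can be illustrated together. The following is a minimal Python sketch of the first-packet mechanism described above; the real router does this in hardware at line rate, and the table layout, hash function, and processor count here are assumptions made for illustration:

    import zlib

    NUM_PROCESSORS = 16   # illustrative count of parallel packet processors
    flow_table = {}       # stands in for the DDRAM-resident flow-state table

    def flow_hash(key):
        # Hash the five-tuple; in hardware this indexes the state table.
        # (A real design would also compare the full five-tuple stored in
        # the entry to resolve hash collisions; omitted here.)
        return zlib.crc32(repr(key).encode())

    def process_packet(key, route_lookup):
        # Forward path setup: state is captured on the fly from the first packet.
        h = flow_hash(key)
        state = flow_table.get(h)
        if state is None:
            # First packet of the flow: do the route lookup once and record
            # the result; subsequent packets of the flow skip this work.
            state = {"route": route_lookup(key), "packets": 0}
            flow_table[h] = state
        state["packets"] += 1
        # All packets of a flow hash to the same processor, so order is
        # preserved within a flow while distinct flows run in parallel.
        return state["route"], h % NUM_PROCESSORS

    route, cpu = process_packet(("10.0.0.1", 4242, "10.0.0.2", 80, "TCP"),
                                route_lookup=lambda k: "trunk-3")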


 

Comparison of a Packet Router and a Flow Router

 

(Figure: Packet Router)

(Figure: Flow Router)


Unique Characteristics of a Flow Router

 

Dynamic Load Balancing: Once one has recorded the state information on the ID, route, time of receipt, and rate of a flow, one can route all subsequent packets of the same flow on the same route. The route need not be the same route for all flows to a given destination, since each flow can now be kept in order by being kept on its own route. Thus, with near-equal-cost route information, one can distribute the load over all near-equal paths based on the current load information. It is important to continually measure the load on all ports so that dynamic load balancing can determine the best path. This load balancing eliminates the manual labor of MPLS route management and actually achieves a much higher trunk loading than is possible with static load balancing, due to the rapidly changing traffic load characteristics of IP traffic.
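
A toy model of this decision, assuming per-path load measurements are kept current (an illustration of the principle, not the actual Caspian algorithm):

    # Near-equal-cost paths to a destination, with measured load expressed
    # as a fraction of trunk capacity. New flows are pinned to the currently
    # least-loaded path; existing flows keep their recorded route, so packet
    # order within each flow is preserved.
    paths = {"path-A": 0.62, "path-B": 0.48, "path-C": 0.55}
    flow_routes = {}   # flow ID -> chosen path (the per-flow state)

    def route_flow(flow_id):
        if flow_id not in flow_routes:                      # first packet
            flow_routes[flow_id] = min(paths, key=paths.get)
        return flow_routes[flow_id]

    assert route_flow("flow-1") == "path-B"   # least loaded at arrival time
    paths["path-B"] = 0.90                    # the load shifts...
    assert route_flow("flow-1") == "path-B"   # ...but the flow stays put
    assert route_flow("flow-2") == "path-C"   # new flows take the new best path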

 

(Figure: Comparison of Protocols)


Fast Error Recovery: When a routing decision is made for a flow, an alternate, diverse route can also be determined. This is stored with the state information so that if a card or trunk failure occurs, the flow can instantly be rerouted in hardware over the new path. Again, this depends on having multiple near-equal-cost paths computed, but given this, each flow can be assigned the best diverse alternate based on its QoS, rate, and the current trunk loading. If a path failure occurs, the first packet in a flow to arrive at the broken point is turned around and returned to the source point. This tells the source that the path is bad, and that packet and all subsequent packets can be routed over the alternate path. When other flows encounter the break, they may have different alternate routes specified, since the actual trunk utilizations across the topology will have varied as different flows were set up. Thus no packets are lost after the break is discovered (10 ms typical), and the flows that were on that path are automatically redistributed over many alternative paths, distributing the load. Then, as these rerouted flows arrive at various output ports, if the total load on a port is too high, the TCP traffic will be automatically reduced by selective discard to adjust the total load to the given trunk capacity. The guaranteed traffic and UDP traffic will not be affected so long as there is sufficient TCP traffic to absorb the overload. In this way, a network can run at very high utilization until a failure occurs; then the lost capacity can be absorbed by the TCP traffic and the network can remain fully loaded without needing any spare trunks. The goal of any IP network provider is to recover all voice and video flows instantly with no impact and give all the excess capacity to TCP users; a spare trunk would just be lost TCP capacity except during the short intervals of trunk failures. Note also that flows on paths near the break can be left intact without moving them, unlike the current IP situation where many of these would be moved to rebalance the network at the cost of out-of-order packets. This is because today IP can only re-optimize the preferred routes to a destination, whereas flow routers can operate at a much finer grain and reroute actual discrete flows, not simply large aggregate pipes (tunnels) or groups of flows.
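
The failover step reduces to a constant-time swap in the flow state, as in this sketch (trunk names and the notification mechanism are assumed for illustration):

    # Each flow's state records its primary path and a precomputed diverse
    # alternate. On a trunk-failure notification, affected flows switch to
    # their alternates in one step, with no route recomputation; flows on
    # unaffected paths are left untouched.
    flows = {
        "voice-1": {"primary": "trunk-1", "alternate": "trunk-4"},
        "video-7": {"primary": "trunk-1", "alternate": "trunk-2"},
        "web-3":   {"primary": "trunk-5", "alternate": "trunk-2"},
    }

    def handle_trunk_failure(failed_trunk):
        for state in flows.values():
            if state["primary"] == failed_trunk:
                state["primary"] = state["alternate"]

    handle_trunk_failure("trunk-1")
    assert flows["voice-1"]["primary"] == "trunk-4"   # redistributed over
    assert flows["video-7"]["primary"] == "trunk-2"   # different alternates
    assert flows["web-3"]["primary"] == "trunk-5"     # untouched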

 


Guaranteed Bandwidth: One of the important capabilities of TDM, ATM, and Frame Relay has been the ability to offer actual bandwidth guarantees for individual flows like voice, or for groups of flows like a VPN, across the network. IP packet routers have no capability to offer individual flow guarantees, since they do not keep track of flows. Their only possibility for a guarantee is to guarantee the maximum rate of a whole class that has a separate queue; however, this limits IP packet routers to a small number of guarantees. The market for ATM and Frame Relay has shown that real guarantees can command much higher revenue. The method by which a flow router achieves a guarantee is to compare the packet arrival rate recorded in the state block to the guaranteed rate, and discard if the user exceeds the agreed rate by some burst tolerance (just like ATM). The user can burst up to the limit of the burst tolerance, but the rate is controlled on average. Since the rate is known, the output port (and all in-between bottlenecks) can record the current level of guaranteed traffic and reject (CAC) new guaranteed flows if the agreed capacity is in use.
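
The per-flow check behaves like ATM's leaky-bucket policing. A simplified sketch, with units and parameter names chosen for illustration:

    def conforms(state, now, pkt_bytes, rate_bps, burst_bytes):
        # Token-bucket style check of a flow against its guaranteed rate;
        # `state` holds the bucket fill level and the last arrival time.
        elapsed = now - state["last"]
        state["bucket"] = max(0.0, state["bucket"] - elapsed * rate_bps / 8)
        state["last"] = now
        if state["bucket"] + pkt_bytes > burst_bytes:
            return False          # beyond rate plus burst tolerance: discard
        state["bucket"] += pkt_bytes
        return True

    st = {"bucket": 0.0, "last": 0.0}
    # A 1500-byte packet every 10 ms is 1.2 Mb/s, within a 2 Mb/s guarantee.
    assert all(conforms(st, t * 0.01, 1500, rate_bps=2_000_000,
                        burst_bytes=9000) for t in range(100))

    # CAC on guaranteed flows: the output port tracks committed rate and
    # rejects new guarantees once the agreed capacity is in use.
    trunk_bps, committed_bps = 10_000_000, 0
    def admit_guaranteed(rate_bps):
        global committed_bps
        if committed_bps + rate_bps > trunk_bps:
            return False
        committed_bps += rate_bps
        return True

    assert admit_guaranteed(8_000_000) and not admit_guaranteed(4_000_000)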

 


Guarantees for Flow Groups: Once a flow is determined to be within its own rate limit, a flow router can check an aggregate group of flows and discard the flow (UDP) or the packet (TCP) to maintain a guarantee on the group of flows. This is important for VPNs across the network, or for controlling the load on an external link feeding a standard packet switching device (like an Ethernet switch) in front of the flow router. This way the load through the external switch can be controlled so as to ensure all discarding and QoS decisions are made in the flow router, thus extending the QoS functionality one or two stages beyond the flow router to the edge of the network without replacing all the edge equipment.
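
The group check simply nests one more test after the per-flow test, as in this sketch (the group names and limits are illustrative):

    # Hierarchical check: a flow must conform to its own limit and to the
    # aggregate limit of its group, e.g. a VPN or a downstream Ethernet
    # link that the flow router is protecting.
    group_rate = {"vpn-A": 0}              # current aggregate rate per group
    GROUP_LIMIT = {"vpn-A": 5_000_000}     # guaranteed ceiling, bits/s

    def accept_flow(flow_conforms, group, flow_bps):
        if not flow_conforms:
            return False                   # the flow itself is over its limit
        if group_rate[group] + flow_bps > GROUP_LIMIT[group]:
            return False                   # the group guarantee would be broken
        group_rate[group] += flow_bps
        return True

    assert accept_flow(True, "vpn-A", 2_000_000)
    assert accept_flow(True, "vpn-A", 2_000_000)
    assert not accept_flow(True, "vpn-A", 2_000_000)   # would exceed 5 Mb/s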

 


Maximum Rate Traffic (UDP) CAC Control: For UDP traffic where the rate is not known, but the flows are video- or voice-type calls where random packet loss is not acceptable, the actual level of UDP traffic can be measured, and if a new UDP flow arrives when the measured level is at or above a specified percentage of the trunk capacity, then the new flow can be rejected by discarding its first packet. The unique capability of a flow router is to know which packets belong to ongoing flows (already in process) and which are new flows. With variable-rate compressed video, the total UDP capacity for all the existing flows will vary, but if the UDP percentage is set correctly, the rejection of new flows will keep the total UDP under the trunk capacity and thus prevent any packet loss. This capability is one of the strongest benefits of flow-based routing and is not available in conventional IP routers.
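
A sketch of the admission test (the 60% cap and the trunk size are assumed policy values, for illustration only):

    UDP_CAP_FRACTION = 0.6     # assumed policy: UDP may use 60% of the trunk
    TRUNK_BPS = 100_000_000

    def admit_udp(is_first_packet_of_new_flow, measured_udp_bps):
        # Reject a *new* UDP flow, by discarding its first packet, when the
        # measured UDP load is at the cap; ongoing flows are never touched.
        if not is_first_packet_of_new_flow:
            return True
        return measured_udp_bps < UDP_CAP_FRACTION * TRUNK_BPS

    assert admit_udp(True, measured_udp_bps=50_000_000)      # room left: admit
    assert not admit_udp(True, measured_udp_bps=61_000_000)  # at cap: reject
    assert admit_udp(False, measured_udp_bps=61_000_000)     # ongoing: keep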

 

TCP Slow Start Improvement: Today, with random discard (RED or WRED), it takes TCP longer than necessary to get up to the maximum rate that the network can support. This is because discards occur randomly while the flow is increasing its rate, causing TCP to drop back to half speed before continuing to increase its rate. This substantially increases the time it takes to transfer a web page. With flow routers, no packets are discarded until a TCP flow has reached the maximum rate the network can support and exceeded this rate by the burst tolerance. Then packets will be discarded so that TCP will cleanly oscillate around the rate the network has determined it can support for all TCP flows in the same class. This should decrease the time to retrieve a web page.
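
In contrast to RED's random early drops, the discard rule can be sketched as follows (names and values are illustrative):

    def should_discard(flow_bps, supported_bps, burst_tolerance_bps):
        # Discard only once the flow exceeds the rate the network can support
        # plus the burst tolerance; never during the slow-start ramp-up.
        return flow_bps > supported_bps + burst_tolerance_bps

    # During slow start the flow ramps up untouched...
    assert not should_discard(400_000, supported_bps=1_000_000,
                              burst_tolerance_bps=100_000)
    # ...and a discard occurs only above the sustainable rate plus tolerance,
    # so TCP oscillates cleanly around the rate the network can support.
    assert should_discard(1_200_000, supported_bps=1_000_000,
                          burst_tolerance_bps=100_000)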

 


TCP Fairness and Multiple SLAs: In addition to delaying slow start, today's IP routers using random discard allow different flows on the same path to operate at different rates. This is because there is no measurement of the flow rate, and flows can end up at random rates due to the random discards. Higher rates attract more discards, so there is some convergence, but there is still too much variation for a carrier to offer different service levels (SLAs). With flow routing, however, the flows can all be controlled to the same rate, and various weights can be applied so that one user could have two or four times the bandwidth of another user who paid less (even if both are TCP traffic types). This functionality is the same as ABR permitted in ATM, and it has been studied extensively in that literature (11).
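
The weighting itself is simple arithmetic, sketched below (the tier names and weights are examples):

    def sla_rates(available_bps, weights):
        # Split the available capacity among flows in proportion to the SLA
        # weight each user paid for (analogous to ATM ABR explicit rate).
        total = sum(weights.values())
        return {flow: available_bps * w / total for flow, w in weights.items()}

    rates = sla_rates(14_000_000, {"gold": 4, "silver": 2, "basic": 1})
    assert rates["gold"] == 8_000_000      # 4x the basic tier
    assert rates["silver"] == 4_000_000    # 2x the basic tier
    assert rates["basic"] == 2_000_000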

 


High Trunk and Fabric Utilization: As a result of being able to dynamically balance the traffic load over multiple paths, if a flow router measures the load on these paths or ports at frequent intervals (50 ms), it can achieve over 80% utilization, even with Internet-type IP traffic, on all those paths and ports. Thus, assuming sufficient TCP demand, a flow router network could maintain a constant 80% average load on all the router's ports, trunks, and internal fabric. This is possible because, when IP traffic can be dynamically distributed over many routes in very small increments (individual flows) and controlled if the total is too high (using TCP discard), the load on each path or port can be tightly held within about ±7%. Compare the following:

 

A standard IP packet router cannot dynamically balance any load, because any time the routing to a destination is changed, all the flows can get packets out of order. For UDP this is very bad; for TCP it costs significant time. Thus, routing changes cannot be done at the fast (50 ms) intervals required to achieve ±7% utilization. Any increase in the measurement interval will have a linear impact on the utilization standard deviation. Thus even if routing could be changed every 1/2 second, the utilization standard deviation would be 70%, equivalent to no control at all. Since route changes without flow identification are so harmful to QoS due to packet reordering, it is not feasible to achieve any gain in dynamic load balancing with a best-effort IP packet router. From studying traffic and trunk utilization data from many US Internet carriers, I have observed that there is at least a 50% gain in trunk utilization that could be achieved through dynamic load balancing (10).
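
Under the linear-scaling assumption stated above, anchored at ±7% for a 50 ms control loop, the comparison works out as follows:

    def utilization_stddev_pct(interval_ms, base_ms=50, base_pct=7):
        # Assumption from the text: the utilization standard deviation grows
        # linearly with the measurement/rebalancing interval.
        return base_pct * interval_ms / base_ms

    assert utilization_stddev_pct(50) == 7     # flow router: 50 ms control loop
    assert utilization_stddev_pct(500) == 70   # 1/2-second rerouting: no control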

 

A flow router, however, can route each flow any way it wants without any out-of-order problem. Thus these devices can continuously change their routing to optimize the network's resources. This gain can be very significant, and any such gain correspondingly reduces the total network cost.

 

Mixing Packet and Flow IP Routers

 

Flow routers and packet routers interoperate fully, since they both use the same external protocols (IP or MPLS). Many of the cost improvements can be achieved locally as flow routers are mixed into a network, while still preserving the investment in traditional router technology (and limiting new investments to strategic parts of the network). Also, many of the savings from traffic control, and the SLAs that create new revenue opportunities, can be achieved locally. However, in order to ensure QoS improvements like bandwidth and delay variation guarantees, or the ability to apply CAC to UDP flows, a flow router path across the network must exist. The high-QoS part of the network need not, however, have more capacity than is required by the premium real-time streaming traffic, like video and voice. Thus there are many mixed-network possibilities where flow routers only need to be added as premium traffic demands their use.

 


 


Conclusion

 

IP flow routers can significantly improve the QoS and utilization of an IP network over the use of conventional IP packet routers. As a result, flow routers can make possible the flawless support of real-time streaming media like video and voice using the standard IP protocol. In addition to managing the QoS of individual flows, they can manage groups of flows so as to permit the use of inexpensive edge devices like Ethernet switches for the final leg of an end-to-end QoS path through a network. They can also permit many new service levels for higher revenue. And lastly, they can permit a large cost reduction in the network through dynamic load balancing and network convergence. Since they are 100% compatible with today's IP or MPLS packet routers, they can be mixed into a network, thereby not requiring a full network replacement as voice and video support is introduced.


 

References

1. IP traffic and QoS control: the need for a flow-aware architecture, T. Bonald, S. Oueslati-Boulahia and J.W. Roberts, France Telecom R&D. Presented at World Telecommunications Congress, September 2002 – Paris, France

2. Flow-aware admission control for a commercially viable Internet, T. Bonald, S. Oueslati-Boulahia and J.W. Roberts, France Telecom R&D. Presented at EURESCOM Summit 2002, Heidelberg, Germany, October 2002

3. A flow-based model for Internet backbone traffic, C. Barakat, P. Thiran, G. Iannaccone, C. Diot, P. Owezarski. Proceeding of IMW 2002, ACM Press, Marseille France, November 2002

4. The performance of circuit switching in the Internet, Pablo Molinero-Fernández and Nick McKeown, Stanford University, OSA Journal of Optical Networking, Vol. 2, No. 4, pp. 83-96, March 2003

5. MXQ: A network mechanism for controlling misbehaving flows in best effort networks, T. Shimizu, M. Nabeshima, I. Yamasaki, and T. Kurimoto, IEICE Trans. Inf. & Syst., Vol. E84-D, No. 5, May 2001

 

6. Stateless Core: A scalable approach for QoS in the Internet, I. Stoica, Ph.D. Thesis, CMU, 2000

 

7. Computer Network Development to Achieve Resource Sharing, L. G. Roberts and B. Wessler, Proceedings of the Spring Joint Computer Conference, Atlantic City, New Jersey, May 1970

 

8. The Evolution of Packet Switching, L. G. Roberts, Proceedings of the IEEE, Vol. 66, No. 11, November 1978

 

9. Toward a Cooperative Network of Time-Shared Computers, L. G. Roberts with Thomas Marill, Proceedings of the Fall Joint Computer Conference, San Francisco, California, November 1966

10. Internet Traffic Measurement 2000 and 2001, Lawrence G. Roberts, Jan 2001

11. Explicit Rate Flow Control: A 100-Fold Improvement over TCP, L. G. Roberts, April 29, 1997

12. www.caspiannetworks.com