Wednesday, November 16, 2011

notes: Compression

 

- "Optimizing links for maximum payload throughput" is exam speak for compression.
- If files are already compressed or in a compressed format, compressing them again is not recommended.


TCP Header Compression

- A mechanism that compresses the TCP header in a data packet before the packet is transmitted.
- Configured with "ip tcp header-compression"

STAC Compression

- A lossless data compression mechanism using the Stacker (LZS) algorithm.
- Configured under the interface with "compress stac".

Predictor
- Uses the RAND compression algorithm.
- Configured using "compress predictor" along with PPP encapsulation.

RTP Header Compression

- Allows the combined 40-byte IP/UDP/RTP header to be reduced to 2-5 bytes.
- Best used on low-speed links for real-time traffic with small data payloads, like VoIP.
- To configure on serial link use "ip rtp header-compression"
- To enable per VC, use the command

frame-relay map ip {IP} {DLCI} [broadcast] rtp header-compression

-  The 'passive' keyword means the router will not send compressed RTP headers unless compressed RTP headers were received.

sh ip tcp header-compression - Shows header compression statistics
sh frame-relay map - Shows the configured header compression per DLCI

interface se0/0
compress stac
- Configures lossless data compression mechanism

interface se1/0
encap ppp
- Required for predictor
compress predictor - Enables the RAND algorithm compression
ip tcp header-compression - Enables TCP header compression
ip rtp header-compression [passive] [periodic-refresh]
- Enables RTP header compression
- [passive] Compress for destinations sending compressed RTP headers
- [periodic-refresh]: Send periodic refresh packets

note:  Only one side of the link uses the passive keyword. If both sides are set to be passive, cRTP does not occur because neither side of the link ever sends compressed headers.

interface s0/1.1
frame-relay map ip {ip} {dlci} rtp header-compression [connections] [passive] [periodic-refresh]
- Enables RTP header compression per VC
- [connections] Max number of compressed RTP connections (DEF=256)
- [passive] Compress for destinations sending compressed RTP headers
- [periodic-refresh]: Send periodic refresh packets

frame-relay ip tcp header-compression [passive]: Enables TCP header compression on a Frame Relay interface

Multilink PPP

To reduce the latency experienced by a large packet exiting an interface (that is, serialization delay), Multilink PPP (MLP) can be used in a PPP environment, and FRF.12 can be used in a VoIP over Frame Relay environment. First, consider MLP.  Multilink PPP, by default, fragments traffic. This characteristic can be leveraged for QoS purposes, and MLP can be run even over a single link. The MLP configuration is performed under a virtual multilink interface, and then one or more physical interfaces can be assigned to the multilink group. The physical interface does not have an IP address assigned. Instead, the virtual multilink interface has an IP address assigned. For QoS purposes, a single interface is typically assigned as the sole member of the multilink group. Following is the syntax to configure MLP:

1.  interface multilink [multilink_interface_number]: Creates a virtual multilink interface
2.  ip address ip_address subnet_mask: Assigns an IP address to the virtual multilink interface
3.  ppp multilink: Enables Multilink PPP on the multilink interface

4.  ppp multilink interleave: Allows small packets to be interleaved among the fragments of larger packets
5.  ppp fragment-delay [serialization_delay]: Specifies the maximum serialization delay (in ms) for a fragment exiting the interface
6.  encapsulation ppp: Enables PPP encapsulation on the physical interface
7. no ip address: Removes the IP address from the physical interface
8.  multilink-group [multilink_interface_number]: Associates the physical interface with the multilink group

Example: to achieve a serialization delay of 10 ms

R1(config)# interface multilink 1
R1(config-if)# ip address 10.1.1.1 255.255.255.0
R1(config-if)# ppp multilink
R1(config-if)# ppp multilink interleave
R1(config-if)# ppp fragment-delay 10
R1(config-if)# exit
R1(config)# interface serial 0/0
R1(config-if)# encapsulation ppp
R1(config-if)# no ip address
R1(config-if)# multilink-group 1


R2(config)# interface multilink 1
R2(config-if)# ip address 10.1.1.2 255.255.255.0
R2(config-if)# ppp multilink
R2(config-if)# ppp multilink interleave
R2(config-if)# ppp fragment-delay 10
R2(config-if)# exit
R2(config)# interface serial 0/0
R2(config-if)# encapsulation ppp
R2(config-if)# no ip address
R2(config-if)# multilink-group 1

LFI can also be performed on a Frame Relay link using FRF.12. The configuration for FRF.12 is based on an FRTS configuration.  Only one additional command is given, in map-class configuration mode, to enable FRF.12. The syntax for that command
is as follows:


Router(config-map-class)#frame-relay fragment fragment-size: Specifies the size of the fragments

As a rule of thumb, the packet size should be set to the line speed divided by 800. For example, if the line speed is 64 kbps, the fragment size can be calculated as follows:

fragment size = 64,000 / 800 = 80 bytes

This rule of thumb specifies a fragment size (80 bytes) that creates a serialization delay of 10 ms.
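As a sketch (Python used here purely for illustration), the rule of thumb is just serialization delay applied to the line rate:

```python
def frf12_fragment_size(line_speed_bps, target_delay_ms=10):
    """Fragment size (bytes) that serializes in target_delay_ms on a
    link of the given speed; with the default 10 ms this reduces to
    line_speed / 800."""
    bytes_per_second = line_speed_bps / 8
    return int(bytes_per_second * target_delay_ms / 1000)

print(frf12_fragment_size(64000))   # 80 bytes on a 64-kbps link
```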

The following example shows an FRF.12 configuration to create a serialization delay of 10 ms on a link clocked at a rate of 64 kbps. Because FRF.12 is configured as a part of FRTS, CIR and Bc values are also specified

R1(config)# map-class frame-relay FRF12-EXAMPLE
R1(config-map-class)# frame-relay cir 64000
R1(config-map-class)# frame-relay bc 640
R1(config-map-class)# frame-relay fragment 80
R1(config-map-class)# exit
R1(config)# interface serial 0/1
R1(config-if)# frame-relay traffic-shaping
R1(config-if)# interface serial 0/1.1 point-to-point
R1(config-subif)# frame-relay interface-dlci 101
R1(config-fr-dlci)# class FRF12-EXAMPLE

notes: QoS in Switches

 

- COS (Class of Service) is also known as the 802.1p priority bits.
- QOS must be enabled on a switch with "mls qos".
- With "mls qos" OFF, the switch does not modify any markings.
- With "mls qos" ON, the switch clears all COS, ip-prec, and DSCP markings, unless trust is configured.

Classification:

- If QOS is disabled globally no classification will occur.
- To trust the incoming marking type use the command "mls qos trust"
- For IP-traffic, ip-precedence or DSCP can be trusted.
- For trunk links COS can be trusted

- If a packet has no incoming COS, or it arrives on an access link, a default value of zero is applied.
- This default value can be changed with "mls qos cos".
- For known devices, conditional trusting can be configured,
  e.g. only trust the CoS if a cisco-phone is plugged in.
   Configured with: "mls qos trust device cisco-phone"
- Alternatively, a default COS classification can be forced on all incoming traffic, regardless of existing markings.

Example how to override all interface traffic with COS-3:
interface fa0/0
mls qos cos override
mls qos cos 3

 

Ingress Queuing

- The 3560 packet scheduler uses a method called shared round-robin (SRR) to control the rate at which packets are sent.
- On ingress queues, SRR shares bandwidth between the two queues according to the configured weights.
- The weights are relative rather than absolute, i.e. percentage-based rather than fixed bandwidth.
- First, specify the ratios by which to divide the ingress buffers between the two queues.
- Configured with the command

mls qos srr-queue input buffers {percentage1} {percentage2}

- Then configure the bandwidth percentage for each queue, which sets the frequency at which the scheduler takes packets from the two buffers (even though the command says bandwidth it does NOT represent any bit rate)


- Configured with

mls qos srr-queue input bandwidth {weight1} {weight2}


- These two commands determine how much data the switch can buffer before it begins dropping packets.
- The weight parameter defines the percentage of the link’s bandwidth that can be consumed by the priority queue when there is competing traffic in the non-priority queue.

Creating a Priority Queue

- Either of the two ingress queues can be configured as a priority queue
- The priority queue is configured with

mls qos srr-queue input priority-queue {queue-number} bandwidth {weight}

For example, consider a case with queue 2 as the priority queue, with a configured bandwidth of 20 percent. If frames have been coming in only queue 1 for a while and then some frames arrive in queue 2, the scheduler would finish servicing the current frame from queue 1 but then immediately start servicing queue 2. It would take frames from queue 2 up to the bandwidth configured with the weight command. It would then share the remaining bandwidth between the two queues.

sw2(config)# mls qos srr-queue input cos-map queue 2 6
!
sw2(config)# mls qos srr-queue input priority-queue 2 bandwidth 20

Example:

!Configure the buffers for input interface queues 1 and 2
sw2(config)# mls qos srr-queue input buffers 80 20
!
!Configure the relative queue weights
sw2(config)# mls qos srr-queue input bandwidth 3 1
!
!Configure the two WTD thresholds for queue 1, and map traffic to each
!threshold based on its CoS value
sw2(config)# mls qos srr-queue input threshold 1 40 60
sw2(config)# mls qos srr-queue input cos-map threshold 1 0 1 2 3
sw2(config)# mls qos srr-queue input cos-map threshold 2 4 5
sw2(config)# mls qos srr-queue input cos-map threshold 3 6 7
!
!Verify the configuration
sw2# show mls qos input-queue
Queue :       1     2
----------------------------------------------
buffers :     80    20
bandwidth :    3     1
priority :     0    20
threshold1:   40   100
threshold2:   60   100

The switch will place traffic with CoS values of 5 and 6 into queue 2, which is a priority queue. It will take traffic from the priority queue based on its weight configured in the priority-queue bandwidth statement. It will then divide traffic between queues 1 and 2 based on the relative weights configured in the input bandwidth statement. Traffic in queue 1 has WTD thresholds of 40, 60, and 100 percent. Traffic with CoS values 0–3 falls under threshold 1, with a WTD drop percent of 40. Traffic with CoS values 4 and 5 falls under threshold 2, with a WTD drop percent of 60. CoS values 6 and 7 fall under threshold 3, which has a nonconfigurable drop percent of 100.

Egress SRR syntax (covered in the next section):
srr-queue bandwidth share weight1 weight2 weight3 weight4
srr-queue bandwidth shape weight1 weight2 weight3 weight4

 

Egress Queueing


-  Adds a shaping feature that slows down egress traffic, which is useful for sub-rate Ethernet service.
- There are four egress queues per interface.
-  Queue number one can be configured as a priority/expedite queue.

- The egress queue is determined indirectly by the internal DSCP: the internal DSCP is mapped through the DSCP-to-COS map,
  and the resulting COS is then mapped through the COS-to-queue map.
  - SRR on egress queues can be configured for shared mode or for shape mode.

Both shared and shaped mode scheduling attempt to service the queues in proportion to their configured bandwidth when more than one queue holds frames.

Both shared and shaped mode schedulers service the PQ as soon as possible if at first the PQ is empty but then frames arrive in the PQ.

Both shared and shaped mode schedulers prevent the PQ from exceeding its configured bandwidth when all the other queues have frames waiting to be sent.

The only difference in operation is that the queues in shaped mode never exceed their configured queue bandwidth setting.

There are four queues per interface rather than two, but you can configure which CoS and DSCP values are mapped to those queues, the relative weight of each queue, and the drop thresholds of each. You can configure a priority queue, but it must be queue 1. WTD is used for the queues, and thresholds can be configured as with ingress queueing. One difference between the two is that many of the egress commands are given at the interface, whereas the ingress commands were global.

example:

sw2(config)# mls qos queue-set output 1 buffers 40 20 30 10
!
sw2(config)# mls qos queue-set output 1 threshold 2 40 60 100 100
!
sw2(config)# int fa 0/2
sw2(config-if)# queue-set 1
sw2(config-if)# srr-queue bandwidth share 10 10 1 1
sw2(config-if)# srr-queue bandwidth shape 10 0 20 20
sw2(config-if)# priority-queue out
!
sw2# show mls qos int fa 0/2 queueing
FastEthernet0/2
Egress Priority Queue : enabled
Shaped queue weights (absolute) : 10 0 20 20
Shared queue weights : 10 10 1 1
The port bandwidth limit : 75 (Operational Bandwidth:75.0)
The port is mapped to qset : 1

Buffers and the WTD thresholds for one of the queues are changed for queue-set 1. Queue-set 1 is assigned to an interface, which then has sharing configured for queue 2 with a new command: srr-queue bandwidth share weight1 weight2 weight3 weight4. Shaping is configured for queues 3 and 4 with the similar command srr-queue bandwidth shape weight1 weight2 weight3 weight4. Queue 1 is configured as a priority queue. When you configure the priority queue, the switch ignores any bandwidth values assigned to the priority queue in the share or shape commands. The 3560 also gives the ability to rate limit the interface bandwidth with the command srr-queue bandwidth limit percent. In this example, the interface is limited by default to using 75 percent of its bandwidth.

Congestion Avoidance

- The 3560 uses WTD for congestion avoidance.
- WTD creates three thresholds per queue into which traffic can be divided, based on COS value.
- Tail drop is used when the associated queue reaches a particular percentage.

For example, a queue can be configured so that it drops traffic with COS values of 0–3 when the queue reaches 40 percent full, then drops traffic with COS 4 and 5 at 60 percent full, and finally drops COS 6 and 7 traffic only when the queue is 100 percent full.

WTD is configurable separately for all six queues in the 3560 (two ingress, four egress)

- Allocates buffers to each queue-set ID

mls qos queue-set output {set-id} buffers {a1}{a2}{a3}{a4}

- Configures the WTD thresholds, guarantee the availability of buffers

mls qos queue-set output {set-id} threshold {q-id} {drop-1} {drop-2} {reserve} {maximum}

- Maps the port to a queue-set

interface fa0/7
queue-set {set-id}

Traffic Policing

- Can be applied both input and output queues.

Two types
1.  Individual
    - Applies to a single class-map, as with standard IOS policing.

2. Aggregate
   - Applies to multiple class-maps in a single policy-map.
   - Classes X,Y, and Z cannot exceed 640k as an aggregate.
   - Is Applied with the global command

mls qos aggregate-policer {name} {rate-bps} {burst-bytes} exceed-action {drop | policed-dscp-transmit}

- Applies the aggregate-policer to the different classes

police aggregate {name}

- A unique exceed action in the policer can be used to remark the DSCP according to the policed-DSCP map
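Putting the pieces together, a minimal sketch (policer, class, and interface names are made up; a 640k aggregate with an 8000-byte burst shared by classes X, Y, and Z as in the note above):

```
mls qos aggregate-policer AGG-640K 640000 8000 exceed-action drop
!
policy-map SHARED-LIMIT
class X
police aggregate AGG-640K
class Y
police aggregate AGG-640K
class Z
police aggregate AGG-640K
!
interface fa0/5
service-policy input SHARED-LIMIT
```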

 

Show commands:

sh mls qos - Displays global QOS configuration information
sh mls qos maps dscp-mutation [name] - Displays the current DSCP mapping entries.
sh mls qos maps dscp-cos - Displays the DSCP-to-COS map
sh mls qos interface [buffers|queueing] - Displays the QOS information at the port level
sh mls qos input-queue - Displays the settings for the ingress queues
sh mls qos aggregate-policer - Displays the QOS aggregate policer configuration

 

mls qos - Enables switching QOS globally
interface fa0/1
mls qos vlan-based
- Enables VLAN-based QOS on the port

interface fa0/2
mls qos cos {cos}
- Configures the default COS value for untagged packets
mls qos cos override - Enforces the COS for all packets entering the interface

interface fa0/3
mls qos trust {cos|dscp|ip-prec}
- Enables trusting the incoming packet based on its marking
no mls qos rewrite ip dscp - Enables DSCP transparency. The DSCP field in the packet is left unmodified

interface fa0/4
mls qos trust device cisco-phone
- Specifies that the Cisco IP Phone is a trusted device
mls qos map dscp-cos {dscp list} to {cos} - Modifies the DSCP-to-COS map

notes: AutoQoS

 

- Autoqos automates the deployment of quality of service (QOS) policies.
- Any existing QOS policies must be removed before the autoqos-generated polices are applied.
- Autoqos is supported only on the IP Plus image for low-end platforms.
- Ensure that autoqos is enabled on both sides of the network link.
- The bandwidth on both sides of the link must be the same; otherwise a fragmentation size mismatch might occur, preventing the connection from being established.
- The autoqos feature cannot be configured on a frame-relay DLCI if a map class is attached to the DLCI.
- For frame-relay networks, fragmentation is configured using a delay of 10 milliseconds (ms) and a minimum fragment size of 60 bytes.

- Autoqos pre-requisites:
> CEF must be enabled on the interface/PVC.
> The interfaces must have IP addresses configured.
> The amount of bandwidth must be specified by using the "bandwidth" command.

- The bandwidth of the serial interface determines the speed of the link.
- The speed of the link in turn determines the configurations generated by the autoqos.
- Autoqos uses the interface bandwidth allocated at the time it is configured; bandwidth changes made after autoqos is executed are not reflected in the generated policy.

Autoqos for the enterprise feature consists of two configuration phases:
1.  Auto-Discovery (data collection)
>> Uses NBAR-based protocol discovery to detect the applications on the network and performs statistical analysis on the network traffic.

2.  Autoqos template generation and installation
>> This phase generates templates from the data collected during the Auto-Discovery phase and installs the templates.

- Class definitions for the enterprise autoqos:

(image: autoqos class-definition table)
- The "auto discovery qos" command is not supported on sub-interfaces.
- The "auto qos voip" command is not supported on sub-interfaces.

Autoqos — VoIP
> Same as above, previous QOS policies have to be removed before running the autoqos-VoIP macro.
> All other requirements must be met too.
> The VoIP feature helps the provisioning of QoS for Voice over IP (VoIP) traffic.

Commands:

- Views the auto-discovery phase in progress, or displays the results of the data collected

sh auto discovery qos [interface]

- Displays the autoqos templates created for a specific interface, or for all interfaces

sh auto qos [interface]

interface s0/2
bandwidth {kbps}
- Optional but always recommended
auto discovery qos [trust] - Starts the auto-discovery phase
- [trust] Indicates that the DSCP markings of packets are trusted
no auto discovery qos - Stops the Auto-Discovery phase
auto qos - Generates the autoqos templates and installs them

interface s0/3
encapsulation frame
bandwidth {kbps}
frame-relay interface-dlci 100
auto qos voip [trust]
- Configures the autoqos — VoIP feature
- [trust] indicates that the DSCP markings of packets are trusted

notes: RSVP (Resource Reservation Protocol)

 

- RSVP on its own is just a reservation tool in the control plane; it still requires an external mechanism (such as a queueing tool) to enforce the reservations.
- Allows end-user applications to make bandwidth reservations inside the network.
- When using “ip rsvp bandwidth” on a sub-interfaces, it is also required to be configured on the main interface.
- When using multiple sub-interfaces with “ip rsvp bandwidth”, the main interface should be configured to be the sum of all sub-interfaces.

- WFQ is required for RSVP, but it gets disabled by default by traffic-shaping; re-enable it under the map-class:

map-class frame-relay FRTS
frame fair-queue

- Enables RSVP for IP on an interface

interface e0/0
ip rsvp bandwidth {interface-kbps} {single-flow-kbps}
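A sketch of the sub-interface rules above (interface numbers and kbps values are illustrative); the main interface is configured with the sum of its sub-interfaces:

```
interface s0/1
ip rsvp bandwidth 256 64
!
interface s0/1.1
ip rsvp bandwidth 128 64
!
interface s0/1.2
ip rsvp bandwidth 128 64
```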

notes: Unconditional Packet Discard

 

command:

class-map class1
match access-group 101
- References ACL-101
!
policy-map policy1
- UPD is just a fancy name for the 'DROP' action in a policy-map
class class1
drop
- Any traffic matching ACL-101 will be dropped
!
interface s2/0
service-policy output policy1
- Applied to the interface

Wednesday, November 9, 2011

notes: Policing

 

- Traffic-policing is designed to drop traffic in excess of the target rate, and enforce a max threshold of bandwidth.
- To accomplish this, a system of credits is used.
- Before a packet can be sent the amount of credits equaling the packet's size in bits must have been earned, like wages.
- Policing differs from shaping in that the router is allowed to borrow future credits, and in turn is permitted to go into a "debt" situation of having to pay back the credits.
- Policing can be applied to input or output traffic.
- Limits the rate of traffic on the interface.
- Policing is not a queueing mechanism: traffic is not buffered for later transmission; it is either sent or dropped.
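The credit system described above can be sketched as a token bucket (Python used purely for illustration; this is a minimal single-rate, two-colour policer, and it omits CAR's borrowing/debt behaviour; the class name is made up):

```python
class SingleRatePolicer:
    """Minimal single-rate, two-colour token-bucket policer sketch.
    Credits accrue at CIR bits per second up to a depth of Bc bytes;
    a packet conforms only if the bucket holds its full size in bits,
    otherwise it exceeds (drop or remark) -- nothing is buffered."""

    def __init__(self, cir_bps, bc_bytes):
        self.cir = cir_bps
        self.bc = bc_bytes * 8          # bucket depth in bits
        self.tokens = self.bc           # start with a full bucket
        self.last = 0.0

    def conform(self, pkt_bytes, now):
        # earn credits for the elapsed time, capped at the bucket depth
        self.tokens = min(self.bc, self.tokens + (now - self.last) * self.cir)
        self.last = now
        bits = pkt_bytes * 8
        if bits <= self.tokens:
            self.tokens -= bits
            return True                 # conform: transmit
        return False                    # exceed: drop (or remark)
```

A 1000-byte packet against an 8-kbps, 1000-byte-bucket policer conforms when the bucket is full, then the next packet is dropped until credits are earned again.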

Legacy "Rate-Limit" – CAR

- Uses a 2 rate policer.
- Legacy CAR statement supports the continue feature to have nested rate-limits.
- Similar to traffic shaping, changing the burst size determines how often the rate is enforced over the second.
- NOTE that rate-limit Bc/Be are in BYTES, unlike shaping where Bc/Be are in bits.
- NOTE Excess burst is only used when the configured Be is greater than the configured Bc.
  Example: with Bc=1000 and Be=1000 there will be no excess burst.

Formula:

The Tc is typically 1 second.

Bc = CIR/8 * Tc
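The formula can be checked with a quick calculation (Python used purely for illustration); it reproduces the byte-denominated Bc values used in the multi-statement CAR example later in these notes:

```python
def car_bc_bytes(cir_bps, tc_sec=1.0):
    """Rate-limit Bc in BYTES (unlike shaping, where Bc is in bits):
    Bc = CIR/8 * Tc, with Tc typically 1 second."""
    return int(cir_bps / 8 * tc_sec)

print(car_bc_bytes(496000))   # 62000 bytes: one second's worth at 496 kbps
```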

Command:

rate-limit {input|output} [access-group] {CIR (bps)} {Bc (bytes)} {Be (bytes)} conform {OPTIONS} exceed {OPTIONS}

OPTIONS
continue - Scans other rate limits.
drop - Drops the packet.
set-dscp-continue - Sets the DSCP and scans other rate limits.
set-dscp-transmit - Sets the DSCP and sends it.
set-prec-continue - Rewrites packet precedence, scans other rate limits.
set-prec-transmit - Rewrites packet precedence and sends it.
set-qos-continue - Sets QOS-group and scans other rate limits.
set-qos-transmit - Sets QOS-group and sends it.
transmit - Transmits the packet.

- Shows input/output packet and byte counters

sh interface {int} rate-limit

- Example of how to mark ALL input traffic with DSCP-12
- This statement DOES NOT police any traffic, only MARKS
- [8000 8000 8000] arbitrary value, holds no meaning here because conforming
traffic gets marked with DSCP-12 and so does exceeding traffic

interface s0/0
rate-limit input 8000 8000 8000 conform-action set-dscp-transmit 12 exceed-action set-dscp-transmit 12

- Example of how to limit traffic matching ACL-123 to 128k

rate-limit output access-group 123 128000 24000 48000 conform-action continue exceed-action drop

- Example of a "line-rate" statement, configuring the TOTAL output to 192k

rate-limit output 192000 36000 72000 conform-action transmit exceed-action drop

example:

■ Police all traffic on the interface at 496 kbps; but before sending this traffic on its way….
■ Police all web traffic at 400 kbps.
■ Police all FTP traffic at 160 kbps.
■ Police all VoIP traffic at 200 kbps.
■ Choose Bc and Be so that Bc has 1 second’s worth of traffic, and Be provides no additional burst capability over Bc.

! ACL 101 matches all HTTP traffic
! ACL 102 matches all FTP traffic
! ACL 103 matches all VoIP traffic
interface s0/0
rate-limit input 496000 62000 62000 conform-action continue exceed-action drop
rate-limit input access-group 101 400000 50000 50000 conform-action transmit exceed-action drop
rate-limit input access-group 102 160000 20000 20000 conform-action transmit exceed-action drop
rate-limit input access-group 103 200000 25000 25000 conform-action transmit exceed-action drop

Under the interface, four rate-limit commands are used. The first sets the rate for all traffic, dropping traffic that exceeds 496 kbps. However, the conform action is "continue." This means that packets conforming to this statement will be compared to the next rate-limit statements, and when matching a statement, some other action will be taken. For instance, web traffic matches the second rate-limit command, with a resulting action of either transmit or drop. VoIP traffic would be compared with the next three rate-limit commands before matching the last one. As a result, all traffic is limited to 496 kbps, and three particular subsets of traffic are prevented from taking all the bandwidth. CB Policing can achieve the same effect of policing subsets of traffic by using nested policy maps.

MQC Policing

- Uses a two- or three-colour policer, and does not support the continue feature.
- Uses an exponential formula based on the burst size to decide whether traffic is conforming or exceeding.
- The burst value determines how often, per second, there is policing.
     - With a smaller burst value, the router will police more often.
     - With a larger burst value, the router will police less often.
- The Bc/Be are also configured in bytes.


Note that although MQC police can be applied inbound/outbound on an interface, when queueing is configured in the same policy-map, it can only be applied outbound.

CB Policing categorizes packets into two or three categories, depending on the style of policing, and then applies one of these actions to each category of packet. The categories are conforming packets, exceeding packets, and violating packets. The CB Policing logic that dictates when packets are placed into a particular category varies based on the type of policing.

Formulas
> Single rate, two colour (no violate-action): Bc = CIR/32, Be = 0
> Single rate, three colour (violate-action): Bc = CIR/32, Be = Bc
> Dual rate, three colour (PIR): Bc = CIR/32, Be = PIR/32
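The default-burst rules of thumb above can be sketched as a quick calculation (Python used purely for illustration; the style labels are made-up shorthand):

```python
def mqc_default_bursts(cir_bps, style, pir_bps=None):
    """Default Bc/Be in bytes for the three CB policing styles,
    per the rules of thumb above."""
    bc = cir_bps // 32
    if style == '1r2c':                 # single rate, two colour
        return bc, 0
    if style == '1r3c':                 # single rate, three colour
        return bc, bc
    if style == '2r3c':                 # dual rate, three colour
        return bc, pir_bps // 32
    raise ValueError(style)

print(mqc_default_bursts(128000, '2r3c', 256000))   # (4000, 8000)
```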


OPTIONS
drop - Drops the packet.
set-discard-class-transmit - Sets the discard-class and sends it.
set-dscp-transmit - Sets the DSCP and sends it.
set-frde-transmit - Sets the FR DE and sends it.
set-mpls-exp-imposition-transmit - Sets the exp-bits at tag imposition and sends it.
set-mpls-exp-topmost-transmit - Sets exp-bits on topmost label and sends it.
set-prec-transmit - Rewrites the packet precedence and sends it.
set-qos-transmit - Sets the QOS-group and sends it.
transmit - Transmits the packet.

policy-map POLICE
class SMTP
police cir 384000 bc 72000 be 144000
- CIR is in bits per second; Bc/Be are in bytes
conform-action {OPTIONS}
exceed-action {OPTIONS}
violate-action {OPTIONS}
- Configuring a violate-action enables the three-colour policer

police bps burst-normal burst-max conform-action action exceed-action action
[violate-action action]

Dual-rate:

police {cir cir} [bc conform-burst] {pir pir} [be peak-burst] [conform-action action [exceed-action action [violate-action action]]]
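A sketch of a dual-rate policer using that syntax (class and policy names are made up); Bc and Be are left at the CIR/32 and PIR/32 defaults from the formulas above:

```
policy-map DUAL-RATE
class HTTP
police cir 128000 bc 4000 pir 256000 be 8000
conform-action transmit
exceed-action set-dscp-transmit 10
violate-action drop
```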

COPP (Control Plane Policing)

- The COPP feature allows users to configure a QoS filter that manages the traffic flow of control plane packets, to protect the control plane of Cisco IOS routers and switches against reconnaissance and denial-of-service (DoS) attacks.
- In this way, the control plane can help maintain packet forwarding and protocol states despite an attack or heavy traffic load on the router or switch.
- Ensure that layer 3 control packets have priority over other packet types that are destined for the control plane.

- The following types of layer 3 packets are forwarded to the control plane:
1.  Routing protocol CP (control packets).
2.  Packets destined for the local IP address of the router.
3.  Packets from management protocols (such as SNMP, Telnet, and SSH).
- Aggregate control plane services provide control plane policing for all CP packets that are received from all line-card interfaces on the router.
- Distributed control plane services provide control plane policing for all CP packets that are received from the interfaces on a line card.

 

- Control-plane traffic is classified into different categories of traffic:

1.  Control-plane host sub-interface
     - Is traffic that is directly destined for one of the router's interfaces.
     Examples of control-plane host IP traffic include tunnel termination traffic, management traffic (SSH, SNMP), and routing protocols (BGP, OSPF, EIGRP).
    - All host traffic terminates on and is processed by the router.

2.  Control-plane transit sub-interface
     - Is traffic that is software switched by the route processor: packets not directly destined to the router itself, but rather traffic traversing the router.
    - Non terminating tunnels handled by the router are an example of this type of control-plane traffic.

3.  Control-plane CEF-exception sub-interface
   - Is traffic that is either redirected as a result of a configured input feature in the CEF packet forwarding path for process switching, or directly enqueued in the control-plane input queue by the interface driver.
    - Examples are ARP, L2 keepalives, and all non-IP host traffic.

Example:

access-list 140 permit tcp host 10.1.1.1 any eq 23 - Allows 10.1.1.1 trusted host traffic
access-list 140 permit tcp host 10.1.1.2 any eq 23 - Allows 10.1.1.2 trusted host traffic
!
class-map telnet-class
match access-group 140
!
policy-map control-plane-in
class telnet-class
police 80000 conform-action transmit exceed-action drop
- Polices telnet traffic from the trusted hosts to 80 kbps
!
control-plane
service-policy input control-plane-in
- Defines the aggregate control plane service for the active RP

- Displays information about the all control plane policies

sh policy-map control-plane all

- Enters control-plane configuration mode
- [host] Applies policies to host control-plane traffic, optional
- [transit] Applies policies to transit control-plane traffic
- [cef] Applies policies to CEF-exception control-plane traffic
- [slot] Attach a QoS policy to the specified slot

control-plane [host | transit | cef | slot]

- Attaches a QoS service policy to the control plane
-{input} Applies to packets received on the control plane
-{output} Applies to packets transmitted from the control plane

service-policy {input|output} {p-name}

Monday, November 7, 2011

notes: Shaping

 

- Traffic-shaping only applies to outbound traffic.
- Queueing mechanisms can be used in conjunction with traffic shaping.
- Traffic shaping delays packets to ensure that a class of packets does not exceed a defined rate. While delaying the packets, the shaping function queues them, by default in a FIFO queue.
- Shaping is designed to buffer/delay traffic in excess of the configured target rate.
- To accomplish this, a system of credits is used.
- Before a packet can be sent the amount of credits equalling the packet's size in bits must have been earned, like wages.
- Traffic shaping does not permit the borrowing of future credits.
- When shaping is applied to an interface, the router is given a full amount of credits. After this point all credits
must be earned.

2 Types of Shaping
    - Generic Traffic Shaping (GTS)
    - Frame-Relay Traffic Shaping (FRTS)

2 Methods of applying GTS and FRTS
  - Legacy method
  - MQC

- Serialization/Access-Rate (AR): Physical clocking, this determines the amount of data that can be encapsulated on to the wire.
- Serialization delay: A constant delay based on the access rate of the interface. It is the time needed to place data on the wire. (Can’t be changed)

Shaping CIR
- Dictates the average output rate one aims to average per second on the circuit/interface.

Tc (Time Interval)
- It is the time in milliseconds into which the second is divided.
- The Tc cannot be adjusted directly, but it can be changed by adjusting the CIR and Bc.
- To get the Tc value correct for the formulas below, always use Tc/1000.
- The maximum value of Tc is 125ms (1/8th of a second) and the minimum value is 10ms (1/100th of a second).
- The largest amount of traffic that can be sent in a single interval is Bc + Be.
- DO NOT use the "frame-relay tc" command to configure the Tc value, it is ONLY used for FR SVC's with a CIR=0.
- Usually just defining an average CIR will be sufficient. But if low-latency throughput is required, changing the Tc might be necessary.
- Changing the Bc value, has a direct affect on the delay/time interval.

Bc (Committed Burst)
- Is the number of committed bits allowed to be sent per interval (Tc) to conform with target-rate (CIR) per second.
- If Bc worth of bits are sent every interval in that second, the output rate is the CIR.
- The Bc bucket is refilled each new Tc.
- If there are bits left in the Bc bucket that were not used in that interval, they roll over to the Be bucket.
- If the Be bucket is full, these excess credits are lost.
- Together with the CIR, the Bc determines the Tc, and as a result the amount of data sent per interval:
- Bigger Bc - more delay but more data per Tc.
- Smaller Bc - less delay but less data per Tc. (Smaller Bc values are generally needed for voice)

Be (Excess Burst)
- Is the number of non-committed bits the router is allowed to send above the Bc, if credits are available.
- If all the Bc per interval was not used, then at a later time the router can send Be worth to average out the total amount sent up to CIR.
- There is no time limit on how long Be can "store" unused Bc credits. A common misconception is that it only holds credits from the previous interval.
- Be defaults to zero bits.

Formulas (Tc/1000):
> CIR = Bc / Tc
> Tc = Bc / CIR
> Bc = CIR x Tc
> Be = (CAR - CIR) x Tc
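Plugging sample numbers into these formulas (the values are illustrative):

```python
# Worked example of the shaping formulas; Tc must be in seconds (Tc/1000).
cir = 64000               # target average rate, bits per second
tc = 125 / 1000           # 125 ms interval, expressed in seconds
bc = cir * tc             # Bc = CIR x Tc  -> 8000 bits per interval
car = 128000              # access (clock) rate of the interface (CAR)
be = (car - cir) * tc     # Be = (CAR - CIR) x Tc -> 8000 bits
tc_check = bc / cir       # Tc = Bc / CIR -> 0.125 s, consistent with above
```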

Generic Traffic Shaping

- Is used to control the maximum output target rate on an interface.

Generic Traffic Shaping (GTS) is a simple form of traffic shaping that is supported on most router interfaces but cannot be used with flow switching. GTS is configured and applied to an interface or subinterface. In its basic configuration, it shapes all traffic leaving the interface. You can modify that with an access list permitting the traffic to be shaped and denying any traffic that should be passed through unshaped.

commands:

- Shows the configured shaping values per interface

sh traffic-shape

- Shows packet/byte count, packets/bytes delayed

sh traffic-shape statistics

- Command syntax to enable traffic shaping on the interface

- Rate : Configures the shaped target rate to 64k
- Bc : The rate will not exceed 8k per time interval (Tc)
- Be : Indicates excess rate if configured. (Value of 0 here)
- Buffer-Limit is configured as 1000

traffic-shape {rate | group (acl)} {target-rate (bps)} [Bc (bits) [Be (bits)]] [buffer-limit]
interface s0/0
traffic-shape rate 64000 8000 0 1000

- All traffic matching ACL 100 will be shaped at this rate

traffic-shape group 100 64000 8000 0

- Configures reflection of FECNs as BECNs.
- If BECN received this interface will throttle to no lower than 32k

traffic-shape fecn-adapt
traffic-shape adaptive 32000

 

FRTS (Frame-Relay Traffic Shaping)

CIR
-  Dictates the target output rate to be averaged per second on the circuit/interface.

MINCIR
- The rate to which the router will throttle down at a minimum, if a BECN was received from the frame-relay cloud.
- Defaults to half the configured CIR.

FECN (Forward Explicit Congestion Notification)
- Sent towards the destination, to indicate congestion was experienced on the way, which will get reflected back to the source as a BECN.

BECN (Backward Explicit Congestion Notification)
- Is sent back to the source as an indication to slow the sending rate; congestion exists in the direction the traffic is flowing, which is the opposite direction of the BECN itself.


Adaptive Shaping
- Used to allow the router to throttle back in the event of congestion.
- The router will throttle back 25% per Tc when BECNs are received, and will continue to throttle 25% each Tc until BECN's are no longer received or until MINCIR is reached.
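A rough simulation of that back-off (assuming each BECN-marked Tc cuts the current rate by 25%; the gradual recovery once BECNs stop is not modeled here):

```python
def throttle(cir, mincir, becns):
    """Rate after each Tc, dropping 25% per interval in which a BECN arrived."""
    rate, history = cir, []
    for becn_seen in becns:
        if becn_seen:
            rate = max(rate * 0.75, mincir)   # never throttle below MINCIR
        history.append(rate)
    return history
```

For example, `throttle(64000, 32000, [True, True, True])` walks a 64k rate down to the 32k MINCIR floor.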

Common reasons to use FRTS:
- To force a router to conform to the rate subscribed from the frame-relay service provider, because the local access (clock) rate is much faster than the provisioned rate, or
- To throttle down a higher-speed site so that it does not overrun a lower-speed site, typically used in partial-mesh topologies.


Careful once FRTS is enabled on an interface:
- All DLCI's on that interface (including sub-interfaces) are assigned the default CIR value of 56000 bps.
- If DLCI's require a different output rate than 56k, the CIR should be adjusted.

- If FRTS is applied to a physical frame interface the config will apply to all VC configured on that interface.
- If FRTS is applied to the VC, then the config only applies to that VC.

Fragmentation:
- Prevents smaller real-time packets (e.g. VoIP) from getting delayed behind big packets in the hardware FIFO queue.
NOTE : The fragmentation size should be set to match the Bc, that way worst delay = single Tc.
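Sizing the fragment to the Bc looks like this with illustrative numbers:

```python
# Fragment size matched to Bc, so the worst-case wait behind one fragment
# is a single Tc.
cir = 64000                          # shaped rate, bps
tc = 0.125                           # 125 ms interval
bc_bits = cir * tc                   # 8000 bits per interval
fragment_bytes = int(bc_bits // 8)   # frame-relay fragment size -> 1000 bytes
```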

commands:

- Shows the configured shaping values

sh traffic-shape

- Shows packet/byte count, packets/bytes delayed

sh traffic-shape statistics

- Shows the configured map-class

sh run map-class frame-relay FRTS


map-class frame-relay FRTS
frame-relay cir {bps}
- Committed Information Rate (CIR), (default = 56000 bps)
frame-relay bc {bits} - Committed burst size (Bc), (default = 7000 bits)
frame-relay be {bits} - Excess burst size (Be), (default = 0 bits)
frame-relay mincir {bps} - Minimum acceptable CIR, (default = CIR/2 bps)

- Enables rate adjustment in response to BECN
frame-relay adaptive-shaping becn

- Enables rate adjustment in response to foresight messages and BECN

frame-relay adaptive-shaping foresight

frame-relay fecn-adapt - Enables shaping reflection of a received FECN as BECN
frame-relay fragment {bytes} - Specifies the maximum fragment size

- If the output queue depth exceeds the configured amount, slow down rate
frame-relay adaptive-shaping interface-congestion {queue-depth}


interface s0/0
frame-relay traffic-shaping
- STEP 1, Enables FRTS under the physical interface
frame-relay class FRTS - STEP 2, Applies legacy FRTS to EACH VC configured on the interface OR

interface S0/0.1
frame-relay interface-dlci 405
class FRTS
- STEP 2, Applies FRTS only to this VC

MQC FRTS

Once FRTS has been enabled on the interface, all DLCIs on that interface (including sub-interfaces) are assigned the default CIR of 56kbps.

FRTS applied to Multipoint Frame-Relay interface per VC

policy-map FRTS-MQC-R1 - Creates a service-policy for VC going to R1
class class-default
shape average cir {bps}
policy-map FRTS-MQC-R2
- Creates a service-policy for VC going to R2
class class-default
shape average cir {bps}
shape max-buffers {buffer-depth}
- Increases the buffer queue depth
!
!
map-class frame-relay FRTS-R1
service-policy output FRTS-MQC-R1
- Calls the service-policy in the map-class
map-class frame-relay FRTS-R2
service-policy output FRTS-MQC-R2
- Calls the service-policy in the map-class
!
interface s0/0
frame map ip 10.0.0.1 501 broadcast
- Layer3-to-Layer2 mapping
frame map ip 10.0.0.2 502 broadcast - Layer3-to-Layer2 mapping
frame-relay interface-dlci 501
class FRTS-R1
- Applies the map-class FRTS-R1 only to DLCI 501
frame-relay interface-dlci 502
class FRTS-R2
- Applies the map-class FRTS-R2 only to DLCI 502

Class-Based Shaping

Class-Based Shaping (CB Shaping) is the Cisco recommended way to configure traffic shaping. It allows you to create class maps and policy maps once and then reuse them for multiple interfaces, rather than redoing the entire configuration under each individual interface. This lessens the likelihood of operator error or typos. CB Shaping also provides more granular control over the QoS operation.

CB-Shaping is GTS applied via MQC.
- CB-Shaping uses the same principles and calculations as FRTS, but does NOT adaptively shape.
- CB-Shaping is supported on non Frame-Relay interfaces.
- CB-shaping defaults Bc and Be to target-rate * Tc, with a default Tc of 25 ms.
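Under that default, the burst values fall straight out of the rate (a sketch; actual IOS defaults can vary by rate and platform):

```python
# CB-Shaping default bursts: Bc = Be = shape-rate * default Tc of 25 ms.
rate = 256000                  # shape average target rate, bps
bc = be = rate * 25 // 1000    # 25 ms as a fraction of a second -> 6400 bits
```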


Shape Average
> Formula: Bc = Shape-Rate * Tc

Shape Peak
> Formula: Peak-Rate = Configured-Rate x (1 + Be/Bc)
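For instance, with equal Bc and Be the peak rate works out to double the configured rate (illustrative values):

```python
# Shape peak sends Bc + Be every Tc, so the effective rate exceeds the
# configured rate by the Be/Bc ratio.
cir, bc, be = 64000, 8000, 8000
peak_rate = cir * (1 + be / bc)   # 64000 * 2 -> 128000 bps
```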

Example of CB-Shape applied to Frame-Relay interface

policy-map FRTS-MQC
class class-default
shape average cir {bps}
shape max-buffers {buffer-depth}
- Increases the buffer queue depth
!
- Normal CB-Shaping just applied to a frame-interface
interface s0/0
service-policy out FRTS-MQC

Sunday, November 6, 2011

notes: Congestion Avoidance: WRED

Attempts to avoid congestion before it occurs by selectively dropping traffic (i.e. random-detect).
- Weights are based on IP precedence/DSCP.
- WRED is typically used to avoid TCP global synchronization and is generally not as successful when the majority of flows are UDP.
- Minimum threshold is when WRED becomes active and starts randomly dropping packets.
- The rate of packet drop increases linearly as the average queue size increases until it reaches the maximum threshold.
- When the average queue size reaches the maximum threshold, the fraction of packets dropped is 1/MPD.
- When the average queue size is above the maximum threshold, all packets are dropped.
- MPD (Mark Probability Denominator).
> Is used to determine how aggressively packets will be dropped.
> The lower the number, the more aggressively packets are dropped.
> When the max threshold is reached, 1 in every MPD packets will be dropped.
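The drop behaviour described above can be expressed as a function of average queue depth (a sketch of the WRED drop profile, not IOS code):

```python
def wred_drop_prob(avg_depth, min_th, max_th, mpd):
    """Drop probability: 0 below min, linear ramp to 1/MPD at max, then 1."""
    if avg_depth < min_th:
        return 0.0          # no random drops below the minimum threshold
    if avg_depth > max_th:
        return 1.0          # everything dropped above the maximum threshold
    # linear ramp from 0 at min_th up to 1/MPD at max_th
    return (avg_depth - min_th) / (max_th - min_th) / mpd
```

With the defaults mentioned below (min=10, max=40, MPD=10), the drop probability peaks at 10% when the average depth reaches 40 packets.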

WRED
The purpose of Weighted Random Early Detection (WRED) is to prevent an interface’s output queue from filling to capacity, because if a queue is completely full, all newly arriving packets are discarded. Some of those packets might be high priority, and some might be low priority. However, if the queue is full, no room exists for any packet. WRED is referred to as a congestion-avoidance QoS tool. It can also prevent a global synchronization problem, in which all TCP senders back off as packets at a full queue are dropped, and then all senders begin to increase the amount of traffic sent, until another synchronized back-off is triggered. Global synchronization results in poor utilization of interface bandwidth.
- It can be applied either directly on an interface or in MQC format.
commands:
- Shows the input and output queue size, and default values
sh queueing int {int}
- Enables WRED on an interface; by default packets are classified by IP precedence
- The per-value commands change the default WRED values (min=10, max=40, mpd=10)
interface s0/0
random-detect [dscp-based | prec-based]
random-detect precedence {value} {min-t} {max-t} {mpd}
random-detect dscp {value} {min-t} {max-t} {mpd}
using MQC:
- Shows the policy map configured with all the counters
sh policy-map interface {int}
- Enables DSCP-based WRED as drop policy
policy-map WRED
class TELNET
bandwidth {kbps}
random-detect dscp-based
random-detect dscp [rsvp] {value}
class HTTP
bandwidth {kbps}
random-detect prec-based
random-detect precedence [rsvp] {value}
class SMTP
bandwidth {kbps}
random-detect ecn


example:
Router(config)# interface ethernet 0/0
Router(config-if)# random-detect dscp-based
Router(config-if)# random-detect dscp af13 25 100 4
Router(config-if)# random-detect dscp af12 30 100 4
Router(config-if)# random-detect dscp af11 35 100 4

explanation:
To reinforce this syntax, consider the following example, where the goal is to configure WRED on interface Ethernet 0/0. After the output queue depth reaches 25 packets, the possibility is introduced that a packet with a DSCP value of AF13 will be discarded. Packets marked with a DSCP value of AF12 should not be discarded until the queue depth reaches 30 packets. Finally, packets marked with a DSCP value of AF11 should not have any chance of discard until the queue depth reaches 35 packets. If the queue depth exceeds 100 packets, there should be a 100 percent chance of discard for these three DSCP values. However, when the queue depth is exactly 100 packets, the percent chance of discard for these various packet types should be 25 percent.
Examine the solution; the mark probability denominator is 4. This value was chosen to meet the requirement that there be a 25 percent chance of discard when the queue depth equals the maximum threshold (that is, 1 / 4 = .25). Also, a DSCP value of AF13 is dropped before a DSCP value of AF12, which is dropped before a DSCP value of AF11. This approach is consistent with the definition of these PHBs, because the last digit in the AF DSCP name indicates its drop preference. For example, a value of AF13 would drop before a value of AF12.
The last of the WRED numeric settings that affect its logic is the mark probability denominator (MPD), from which the maximum percentage of 10 percent is derived in Figure 13-5. IOS calculates the discard percentage used at the maximum threshold based on the simple formula 1/MPD. In the figure, an MPD of 10 yields a calculated value of 1/10, meaning the discard rate grows from 0 percent to 10 percent as the average queue depth grows from the minimum threshold to the maximum. Also, when WRED discards packets, it randomly chooses the packets to discard.

Friday, November 4, 2011

notes: LLQ

 

The LLQ feature brings strict priority queuing to CBWFQ. Strict priority queuing allows delay-sensitive data such as voice to be dequeued and sent first (before packets in other queues are dequeued), giving delay-sensitive data preferential treatment over other traffic.

- LLQ adds the concept of a priority queue to CBWFQ, but without starving other classes.
- The LLQ provides a maximum bandwidth guarantee with low-latency, and optional burst capability.
- LLQ uses only one priority queue per QoS policy, even though multiple classes can be configured as priority classes.
- LLQ has a built-in congestion aware policer, preventing the starvation of non-priority traffic.
- The internal policer is ONLY applied during times of congestion, else LLQ traffic may use any excess bandwidth.
- During times of congestion, a priority class cannot use any excess bandwidth, thus any excess traffic will be dropped.
- But during times of non-congestion, traffic exceeding the LLQ is placed into the class default and is not priority "queued".
- This is why it is usually recommended to also add a "police" statement in the LLQ, so that priority traffic gets queued correctly or dropped.
- The queueing strategy will be 'class-based queueing' as listed with the "show interface" command.

 


command:

- Shows the policy map configured with all the counters

sh policy-map interface {int}

- Shows the input and output queue size

- Shows the available bandwidth that can be assigned


sh queueing int {int}

class-map VOIP
match ip rtp 16384 16383

policy-map LLQ
class VOIP
priority {kbps} [burst {bytes}]

police cir {bps} bc {bytes} be {bytes}

interface S0/0
service-policy output LLQ
- Applies the queueing policy to the interface

Thursday, November 3, 2011

notes: CBWFQ

 

- CBWFQ is used to reserve a guaranteed minimum bandwidth in the output queue based on each user defined class.
- CBWFQ supports 64 classes/queues.
- Drop policy is tail drop or WRED, and it is configurable per class.
- Scheduling within a single class:
     - FIFO on 63 classes.
     - FIFO or WFQ on the class-default class.
- The queueing strategy only comes into effect when there is congestion in the output queue.
- Class class-default needs “fair-queue” configured if “bandwidth” was not specified.
- Weights can be defined by specifying:
     -  Bandwidth {in kbps}: Absolute reservation based on the configured amount.
     -  Bandwidth Percent: Absolute reservation based on percentage of configured interface "bandwidth" of the link.
    -  Remaining Percent: Relative reservation based on a percentage of the bandwidth left after other reservations, not directly on the configured "bandwidth".
- The queueing strategy will be 'class-based queueing' as listed with "show interface" command.
- Classification is done through ACL's or by using NBAR.

NOTE: Don't forget to change the default max-reserved-bandwidth of 75% for the interface before applying the service-policy.
NOTE: "max-reserved-bandwidth" is only a configuration limitation.


CBWFQ supports multiple class maps (the number depends upon the platform) to classify traffic into its corresponding FIFO queues. Tail drop is the default dropping scheme of CBWFQ. You can use weighted random early detection (WRED) in combination with CBWFQ to prevent congestion of a class.

commands:

- Shows the policy map configured with all the counters

sh policy-map interface {int}

class-map SMTP
match access-group name SMTP
class-map match-any HTTP
match protocol http
class-map FTP
match access-group name FTP

policy-map QoS
class SMTP
bandwidth 512
class HTTP
bandwidth percent 25
class FTP
bandwidth remaining percent 25
class class-default
fair-queue
- Required if "bandwidth" was not specified

optional: using WRED

class class-default
random-detect [dscp-based] [precedence-based]


interface S0/0
bandwidth 1024
max-reserved-bandwidth {%}
- Changes the default 75% reserved bandwidth used when queueing is applied.

interface s0/0

service-policy output QoS

notes: Legacy Priority Queue

 

Priority queuing (PQ) can give strict priority to latency-sensitive applications (for example, e-commerce applications). PQ gives priority to specific packets by placing those packets in a high-priority queue. Other packets are placed in a medium, normal, or low queue. However, if any packets are in the high queue, none of the packets in lower-priority queues are sent.
Similarly, when packets are in the medium queue, no packets are sent from the normal or low queues. Although this approach does accomplish the goal of giving priority to specific traffic, it can lead to protocol starvation.

- Legacy priority queueing uses four queues (high, medium, normal and low), which are serviced from high to low.
- PQ is prone to starvation.
- The queueing strategy will be 'priority-list' as listed with "show interface" command.
- Similar to custom queueing, the 'gt', 'lt' and 'fragments' keywords are also available.

Router(config)# priority-list 1 protocol ip high tcp www
Router(config)# priority-list 1 protocol ip medium tcp telnet
Router(config)# priority-list 1 default low
!
Router(config)# interface serial 0/1
Router(config-if)# priority-group 1

Router(config-if)# ip rtp priority {starting-udp-port} {port-number-range} {bandwidth}

The port-number-range is not the last port number in the range. Rather, it is the number of ports in the range. For example, the following command specifies that 64 kbps of bandwidth should be made available for packets using UDP ports in the range 16,384 through 32,767:


Router(config-if)# ip rtp priority 16384 16383 64
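In other words, the range argument is added to the starting port to get the last matched port:

```python
# The second argument of "ip rtp priority" is the size of the port range,
# not the last port number.
start_port, range_size = 16384, 16383
last_port = start_port + range_size   # ports 16384 through 32767 are matched
```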

notes: Legacy Custom Queuing

 

- Implementation of weighted round robin.
- Up to 16 configurable queues, plus a system priority queue (queue 0).
- Thresholds are based on the number of bytes and/or number of packets.
- CQ is prone to inaccurate bandwidth allocations.
- Can only apply one mechanism per interface. MQC changes this.
- The custom queue is used to create a bandwidth reservation in the output queue based on the configured queues.
- With the custom queue it is important to note that the behaviour of the queueing mechanism only becomes evident when the output queue is congested.
- Each configured queue is guaranteed only the minimum configured amount, but can utilize all unused bandwidth.
- Because queueing is always outbound, no direction can be specified when custom queueing is applied to the interface.
- The queueing strategy will be 'custom-list', as seen with "sh interface".
- Queue 0 is a system priority queue. Traffic in this queue will always be sent first.
- Queues 1 - 16 are the user-configurable queues.

Defaults:
>> Byte-count = 1500 bytes
>> Queue-limit = 20 packets

commands: 

Router(config)# queue-list 1 protocol ip 1 tcp www
Router(config)# queue-list 1 protocol ip 2 tcp telnet
Router(config)# queue-list 1 default 3
Router(config)# queue-list 1 queue 1 byte-count 1500 limit 512

Router(config)# queue-list 1 queue 2 byte-count 1500 limit 512
Router(config)# queue-list 1 queue 3 byte-count 3000 limit 512
!
Router(config)# interface serial 0/1
Router(config-if)# bandwidth 128
Router(config-if)# custom-queue-list 1

 

Total number of bytes serviced during each round-robin cycle = 1500 + 1500 + 3000 = 6000
Percentage of bandwidth for World Wide Web traffic = 1500 / 6000 = .25 = 25 percent
Percentage of bandwidth for Telnet traffic = 1500 / 6000 = .25 = 25 percent
Percentage of bandwidth for default traffic = 3000 / 6000 = .5 = 50 percent
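The arithmetic above in code form:

```python
# Custom queuing shares: each queue's bandwidth fraction is its byte-count
# divided by the total bytes serviced per round-robin cycle.
byte_counts = {"www": 1500, "telnet": 1500, "default": 3000}
total = sum(byte_counts.values())                    # 6000 bytes per cycle
shares = {q: n / total for q, n in byte_counts.items()}
```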

verification:

- Shows the queueing strategy and configured queues

sh interface {int}

- Shows the custom queue configuration

sh queueing custom

- Shows the current queue contents

sh queue {int} [queue no]

Wednesday, November 2, 2011

notes: WFQ

 

- Dynamically allocates flows into queues. The allocation is not configurable, only the number of queues are configurable.

- Guarantees throughput to all flows, and drops packets of most aggressive flows.
- Default on Cisco serial interfaces at or below 2.048 Mbps.
- Cannot provide fixed bandwidth guarantees.
- Configured with "fair-queue" under an interface.

- To have a dedicated queue for each flow (no starvation, delay, or jitter within the queue)
- To allocate bandwidth fairly and accurately among all flows (minimum scheduling delay, guaranteed service)
- To use IP precedence as weight when allocating bandwidth


WFQ uses automatic classification. Manually defined classes are not supported.
WFQ dropping is not a simple tail drop. WFQ drops packets of the most aggressive flows.
WFQ scheduler is a simulation of a time-division multiplexing (TDM) system. The bandwidth is fairly distributed to all active flows.

commands:

- Shows WFQ values

sh queueing fair

- Enables WFQ on an interface

interface s0/0

fair-queue [cdt] [dynamic-queues] [reservable-queues]

- Configuring Queue limit

Specifies the maximum number of packets that can be in all output queues on the interface at any time.
• The default value for WFQ is 1000.
• Under special circumstances, WFQ can consume a lot of buffers, which may require lowering this limit.

router(config-if)#

hold-queue {max-limit} out

Verification:

show interface
show queue
show queueing

• CDT
– Number of messages allowed in the WFQ system before the router starts dropping new packets for the longest queue.
– The value can be in the range from 1 to 4096 (default is 64).
• dynamic-queues
– Number of dynamic queues used for best-effort conversations (values are: 16, 32, 64, 128, 256, 512, 1024, 2048, and 4096).
• reservable-queues
– Number of reservable queues used for reserved conversations, in the range 0 to 1000 (used for interfaces configured for features such as RSVP; the default is 0).

WFQ is automatically enabled on all interfaces that have a default bandwidth of 2.048 Mbps or less. The fair-queue command is used to enable WFQ on interfaces where it is not enabled by default or was previously disabled.

• Fair queuing is enabled by default:
– On physical interfaces whose bandwidth is less than or equal to 2.048 Mbps
– On interfaces configured for Multilink PPP
• Fair queuing is disabled:
– If you enable the autonomous or silicon switching engine mechanisms
– For any sequenced encapsulation: X.25, SDLC, LAPB, reliable PPP

notes: QoS Mechanism

 

Quality of service (QoS) mechanisms are used to implement a coordinated QoS policy in devices throughout the network. The moment an IP packet enters the network, it is classified and usually marked with its class identification. From that point on, the packet is treated by a variety of QoS mechanisms according to the packet classification. Depending upon the mechanisms it encounters, the packet could be expedited, delayed, compressed, fragmented, or even dropped. This lesson describes mechanisms for implementing QoS.

The main categories of tools used to implement QoS in a network are as follows:


1.  Classification and marking: The identifying and splitting of traffic into different classes and the marking of traffic according to behavior and business policies.

2.  Congestion management: The prioritization, protection, and isolation of traffic based on markings.

3.  Congestion avoidance: Discards specific packets based on markings to avoid network congestion.

4.  Policing and shaping: Traffic conditioning mechanisms that police traffic by dropping misbehaving traffic to maintain network integrity. These mechanisms also shape traffic to control bursts by queuing traffic.

5.  Link efficiency: One type of link efficiency technology is packet header compression, which improves the bandwidth efficiency of a link. Another technology is link fragmentation and interleaving (LFI), which can decrease the "jitter" of voice transmission by reducing voice packet delay.

 


In a QoS-enabled network, classification is performed on every input interface.

Marking should be performed as close to the network edge as possible—in the originating
network device, if possible. Devices farther from the edge of the network, such as routers and switches, can be configured to trust or untrust the markings made by devices on the edge of the network. An IP Phone, for example, will not trust the markings of an attached PC, while a switch will generally be configured to trust the markings of an attached IP Phone.
It only makes sense to use congestion management, congestion avoidance, and traffic-shaping mechanisms on output interfaces, because these mechanisms help maintain smooth operation of links by controlling how much and which type of traffic is allowed on a link. On some router and switch platforms, congestion management mechanisms, such as weighted round robin (WRR) and modified deficit round robin (MDRR), can be applied on the input interface.
Congestion avoidance is typically employed on an output interface wherever there is a chance that a high-speed link or aggregation of links feeds into a slower link (such as a LAN feeding into a WAN).
Policing and shaping are typically employed on output interfaces to control the flow of traffic from a high-speed link to lower-speed links. Policing is also employed on input interfaces to control the flow into a network device from a high-speed link by dropping excess low-priority packets.
Both compression and LFI are typically used on slower-speed WAN links between sites to improve bandwidth efficiency.

notes: Congestion Management

 

- Prioritization, protection and isolation of traffic based on markings.

Software Queues and Hardware Queues

The queues created on an interface by the popularly known queuing tools are called software queues, as these queues are implemented in software. However, when the queuing scheduler picks the next packet to take from the software queues, the packet does not move directly out the interface. Instead, the router moves the packet from the interface software queue to a small hardware FIFO (first-in, first-out) queue on each interface. Cisco calls this separate, final queue either the transmit queue (Tx queue) or transmit ring (Tx ring), depending on the model of the router; generically, these queues are called hardware queues.

Hardware queues provide the following features:


■ When an interface finishes sending a packet, the next packet from the hardware queue can be encoded and sent out the interface, without requiring a software interrupt to the CPU—ensuring full use of interface bandwidth.
■ Always use FIFO logic.
■ Cannot be affected by IOS queuing tools.
■ IOS automatically shrinks the length of the hardware queue to a smaller length than the default when a queuing tool is present.
■ Short hardware queue lengths mean packets are more likely to be in the controllable software queues, giving the software queuing more control of the traffic leaving the interface.

Note:  The only function of a hardware queue that can be manipulated is the length of the queue.

Queuing tools decide how packets are emptied from an interface’s output queue. Several queuing tools are available in the Cisco IOS Software:

1.  First-In, First-Out (FIFO): The default queuing mechanism on high-speed interfaces (that is, greater than 2.048 Mbps), which does not reorder packets.

2.  Weighted Fair Queuing (WFQ): The default queuing mechanism on low-speed interfaces, which makes forwarding decisions based on a packet’s size and Layer 3 priority marking.

3.  Low latency queuing (LLQ): The preferred queuing method for voice and video traffic, in which traffic can be classified in up to 64 different classes, with different amounts of bandwidth given to each class; includes the capability to give priority treatment to one or more classes.

4.  Priority queuing: A legacy queuing approach with four queues, in which higher-priority queues must be emptied before forwarding traffic from any lower-priority queues.

5.  Custom queuing: A legacy queuing approach that services up to 16 queues in a round-robin fashion, emptying a specified number of bytes from each queue during each round-robin cycle.


6.   Class-based weighted fair queuing (CBWFQ): Similar to LLQ, with the exception of having no priority queuing mechanism

7.  IP RTP priority: A legacy queuing approach for voice traffic that placed a range of UDP ports in a priority queue, with all other packets treated with WFQ