Wednesday, November 16, 2011

notes: Compression

 

- "Optimizing links for maximum payload throughput" is exam speak for compression.
- If files are already compressed or in a compressed format, compressing them again is not recommended.


TCP Header Compression

- A mechanism that compresses the TCP header in a data packet before the packet is transmitted.
- Configured with "ip tcp header-compression".

STAC Compression

- A lossless payload compression mechanism using the LZS algorithm.
- Configured under the interface with "compress stac".

Predictor
- Uses the RAND compression algorithm.
- Configured using "compress predictor" along with PPP encapsulation.

RTP Header Compression

- Reduces the combined 40-byte IP/UDP/RTP header to 2-5 bytes.
- Best used on slow-speed links carrying real-time traffic with small data payloads, such as VoIP.
- To configure on a serial link, use "ip rtp header-compression".
- To enable per VC, use the command

frame-relay map ip {IP} {DLCI} [broadcast] rtp header-compression

- The 'passive' keyword means the router will not send compressed RTP headers unless compressed RTP headers were received.

sh ip tcp header-compression - Shows header compression statistics
sh frame-relay map - Shows the configured header compression per DLCI

interface se0/0
compress stac
- Configures lossless data compression mechanism

interface se1/0
encap ppp
- Required for predictor
compress predictor - Enables the RAND algorithm compression
ip tcp header-compression - Enables TCP header compression
ip rtp header-compression [passive] [periodic-refresh]
- Enables RTP header compression
- [passive] Compress for destinations sending compressed RTP headers
- [periodic-refresh]: Send periodic refresh packets

note:  Only one side of the link uses the passive keyword. If both sides are set to be passive, cRTP does not occur because neither side of the link ever sends compressed headers.
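For example, a minimal sketch on a point-to-point serial link, with only R2 set to passive (the interface numbers are illustrative):

R1(config)# interface serial 0/0
R1(config-if)# ip rtp header-compression
!
R2(config)# interface serial 0/0
R2(config-if)# ip rtp header-compression passive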

interface s0/1.1
frame-relay map ip {ip} {dlci} rtp header-compression [connections] [passive] [periodic-refresh]
- Enables RTP header compression per VC
- [connections] Max number of compressed RTP connections (DEF=256)
- [passive] Compress for destinations sending compressed RTP headers
- [periodic-refresh]: Send periodic refresh packets

frame-relay ip tcp header-compression [passive]: Enables TCP header compression on a Frame Relay interface
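As a minimal sketch for a Frame Relay main interface, with the passive option so the router only compresses toward peers that send compressed headers first (the interface number is illustrative):

interface s0/1
encapsulation frame-relay
frame-relay ip tcp header-compression passive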

Multilink PPP

To reduce the latency experienced by a large packet exiting an interface (that is, serialization delay), Multilink PPP (MLP) can be used in a PPP environment, and FRF.12 can be used in a VoIP over Frame Relay environment. First, consider MLP.  Multilink PPP, by default, fragments traffic. This characteristic can be leveraged for QoS purposes, and MLP can be run even over a single link. The MLP configuration is performed under a virtual multilink interface, and then one or more physical interfaces can be assigned to the multilink group. The physical interface does not have an IP address assigned. Instead, the virtual multilink interface has an IP address assigned. For QoS purposes, a single interface is typically assigned as the sole member of the multilink group. Following is the syntax to configure MLP:

1.  interface multilink [multilink_interface_number]: Creates a virtual multilink interface
2.  ip address ip_address subnet_mask: Assigns an IP address to the virtual multilink interface
3.  ppp multilink: Enables Multilink PPP (which fragments by default) on the multilink interface
4.  ppp multilink interleave: Allows small packets to be interleaved among the fragments of larger packets
5.  ppp fragment-delay [serialization_delay]: Specifies the maximum time, in milliseconds, a fragment should take to exit the interface (which determines the fragment size)
6.  encapsulation ppp: Enables PPP encapsulation on the physical interface
7.  no ip address: Removes the IP address from the physical interface
8.  multilink-group [multilink_group_number]: Associates the physical interface with the multilink group

Example: to achieve a serialization delay of 10 ms

R1(config)# interface multilink 1
R1(config-if)# ip address 10.1.1.1 255.255.255.0
R1(config-if)# ppp multilink
R1(config-if)# ppp multilink interleave
R1(config-if)# ppp fragment-delay 10
R1(config-if)# exit
R1(config)# interface serial 0/0
R1(config-if)# encapsulation ppp
R1(config-if)# no ip address
R1(config-if)# multilink-group 1


R2(config)# interface multilink 1
R2(config-if)# ip address 10.1.1.2 255.255.255.0
R2(config-if)# ppp multilink
R2(config-if)# ppp multilink interleave
R2(config-if)# ppp fragment-delay 10
R2(config-if)# exit
R2(config)# interface serial 0/0
R2(config-if)# encapsulation ppp
R2(config-if)# no ip address
R2(config-if)# multilink-group 1
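To verify the bundle and interleaving, show commands along these lines could be used (output omitted):

R1# show ppp multilink
R1# show interfaces multilink 1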

LFI can also be performed on a Frame Relay link using FRF.12. The configuration for FRF.12 is based on an FRTS configuration.  Only one additional command is given, in map-class configuration mode, to enable FRF.12. The syntax for that command
is as follows:


Router(config-map-class)#frame-relay fragment fragment-size: Specifies the size of the fragments

As a rule of thumb, the fragment size should be set to the line speed divided by 800. (Dividing the line speed in bps by 8 gives bytes per second; dividing again by 100 gives the bytes that can be serialized in 10 ms, hence the factor of 800.) For example, if the line speed is 64 kbps, the fragment size can be calculated as follows:

fragment size = 64,000 / 800 = 80 bytes

This rule of thumb yields a fragment size (80 bytes) that creates a serialization delay of 10 ms.

The following example shows an FRF.12 configuration to create a serialization delay of 10 ms on a link clocked at a rate of 64 kbps. Because FRF.12 is configured as a part of FRTS, CIR and Bc values are also specified (Bc = CIR x Tc = 64,000 x 0.01 s = 640 bits, giving a 10-ms Tc).

R1(config)# map-class frame-relay FRF12-EXAMPLE
R1(config-map-class)# frame-relay cir 64000
R1(config-map-class)# frame-relay bc 640
R1(config-map-class)# frame-relay fragment 80
R1(config-map-class)# exit
R1(config)# interface serial 0/1
R1(config-if)# frame-relay traffic-shaping
R1(config-if)# interface serial 0/1.1 point-to-point
R1(config-subif)# frame-relay interface-dlci 101
R1(config-fr-dlci)# class FRF12-EXAMPLE
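To check that fragmentation is active on the PVC, a verification command such as the following could be used (output omitted):

R1# show frame-relay fragment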

notes: QoS in Switches

 

- COS (Class of Service) is also known as the 802.1p priority bits.
- QOS must be enabled on a switch with "mls qos".
- With "mls qos" OFF, the switch does not modify any markings.
- With "mls qos" ON, the switch clears all COS, ip-prec, and DSCP markings, unless trust is configured.

Classification:

- If QOS is disabled globally no classification will occur.
- To trust the incoming marking type use the command "mls qos trust"
- For IP-traffic, ip-precedence or DSCP can be trusted.
- For trunk links COS can be trusted

- If a packet has no incoming COS, or it arrives on an access link, a default value of zero is applied.
- This default value can be changed with "mls qos cos".
- For known devices, conditional trusting can be configured.
- For example, trust the COS only if a cisco-phone is plugged in (see the sketch after this list).
   Configured with: "mls qos trust device cisco-phone"
- Alternatively, a default COS classification can be forced on all incoming traffic, regardless of any existing marking.
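A minimal sketch of conditional trust, assuming a Cisco IP Phone attached to fa0/5 (the port number is illustrative):

interface fa0/5
mls qos trust cos
mls qos trust device cisco-phone

With this pair of commands, the configured trust only takes effect while a Cisco phone is detected via CDP; otherwise the port reverts to the untrusted default.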

Example of how to override all traffic entering an interface with COS 3:
interface fa0/0
mls qos cos override
mls qos cos 3

 

Ingress Queuing

- The 3560 packet scheduler uses a method called shared round-robin (SRR) to control the rate at which packets are sent.
- On ingress queues, SRR shares bandwidth between the two queues according to the configured weights.
- The weights are relative rather than absolute, i.e. they behave like percentages rather than fixed bandwidth values.
- First, specify the ratios by which to divide the ingress buffers between the two queues.
- Configured with the command

mls qos srr-queue input buffers {percentage1} {percentage2}

- Then configure the bandwidth percentage for each queue, which sets the frequency at which the scheduler takes packets from the two buffers (even though the command says bandwidth, it does NOT represent a bit rate).


- Configured with

mls qos srr-queue input bandwidth {weight1} {weight2}


- These two commands determine how much data the switch can buffer before it begins dropping packets.
- The weight parameter defines the percentage of the link’s bandwidth that can be consumed by the priority queue when there is competing traffic in the non-priority queue.

Creating a Priority Queue

- Either of the two ingress queues can be configured as a priority queue
- The priority queue is configured with

mls qos srr-queue input priority-queue {queue-number} bandwidth {weight}

For example, consider a case with queue 2 as the priority queue, with a configured bandwidth of 20 percent. If frames have been arriving only in queue 1 for a while and then some frames arrive in queue 2, the scheduler would finish servicing the current frame from queue 1 but then immediately start servicing queue 2. It would take frames from queue 2 up to the bandwidth configured by the weight in the priority-queue command. It would then share the remaining bandwidth between the two queues.

sw2(config)# mls qos srr-queue input cos-map queue 2 6
!
sw2(config)# mls qos srr-queue input priority-queue 2 bandwidth 20

Example:

!Configure the buffers for input interface queues 1 and 2
sw2(config)# mls qos srr-queue input buffers 80 20
!
!Configure the relative queue weights
sw2(config)# mls qos srr-queue input bandwidth 3 1
!
!Configure the two WTD thresholds for queue 1, and map traffic to each
!threshold based on its CoS value
sw2(config)# mls qos srr-queue input threshold 1 40 60
sw2(config)# mls qos srr-queue input cos-map threshold 1 0 1 2 3
sw2(config)# mls qos srr-queue input cos-map threshold 2 4 5
sw2(config)# mls qos srr-queue input cos-map threshold 3 6 7
!
!Verify the configuration
sw2# show mls qos input-queue
Queue :       1     2
----------------------------------------------
buffers :     80    20
bandwidth :    3     1
priority :     0    20
threshold1:   40   100
threshold2:   60   100

The switch will place traffic with CoS values of 5 and 6 into queue 2, which is a priority queue. It will take traffic from the priority queue based on the weight configured in the priority-queue bandwidth statement. It will then divide traffic between queues 1 and 2 based on the relative weights configured in the input bandwidth statement. Traffic in queue 1 has WTD thresholds of 40, 60, and 100 percent. Traffic with CoS values 0–3 is in threshold 1, with a WTD drop percent of 40. Traffic with CoS values 4 and 5 is in threshold 2, with a WTD drop percent of 60. CoS values 6 and 7 are in threshold 3, which has a nonconfigurable drop percent of 100.


 

Egress Queueing


- Adds a shaping feature that slows down egress traffic, which is useful for sub-rate Ethernet interfaces.
- There are four egress queues per interface.
-  Queue number one can be configured as a priority/expedite queue.

- The egress queue is determined indirectly by the internal DSCP: the internal DSCP is mapped through the DSCP-to-COS map, and the resulting COS is then mapped through the COS-to-queue map.
- SRR on egress queues can be configured for shared mode or for shaped mode.

Both shared and shaped mode scheduling attempt to service the queues in proportion to their configured bandwidth when more than one queue holds frames.

Both shared and shaped mode schedulers service the PQ as soon as possible if at first the PQ is empty but then frames arrive in the PQ.

Both shared and shaped mode schedulers prevent the PQ from exceeding its configured bandwidth when all the other queues have frames waiting to be sent.

The only difference in operation is that the queues in shaped mode never exceed their configured queue bandwidth setting.

There are four queues per interface rather than two, but you can configure which CoS and DSCP values are mapped to those queues, the relative weight of each queue, and the drop thresholds of each. You can configure a priority queue, but it must be queue 1. WTD is used for the queues, and thresholds can be configured as with ingress queueing. One difference between the two is that many of the egress commands are given at the interface, whereas the ingress commands were global.

example:

sw2(config)# mls qos queue-set output 1 buffers 40 20 30 10
!
sw2(config)# mls qos queue-set output 1 threshold 2 40 60 100 100
!
sw2(config)# int fa 0/2
sw2(config-if)# queue-set 1
sw2(config-if)# srr-queue bandwidth share 10 10 1 1
sw2(config-if)# srr-queue bandwidth shape 10 0 20 20
sw2(config-if)# priority-queue out
!
sw2# show mls qos int fa 0/2 queueing
FastEthernet0/2
Egress Priority Queue : enabled
Shaped queue weights (absolute) : 10 0 20 20
Shared queue weights : 10 10 1 1
The port bandwidth limit : 75 (Operational Bandwidth:75.0)
The port is mapped to qset : 1

Buffers and the WTD thresholds for one of the queues are changed for queue-set 1. Queue-set 1 is assigned to an interface, which then has sharing configured for queue 2 with a new command: srr-queue bandwidth share weight1 weight2 weight3 weight4. Shaping is configured for queues 3 and 4 with the similar command srr-queue bandwidth shape weight1 weight2 weight3 weight4. Queue 1 is configured as a priority queue. When you configure the priority queue, the switch ignores any bandwidth values assigned to the priority queue in the share or shape commands. The 3560 also gives the ability to rate limit the interface bandwidth with the command srr-queue bandwidth limit percent. In this example, the interface is limited by default to using 75 percent of its bandwidth.
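For instance, a minimal sketch that caps a port at roughly half of its bandwidth (the port and the 50 percent value are illustrative):

interface fa0/3
srr-queue bandwidth limit 50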

Congestion Avoidance

- The 3560 uses WTD for congestion avoidance.
- WTD creates three thresholds per queue into which traffic can be divided, based on COS value.
- Tail drop is used when the associated queue reaches a particular percentage.

For example, a queue can be configured so that it drops traffic with COS values of 0–3 when the queue reaches 40 percent full, drops traffic with COS 4 and 5 at 60 percent full, and drops COS 6 and 7 traffic only when the queue is 100 percent full (see the sketch after the command list below).

WTD is configurable separately for all six queues in the 3560 (two ingress, four egress)

- Allocates buffers to each queue-set ID

mls qos queue-set output {set-id} buffers {a1}{a2}{a3}{a4}

- Configures the two WTD drop thresholds, plus the reserved and maximum buffer thresholds that guarantee buffer availability

mls qos queue-set output {set-id} threshold {q-id} {drop-1} {drop-2} {reserve} {maximum}

- Maps the port to a queue-set

interface fa0/7
queue-set {set-id}
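A sketch that ties the 40/60/100 COS example above to these commands, assuming queue-set 2 and egress queue 1 (both choices are illustrative):

mls qos queue-set output 2 threshold 1 40 60 100 100
mls qos srr-queue output cos-map queue 1 threshold 1 0 1 2 3
mls qos srr-queue output cos-map queue 1 threshold 2 4 5
mls qos srr-queue output cos-map queue 1 threshold 3 6 7
!
interface fa0/7
queue-set 2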

Traffic Policing

- Can be applied to both input and output queues.

Two types
1.  Individual
    - Applies to a single class-map, as in regular IOS.

2. Aggregate
   - Applies to multiple class-maps in a single policy-map.
   - Classes X,Y, and Z cannot exceed 640k as an aggregate.
   - Applied with the global command

mls qos aggregate-policer {name} {rate-bps} {burst-bytes} exceed-action {drop | policed-dscp-transmit}

- Applies the aggregate-policer to the different classes

police aggregate {name}

- A unique exceed action, policed-dscp-transmit, can be used to re-mark the DSCP according to the policed-dscp map (see the sketch below)
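A hedged sketch of an aggregate policer shared by two classes; the class names, ACL numbers, policy-map name, and the 640-kbps / 8000-byte / DSCP 34-36 values are all illustrative:

! Classes X and Y; ACLs 101 and 102 are assumed to exist
class-map match-all X
 match access-group 101
class-map match-all Y
 match access-group 102
!
! One shared 640-kbps policer; exceeding traffic is re-marked per the policed-dscp map
mls qos aggregate-policer AGG-640K 640000 8000 exceed-action policed-dscp-transmit
mls qos map policed-dscp 34 36 to 0
!
policy-map LIMIT-IN
 class X
  police aggregate AGG-640K
 class Y
  police aggregate AGG-640K
!
interface fa0/10
 service-policy input LIMIT-IN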

 

Show commands:

sh mls qos - Displays global QOS configuration information
sh mls qos maps dscp-mutation [name] - Displays the current DSCP mapping entries.
sh mls qos maps dscp-cos - Displays the DSCP-to-COS map
sh mls qos interface [buffers|queueing] - Displays the QOS information at the port level
sh mls qos input-queue - Displays the settings for the ingress queues
sh mls qos aggregate-policer - Displays the QOS aggregate policer configuration

 

mls qos - Enables switching QOS globally
interface fa0/1
mls qos vlan-based
- Enables VLAN-based QOS on the port

interface fa0/2
mls qos cos {cos}
- Configures the default COS value for untagged packets
mls qos cos override - Enforces the COS for all packets entering the interface

interface fa0/3
mls qos trust {cos|dscp|ip-prec}
- Enables trusting the incoming packet based on its marking
no mls qos rewrite ip dscp - Enables DSCP transparency. The DSCP field in the packet is left unmodified

interface fa0/4
mls qos trust device cisco-phone
- Specifies that the Cisco IP Phone is a trusted device
mls qos map dscp-cos {dscp list} to {cos} - Modifies the DSCP-to-COS map
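For example, a hedged one-liner (entered in global configuration; the DSCP and COS values are illustrative) that maps DSCP 24 and 26 to COS 3:

mls qos map dscp-cos 24 26 to 3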

notes: AutoQoS

 

- Autoqos automates the deployment of quality of service (QOS) policies.
- Any existing QOS policies must be removed before the autoqos-generated polices are applied.
- Autoqos is supported only on the IP Plus image for low-end platforms.
- Ensure that autoqos is enabled on both sides of the network link.
- The bandwidth on both sides of the link must be the same; otherwise, a fragmentation size mismatch might occur, preventing the connection from being established.
- The autoqos feature cannot be configured on a frame-relay DLCI if a map class is attached to the DLCI.
- For frame-relay networks, fragmentation is configured using a delay of 10 milliseconds (ms) and a minimum fragment size of 60 bytes.

- Autoqos pre-requisites:
> CEF must be enabled on the interface/PVC.
> The interfaces must have IP addresses configured.
> The amount of bandwidth must be specified by using the "bandwidth" command.

- The bandwidth of the serial interface determines the speed of the link.
- The speed of the link in turn determines the configurations generated by the autoqos.
- Autoqos uses the interface bandwidth allocated at the time it is configured; bandwidth changes made after autoqos is executed are not reflected.

Autoqos for the enterprise feature consists of two configuration phases:
1.  Auto-Discovery (data collection)
>> Uses NBAR-based protocol discovery to detect the applications on the network and performs statistical analysis on the network traffic.

2.  Autoqos template generation and installation
>> This phase generates templates from the data collected during the Auto-Discovery phase and installs the templates.

- Class definitions for the enterprise autoqos:

[image: class definitions for the enterprise autoqos]
- The "auto discovery qos" command is not supported on sub-interfaces.
- The "auto qos voip" command is not supported on sub-interfaces.

Autoqos — VoIP
> Same as above, previous QOS policies have to be removed before running the autoqos-VoIP macro.
> All other requirements must be met too.
> The VoIP feature helps the provisioning of QoS for Voice over IP (VoIP) traffic.

Commands:

- Views the auto-discovery phase in progress, or displays the results of the data collected

sh auto discovery qos [interface]

- Displays the autoqos templates created for a specific interface, or for all interfaces

sh auto qos [interface]

interface s0/2
bandwidth {kbps}
- Optional but always recommended
auto discovery qos [trust] - Starts the Auto-Discovery phase
- [trust] Indicates that the DSCP markings of packets are trusted
no auto discovery qos - Stops the Auto-Discovery phase
auto qos - Generate the autoqos templates and installs it

interface s0/3
encapsulation frame
bandwidth {kbps}
frame-relay interface-dlci 100
auto qos voip [trust]
- Configures the autoqos — VoIP feature
- [trust] indicates that the DSCP markings of packets are trusted