Traffic Storm-Control: a tiny detail to keep in mind

When you configure Traffic Storm Control on Cisco switch ports using a bandwidth percentage as the threshold level, be aware that this percentage is calculated on the basis of the negotiated bandwidth, not the hardware port type. In other words, you may have a 1000BASE-T interface configured something like this:

interface GigabitEthernet1/0/6
 storm-control broadcast level 1.00
 storm-control action trap

And you might think that the port will start dropping broadcast traffic when it reaches a 10Mbps rate (1% of 1000Mbps). That is true if the speed actually gets negotiated at 1000Mbps. However, if for some reason the speed gets negotiated down to, say, 100Mbps, drops will start at a 1Mbps rate, which is 1% of 100Mbps.
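The arithmetic is simple enough to sketch. This is a hypothetical helper, not an IOS feature: the suppression threshold in Mbps is the configured level applied to whatever speed was actually negotiated.

```python
def storm_threshold_mbps(negotiated_mbps: float, level_percent: float) -> float:
    """Storm-control suppression threshold implied by a percentage level.

    The level is applied to the negotiated speed, not the hardware port type.
    """
    return negotiated_mbps * level_percent / 100.0

# The same "storm-control broadcast level 1.00" line, two different outcomes:
print(storm_threshold_mbps(1000, 1.00))  # port negotiated at 1000 Mbps -> 10.0
print(storm_threshold_mbps(100, 1.00))   # same port negotiated at 100 Mbps -> 1.0
```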

The documentation says, “The traffic storm control level is a percentage of the total available bandwidth of the port.” This available bandwidth can be seen in the show interface command output.

When the port is at 1000Mbps:

#sh int gi1/0/27 | inc Giga|BW
GigabitEthernet1/0/27 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is 6cfa.8953.fd1b (bia 6cfa.8953.fd1b)
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,

When it is at 100Mbps:

#sh int gi1/0/29 | inc Giga|BW
GigabitEthernet1/0/29 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is 6cfa.8953.fd1d (bia 6cfa.8953.fd1d)
MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
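If you want to check this across many ports, the BW field is easy to scrape from the show interface output. A quick sketch; the regex assumes the Kbit/sec format shown above:

```python
import re

def bw_mbps(show_interface_output: str) -> float:
    """Extract the available bandwidth (in Mbps) from 'show interface' output."""
    match = re.search(r"BW (\d+) Kbit/sec", show_interface_output)
    if match is None:
        raise ValueError("no BW field found")
    return int(match.group(1)) / 1000.0

sample = "MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,"
# A 1% storm-control level on this port kicks in at 1.0 Mbps:
print(bw_mbps(sample) * 0.01)
```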

VRRP troubleshooting case

Got an interesting troubleshooting case. Two Layer-3 switches with VRRP configured on their downlinks:


The diagram shows only three downlinks with VRRP; the actual number is around 100.

SW2, the VRRP Backup under normal conditions, was reporting VRRP flapping from time to time, becoming Master and then going back to the Backup state:

%VRRP-6-STATECHANGE: Vl366 Grp 166 state Backup -> Master
%VRRP-6-STATECHANGE: Vl60 Grp 60 state Backup -> Master
%VRRP-6-STATECHANGE: Vl673 Grp 73 state Backup -> Master
%VRRP-6-STATECHANGE: Vl479 Grp 79 state Backup -> Master

%VRRP-6-STATECHANGE: Vl366 Grp 166 state Master -> Backup
%VRRP-6-STATECHANGE: Vl60 Grp 60 state Master -> Backup
%VRRP-6-STATECHANGE: Vl673 Grp 73 state Master -> Backup
%VRRP-6-STATECHANGE: Vl479 Grp 79 state Master -> Backup

The clue that led me to solve the case was that almost all the flaps were occurring between 9:00 and 18:00. The Layer-3 interfaces had traffic shaping configured:

interface Vlan366
  ip address x.x.x.73
  vrrp 166 ip x.x.x.73
  vrrp 166 preempt delay minimum 60
  vrrp 166 priority 101
  service-policy input Limitto2mbps
  service-policy output Limitto2mbps

Looking at the SNMP monitoring system, I found that the %VRRP-6-STATECHANGE syslog message timestamps matched the times when the traffic on a given interface reached the shaping limit. What was actually happening: at those moments the policy-map started dropping traffic, and the VRRP advertisements that SW1 was sending to SW2 were occasionally dropped as well. After missing enough consecutive advertisements, SW2 declared itself Master, then received the next VRRP advertisement from SW1 and switched back to Backup.
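For reference, a Backup does not flip on a single lost packet: per RFC 3768 it waits the Master down interval, which is three advertisement intervals plus a priority-based skew. A sketch of that arithmetic, assuming the default 1-second advertisement interval and a default Backup priority of 100 (SW2's actual priority is not shown in the config above):

```python
def master_down_interval(advert_interval_s: float, backup_priority: int) -> float:
    """RFC 3768 Master_Down_Interval: 3 * Advertisement_Interval + Skew_Time."""
    skew_time = (256 - backup_priority) / 256.0
    return 3 * advert_interval_s + skew_time

# With 1-second advertisements and Backup priority 100, SW2 declares itself
# Master after roughly 3.6 seconds without hearing from SW1:
print(round(master_down_interval(1.0, 100), 3))  # -> 3.609
```

So each flap here meant several consecutive VRRP advertisements in a row were lost to the shaper, not just one.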

So excluding VRRP (IP protocol 112, to and from its multicast address) from traffic shaping by adding a deny statement to the traffic-shaping ACLs

SW1#sh ip access-lists TS-ACL

Extended IP access list TS-ACL
4 deny 112 host x.x.x.72 host 224.0.0.18
10 permit ip any any (4131183 matches)

solved the problem.

A nice thing to keep in mind: next time you configure traffic shaping, make sure you don’t cause problems for your control-plane traffic.