What?
There are many instances where we want guest traffic to not touch our enterprise networks. From a security standpoint, quarantining guest traffic into a non-routed VLAN and terminating it in a DMZ provides a secure method for handling all of this untrusted traffic.
What if we could pick up guest traffic (no matter where it is) and tunnel it to a single place like our DMZ? You can – this is exactly what the Anchor WLC does for us.
By having the Anchor WLC live out in the DMZ, we are able to build an EoIP tunnel between our Foreign WLC and the Anchor WLC. This serves as a mechanism to securely transport traffic from the AP all the way to the DMZ – without it ever gaining visibility into the rest of the network.
How?
In the Cisco world, there are 2 main types of tunnels. The first one is CAPWAP, which is the tunneling mechanism used between APs and WLCs for Control and (sometimes) Data traffic to ride in. The second one is Ethernet over IP (EoIP), which the WLCs use to communicate with each other. This is the logical underpinning that allows WLCs to share information such as client and AP data, and overall just enables the WLCs to be “aware” of each other. We build these EoIP tunnels between WLCs to enable seamless roaming of clients between WLCs, and even to enable L3 roaming between WLCs.
Another great feature of the EoIP tunnels is that they allow us to take an SSID that is configured on our local (it’s actually called “Foreign” – but whatever) WLC and terminate it on another WLC. This provides great flexibility in what IP network we actually terminate the SSID on – especially in the case of guest networks.
The way we form these WLC relationships is with something called a “Mobility Group”. A Mobility Group is a bunch of WLCs that are “aware” of each other, share information such as AP and client statistics, and also allow us to terminate SSIDs onto a separate WLC.
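To put some shape on that, here is a rough sketch of how a Mobility Group is typically defined from the AireOS CLI – the group name, MAC address, and IP address below are just placeholders:

```
! Name this WLC's own ("internal") Mobility Group
config mobility group domain CORP

! Add a peer WLC to the member list (peer's MAC and management IP;
! a trailing group name is only needed if the peer lives in a different group)
config mobility group member add 00:1a:2b:3c:4d:5e 10.10.10.5

! Verify the member list and the state of the mobility (EoIP) tunnels
show mobility summary
```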
Within the Mobility Group, messages are shared amongst the group members to enable features like seamless roaming, AP load balancing, Anchoring SSIDs, and fail-over support for APs. Every AP is aware of every WLC in the Mobility Group and can fail over to a neighboring WLC in the event of an outage (assuming the AP has L3 access to the remaining WLCs in the Mobility Group).
Every time a client associates to an AP or performs a roam, the WLC sends a unicast message about it to each of the other Mobility Group members. As you can imagine, this can get EXTREMELY chatty in large-scale deployments. You can enable Multicast Messaging in these types of deployments, where the WLC sends a single message to the Multicast group and everyone in that group hears it. This is the preferred method for large-scale deployments as it reduces chatter and overall load on the WLCs.
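On AireOS, turning this on is a one-liner – something along these lines, with the multicast group address being purely an example:

```
! Send Mobility messages to a multicast group instead of unicasting
! a copy to every single member (the address below is just an example)
config mobility multicast-mode enable 239.255.1.1

! Confirm multicast messaging mode and the group address in use
show mobility summary
```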
Now that we’ve had an overview of what a Mobility Group is, where it’s used and why – let’s start approaching it from the other direction.
“Standard” Anchoring
In Enterprise environments, we typically see a single Anchor WLC that lives out in the DMZ somewhere. All of the Foreign WLCs will anchor their guest SSIDs to this WLC and life goes on as usual.
For starters, each WLC has its own “internal” Mobility Group name defined in its configuration. When we form Mobility Group memberships, we point the OTHER WLCs at THIS Mobility Group. This is why the Anchor is aimed at the Mobility Group “Anchor”, and the converse is true of the Foreign WLCs.
What you will notice here is that each of the Foreign WLCs has only the “Anchor” WLC in its Mobility Group member list. This means that WLCs A, B, and C all form an EoIP tunnel only to the Anchor WLC. Even though all 4 WLCs are in the same Mobility Group, Mobility Messages ARE NOT shared between the 3 Foreign WLCs.
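As a rough sketch (all names, MACs, and IPs are placeholders), the member lists in this “standard” design might look something like this:

```
! On the Anchor WLC - its own group is "Anchor" and it lists every Foreign WLC
config mobility group domain Anchor
config mobility group member add aa:aa:aa:aa:aa:01 10.1.1.11
config mobility group member add aa:aa:aa:aa:aa:02 10.1.2.11
config mobility group member add aa:aa:aa:aa:aa:03 10.1.3.11

! On each Foreign WLC - same group name, but ONLY the Anchor in its member list
config mobility group domain Anchor
config mobility group member add bb:bb:bb:bb:bb:01 192.0.2.10
```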
This typically works fine and life will go on. There is, however, one problem with this design.
It doesn’t scale well.
As of the date of this blog, Cisco has both the Catalyst and AireOS controllers on the market. The downside is that regardless of which platform you use, you are limited to the following:
- A WLC may only have 24 Members PER Mobility Group defined in its Member List
- A WLC may only have 72 entries in its Member List
The below picture is a sample of this very limitation.
You will notice in the list that we max out at 24 members of the “standard” Mobility Group before the WLC starts barking at us, and we have to move to a new Mobility Group name.
The issue here is that all the Foreign WLCs are anchoring against the Anchor WLC, so what happens at WLC #25? Where does this WLC anchor?
The short answer is: we simply create another Mobility Group (i.e., group name “standard1” in the above picture) and start pairing Foreign WLCs to the Anchor in a new group.
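Assuming the Anchor’s own group stays “standard” and the picture’s “standard1” becomes the group name for the next batch of Foreign WLCs, the pairing for WLC #25 might look roughly like this (MACs and IPs are placeholders):

```
! On Foreign WLC #25 - it now lives in the new group "standard1",
! and the Anchor (whose own group is "standard") is added cross-group
config mobility group domain standard1
config mobility group member add aa:aa:aa:aa:aa:0a 192.0.2.10 standard

! On the Anchor WLC - WLC #25 is added under the new group name "standard1"
config mobility group member add cc:cc:cc:cc:cc:19 10.1.25.11 standard1
```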
This is a perfectly valid config and will work just fine. After all, not many networks have dozens of WLCs anchoring guest traffic back to a single place…right?
For those of us who are lucky enough to walk into these types of accounts, it provides a true head-scratching moment, mostly around the following:
- What if I have 60 sites I need to Anchor to 1 or 2 WLCs? They can’t all live in the same Mobility Group after all…
- How do I maintain a common config across all my Anchors?
- How do I address the Mobility Group naming issue while staying on a standard?
“Reverse” Anchoring
This is where “Reverse” Anchoring can help navigate around these headaches. The only thing we are REALLY changing is that there is no longer a shared Mobility Group that we Anchor against (see the config sketch after this list). This solves a few of my nagging OCD points:
- By using unique, Foreign-specific Mobility Groups, we will never approach the 24-members-per-Mobility-Group limit
- It maintains congruence of configs if you are utilizing multiple Anchors for Redundancy.
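To make that a bit more concrete, here is a minimal sketch of the “reverse” pairing for a single site, assuming the Anchor’s own Mobility Group is named DMZ-Anchor and each site WLC keeps its own site-specific group name (all names, MACs, and IPs are placeholders):

```
! On the Foreign WLC for Site-A - it lives in its own, site-specific group
config mobility group domain Site-A
config mobility group member add aa:aa:aa:aa:aa:01 192.0.2.10 DMZ-Anchor

! On the Anchor WLC - each site is added under that site's own group name,
! so no single group name ever creeps toward the 24-member limit
config mobility group domain DMZ-Anchor
config mobility group member add bb:bb:bb:bb:bb:01 10.1.1.11 Site-A
config mobility group member add bb:bb:bb:bb:bb:02 10.2.1.11 Site-B
```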
Redundancy
While we are at it, let’s take a look at what our options really are for “Anchor Redundancy”.
- Anchor WLCs can be deployed as an SSO pair to give you box level redundancy
- Additional Anchor WLCs can be deployed as a standalone WLC for failover & client load balancing
For my money’s worth, I don’t see any real value in having an SSO pair on your Anchor WLCs. For the exact same amount of hardware and licensing, you can stand up a secondary Anchor. By having 2 discrete Anchors, you have the ability to scale up your guest counts and still achieve fail-over redundancy. By setting your Anchor priority values the same on your Foreign WLC, the clients will round-robin between the two WLCs. This not only gives you greater scale in the number of clients you can anchor, but it also provides non-stateful fail-over should one of the Anchors go down.
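As a sketch of what that looks like on the Foreign side (the WLAN ID and Anchor IPs are placeholders), the same guest WLAN simply gets two anchors, left at matching priority so clients round-robin between them:

```
! On the Foreign WLC - anchor guest WLAN 5 to both DMZ Anchors
! (the WLAN is disabled while the anchor list is changed)
config wlan disable 5
config wlan mobility anchor add 5 192.0.2.10
config wlan mobility anchor add 5 192.0.2.11
config wlan enable 5

! Verify both anchors are listed and their control/data paths are up
show mobility anchor
```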
This was written on 3/25/2020 while quarantining at home during the COVID-19 Pandemic. I finally had some time to sit down during my quarantine and put all of this on paper, as it’s been bouncing around my brain a lot lately. I hope everyone is staying safe and healthy, cheers!