I use RR DNS at the one site with two boosters, I'm happy with it. My first thought is, why would your WAP controller matter? Traffic to it is control, not proxy, and traffic to the boosters should remain best route - right?
After that, if you have any reasonable method for monitoring the traffic to your current boosters, use it (or check what you already have); that will tell you how big a pipe you need. E.g., if peak traffic on each of two boosters is under 500 Mb/s, one 1 Gb port should support both, virtualized. Otherwise, yes (and I would do this anyway; ports aren't that expensive), you will need a port per virtual machine.
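To make that sizing rule concrete, here's a minimal sketch of the arithmetic. The function name, the sample peak rates, and the 80% headroom factor are all my own assumptions, not anything from the product docs; swap in your actual monitoring numbers:

```python
# Rough capacity check: can N boosters share one uplink port?
# Peak rates below are hypothetical samples in Mb/s.

def port_can_carry(peak_rates_mbps, port_capacity_mbps=1000, headroom=0.8):
    """Return True if the combined peak traffic fits on one port,
    keeping some headroom so bursts don't saturate the link."""
    total = sum(peak_rates_mbps)
    return total <= port_capacity_mbps * headroom

# Two boosters each peaking under 400 Mb/s fit on a single 1 Gb port:
print(port_can_carry([380, 390]))   # True
# Two boosters each peaking near 500 Mb/s leave no headroom:
print(port_can_carry([480, 495]))   # False
```

The headroom factor is just a conservative fudge; without it, two boosters peaking at exactly 500 Mb/s each would nominally fill a 1 Gb port with nothing left for bursts.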
Finally, we use a Cisco Application Control Engine (ACE) to balance several virtual web servers. It's great, but it's a solution for high demand on a single rack full of servers, and (like most things Cisco) it's pricey. The whole point of boosters is to move traffic out to the sites, away from a central location. Unless you have a huge load at one site, this would be expensive overkill, although you could investigate some of the open source balancers. Or, you could always split your clients between boosters.
What I've considered is making tertiary boosters, meaning my server would sit in the same rack with a couple of boosters, which would sit between it and the site boosters. Clients would have the site booster listed first, then the RR central boosters. This would accomplish two things: adding a buffer in front of the server, and adding a stack for outside DNS traffic to point to, rather than the single server. I'll probably play with it this summer ;-)
Thanks for the response Berry!
Unfortunately, what I'm currently seeing doesn't jibe with what I'd expect to see with regard to "best route".
Not sure if this is due to the fact that our clients are wireless (and all the intricacies of our centralized wireless setup), or if it's due to the fact that our core gear is Cisco while our sites have HP switches without an enterprise switch OS.
RR DNS effectively splits our clients up between boosters (unless I'm missing something), so I like the idea of doing it as opposed to something like a hardware-based balancer or introducing an OSS software-based balancing scheme.
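For anyone unfamiliar with the mechanism: round-robin DNS is just multiple A records on one name, so the resolver hands different clients the list in rotated order. A minimal BIND-style zone fragment (hostnames and addresses here are made up, not from my actual setup) would look something like:

```
; Clients resolve booster.example.com and receive these
; records in rotated order, spreading them across boosters.
booster.example.com.  IN  A  10.0.1.11
booster.example.com.  IN  A  10.0.1.12
booster.example.com.  IN  A  10.0.1.13
```

Note this only distributes clients; it has no awareness of booster load or health, which is exactly the trade-off discussed below.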
What I saw prior to starting RR DNS was that if a booster was busy, the clients wouldn't move on to the secondary or tertiary boosters, but rather would just indicate "Waiting for booster..." since the primary booster was responding, albeit overloaded. The only thing that made a client start using a secondary/tertiary booster was non-response from the primary.
I'm not sure I fully understand the capability of boosters either. I've heard reference to pointing boosters at other boosters instead of at the server, but I'm not sure how that helps load balance in my case (I could see it if I had hundreds of boosters, though).
Additionally, I'm curious about the limits of a booster... I know the recommendation is 200-300 clients per booster, but is that for the random spare P4 machine you designate as a booster? Or can an actual server-class machine handle more? And with what tweaking (to threads, timeouts, etc.)?
I've looked in the docs for guidance, but have found it lacking aside from the basic setup scenarios.
Nothing like waking up a dead thread! However, annual budget allows for some hardware and infrastructure upgrades, so I'm revisiting my booster setup...
I've moved to a 12x SLES based booster RR DNS setup with all the boosters at my data center.
It has proven rock solid, but I'm a little annoyed by the physical appearance of the setup, and updating the boosters would be unnecessarily tedious if not for cSSHx.
12x SFF Dells sitting in the bottom of a rack doesn't look too bad, but I'm not sure it's the most efficient with regard to power.
And the 8U or so it takes up seems like it could easily be condensed if VMs could do the job.
I'm also a bit worried about the eventual death of that hardware. We have a few spares, but they won't last forever...
I chose this route because of the listed requirements for the booster (a dedicated NIC, and I thought a dedicated spindle, but I don't see that anymore). Now, a year later, there's an asterisk indicating the booster can be run as a VM, but none of our virtual hosts would accommodate 12 dedicated NICs at this point. I imagine any of them could be made to, though...
I'm curious whether the "dedicated NIC" requirement still applies in the VM scenario, and whether shared SAN storage suffers dramatically when a virtual host runs multiple boosters.
Anybody running this quantity of boosters in a virtual stack somewhere?
Are those who run virtualized boosters dedicating a NIC to each instance of the VM?
Can anybody at Filewave provide assurances that this many boosters would run nicely if they were all virtualized on the same host against local or iSCSI storage?
We recently reconfigured our booster setup at our main office.
We now have 3 Windows VMs running as boosters in our ESX environment.
These 3 boosters are "hidden" behind our hardware F3 load balancer. The clients connect to a DNS alias on the load balancer.
The load balancer then looks at the number of open connections to each Booster and redirects the new connection to the server with the least open connections.
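The selection rule our balancer applies is easy to illustrate. This is just a toy sketch of the least-connections logic in Python, not our actual appliance config; the booster names and connection counts are made up:

```python
# Least-connections selection: each new client goes to the
# booster that currently has the fewest open connections.

def pick_booster(open_connections):
    """open_connections maps booster name -> current open connection count."""
    return min(open_connections, key=open_connections.get)

conns = {"booster-vm1": 42, "booster-vm2": 17, "booster-vm3": 30}
print(pick_booster(conns))   # booster-vm2
```

Unlike RR DNS, this reacts to actual load: a busy or draining booster simply stops winning the comparison, which is also what makes maintenance easy.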
This gives us some "intelligence" in load balancing (as opposed to RR DNS) and some flexibility when doing maintenance on one of the servers.
Let me know if anyone wants more details (I will ask our network team, who made the setup) :)