A few days ago, someone suggested I write this article, as it seems many people are struggling with this problem. The solution I’m going to explain below addresses a situation typically found in “home labs”, where the internet connection doesn’t come with multiple IP addresses. That doesn’t mean it’s only valid for home use or testing scenarios, though. Given that IPv4 addresses are almost depleted, it’s a good thing not to waste these valuable resources when it isn’t necessary.
Basically, what I’m going to explain is how you can use a KEMP Load Master to publish multiple services/workloads to the internet using only a single (external) IP address. In the example below, I will be publishing Exchange, Office Web Apps and Lync onto the internet.
The following image depicts what the network in my example looks like. It also displays the different domain names and IP addresses that I’m using. Note that – although I perfectly could – I’m not connecting the Load Master directly to the internet. Instead, I mapped an external IP address on my router/firewall to the Load Master:
How it works
The principle behind all this is simple: whenever a request ‘hits’ the Load Master, it reads the host header that was used to connect to the server and uses it to determine where to send the request. Given that most of the applications we are publishing use SSL, we have to decrypt the content at the Load Master, which means we will be configuring the Load Master at Layer 7. Because we need to decrypt traffic, there’s also a ‘problem’ we need to work around: the workloads we are publishing to the internet all use different host names, yet because we only use a single Virtual Service, we can assign only a single certificate to it. Therefore, you have to make sure that the certificate you configure in the Load Master either includes all published host names as a Subject (Alternative) Name or is a wildcard certificate, which automatically covers all hosts for a given domain. The latter option is not valid if multiple different domain names are involved.
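Conceptually, the host-header lookup the Load Master performs after decrypting the traffic boils down to something like the following sketch. The host names come from the example above; the SubVS names are made up for illustration and don’t correspond to anything the Load Master actually exposes:

```python
# Illustrative only: a table mapping published host names to the
# Sub Virtual Service that should handle them. After the Load Master
# terminates SSL, the plain-text Host header can be inspected.
SUBVS_TARGETS = {
    "outlook.exchangelab.be": "exchange-subvs",
    "meet.exchangelab.be":    "lync-subvs",
    "dialin.exchangelab.be":  "lync-subvs",
    "owa.exchangelab.be":     "owa-subvs",
}

def route(host_header):
    """Return the target for a request, based on its Host header."""
    # Strip an optional :port suffix and normalize case before the lookup.
    host = host_header.split(":")[0].lower()
    return SUBVS_TARGETS.get(host)

print(route("outlook.exchangelab.be:443"))  # exchange-subvs
print(route("OWA.exchangelab.be"))          # owa-subvs
```

This also illustrates why a single certificate has to cover every published name: the decryption happens once, at the single Virtual Service, before any of this per-host routing takes place.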
How the Load Master handles this ‘problem’ is not new – far from it. The same principle is used in every reverse proxy and is also how our beloved – but sadly discontinued – TMG used to handle such scenarios. You do not necessarily need to enable the Load Master’s ESP capabilities.
Step 1: Creating Content Rules
First, we will start by creating the content rules which the Load Master will use to determine where to send the requests to. In this example we will be creating rules for the following host names:
- outlook.exchangelab.be (Exchange)
- meet.exchangelab.be (Lync)
- dialin.exchangelab.be (Lync)
- owa.exchangelab.be (Office Web Apps)
- Log in to the Load Master, navigate to Rules & Checking and click > Content Rules:
- Click Create New…
- On the Create Rule page, enter the details as follows. Note that the header field is set to “host”:
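For reference, the four content rules might end up looking something like this. The rule names are arbitrary; the match strings follow the same pattern style as the example near the end of this article:

```
Rule Name      Header   Match String
rule_outlook   host     ^outlook.exchangelab.be*
rule_meet      host     ^meet.exchangelab.be*
rule_dialin    host     ^dialin.exchangelab.be*
rule_owa       host     ^owa.exchangelab.be*
```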
Step 2: creating a new Virtual Service
This step is fairly easy. We will be creating a new virtual service which uses the internal IP address that is mapped to the external IP address. If you have already created a virtual service previously, you can skip this step.
- In the Load Master, click Virtual Services and then click > Add New:
- Specify the internal IP address which you have previously mapped to an external IP address
- Specify port TCP 443
- Click Add this Virtual Service
Step 3: Configuring the Virtual Service
So how does the Load Master differentiate between the different host headers? Content rules. Content rules allow you to use regular expressions which the Load Master uses to examine incoming requests. If one of the expressions matches, the Load Master forwards the traffic to the real server that has been configured with that content rule.
First, we need to enable proper SSL handling by the Load Master:
- Under SSL Properties, click the checkbox next to Enabled.
- When presented with a warning about a temporary self-signed certificate, click OK.
- Select the box next to Reencrypt. This will ensure that traffic leaving the Load Master is encrypted again before being sent to the real servers. Although some services might support SSL offloading (thus not re-encrypting traffic), that is beyond the scope of this article and will not be discussed.
- Select HTTPS under Rewrite Rules.
Before moving to the next step, we will also need to configure the (wildcard) certificate to be used with this Virtual Service:
- Next to Certificates, click Add New
- Click Import Certificate and follow the steps to import the wildcard certificate into the Load Master. These steps include selecting a certificate file, specifying a password for the certificate file (if applicable) and setting an identifying name for the certificate (e.g. wildcard).
- Click Save
- Click “OK” in the confirmation prompt.
- Under Operations, click the dropdown menu VS to Add and select the virtual service.
- Now click Add VS
You’ve now successfully configured the certificate for the main Virtual Service. This will ensure the Load Master can decrypt and analyze traffic sent to it. Let’s move on to the next step, in which we will define the “Sub Virtual Services”.
Step 4: Adding Sub Virtual Services
While still on the properties page for the (main) Virtual Service, we will now add new ‘Sub Virtual Services’. Having a Sub Virtual Service per workload allows us to define different real servers per SubVS as well as a different health check. This is the key functionality which allows us to have multiple different workloads live under a single ‘main’ Virtual Service.
- Under Real Servers click Add SubVS…
- Click OK in the confirmation window.
- A new SubVS will now have appeared. Click Modify and configure the following parameters:
- Nickname (makes it easier to differentiate from other SubVSs)
- Persistence options (if necessary)
- Real Server(s)
Repeat the steps above for each of the workloads you want to publish.
Note: a word of warning is needed here. Typically, you would add your ‘real servers’ using the same TCP port as the main Virtual Service, TCP 443 in this case. However, if you are also using the Load Master as a reverse proxy for Lync, you will need to make sure your Lync servers are added using port 4443 instead.
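As a sketch of where you might end up, the per-SubVS real-server definitions could look like this (the IP addresses are made up for illustration; only the port numbers matter here):

```
Exchange SubVS          real server 192.168.1.10   port 443
Office Web Apps SubVS   real server 192.168.1.30   port 443
Lync SubVS              real server 192.168.1.20   port 4443   <- Lync reverse proxy port
```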
Once you have configured the Sub Virtual Services, you still need to assign one of the content rules to it. Before you’re able to do so, you first have to enable Content Rules.
Step 5: enabling and configuring content rules
In the properties of the main Virtual Service, under Advanced Properties, click Enable next to Content Switching. You will notice that this option only becomes available after adding your first SubVS.
Once Content Switching is enabled, we need to assign the appropriate rules to each SubVS.
- Under SubVSs, click None in the Rules column for the SubVS you are configuring. For example, if you want to configure the content rule for the Exchange SubVS:
- On the Rule Management page, select the appropriate Content Matching rule (created earlier) from the selection box and then click Add:
- Repeat these steps for each Sub Virtual Service you created earlier
You can now test the configuration by pointing your browser at one of your published services or by using one of their client applications. If all is well, you should now be able to reach Exchange, Lync and Office Web Apps – all using the same external IP address.
As you can see, there’s a fair amount of work involved, but all in all it’s relatively straightforward to configure. In this example we published Exchange, Lync and Office Web Apps, but you could just as easily add other services too. Especially with the many load balancing options you have with Exchange 2013, you could for instance use multiple additional Sub Virtual Services for Exchange alone. To get you started, here’s what the content rules for that would look like:
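To make the idea concrete, such rules would match on the URL path rather than the host header, one rule per Exchange virtual directory. The sketch below uses plain Python regex to show the matching logic; the directory names are the well-known Exchange 2013 virtual directories, but the rule names and exact pattern syntax are illustrative, not the Load Master’s own:

```python
import re

# Hypothetical path-based content rules, one per Exchange workload.
exchange_rules = {
    "OWA":          r"^/owa",
    "ECP":          r"^/ecp",
    "EWS":          r"^/ews",
    "OAB":          r"^/oab",
    "ActiveSync":   r"^/Microsoft-Server-ActiveSync",
    "Autodiscover": r"^/autodiscover",
}

def match_workload(path):
    """Return the first workload whose rule matches the request path."""
    for name, pattern in exchange_rules.items():
        if re.match(pattern, path, re.IGNORECASE):
            return name
    return None

print(match_workload("/owa/auth/logon.aspx"))                 # OWA
print(match_workload("/Microsoft-Server-ActiveSync?Cmd=Sync")) # ActiveSync
```

Each of these rules would then be attached to its own Sub Virtual Service, exactly as we did for the host-header rules earlier.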
Note: if you are defining multiple Sub Virtual Services for e.g. Exchange, you don’t need to configure a Sub Virtual Service which uses the content rule for the Exchange domain name (“^outlook.domain.com*”). If you do, you’d find that – depending on the order of the rules – your workload-specific virtual services would remain unused.
I hope you enjoyed this article!