Step 1 - Logging In
Once the credentials have been created, they are assigned to environment variables, making it simple to log in to the Azure CLI. From the CLI, you can create and manage your Azure resources. Run this command to log in to your Azure account:
az login -u $username -p $password
After you log in, you should get a block of JSON with details about your sign-in.
You can view the Azure account username with this command:
echo $username
You can view the Azure account password with this command:
echo $password
Step 2 - Create Virtual Network
Load balancing means evenly distributing incoming traffic across a group of backend resources that process those requests. This helps traffic loads scale and be processed effectively. In this lab we will build a public load balancer, which provides access to a balanced set of virtual machines: private IP addresses are translated to public IP addresses so the load balancer can handle internet traffic.
We are going to create a load balancer between two virtual machines as depicted here:
Let's first create a resource group called $resource:
az group create \
--name $resource \
--location eastus
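As an optional sanity check (not part of the original lab steps), you can confirm the resource group was provisioned before moving on:

```shell
# Confirm the resource group exists; prints "true" or "false"
az group exists --name $resource

# Or inspect its provisioning state (should print "Succeeded")
az group show \
  --name $resource \
  --query properties.provisioningState \
  --output tsv
```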
Typically you want to use a Standard SKU for load balancers, but we will use a Basic SKU for this exercise. The difference is we will create VMs in an availability set rather than across several availability zones.
Next, create a virtual network myVirtualNetwork with a backend subnet called myBESubnet:
az network vnet create \
--name myVirtualNetwork \
--resource-group $resource \
--location eastus \
--subnet-name myBESubnet \
--address-prefixes 10.1.0.0/16 \
--subnet-prefixes 10.1.0.0/24
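If you want to double-check the network before continuing (an optional step we are adding here), list the subnets in the new virtual network; myBESubnet should appear with its address prefix:

```shell
# List the subnets in the new virtual network
az network vnet subnet list \
  --vnet-name myVirtualNetwork \
  --resource-group $resource \
  --query "[].{name:name, prefix:addressPrefix}" \
  --output table
```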
Step 3 - Create Bastion and Security Group
Things get slightly more complicated here. We are going to create a bastion, which provides Remote Desktop Protocol (RDP) or Secure Shell (SSH) connectivity to your virtual machines over TLS. This protects your VMs from exposing RDP/SSH ports to the public internet while still providing a means to connect through RDP/SSH. It keeps your VMs private.
Let's first create a public IP address for the bastion we are about to create. For security and availability reasons, we are likely to use a Standard SKU in practice instead of Basic.
az network public-ip create \
--name publicBastionIp \
--resource-group $resource \
--sku Standard
After that, create a subnet called AzureBastionSubnet inside our virtual network for the bastion service. Note that this exact subnet name is required by the bastion service we are about to create:
az network vnet subnet create \
--name AzureBastionSubnet \
--vnet-name myVirtualNetwork \
--address-prefixes 10.1.1.0/24 \
--resource-group $resource
Create the bastion service. Note this may take several minutes:
az network bastion create \
--name myBastionHost \
--resource-group $resource \
--public-ip-address publicBastionIp \
--vnet-name myVirtualNetwork \
--location eastus
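Because the bastion can take several minutes to deploy, it can be handy to check on it (an optional step we are adding, not part of the original lab):

```shell
# Check that the bastion host finished provisioning (should print "Succeeded")
az network bastion show \
  --name myBastionHost \
  --resource-group $resource \
  --query provisioningState \
  --output tsv
```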
There are a few more administrative steps we need to take. We next have to create a network security group (NSG), which filters traffic and has rules on inbound and outbound traffic:
az network nsg create \
--name myNetworkSecurityGroup \
--resource-group $resource
Now let's add a rule to filter traffic. This is done with the az network nsg rule create command, as shown here:
az network nsg rule create \
--name myHTTPRule \
--nsg-name myNetworkSecurityGroup \
--resource-group $resource \
--protocol '*' \
--direction inbound \
--source-address-prefix '*' \
--source-port-range '*' \
--destination-address-prefix '*' \
--destination-port-range 80 \
--access allow \
--priority 200
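To verify the rule landed where you expect (an optional check), list the rules on the security group; myHTTPRule should appear with priority 200:

```shell
# List the rules attached to the network security group
az network nsg rule list \
  --nsg-name myNetworkSecurityGroup \
  --resource-group $resource \
  --output table
```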
Step 4 - Configure VMs
We now need to create network interfaces for our two virtual machines. We will achieve this using the az network nic create command. As stated earlier, we are going to load balance between just two machines for this exercise:
az network nic create \
--name myVMNic1 \
--resource-group $resource \
--vnet-name myVirtualNetwork \
--subnet myBESubnet \
--network-security-group myNetworkSecurityGroup
az network nic create \
--name myVMNic2 \
--resource-group $resource \
--vnet-name myVirtualNetwork \
--subnet myBESubnet \
--network-security-group myNetworkSecurityGroup
We are not quite ready to build the two virtual machines yet. Let's create an availability set, which is a redundancy configuration so that if one virtual machine goes down, the other can take over. We will create it using the az vm availability-set create command:
az vm availability-set create \
--name myAvailabilitySet \
--location eastus \
--resource-group $resource
Let's create the two virtual machines. Notice that we configure the network interface for each by using the --nics argument. This attaches our VMs to the myBESubnet we created earlier:
az vm create \
--resource-group $resource \
--name myVirtualMachine1 \
--nics myVMNic1 \
--image UbuntuLTS \
--admin-username azureuser \
--public-ip-sku Standard \
--availability-set myAvailabilitySet \
--custom-data vm_settings.yaml \
--no-wait
az vm create \
--resource-group $resource \
--name myVirtualMachine2 \
--nics myVMNic2 \
--image UbuntuLTS \
--admin-username azureuser \
--public-ip-sku Standard \
--custom-data vm_settings.yaml \
--availability-set myAvailabilitySet
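Since the first VM was created with --no-wait, it is worth confirming both machines are up before moving on. An optional check (not part of the original lab); the --show-details flag includes runtime information such as power state:

```shell
# Show the power state of both VMs (should print "VM running" for each)
az vm list \
  --resource-group $resource \
  --show-details \
  --query "[].{name:name, state:powerState}" \
  --output table
```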
Step 5 - Configure Load Balancer
We have reached final approach for our flight landing. After all those other administrative tasks, it's time to get the public load balancer spun up. First, create a public IP address for the load balancer:
az network public-ip create \
--name myPublicIP \
--resource-group $resource \
--sku Basic
Now let's create the load balancer resource:
az network lb create \
--name myPublicLoadBalancer \
--resource-group $resource \
--public-ip-address myPublicIP \
--sku Basic \
--frontend-ip-name myFrontEnd \
--backend-pool-name myBackEndPool
Before we use the load balancer, we need to create a rule for it specifying the tcp protocol as well as the ports and the idle time-out, which we will set to 20 minutes (the --idle-timeout value is expressed in minutes):
az network lb rule create \
--resource-group $resource \
--lb-name myPublicLoadBalancer \
--name myLoadBalancerRule \
--protocol tcp \
--frontend-port 80 \
--backend-port 80 \
--frontend-ip-name myFrontEnd \
--backend-pool-name myBackEndPool \
--idle-timeout 20
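This lab keeps the rule minimal, but in practice a load-balancing rule is usually paired with a health probe so traffic is only routed to healthy VMs. A hedged sketch of that extra step (the probe name myHealthProbe is our own choice, not part of the lab); you would create the probe before the rule and reference it there with --probe-name:

```shell
# Create a TCP health probe on port 80 (probe name is hypothetical)
az network lb probe create \
  --resource-group $resource \
  --lb-name myPublicLoadBalancer \
  --name myHealthProbe \
  --protocol tcp \
  --port 80
```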
Next, let's add the VMs to the backend pool by attaching their network configurations to the load balancer:
az network nic ip-config address-pool add \
--address-pool myBackEndPool \
--ip-config-name ipconfig1 \
--nic-name myVMNic1 \
--resource-group $resource \
--lb-name myPublicLoadBalancer
az network nic ip-config address-pool add \
--address-pool myBackEndPool \
--ip-config-name ipconfig1 \
--nic-name myVMNic2 \
--resource-group $resource \
--lb-name myPublicLoadBalancer
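To confirm both NICs actually joined the pool (an optional check we are adding), inspect the backend pool; you should see two backend IP configuration IDs:

```shell
# List the IP configurations registered in the backend pool
az network lb address-pool show \
  --resource-group $resource \
  --lb-name myPublicLoadBalancer \
  --name myBackEndPool \
  --query "backendIpConfigurations[].id" \
  --output tsv
```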
Let's also apply the CustomScript extension to our VMs. This allows us to easily run scripts on our virtual machines, which in this case will be used to install NGINX as a web server. We will apply the script with the az vm extension set command:
az vm extension set \
--publisher Microsoft.Azure.Extensions \
--name CustomScript \
--resource-group $resource \
--vm-name myVirtualMachine1 \
--settings '{ "fileUris": ["https://raw.githubusercontent.com/Azure-Samples/compute-automation-configurations/master/automate_nginx.sh"], "commandToExecute": "./automate_nginx.sh"}'
az vm extension set \
--publisher Microsoft.Azure.Extensions \
--name CustomScript \
--resource-group $resource \
--vm-name myVirtualMachine2 \
--settings '{ "fileUris": ["https://raw.githubusercontent.com/Azure-Samples/compute-automation-configurations/master/automate_nginx.sh"], "commandToExecute": "./automate_nginx.sh"}'
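If NGINX doesn't come up, the extension's provisioning state is the first thing to check. An optional troubleshooting step (not part of the original lab), shown here for the first VM:

```shell
# Verify the CustomScript extension provisioned successfully (should print "Succeeded")
az vm extension show \
  --resource-group $resource \
  --vm-name myVirtualMachine1 \
  --name CustomScript \
  --query provisioningState \
  --output tsv
```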
Print the public IP address and copy it into a browser. You should land on a page with a greeting from one of the virtual machines:
az network public-ip show \
--resource-group $resource \
--name myPublicIP \
--query ipAddress \
--output tsv
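If you'd rather stay in the terminal, you can combine the lookup with curl (an optional one-liner we are adding, not part of the original lab):

```shell
# Capture the load balancer's public IP and fetch the page from the shell
ip=$(az network public-ip show \
  --resource-group $resource \
  --name myPublicIP \
  --query ipAddress \
  --output tsv)
curl "http://$ip"
```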
That was a lot of work, but we reached the end! Your virtual machines are now at work serving up the web page greeting. The load balancer acts as a proxy, serving the page from one of the two virtual machines in the availability set. Under a heavy load of requests, it would distribute those requests as effectively as possible between the allocated virtual machines.