ELB automatically distributes incoming traffic across backend servers according to the forwarding policies you configure. Adding a load balancer to an AS group is an easy way to set up a load-balanced application. On Huawei Cloud, the ELB and AS services can work together so that inbound traffic is distributed only to healthy ECS instances. Using the two services together brings some important advantages.
This tutorial shows how to use the ELB and AS services together to build a load-balanced application. Instead of the console, we will use Terraform, an Infrastructure as Code (IaC) tool, throughout.
Prerequisites for this tutorial:
KooCLI is a command-line tool for managing your Huawei Cloud services. In the steps below, I will use KooCLI to bind a public IP to an ECS instance so that I can run a stress test. This step is not required, and you can also perform it from the console.
I also created a private image with NGINX preinstalled; you can click the link for details on that step. This way I will not have to install dependencies such as NGINX on each new ECS instance. When complete, your architecture should look similar to the following diagram:
Here are the steps in order:
Before starting the steps, let's declare the Terraform provider requirements and configure the huaweicloud provider.
terraform {
required_providers {
huaweicloud = {
source = "huaweicloud/huaweicloud"
version = ">= 1.20.0"
}
}
}
provider "huaweicloud" {
region = "ap-southeast-3"
access_key = "AK"
secret_key = "SK"
}
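Hardcoding the AK/SK in the provider block is fine for a quick demo but risky in real use. One alternative, sketched here and not part of the original tutorial, is to pass the credentials in as input variables so they stay out of version control:

```hcl
variable "access_key" {
  type      = string
  sensitive = true # keeps the value out of plan output
}

variable "secret_key" {
  type      = string
  sensitive = true
}

provider "huaweicloud" {
  region     = "ap-southeast-3"
  access_key = var.access_key
  secret_key = var.secret_key
}
```

You can then supply the values through a terraform.tfvars file (excluded from source control) or -var flags.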
Step 1
First, we create a VPC named demo_vpc with the CIDR block 192.168.0.0/16. All of the project's resources will live in this VPC.
resource "huaweicloud_vpc" "vpc" {
name = "demo_vpc"
cidr = "192.168.0.0/16"
}
Step 2
Let's create a subnet inside the VPC we created. The entire project will run in this subnet. We set the subnet name, CIDR block, and gateway IP, and attach it to the VPC via the vpc_id argument.
resource "huaweicloud_vpc_subnet" "subnet" {
name = "demo_subnet"
cidr = "192.168.0.0/24"
gateway_ip = "192.168.0.1"
vpc_id = huaweicloud_vpc.vpc.id
}
Step 3
At this stage, there are four different resource blocks. With the first (secgroup), we create the security group and delete the default rules; there are separate rule sets for inbound and outbound traffic. In the second block (secgroup_rule_v4_egress), we allow all outbound IPv4 traffic. In the third block (secgroup_rule_v4), we allow all inbound IPv4 traffic on port 80. In the last block (allow_ssh), we allow SSH connections only from a specific IP address (in this example, my own).
resource "huaweicloud_networking_secgroup" "secgroup" {
name = "demo-sg"
description = "Security Group for Demo App"
delete_default_rules = true
}
resource "huaweicloud_networking_secgroup_rule" "secgroup_rule_v4_egress" {
security_group_id = huaweicloud_networking_secgroup.secgroup.id
direction = "egress"
ethertype = "IPv4"
remote_ip_prefix = "0.0.0.0/0"
}
resource "huaweicloud_networking_secgroup_rule" "secgroup_rule_v4" {
security_group_id = huaweicloud_networking_secgroup.secgroup.id
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
ports = "80"
remote_ip_prefix = "0.0.0.0/0"
}
resource "huaweicloud_networking_secgroup_rule" "allow_ssh" {
security_group_id = huaweicloud_networking_secgroup.secgroup.id
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 22
port_range_max = 22
remote_ip_prefix = "159.146.72.127/32"
}
Step 4
In this step, we create a public IP with 20 Mbit/s of bandwidth, billed by traffic. With traffic-based billing, you can increase the bandwidth size up to 100 Mbit/s and still be billed at the same price.
resource "huaweicloud_vpc_eip" "traffic_20" {
publicip {
type = "5_bgp"
}
bandwidth {
share_type = "PER"
name = "bw_20"
size = 20
charge_mode = "traffic"
}
}
Step 5
In this step, we have two resource blocks. With the first (demo_lb), we create a load balancer with the ELB service and place it in the subnet we created. With the second (traffic_20), we bind the public IP address we created to the load balancer.
resource "huaweicloud_lb_loadbalancer" "demo_lb" {
name = "demo-lb"
vip_subnet_id = huaweicloud_vpc_subnet.subnet.ipv4_subnet_id
}
resource "huaweicloud_vpc_eip_associate" "traffic_20" {
public_ip = huaweicloud_vpc_eip.traffic_20.address
port_id = huaweicloud_lb_loadbalancer.demo_lb.vip_port_id
}
Step 6
In this step, we add a listener that accepts HTTP requests on port 80 of the load balancer.
resource "huaweicloud_lb_listener" "demo_lb_listener" {
protocol = "HTTP"
protocol_port = "80"
loadbalancer_id = huaweicloud_lb_loadbalancer.demo_lb.id
}
Step 7
In this step, we create the backend server group (pool) and configure its routing policy. The Round Robin algorithm is used for this application.
resource "huaweicloud_lb_pool" "demo_lb_policy" {
protocol = "HTTP"
lb_method = "ROUND_ROBIN"
listener_id = huaweicloud_lb_listener.demo_lb_listener.id
}
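The tutorial relies on ELB sending traffic only to healthy backend servers. To make those health checks explicit, you can optionally attach a health monitor to the pool. This is an addition of mine, sketched from the provider's huaweicloud_lb_monitor resource; verify the arguments against the provider documentation for your version:

```hcl
resource "huaweicloud_lb_monitor" "demo_lb_monitor" {
  pool_id     = huaweicloud_lb_pool.demo_lb_policy.id
  type        = "HTTP"
  url_path    = "/" # probe the NGINX index page
  delay       = 5   # seconds between health checks
  timeout     = 3   # seconds before a single check is considered failed
  max_retries = 3   # failed checks before a backend is marked unhealthy
}
```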
Step 8
In this step, we create the AS configuration that the AS group will use. First, we define the ECS instances: which flavor they will use, which image, and so on. There is also a simple user_data script, stored in init.txt, which runs when a new instance is created. It replaces a placeholder in index.html with the private IP address of the ECS; it is nothing more than a simple sed command. We also configure the system disk.
#!/bin/bash
sed -i "s/IP_ADDRESS/`hostname -I`/g" /var/www/html/index.html
resource "huaweicloud_as_configuration" "demo_as_config" {
scaling_configuration_name = "demo_as_config"
instance_config {
image = "cc2b52d9-5f29-44eb-a318-df879adfe323"
flavor = "s3.large.2"
key_name = "ga-kp"
security_group_ids = [huaweicloud_networking_secgroup.secgroup.id]
user_data = file("init.txt")
disk {
size = 40
volume_type = "SAS"
disk_type = "SYS"
}
}
}
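To see exactly what the user_data sed command does, here is a minimal local reproduction. The file path and IP address are stand-ins for illustration, not the values used on the real instances:

```shell
# Create a page containing the placeholder, as it exists in the NGINX image
echo "Machine Private IP Address = IP_ADDRESS" > /tmp/index.html
# Replace the placeholder in place, as the user_data script does with `hostname -I`
sed -i "s/IP_ADDRESS/192.168.0.42/g" /tmp/index.html
cat /tmp/index.html
```

Note that the -i (in-place) flag behaves slightly differently on macOS sed; the instances here run Linux.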
Step 9
At this stage, we configure the AS group settings. We specify the desired, minimum, and maximum numbers of instances. We also attach the load balancer to the AS group, so incoming traffic will be distributed to the backend servers created by AS.
resource "huaweicloud_as_group" "demo_as_group" {
scaling_group_name = "demo_as_group"
scaling_configuration_id = huaweicloud_as_configuration.demo_as_config.id
desire_instance_number = 1
min_instance_number = 0
max_instance_number = 3
vpc_id = huaweicloud_vpc.vpc.id
delete_publicip = true
delete_instances = "yes"
networks {
id = huaweicloud_vpc_subnet.subnet.id
}
lbaas_listeners {
pool_id = huaweicloud_lb_pool.demo_lb_policy.id
protocol_port = huaweicloud_lb_listener.demo_lb_listener.protocol_port
}
}
Step 10
In this step, we create two alarm rules based on CPU usage with the Cloud Eye service; one is for scaling up and the other for scaling down. The first alarm is triggered when average CPU usage is 45% or higher. The second is triggered when CPU usage drops below 30%.
resource "huaweicloud_ces_alarmrule" "alarm_rule_scale_up" {
alarm_name = "as_alarm_rule"
metric {
namespace = "SYS.AS"
metric_name = "cpu_util"
dimensions {
name = "AutoScalingGroup"
value = huaweicloud_as_group.demo_as_group.id
}
}
condition {
period = 1
filter = "average"
comparison_operator = ">="
value = 45
unit = "%"
count = 1
}
alarm_actions {
type = "autoscaling"
notification_list = []
}
}
resource "huaweicloud_ces_alarmrule" "alarm_rule_scale_down" {
alarm_name = "as_alarm_rule_scale_down"
metric {
namespace = "SYS.AS"
metric_name = "cpu_util"
dimensions {
name = "AutoScalingGroup"
value = huaweicloud_as_group.demo_as_group.id
}
}
condition {
period = 1
filter = "average"
comparison_operator = "<"
value = 30
unit = "%"
count = 1
}
alarm_actions {
type = "autoscaling"
notification_list = []
}
}
Step 11
In this step, the scaling policy actions are specified. If CPU usage is 45% or more, one new instance is added. If CPU usage falls below 30%, the number of instances is set back to one.
resource "huaweicloud_as_policy" "demo_aspolicy_add" {
scaling_policy_name = "cpu_aspolicy"
scaling_policy_type = "ALARM"
scaling_group_id = huaweicloud_as_group.demo_as_group.id
alarm_id = huaweicloud_ces_alarmrule.alarm_rule_scale_up.id
cool_down_time = 300
scaling_policy_action {
operation = "ADD"
instance_number = 1
}
}
resource "huaweicloud_as_policy" "demo_aspolicy_set" {
scaling_policy_name = "cpu_aspolicy_set"
scaling_policy_type = "ALARM"
scaling_group_id = huaweicloud_as_group.demo_as_group.id
alarm_id = huaweicloud_ces_alarmrule.alarm_rule_scale_down.id
cool_down_time = 300
scaling_policy_action {
operation = "SET"
instance_number = 1
}
}
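Before applying, you can optionally add an output block so that terraform apply prints the load balancer's public IP directly, instead of looking it up in the console afterwards (this block is my addition, not part of the original configuration):

```hcl
output "elb_public_ip" {
  description = "Public IP bound to the load balancer"
  value       = huaweicloud_vpc_eip.traffic_20.address
}
```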
Up to this point, we have only defined the infrastructure. Let's deploy it: run terraform init once to initialize the working directory and download the provider, then run terraform apply and answer "yes" at the confirmation prompt.
$ terraform apply
If the deployment is successful, you will see output like this:
huaweicloud_networking_secgroup.secgroup: Creating...
huaweicloud_vpc.vpc: Creating...
huaweicloud_vpc_eip.traffic_20: Creating...
huaweicloud_as_configuration.demo_as_config: Creating...
huaweicloud_networking_secgroup_rule.secgroup_rule_v4: Creating...
huaweicloud_networking_secgroup_rule.allow_ssh: Creating...
huaweicloud_networking_secgroup_rule.secgroup_rule_v4_egress: Creating...
huaweicloud_vpc_subnet.subnet: Creating...
huaweicloud_lb_loadbalancer.demo_lb: Creating...
huaweicloud_as_group.demo_as_group: Creating...
huaweicloud_ces_alarmrule.alarm_rule_scale_down: Creating...
huaweicloud_ces_alarmrule.alarm_rule_scale_up: Creating...
huaweicloud_as_policy.demo_aspolicy_add: Creating...
huaweicloud_as_policy.demo_aspolicy_set: Creating...
Apply complete! Resources: 17 added, 0 changed, 0 destroyed.
Now we can run our checks. First, let's look at the ELB service and find its public IP.
Public IP: 119.13.106.174. Let's send a request to this address.
Our application is now running. It is a simple app that displays the private IP address of the ECS instance, so we can see clearly where the incoming traffic is going.
An overview of the AS group is shown below. Now let's bind a public IP to the existing ECS instance and increase its CPU usage. Log in to the ECS over SSH, then drive up the CPU usage with a simple tool (stress-ng):
sudo ssh -i ga-kp.pem root@114.119.188.39
stress-ng -c 2 -l 90
As CPU usage rises, a new instance is added. The images below show this more clearly.
We have two healthy backend servers in total, and incoming traffic will be distributed between them according to the Round Robin algorithm. Let's test it. First I will send normal requests from Chrome, then a batch of requests with curl for a clearer picture.
for i in `seq 1 12`; do curl http://119.13.106.174; done
Machine Private IP Address = 192.168.0.231
Machine Private IP Address = 192.168.0.231
Machine Private IP Address = 192.168.0.61
Machine Private IP Address = 192.168.0.231
Machine Private IP Address = 192.168.0.231
Machine Private IP Address = 192.168.0.61
Machine Private IP Address = 192.168.0.61
Machine Private IP Address = 192.168.0.231
Machine Private IP Address = 192.168.0.231
Machine Private IP Address = 192.168.0.61
Machine Private IP Address = 192.168.0.61
Machine Private IP Address = 192.168.0.61
The requests were distributed as we wanted. Now let's stop the stress test (for example, by killing the stress-ng process) so that CPU usage returns to normal and the group scales back to the desired number of instances.
for i in `seq 1 5`; do curl http://119.13.106.174; done
Machine Private IP Address = 192.168.0.61
Machine Private IP Address = 192.168.0.61
Machine Private IP Address = 192.168.0.61
Machine Private IP Address = 192.168.0.61
Machine Private IP Address = 192.168.0.61
Before we finish the article, let's move on to the last step and destroy all the services. Use Terraform's magic and run terraform destroy, then confirm with "yes"; all of our services will be removed.
And we have come to the end of the article!
If you would like to get in touch, leave a comment, or correct any of my mistakes, feel free to send me an email.