Enterprising your WordPress Application

So, you set up your WordPress blog on a micro instance, and you are pretty happy with it. But all of a sudden, your site gets posted on the front page of Reddit, and before you know it, your page is down because you just received the warm and friendly Reddit DDoS attack of love.

Well, you could stop your instance, change its size to a heftier type (m1.small, m1.medium, or m1.large), and then start it back up again. However, this is not ideal for a number of reasons. For one, your whole site depends on a single instance: if anything happens to that instance (a degraded host, network issues, too much traffic), your site will become overloaded and dead yet again.

The ideal thing to do, if you expect your application or site to blow up, is to follow the principles outlined in the following whitepaper, “Architecting for the AWS Cloud: Best Practices”: http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf

Essentially, this paper speaks to leveraging the AWS cloud in an effective way, so that your application can be highly scalable, available, and redundant.

It is quite a bit of work to turn your application from a one-instance pony into an enterprise-level setup; however, the benefits are numerous, and it can even be quite fun.

Given the example of a WordPress site, the first thing we have to do is cut the MySQL database out of our instance and place it on an RDS instance. This decouples our application into separate components, so that individual pieces of the architecture can be replicated and brought up as needed.

Step one is to launch an RDS instance with the same username, password, and database name as the MySQL server currently running on your micro instance (doing this makes things a lot easier on you). Once the RDS instance is launched, you will receive an endpoint host name that you can use to interact with it (e.g. 'wordpress.ceqcgo1ake2a.us-west-2.rds.amazonaws.com').
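The launch step itself is not shown here; if you would rather script it than click through the console, a rough sketch with the unified AWS CLI looks like the following. The instance identifier, class, storage size, and credentials are placeholders that you should match to your existing MySQL setup:

# Launch a small MySQL RDS instance (identifier, class, storage, and credentials are placeholders)
aws rds create-db-instance \
    --db-instance-identifier wordpress \
    --db-instance-class db.t2.micro \
    --engine mysql \
    --allocated-storage 20 \
    --master-username root \
    --master-user-password YOUR_PASSWORD \
    --db-name wordpress \
    --region us-west-2

# Once the instance is available, print the endpoint host name to use in later steps
aws rds describe-db-instances --db-instance-identifier wordpress \
    --query 'DBInstances[0].Endpoint.Address' --region us-west-2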

After SSHing into your instance, you will need to dump your MySQL database and compress it so you can easily port it over to the RDS instance. I used this command:

mysqldump --quick wordpress | gzip > wordpress.gz
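Depending on how your local MySQL server is configured, you may need to pass credentials explicitly; a slightly more defensive version of the same dump (the root user here is only an assumption about your local setup) would be:

# --single-transaction takes a consistent snapshot of InnoDB tables while the blog stays online
mysqldump --quick --single-transaction -u root -p wordpress | gzip > wordpress.gz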

Then, you have to load the compressed dump into your RDS instance. I used this command:

gunzip < wordpress.gz | mysql -u root -p -h wordpress.ceqcgo1mqh2a.us-west-2.rds.amazonaws.com wordpress

After entering your password, the database should now be residing on your RDS instance.

Now you have to change your WordPress config file so that it pulls from the RDS host instead of the default 'localhost'. To do this, SSH into your micro instance, navigate to /var/www/html, and vi into the wp-config.php file. Here you will have to change the DB_HOST value from 'localhost' to your new RDS host name. If you set everything up correctly, you can now stop the mysql service on your instance, and your site should be pulling its data from the RDS instance instead of the local database.

At this point, we need to clean up the webserver so that we can create an AMI of the instance which can be launched and be ready to go with no user interaction. This AMI needs to start each of your needed services (Apache and PHP) automatically, load up your content, and perform these actions on every reboot of an instance.

Once your AMI is nailed down, you are ready to launch an Elastic Load Balancer to distribute all of the traffic that will be coming into your application. All you need to do is set up an ELB that forwards all port 80 traffic to your instances, and you will want to modify the health check ping path so that the instances register as healthy in your setup (I changed mine to just '/' instead of the default 'index.html').

Once your ELB is created, you can begin using the real magic of AWS: Auto Scaling. In order to use Auto Scaling you need to download and configure the Auto Scaling Command Line Tools; an easy way of having these available is to launch an Amazon Linux AMI instance, which comes with the Auto Scaling command line tools installed by default. You still have to set environment variables so that your access key and secret access key are included with each of your commands, unless you want to specify them on every command you run: http://docs.aws.amazon.com/AutoScaling/latest/GettingStartedGuide/SetupCLI.html You can test whether the tools are working on your instance by typing 'as-cmd'.

Setting up Auto Scaling involves three main steps: creating a launch configuration, creating an Auto Scaling group, and finally defining a scaling policy for the group.

The command I used to create an Auto Scaling launch configuration in US West (Oregon) is here:

as-create-launch-config myblogvpc --image-id ami-fb55c4es --region us-west-2 --instance-type t1.micro --key superlinekey --group sg-c658b2dw

Then, I had to create an Auto Scaling group for that launch configuration:

as-create-auto-scaling-group myblogvpcgroup --launch-configuration myblogvpc --load-balancers vpcloadbalancer --region us-west-2 --vpc-zone-identifier subnet-d75b1s3s --min-size 2 --max-size 8

Finally, I had to create a scaling policy that dictates how many back-end instances are added behind the load balancer if the base two I am running become overloaded:

as-put-scaling-policy scalingblogpolicy --type ChangeInCapacity --auto-scaling-group myblogvpcgroup --adjustment 2 --region us-west-2
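It is worth pausing here to confirm that the group actually launched its minimum of two instances behind the ELB; assuming the same group name and region as above, a quick check with the same command line tools looks roughly like this:

# List the group along with its instances and their health/lifecycle state
as-describe-auto-scaling-groups myblogvpcgroup --region us-west-2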
At this point you are almost done configuring Auto Scaling; the last step is to go to CloudWatch and create an alarm that will trigger the increase in instances when a certain condition is met. After navigating to CloudWatch, I created an alarm based on the EC2 metrics aggregated by Auto Scaling group. The only AS group I had, 'myblogvpcgroup', popped up; I clicked on CPUUtilization, named my alarm, gave it a description, and set the alarm to >= 80 for 5 minutes. On the next screen, you select Alarm, then Auto Scaling Policy, choose your group, choose the policy, click Add Action, Continue, and finally Create Alarm. When you do this, your site is ready to withstand the forces of the internet.
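The same alarm can also be scripted. This is only a sketch using the unified AWS CLI rather than the console walkthrough above; the alarm name is a placeholder, and the --alarm-actions value must be the policy ARN that as-put-scaling-policy printed when you created scalingblogpolicy:

# Alarm when average CPU across the Auto Scaling group stays at or above 80% for 5 minutes
aws cloudwatch put-metric-alarm \
    --alarm-name blog-cpu-high \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=AutoScalingGroupName,Value=myblogvpcgroup \
    --statistic Average \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 80 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions <ARN-of-scalingblogpolicy> \
    --region us-west-2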

Finally, if you want your users to be able to access your site via a friendly domain name instead of the DNS name automatically assigned to your ELB (e.g. 'vpcloadbalancer-1521654646.us-west-2.elb.amazonaws.com'), you can use Route 53 to create a hosted zone for your domain. After the hosted zone is set up, you can then create an Alias record which ties your domain (example.com) to the DNS record for your ELB. More information here: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/using-domain-names-with-elb.html
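If you would rather script that record than click through the console, a rough sketch with the unified AWS CLI looks like the following; the hosted zone IDs and domain are placeholders, and note that the HostedZoneId inside AliasTarget is the ELB's canonical hosted zone ID, not the ID of your own hosted zone:

# Write the change batch for an alias A record pointing example.com at the ELB
cat > alias.json <<'EOF'
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<ELB canonical hosted zone ID>",
        "DNSName": "vpcloadbalancer-1521654646.us-west-2.elb.amazonaws.com.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

# Apply the change to your own hosted zone
aws route53 change-resource-record-sets --hosted-zone-id <your-hosted-zone-id> --change-batch file://alias.json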

After the Alias record is completed and Route 53 has finished propagating the DNS changes for your site, you will have one of the most scalable and fault-resistant WordPress applications known to man. The only major downside is that this setup was done within a single Availability Zone; if you are concerned about availability, you can choose to replicate your entire setup in a different region, and in the case of a major outage in one region, you can simply change your Route 53 alias record sets to point to an ELB in the other region, so that all of your traffic is routed through California instead of Oregon.

TL;DR on how to Enterprise a WordPress Application:
1. Launch RDS instance
2. Migrate DB to RDS
3. Update WordPress to use RDS
4. Clean up webserver instance
5. Confirm bootstrap on webserver
6. Create AMI
7. Create ELB
8. Create AS Launch config
9. Create AS Group
10. Create AS policies + Alarms
11. Update DNS to point at the ELB (Route 53 alias record)

2 thoughts on “Enterprising your WordPress Application”

  1. Hey, I’m very glad to see that people now care about *real* issues in virtualized environments! I spend my time explaining to people that these setups are fine until you need to achieve high PPS rates. Another issue that you didn’t mention or experience here is that when you’re hitting the 100k PPS limit, the physical machine is on its knees, and other VMs running on the same host will be similarly impacted. So if you want to run a VM with a load balancer and another one with your web server, the limit will probably be around 50-75k PPS (assuming you’re alone on that host, which is obviously not the case). For these reasons, the only way to achieve high network data rates is to group the services which need to exchange a lot of data together on the same instances, and to use a large number of instances in order to hope to keep some bandwidth on some physical hosts, in the event that someone else’s instance sharing the same physical host as yours makes heavy use of the network.
