How to perform network bandwidth tests between EC2 Linux instances

So, a lot of people in IT love to benchmark stuff, and I can see why. What better way to know if someone can deliver what they promise than by actually seeing the results for yourself?

In that regard, I was curious to find out what the network speeds were between EC2 instances that were in the same AZ/region in the same VPC. AWS advertises speeds up to 10 Gigabit for their largest instance types, and I wanted to see if I could achieve these speeds myself.

Because I love Linux servers and can’t stand Windows servers (open source ftw), I decided to launch two Amazon Linux EC2 instances. These instances were both in the same region (us-west-2, Oregon) and the same AZ. I also launched them in the same VPC/subnet so that they would be as close together as possible, with minimal hops between them.

I decided to use iperf, as it is a great network diagnostic tool (https://iperf.fr/). It was unavailable to download via yum :(, however it’s super easy to install if you just enter this command in your Linux terminal after launching and connecting to your nodes:

$ wget https://iperf.fr/download/iperf_2.0.5/iperf_2.0.5-2_amd64 ; chmod +x iperf_2.0.5-2_amd64 ; sudo mv iperf_2.0.5-2_amd64 /usr/bin/iperf

Once installed on both instances, you then need to designate one instance as a server and one as a client (it doesn’t matter which is which, just make a decision).

For the server instance, enter $ iperf -s to put it into listening mode.

For the client instance, enter $ iperf -c 10.0.0.3 -t 5 (where 10.0.0.3 is the private IP of your server instance, and -t 5 specifies a five-second test).
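A single TCP stream won’t always saturate a big pipe. If you come up short of the advertised numbers, here is a sketch worth trying, using iperf’s -P (parallel streams) and -i (report interval) flags:

$ iperf -c 10.0.0.3 -t 30 -P 8 -i 5

This runs eight parallel streams for 30 seconds, reporting every 5 seconds; the [SUM] line at the end is your aggregate throughput.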

From here you should start getting readouts showing the true network speed between your instances. Neat!

Caveats:
*I found it easiest to just allow all traffic within the VPC/security group, so that no traffic would be blocked during testing
*For the fastest speeds, you have to use the largest instance types and also make sure enhanced networking is enabled. I found enhanced networking is enabled on the latest Amazon Linux AMIs by default, so that was nice. However, if you want to check whether your instances have enhanced networking enabled, check this guide: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
*What I have is a very particular set of skills; skills I have acquired over a very long career. Skills that make me a nightmare for people like you. I can create websites in a relatively short amount of time.

Block IP addresses on a Linux box

You got unwanted network traffic? I feel bad for you, son. I got 99 problems but a DDoS ain’t one.

So, you’re noticing your site is getting flooded by a ton of malicious traffic, and the traffic all originates from one IP or a set of IPs (let us say, for example, the bad range is 154.112.x.x). If you are using AWS VPC, you can simply add a deny rule for HTTP from 154.112.0.0/16 to the network ACL of your subnet (security groups only support allow rules, so the deny has to live in the ACL), and you would be set.
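If you have the AWS CLI handy, the ACL rule looks roughly like this sketch (the ACL ID and rule number here are hypothetical, and protocol 6 is TCP):

aws ec2 create-network-acl-entry --network-acl-id acl-12345678 --ingress --rule-number 90 --protocol 6 --port-range From=80,To=80 --cidr-block 154.112.0.0/16 --rule-action deny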

However, let’s say you were in EC2 Classic, or let’s even say you are not using AWS at all; well, now you have to rely on iptables.

In order to block an IP address or IP range on your box, you need to SSH in then enter:

sudo iptables -A INPUT -s 154.112.0.0/16 -j DROP

This will then deny all attempts to reach your website from the IP range between 154.112.0.0-154.112.255.255.

You can choose to remove this from your ip tables via this command:

sudo iptables -D INPUT -s 154.112.0.0/16 -j DROP

Finally, if you want to view the rules of your iptables, simply enter sudo iptables --list
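One caveat: rules added this way live in memory and vanish on reboot. Here is a sketch for persisting them on Amazon Linux, assuming the stock iptables init scripts are present:

sudo service iptables save    # writes current rules to /etc/sysconfig/iptables
sudo chkconfig iptables on    # restores them at boot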


Enterprising your WordPress Application

So, you set up your WordPress blog on a micro instance, and you are pretty happy with it. But all of a sudden, your site gets posted on the front page of Reddit, and before you know it, your page is down because you just received the warm and friendly Reddit DDoS attack of love.

Well, you could stop your instance, change it to a heftier instance type (m1.small, m1.medium, or m1.large), and then start it back up again; however, this is not ideal, because your whole site still depends on a single instance. If any issues happen to that instance (degraded host, network issues, too much traffic), your site will become overloaded and dead yet again.

The ideal thing to do, if you expect your application or site to blow up, is to follow the principles outlined in the following whitepaper, “Architecting for the AWS Cloud: Best Practices”: http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf

Essentially, this paper speaks to leveraging the AWS cloud in an effective way, so that your application can be highly scalable, available, and redundant.

It is quite a bit of work to turn your application from a one-instance pony into an enterprise-level setup, but the benefits are numerous, and it can even be quite fun.

Given the example of a WordPress site, the first thing we have to do is cut the MySQL database out of our instance and place it on an RDS instance. This decouples the application into separate components, so that individual pieces of the architecture can be replicated and brought up when needed.

Step one is to launch an RDS instance with the same username, password, and DB name as the MySQL server currently running on your micro instance (doing this makes things a lot easier on you). Once the RDS instance is launched, you will receive a host name that you can use to interact with it (ex. ‘wordpress.ceqcgo1ake2a.us-west-2.rds.amazonaws.com’).
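Before migrating anything, it is worth a quick sanity check that your instance can actually reach the new endpoint; a sketch using the example host name above:

mysql -u root -p -h wordpress.ceqcgo1ake2a.us-west-2.rds.amazonaws.com    # a mysql> prompt means connectivity and credentials are good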

After SSHing into your instance, you will then need to dump your MySQL db from your micro instance and zip it so you can easily port it over to RDS. I used this command: mysqldump --quick wordpress | gzip > wordpress.gz

Then, you have to move the zipped file over to your RDS instance. I used this command:

gunzip < wordpress.gz | mysql -u root -p -h wordpress.ceqcgo1mqh2a.us-west-2.rds.amazonaws.com wordpress

After entering your password, the DB should now be residing on your RDS instance. Now, you have to change your WordPress config file so that it pulls from the RDS host instead of the default ‘localhost’ setting. To do this, SSH in to your micro instance, navigate to /var/www/html, then vi into the wp-config.php file. Here you will have to change the DB host from ‘localhost’ to your new RDS host name. If you set everything up correctly, you can now stop the mysql service on your instance, and your site should be pulling information from the RDS instance instead of the local one.

At this point, we need to clean up the webserver so that we can create an AMI of the instance which can be launched and be ready to go with no user interaction. This AMI needs to auto start each of your needed services (Apache and PHP), load up your content, and make these actions automatic upon reboot of an instance.

Once your AMI is nailed down, you are ready to launch an Elastic Load Balancer to distribute all of the traffic coming into your application. All you need to do is set up the ELB to forward all port 80 traffic to your instances, and you will want to modify the ping path so that the instances register as healthy (I modified mine to just ‘/’ instead of ‘index.html’).

Once your ELB is created, you can begin using the real magic of AWS: Auto Scaling. In order to use Auto Scaling you need to download and configure the Auto Scaling Command Line Tools; an easy way to have these available is to launch an Amazon Linux AMI instance, which comes with the tools installed by default. However, you still have to set your environment variables so that your access key and secret access key are included in each of your commands, unless you want to specify them with every command you run: http://docs.aws.amazon.com/AutoScaling/latest/GettingStartedGuide/SetupCLI.html You can test whether the tools are working on your instance by typing ‘as-cmd’.

Setting up Auto Scaling involves three main steps: creating a launch configuration, creating an Auto Scaling group, and finally defining a scaling policy for the group. The command I used to create an Auto Scaling launch config in US West (Oregon) is here:

as-create-launch-config myblogvpc --image-id ami-fb55c4es --region us-west-2 --instance-type t1.micro --key superlinekey --group sg-c658b2dw

Then, I had to create an Auto Scaling group for the config:

as-create-auto-scaling-group myblogvpcgroup --launch-configuration myblogvpc --load-balancers vpcloadbalancer --region us-west-2 --vpc-zone-identifier subnet-d75b1s3s --min-size 2 --max-size 8

Finally, I had to create a scaling policy that dictates how many back-end instances are added behind the load balancer if the base two I am running become overloaded:

as-put-scaling-policy scalingblogpolicy --type ChangeInCapacity --auto-scaling-group myblogvpcgroup --adjustment 2 --region us-west-2

At this point you are almost done configuring your Auto Scaling setup; the last step is to go to CloudWatch and configure an alarm which will trigger the increase in instances when a certain condition is met. After navigating to CloudWatch, I created an alarm based upon EC2: Aggregated by Auto Scaling Group. The only AS group I had popped up, ‘myblogvpcgroup’; I clicked on CPUUtilization, named my alarm and gave it a description, then set the alarm to >= 80 for 5 minutes. On the next screen, you select Alarm, then Auto Scaling Policy, choose your group and then the policy, click add action, continue, and finally Create Alarm. When you do this, your site is ready to withstand the forces of the internet.
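If you would rather script the alarm instead of clicking through the console, the CloudWatch command line tools had an equivalent; a rough sketch, assuming the mon-* tools are installed, with the policy ARN returned by as-put-scaling-policy left as a placeholder:

mon-put-metric-alarm HighBlogCPU --metric-name CPUUtilization --namespace "AWS/EC2" --statistic Average --period 300 --evaluation-periods 1 --threshold 80 --comparison-operator GreaterThanOrEqualToThreshold --dimensions "AutoScalingGroupName=myblogvpcgroup" --alarm-actions <policy-arn>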

Finally, if you want your users to access your site via a friendly domain name instead of the DNS name automatically given to your ELB (ex. ‘vpcloadbalancer-1521654646.us-west-2.elb.amazonaws.com’), you can use Route 53 to create a hosted zone for your domain. After the hosted zone is set up, you can then create an Alias record which ties your domain (example.com) to the DNS record of your ELB. More information here: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/using-domain-names-with-elb.html

After the Alias record is completed and Route 53 is done propagating the DNS changes for your site, you will have one of the most scalable and fault-resistant WordPress applications known to man. The only major downside is that this setup lives in a single AZ; if you are concerned about availability, you can replicate the entire setup in a different region, and in the case of a major outage in one region, just change your Route 53 alias record sets to point to the ELB in the other region, so that all of your traffic is routed through California instead of Oregon.

TL;DR on how to Enterprise a WordPress Application:
1. Launch RDS instance
2. Migrate DB to RDS
3. Update WordPress to use RDS
4. Clean up webserver instance
5. Confirm bootstrap on webserver
6. Create AMI
7. Create ELB
8. Create AS Launch config
9. Create AS Group
10. Create AS policies + Alarms
11. Update DNS to point at the ELB (Alias record)

Fun with CloudFront

So let’s say that you are trying to distribute content around the globe, and you want as little latency as possible for every consumer who accesses your files. What do you choose as a service? You could use an EC2 instance, but this means everyone will be grabbing the content from one specific location at any given time. Same thing if you go with S3: if you only provide a direct link to a file in S3, everyone hits the same ‘one entrance’ bottleneck when accessing your content.

Well, CloudFront would be a great choice for you. With CloudFront, you can take a file in your S3 bucket (ex. mybucket/images/file.jpg) and, instead of serving the file directly from the bucket, create either a streaming or download distribution. These utilize a global edge network where your content is cached onto edge servers near each user, which allows for a much better experience while accessing your content.

Setting up a CloudFront distribution is pretty simple. Navigate over to the CloudFront Management Console, then click Create Distribution. You will need to choose either Download or Streaming. After choosing, it will prompt you for an Origin Domain Name and also an Origin ID. The easiest way to use CloudFront is to create an S3 bucket that will hold your content. If you click in the field for Origin Domain Name, it will auto-populate with each of your S3 buckets. It is also possible to specify a custom origin domain name, but for simplicity’s sake we will just roll with an S3 bucket. Once you choose the bucket, an Origin ID will be auto-generated for you.

From here on out, you just need to tweak the Cache Behavior Settings and the Distribution Settings. If you are just serving normal HTTP content, you can pretty much leave everything at its defaults. However, if you want more insight into when your files are being downloaded, I would advise enabling logging for your resources.

From this point, it will take CloudFront about 15 minutes to create your distribution. Here is the perfect opportunity to walk your dog, call your mom or dad, or argue with your significant other. After that time has passed, you should have a distribution that looks like dc8gtmXXXXXX.cloudfront.net

If your S3 bucket is set up with images/ and videos/ folders and content within them, you can utilize your distribution by grabbing the domain name (dc8gtmXXXXXX.cloudfront.net) and adding the file path (dc8gtmXXXXXX.cloudfront.net/images/surfing.jpg). If your back end code, or the link your users click on, pulls directly from this distribution, then CloudFront will be doing its job by serving your content via the glorious edge network.
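You can also sanity check that the edge network is doing the serving; a sketch using the example distribution and file above:

curl -I http://dc8gtmXXXXXX.cloudfront.net/images/surfing.jpg    # inspect the X-Cache response header

The first request in a region usually shows ‘Miss from cloudfront’ while the edge fetches from S3; repeat requests should show ‘Hit from cloudfront’.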

That wraps up how to get up and running with CloudFront. Until next time, signing out.


How to set up S3CMD on an Amazon Linux AMI

So, S3CMD is awesome: a whole bunch of commands to use with S3 that are infinitely better than clicking around the Management Console. So how do you set it up? Easy, read this blog.

First, launch an Amazon Linux AMI from the EC2 console. This tutorial assumes you know how to SSH into the instance. Good.

Now, once you are SSH’ed in to your EC2 instance, run this command to grab the most recent tools from s3tools.org:

wget http://sourceforge.net/projects/s3tools/files/s3cmd/1.5.0-alpha1/s3cmd-1.5.0-alpha1.tar.gz

This will give you your base install file (s3cmd-1.5.0-alpha1.tar.gz). Then issue this command to extract it:

tar -zxvf s3cmd-1.5.0-alpha1.tar.gz

At this point you have a few different options to install; the easiest (since Python comes preinstalled on Amazon Linux) is to change into the extracted directory and run the installer:

cd s3cmd-1.5.0-alpha1
sudo python setup.py install

That should have everything set up for you. Now you only need to configure your access keys and how you want everything to be sent back and forth. To do this, enter:

s3cmd --configure (this will ask you for your access key and secret access key; this is really the only NEEDED information)

After this point, you should be free to issue any of the commands listed here and have fun with S3:

http://linux.die.net/man/1/s3cmd
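To get a feel for it, here are a few sketches, assuming a hypothetical bucket named ‘mybucket’:

s3cmd mb s3://mybucket                  # make a new bucket
s3cmd put wordpress.gz s3://mybucket/   # upload a file
s3cmd ls s3://mybucket                  # list the bucket's contents
s3cmd get s3://mybucket/wordpress.gz    # download it back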
