Fun with CloudFront

So let's say that you are trying to distribute content around the globe, and you want as little latency as possible for every consumer who will be accessing your files. What do you choose as a service? You could use an EC2 instance, but that means everyone will be grabbing the content from one specific location at any given time. Same thing if you go with S3: if you only provide a direct link to a file in S3, everyone will hit the same ‘one entrance’ bottleneck when accessing your content.

Well, CloudFront would be a great choice for you. With CloudFront, you can take a file in your S3 bucket (ex. mybucket/images/file.jpg) and, instead of serving the file directly from the bucket, create either a streaming or download distribution. A distribution utilizes a global edge network where your content is cached on local edge servers near each user, which allows for a much better experience when accessing your content.

Setting up a CloudFront distribution is pretty simple. Navigate over to the CloudFront Management Console, then click Create Distribution. You will need to choose either Download or Streaming. After choosing, it will prompt you for an Origin Domain Name and also an Origin ID. The easiest way to use CloudFront is to create a bucket in S3 that will hold the content you want to distribute. If you click in the field for Origin Domain Name, it will auto-populate a list of your S3 buckets. It is also possible to use a custom origin domain name, but for simplicity's sake we will just roll with an S3 bucket. Once you choose the bucket, an Origin ID will be auto-generated for you.
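If you would rather script this than click through the console, the AWS CLI can likely do the same thing in one shot (a rough sketch only, assuming you have the CLI installed and configured; the bucket name is a placeholder and the exact flags can vary by CLI version):

aws cloudfront create-distribution --origin-domain-name mybucket.s3.amazonaws.com

That single command creates a download (web) distribution with mybucket as the origin and sensible defaults for everything else.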

From here on out, you will just need to modify the Cache Behavior Settings and also the Distribution Settings. If you are just serving normal HTTP content, you can pretty much leave everything at its defaults. However, if you want more information about when your files are being downloaded, I would advise enabling logging on your distribution.

From this point, it will take CloudFront about 15 minutes to create your distribution. This is the perfect opportunity to walk your dog, call your mom or dad, or argue with your significant other. After that time has passed, you should have a distribution that looks like dc8gtmXXXXXX.cloudfront.net

If your S3 bucket is set up to have images/ and videos/ folders, with content in them, you can utilize your distribution by grabbing the domain name (dc8gtmXXXXXX.cloudfront.net) and then appending the path to a file (dc8gtmXXXXXX.cloudfront.net/images/surfing.jpg). If your back-end code or the link that your users are clicking on is pulling from this distribution, then CloudFront will be doing its job by serving your content via the glorious edge network.
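For example, a quick way to check that the edge network is serving your file is to pull it straight from the distribution domain (reusing the placeholder domain and file path from above):

curl -O http://dc8gtmXXXXXX.cloudfront.net/images/surfing.jpg

If that downloads the image, anything you point at the distribution (links, image tags, back-end code) will be served the same way.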

That wraps up how to get up and running with CloudFront. Until next time, signing out.

How to set up S3CMD on an Amazon Linux AMI

So, S3CMD is awesome. It gives you a whole bunch of commands to use with S3 that are infinitely better than using the Management Console. So how do you set it up? Easy, read this blog.

First, launch an Amazon Linux AMI from the EC2 console. This tutorial assumes you know how to SSH into the instance. Good.

Now, once you are SSH’ed in to your EC2 instance, you will want to run this command to grab the most recent tools from s3tools.org:

wget http://sourceforge.net/projects/s3tools/files/s3cmd/1.5.0-alpha1/s3cmd-1.5.0-alpha1.tar.gz

This will give you your base install file (s3cmd-1.5.0-alpha1.tar.gz). You then want to issue this command to extract it:

tar -zxvf s3cmd-1.5.0-alpha1.tar.gz

At this point you have a few different options to install. The easiest (since Python comes pre-installed on Amazon Linux) is to change into the extracted directory (cd s3cmd-1.5.0-alpha1) and issue this command:

sudo python setup.py install

That should have everything set up for you at this point. Now, you only need to configure your access keys and how you want everything to be sent back and forth. To do this, enter:

s3cmd --configure (this will ask you for your access key and secret access key; these are really the only NEEDED pieces of information)

After this point, you should be free to issue any of the commands listed here and have fun with S3:

http://linux.die.net/man/1/s3cmd
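To get you started, here are a few of the most common ones (the bucket and file names below are just placeholders):

# list your buckets
s3cmd ls
# upload a file to a bucket
s3cmd put surfing.jpg s3://mybucket/images/surfing.jpg
# download it back
s3cmd get s3://mybucket/images/surfing.jpg
# sync a whole local folder up to the bucket
s3cmd sync ./images/ s3://mybucket/images/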

How to connect to a SQL Server RDS instance through a Windows instance using SQL Server Management Studio

Yo Dawg, I heard you like managing your RDS DB instance with SQL Server Management Studio. I got you covered.

First step is to launch an RDS SQL Server Standard DB instance from the AWS Management Console. It can be version 2008 or 2012. It will pay to remember which master username and password you selected when first creating this instance.

Once the RDS instance is up and available, you will need to grab its FQDN (example: db.XXXXXXXXXX.us-west-2.rds.amazonaws.com) to use later. You will also need to make sure that the security group on the DB instance allows communication from the Windows instance we are launching next (SQL Server listens on port 1433 by default).
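If your DB instance uses a VPC security group, one way to open that port is with the AWS CLI (a sketch only, assuming the CLI is installed; the two security group IDs are placeholders for the DB instance's group and the Windows instance's group):

aws ec2 authorize-security-group-ingress --group-id sg-11111111 --protocol tcp --port 1433 --source-group sg-22222222

If you are on classic DB security groups instead, you can accomplish the same thing from the RDS console by authorizing the EC2 security group of the Windows instance.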

At this point, we can launch any generic Windows AMI instance. After a successful RDP connection, we then need to download and install SQL Server 2012 Express Management Studio (http://www.microsoft.com/en-us/download/details.aspx?id=29062).

Once this is installed, you can finally open up the program on your Windows box (it should be under Start > Programs > Microsoft SQL Server 2012 > SQL Server Management Studio). Where it says Server name, input the FQDN that we got from our RDS instance before. One important thing to note is that you need to switch from Windows Authentication to SQL Server Authentication in order to input your user and password.

At this point, you should be logged into the pretty GUI and able to do what it do.
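If you would rather sanity-check the connection from a command prompt first, and you happen to have the sqlcmd utility on the Windows box (it is not always installed alongside Management Studio), something like this should work (the endpoint, user, and password below are placeholders):

sqlcmd -S db.XXXXXXXXXX.us-west-2.rds.amazonaws.com -U mymasteruser -P mypassword -Q "SELECT @@VERSION"

If that prints the SQL Server version string, Management Studio will connect just fine with the same credentials.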

Auto Scaling – Bread and Butter of AWS

So, what makes AWS so great overall?

The cheap prices, the ease of use, the vast resources? All of these things are good, but Auto Scaling is really what makes AWS truly unique and flexible.

With Auto Scaling, you are able to create a policy whereby, if your site or application experiences a significant increase in traffic or CPU utilization, your AS policy launches new back-end instances behind an Elastic Load Balancer to help supply the resources needed to meet the demand. Alternatively, if your site or application does not need all of the resources currently running on your account, you can configure the AS group to scale down during these times to save you money.

At the current time, unless you want to use the Elastic Beanstalk service or a third-party service, it is only possible to create an Auto Scaling group using the CLI tools. Our documentation is a great place to get started with the CLI tools needed to create your first launch config, and then your Auto Scaling group:

http://docs.aws.amazon.com/AutoScaling/latest/GettingStartedGuide/SetupCLI.html
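Once the tools are set up, the flow is roughly the following (a sketch only; the AMI ID, names, and Availability Zone are placeholders, and the exact flags may differ slightly depending on your version of the tools):

# create a launch config that tells Auto Scaling what kind of instance to launch
as-create-launch-config my-launch-config --image-id ami-xxxxxxxx --instance-type t1.micro

# create the Auto Scaling group behind an existing load balancer
as-create-auto-scaling-group my-asg --launch-configuration my-launch-config --availability-zones us-west-2a --min-size 1 --max-size 4 --load-balancers my-elb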

Using this method, 1) you can ensure you never pay too much for your AWS resources, and 2) you always have the supply needed to keep your application online and ready to go.