Classic and VPC Instances

Amazon offers two main environments in which to launch instances: EC2 Classic and Amazon VPC.

EC2 Classic essentially refers to launching instances that are all public facing. Any port on an instance launched within EC2 Classic can be reached by anyone on the internet, as long as the security group allows it.

VPC gives the customer much more flexibility and control over how their environment is created and maintained. A customer who launches within a VPC environment can edit both ingress and egress port filtering, assign private IP addresses within different subnets, and create both 'public' and 'private' subnets for their resources.
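To make that concrete, here is roughly what carving out a VPC with a single subnet looks like using the EC2 API tools that show up later in this log. The CIDR ranges and the vpc-xxxxxxxx ID below are just illustrative placeholders, so treat this as a sketch rather than a recipe:

# create the VPC, then carve a subnet out of its address range
ec2-create-vpc 10.0.0.0/16 --region us-west-2
ec2-create-subnet -c vpc-xxxxxxxx -i 10.0.1.0/24 --region us-west-2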

There is no price difference between running instances in the Classic environment and running them in a VPC. However, data transfer costs can become more complicated in a VPC environment: data moving out of one VPC and into another VPC incurs data transfer charges.

For more information about Amazon VPC, please see the documentation below:

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html

Creating an Instance Store backed AMI from an original EBS backed AMI

Well, usually you want to avoid instance store and go with EBS for the flexibility and redundancy it offers. Sometimes, though, instance store is the preferred method, so it pays to be familiar with the ins and outs of both.

In today's activity, I learned how to create an instance store backed AMI from an originally EBS backed AMI. The documentation to get started is here:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-snapshot-s3-linux.html

Here are all the steps I took to make it happen. First, I launched a regular Amazon Linux 64-bit EBS backed AMI. Then, after SSH'ing in and configuring the instance how I wanted it, I had to upload my private key file and my certificate file (the pk-*.pem and cert-*.pem pair) directly onto my instance. I decided to put mine directly in /home/ec2-user.

In order to upload all of the files onto the server, I used WinSCP to remote into the box and then dragged and dropped the files from my local machine onto the server.
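If you would rather skip WinSCP, plain scp from a command line does the same job. Roughly, using the key and certificate file names from my setup (the instance address below is a placeholder):

scp -i mykeypair.pem pk-DC2ZORVQYSSOOG2J3ZCQIAK3LN4L7MF4.pem cert-DC2ZORVQYSSOOG2J3ZCQIAK3LN4L7MF4.pem ec2-user@<instance-public-dns>:/home/ec2-user/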

Once all of the files were uploaded, I had a few commands to run. First was the ec2-bundle-vol command; specifically, I used:

ec2-bundle-vol -k pk-DC2ZORVQYSSOOG2J3ZCQIAK3LN4L7MF4.pem -c cert-DC2ZORVQYSSOOG2J3ZCQIAK3LN4L7MF4.pem -u (myAWS12digitaccountnumber) -b root=/dev/sda1

Then, I had to upload the bundle to S3 to make it super redundant:

ec2-upload-bundle -b troybucket -m /tmp/image.manifest.xml -a (myaccesskey) -s (mysecretaccesskey)

After refreshing my S3 bucket, I was delighted to find lots of little happy files residing there. At this point, there was only one more step to go: registering the AMI so it could be used. To do this, I used the ec2-register command:

ec2-register troybucket/image.manifest.xml -n deadpoolami -O (myaccesskey) -W (mysecretaccesskey) --region us-west-2

At this point, I was done. I could now browse my new instance store based AMI (ami-6ded7c5d) via the management console, or through the CLI using the ec2-describe-images -a --region us-west-2 command.

Creating your own Key Pairs

Today we learned how to create a new RSA key directly on an instance, then swap out the instance's old key pair and replace it with the new one. We then tested the new key pair by using PuTTYgen to create a .ppk file from the .pem file, and successfully logged into the instance with the newly created key.

Generating and replacing an instance’s SSH keypair:

1. First, SSH into the instance

2. Generate a new RSA key: ssh-keygen -t rsa -f NewSSHkey.pem

3. Copy the contents of the public key and append these to your authorized keys file:
$ cat NewSSHkey.pem.pub >> .ssh/authorized_keys

4. Using a remote copy protocol (WinSCP, `scp`, `rsync`, etc.), copy the private key to your local computer.

5. Test that specifying this SSH key locally allows you to log into the instance successfully.
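Put together, the whole swap looks roughly like this, assuming the new key is written to NewSSHkey.pem as in the steps above (the instance address is a placeholder):

# on the instance: generate the new key pair
ssh-keygen -t rsa -f NewSSHkey.pem
# append the new public key to the authorized keys file
cat NewSSHkey.pem.pub >> ~/.ssh/authorized_keys
# from your local machine: copy the private key down, then test a login with it
scp -i oldkey.pem ec2-user@<instance-public-dns>:/home/ec2-user/NewSSHkey.pem .
ssh -i NewSSHkey.pem ec2-user@<instance-public-dns>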

Also note that if you wish to upload your newly created key so that you can launch new instances with this same key, you can use the ec2-import-keypair command from the EC2 Command Line API Tools. More information on this command is provided at the link below:
http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-ImportKeyPair.html
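As a rough example (the key name here is made up, and I believe -f is the public key file flag, but double-check against the reference above):

ec2-import-keypair NewSSHkey -f NewSSHkey.pem.pub --region us-west-2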

Troubleshooting Windows – Enabling DHCP through offline registry edit

Today, the main task was to troubleshoot why a specific Windows AMI was unable to launch and pass health checks.

Ultimately, the best way to troubleshoot issues where you cannot connect to a Windows instance due to a previous corruption is this: detach the volume from the problem instance, launch a "fix-it" instance with the same OS in the same Availability Zone, attach the volume to that new instance (which you can connect to), and then modify the old EBS volume as needed to restore access.

The specific problem with the old volume we were dealing with was that a static IP address had been assigned instead of using DHCP, so we needed to edit the offline registry to restore access to the instance. If you search our internal knowledge base tool for "Enabling DHCP", it will bring up an article that walks through the process.
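Without reproducing the internal article, the general shape of the offline edit is to load the broken volume's SYSTEM hive from the fix-it instance and flip the interface back to DHCP. Roughly, in a cmd prompt on the fix-it instance, assuming the old volume is attached as D: and with a made-up interface GUID:

REM load the offline SYSTEM hive from the attached volume under a temporary name
reg load HKLM\OfflineSys D:\Windows\System32\config\SYSTEM
REM flip the interface back to DHCP (the interface GUID varies per instance)
reg add "HKLM\OfflineSys\ControlSet001\Services\Tcpip\Parameters\Interfaces\{INTERFACE-GUID}" /v EnableDHCP /t REG_DWORD /d 1 /f
reg unload HKLM\OfflineSys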

After the volume had been edited, it was then OK to detach the volume from the fix-it instance, reattach it to the old instance, and start it up to allow for sweet, sweet RDP.

Converting Instance Store Instances to EBS backed Instances

Sounds simple enough right?

Well, yes and no. There are quite a few blogs and posts about how to do this, and they all vary in their approach. Essentially, the theory goes like this: you start with an instance store backed instance, allocate a 10GB EBS volume and attach it to the instance, copy the entire contents of the root instance store volume onto the EBS volume, detach the volume, take a snapshot of it, and create an AMI from that snapshot. Voila: you then have an AMI which will launch an EBS backed version of the previously instance store based instance.

This all sounds great in theory, but what commands are actually needed to transfer the root instance store storage over to the newly attached EBS volume? It turns out the best command to use is dd, which performs a full block-by-block copy of one device onto another. If you attach the EBS volume via the management console, you only need to SSH into your instance and run a single command:

dd if=/dev/sda1 of=/dev/sdf bs=4096 conv=notrunc,noerror

And with that command, everything I said above can then be done.
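For completeness, here is a rough sketch of the remaining steps with the same API tools. All IDs below are placeholders, and the exact flags are from memory, so verify them against the docs before relying on this:

# detach the EBS volume that now holds the copied root filesystem
ec2-detach-volume vol-xxxxxxxx --region us-west-2
# snapshot it
ec2-create-snapshot vol-xxxxxxxx --region us-west-2
# register an EBS backed AMI straight from the snapshot
ec2-register -n my-converted-ami -a x86_64 --root-device-name /dev/sda1 -b /dev/sda1=snap-xxxxxxxx::true --region us-west-2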

The process is not quite as easy with Windows based instances, as the Windows OS is not natively equipped with dd like Linux is. You can download dd for Windows and more or less follow the same process, but each step takes longer to perform on a Windows OS than on Linux, for a variety of reasons.

Using CLI Tools

Today, the main focus was on using the command line to interact with your AWS account and all of the EC2 resources on your account.

To start with, I downloaded the ec2-api-tools so that I could use the CLI from my Windows laptop. I used this guide: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/setting_up_ec2_command_windows.html

After getting the CLI tools up and running, I began playing around with the features, mainly launching instances, stopping and restarting them, creating volumes, and attaching them to instances.
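For reference, these are roughly the kinds of invocations I was experimenting with (every ID below is a placeholder, not a real resource):

# launch a micro instance from an AMI
ec2-run-instances ami-xxxxxxxx -t t1.micro -k mykeypair --region us-west-2
# stop it and start it again
ec2-stop-instances i-xxxxxxxx --region us-west-2
ec2-start-instances i-xxxxxxxx --region us-west-2
# create a 10 GB volume in the same AZ and attach it
ec2-create-volume -s 10 -z us-west-2a
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf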

How to create a WordPress site on EC2, THE AWS CLOUD!

Launch instance. … Website. Magic!

Just kidding. The first thing was to launch an Amazon Linux AMI from the management console, in particular a 64-bit AMI at the micro instance size in the Oregon region. Once the instance was up and running, I allocated an EIP so that it would be easier to SSH into the instance, and also to help with DNS routing later on.

Once the instance was launched, I ran sudo yum update to update the AMI. After that I had to install Apache, PHP, and then MySQL. The commands for this were:

sudo yum install httpd
sudo service httpd start
sudo yum install php php-mysql
sudo service httpd restart
sudo yum install mysql-server
sudo service mysqld start

At this point I needed to configure MySQL, so I created a database (mysqladmin -u root create wordpress), secured the installation (mysql_secure_installation), and then followed the wizard until MySQL was fully set up.

After that I began the install of WordPress onto the box. To do this I used wget to download the latest version of WordPress, extracted it using tar, and finally created and edited the base WordPress config file, wp-config.php.
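Roughly, that download-and-configure dance goes like this (the database name matches the one created above; the user and password are whatever you set during the MySQL setup):

wget https://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
cd wordpress
# start from the sample config, then fill in the database details
cp wp-config-sample.php wp-config.php
vi wp-config.php    # set DB_NAME to 'wordpress', plus DB_USER and DB_PASSWORD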

Finally I could set the login details for WordPress and get into the nitty-gritty of setting up each page and post on the site, including awesome pics.

Now that WordPress was all configured, I needed to update the DNS records at my registrar to point to the public EIP I had associated with the instance. I also created a new subdomain through my registrar (blog.troysite.com) and assigned the same IP to it, so that the same content could be served at both troysite.com and blog.troysite.com.

Ta-dah!

