How to connect to a SQL Server RDS instance through a Windows instance using SQL Server Management Studio

Yo Dawg, I heard you like managing your RDS DB instance with SQL Server Management Studio. I got you covered.

The first step is to launch an RDS SQL Server Standard DB instance from the AWS Management Console. It can be version 2008 or 2012. It pays to remember the master username and password you selected when first creating this instance.

Once the RDS instance is up and available, you will need to grab its FQDN (example: db.XXXXXXXXXX.us-west-2.rds.amazonaws.com) to use later. You will also need to make sure that the security group on the DB instance allows communication with the Windows instance we are launching next.

At this point, we can launch any generic Windows AMI instance. After a successful RDP connection, we then need to download and install SQL Server 2012 Express Management Studio (http://www.microsoft.com/en-us/download/details.aspx?id=29062).

Once this is installed, you can finally open the program on your Windows box (it should be under Start > Programs > Microsoft SQL Server 2012 > SQL Server Management Studio). Where it says Server name, input the FQDN we got from our RDS instance earlier. One important thing to note: you need to switch from Windows Authentication to SQL Server Authentication in order to input your username and password.

At this point, you should be logged into the pretty GUI and able to do what it do.
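If SSMS hangs or times out instead, the usual culprit is the security group from earlier not allowing traffic on SQL Server's default port, 1433. Here is a rough sketch of the kind of reachability check you can run first from a Linux box (it relies on bash's /dev/tcp feature; the hostname below is a placeholder, so substitute your own RDS endpoint):

```shell
#!/usr/bin/env bash
# Quick TCP reachability check using bash's built-in /dev/tcp.
# Replace "localhost" with your RDS endpoint's FQDN when testing for real.
check_port() {
  local host=$1
  local port=$2
  # Opening /dev/tcp/<host>/<port> succeeds only if a TCP connection
  # can be established; we do it in a subshell so nothing stays open.
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# SQL Server listens on 1433 by default; "closed" here usually means
# the security group on the DB instance is not letting you through.
check_port localhost 1433
```

On the Windows instance itself, the equivalent quick test is telnet to port 1433, but the idea is the same: rule out the network path before blaming SSMS.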

Auto Scaling – Bread and Butter of AWS

So, what makes AWS so great overall?

The cheap prices, the ease of use, the vast resources? All of these things are good, but Auto Scaling is really what makes AWS truly unique and flexible.

With Auto Scaling, you are able to create a policy whereby, if your site or application experiences a significant increase in traffic or CPU utilization, your policy creates new back-end instances behind an Elastic Load Balancer to supply the resources needed to meet demand. Alternatively, if your site or application does not need all of the resources currently running on your account, you can configure the Auto Scaling group to scale down during these times to save you money.

At the time of writing, unless you use the Elastic Beanstalk service or a third-party service, an Auto Scaling group can only be created using the CLI tools. Our documentation is a great place to get started with the CLI tools needed to create your first launch configuration, and then your Auto Scaling group:

http://docs.aws.amazon.com/AutoScaling/latest/GettingStartedGuide/SetupCLI.html

Using this method, 1) you can ensure you never pay too much for your AWS resources, and 2) you always have the supply needed to keep your application online and ready to go.
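The decision that a pair of scaling policies encodes is easy to sketch in a few lines. Everything below (thresholds, group sizes, the function itself) is a made-up illustration, not an AWS default or API:

```shell
#!/usr/bin/env bash
# Toy sketch of the logic behind a scale-out/scale-in policy pair.
# MIN_SIZE/MAX_SIZE play the role of the Auto Scaling group's bounds;
# the 70%/30% CPU thresholds are example values only.
MIN_SIZE=2
MAX_SIZE=8

desired_capacity() {
  local current=$1
  local cpu=$2
  local desired=$current
  if [ "$cpu" -gt 70 ] && [ "$current" -lt "$MAX_SIZE" ]; then
    desired=$((current + 1))   # scale out under load
  elif [ "$cpu" -lt 30 ] && [ "$current" -gt "$MIN_SIZE" ]; then
    desired=$((current - 1))   # scale in to save money
  fi
  echo "$desired"
}

desired_capacity 4 85   # high CPU: prints 5
desired_capacity 4 20   # idle: prints 3
desired_capacity 2 20   # already at MIN_SIZE: stays 2
```

In real Auto Scaling, CloudWatch alarms trigger the policies for you; the point of the sketch is just that the group's size always stays pinned between its min and max.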

Classic and VPC Instances

Amazon offers two main environments in which to launch instances: EC2 Classic and Amazon VPC.

EC2 Classic essentially refers to launching instances that are all public facing. Any port on an instance launched within EC2 Classic can be reached by anyone on the internet, as long as the security group allows it.

VPC gives the customer much more flexibility and control over how their environment is created and maintained. A customer who launches within a VPC environment can edit both egress and ingress port filtering, assign private IP addresses within different subnets, and create both ‘public’ and ‘private’ VPC environments for their resources.

There is no price difference between running instances in the Classic environment and running them in a VPC. However, data transfer costs can become more complicated in a VPC environment, since data flowing out of one VPC into another can incur data transfer charges.

For more information about Amazon VPC, please view the below documentation set:

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html

Creating an Instance Store backed AMI from an original EBS backed AMI

Usually you want to avoid instance store and go with EBS for the flexibility and redundancy it offers. Sometimes, however, instance store is the preferred method, so it pays to be familiar with the ins and outs of both.

In our activity today, I learned how to create an instance store backed AMI from an originally EBS backed AMI. The documentation to get started on this can be found here:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-snapshot-s3-linux.html

Here are all the steps I took to make it happen. First, I launched a regular Amazon Linux 64-bit EBS backed AMI. Then, after SSHing in and configuring the instance how I wanted it, I had to upload my key pair, my private key file, and my certificate file directly onto the instance. I decided to load mine directly into /home/ec2-user.

To upload all of the files onto the server, I used WinSCP to remote into the box and then dragged and dropped the files from my local machine onto the server.

Once all of the files were uploaded, I had a few commands to run. First was the ec2-bundle-vol command; specifically, I used:

ec2-bundle-vol -k pk-DC2ZORVQYSSOOG2J3ZCQIAK3LN4L7MF4.pem -c cert-DC2ZORVQYSSOOG2J3ZCQIAK3LN4L7MF4.pem -u (myAWS12digitaccountnumber) -b root=/dev/sda1

Then, I had to upload the bundle to S3 to make it super redundant:

ec2-upload-bundle -b troybucket -m /tmp/image.manifest.xml -a (myaccesskey) -s (mysecretaccesskey)

After refreshing my S3 bucket, I was delighted to find lots of little happy files residing there. At this point, there was only one more step to go: registering the AMI so it can be used. To do this, I used the ec2-register command.

ec2-register troybucket/image.manifest.xml -n deadpoolami -O (myaccesskey) -W (mysecretaccesskey) --region us-west-2

At this point, I was done. I could now browse my new instance store backed AMI (ami-6ded7c5d) via the Management Console, or through the CLI using the ec2-describe-images -a --region us-west-2 command.

Creating your own Key Pairs

Today we learned how to create a new RSA key directly from your instance, then swap out the old key pair on the instance and replace it with the new one. We then successfully tested the key pair by using PuTTYgen to create a .ppk file from the .pem file so that we could log into the instance with the newly created key.

Generating and replacing an instance’s SSH keypair:

1. First, SSH into the instance

2. Generate a new RSA key:
$ ssh-keygen -t rsa

3. Copy the contents of the public key and append these to your authorized keys file:
$ cat NewSSHkey.pem.pub >> .ssh/authorized_keys

4. Using a remote copy protocol (WinSCP, `scp`, `rsync`, etc.), copy the private key to your local computer.

5. Test that specifying this SSH key locally allows you to log into the instance successfully.
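Steps 2 and 3 can be rehearsed end to end in a throwaway directory before touching a real instance. This is just a sketch; the key name and paths are examples, not anything the instance requires:

```shell
#!/usr/bin/env bash
# Dry run of the key rotation steps in a temp directory, so nothing
# touches your real ~/.ssh. "NewSSHkey" is an example name.
set -e
workdir=$(mktemp -d)

# Step 2: generate a new RSA key pair. -N "" means no passphrase and
# -f sets the output file, so ssh-keygen runs without prompting.
ssh-keygen -t rsa -b 2048 -N "" -f "$workdir/NewSSHkey" -q

# Step 3: append the public half to authorized_keys, then lock down
# permissions, since sshd ignores keys in a world-readable file.
mkdir -p "$workdir/.ssh"
cat "$workdir/NewSSHkey.pub" >> "$workdir/.ssh/authorized_keys"
chmod 700 "$workdir/.ssh"
chmod 600 "$workdir/.ssh/authorized_keys"

# The private half ($workdir/NewSSHkey) is the file you copy down
# to your local machine in step 4.
ls "$workdir"
```

The chmod calls are not optional on a real instance: sshd silently refuses an authorized_keys file with loose permissions, which is a classic way to lock yourself out after a key swap.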

Also note that if you wish to upload your newly created key so that you can launch new instances with this same key, you can use the ec2-import-keypair command from the EC2 Command Line API Tools. More information on this command is provided at the link below:
http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-ImportKeyPair.html

Troubleshooting Windows – Enabling DHCP through offline registry edit

Today, the main task was to troubleshoot why a specific Windows AMI was unable to launch and pass health checks.

Ultimately, the best way to troubleshoot issues where you cannot connect to a Windows instance due to a previous corruption is to detach the volume from the problem instance, launch a “fix-it” instance of the same OS in the same Availability Zone, attach the volume to the new instance (which you can connect to), and then modify the old EBS volume as needed to allow access.

The specific error with the old volume we were dealing with was that a static IP had been assigned instead of using DHCP, so we needed to edit the registry offline to allow access back to the instance. If you search our internal knowledge base tool for “Enabling DHCP”, it will bring up an article that walks through the process.

After the volume had been edited, it was OK to detach it from the fix-it instance, reattach it to the old instance, and start it up to allow for sweet, sweet RDP.

Converting Instance Store Instances to EBS backed Instances

Sounds simple enough right?

Well, yes and no. There are quite a few blogs and posts about how to do this, and they all vary in their approach. Essentially, the theory goes like this: you start with an instance store instance; you allocate an EBS volume (10GB) and attach it to the instance; you copy all of the contents of the root instance store volume onto the EBS volume; you detach the volume, take a snapshot of it, and create an AMI from the snapshot; and voila, you have an AMI which will launch an EBS backed version of the previously instance store based instance.

This all sounds great in theory, but what commands are actually needed to transfer the root instance store storage over to the newly attached EBS volume? It turns out the best tool to use is dd, which performs a full block-by-block copy of one drive to another. If you attach the EBS volume via the Management Console, you only need to SSH into your instance and run one command:

dd if=/dev/sda1 of=/dev/sdf bs=4096 conv=notrunc,noerror

And with that command, everything I described above can then be done.
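Since dd treats the if= and of= arguments as plain paths, the same mechanics can be rehearsed safely on two ordinary files before pointing the command at real devices. A sketch, using a small random file as a stand-in for the source volume:

```shell
#!/usr/bin/env bash
# Rehearse the dd copy on regular files: source.img stands in for
# /dev/sda1 and target.img for /dev/sdf.
set -e
workdir=$(mktemp -d)

# Make a 1MB "source volume" (256 blocks of 4096 bytes).
dd if=/dev/urandom of="$workdir/source.img" bs=4096 count=256 2>/dev/null

# Same flags as the real command: notrunc keeps the target from being
# truncated, and noerror carries on past read errors instead of aborting.
dd if="$workdir/source.img" of="$workdir/target.img" bs=4096 conv=notrunc,noerror 2>/dev/null

# A block-for-block copy should produce identical checksums.
md5sum "$workdir/source.img" "$workdir/target.img"
```

One caveat worth keeping in mind with the real command: /dev/sda1 is the mounted root filesystem, and block-copying a filesystem while it is being written to can capture it in an inconsistent state, so quiesce the instance as much as possible before running it.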

The process is not quite as easy with Windows based instances, as the Windows OS is not natively equipped with dd like Linux is. You can download dd for Windows and more or less follow the same process; however, it takes longer to perform each function on Windows compared to Linux, for various reasons.

Using CLI Tools

Today, the main focus was on using the command line to interact with your AWS account and all of the EC2 resources on your account.

To start with, I downloaded the ec2-api-tools so that I could use the CLI from my Windows laptop. I used this guide: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/setting_up_ec2_command_windows.html

After getting the CLI tools up and running, I began playing around with the features: mainly launching instances, stopping and restarting them, creating volumes, and attaching them to instances.