Manually exporting then importing a MySQL database off RDS

Today, I wanted to export my MySQL database off of my RDS instance and then move it onto a new instance. Doing this was pretty simple; I just needed to use mysqldump and figure out the right parameters:

mysqldump --databases my_data_base_name -v -h my_data_base_name.xxxxxx1mqh2a.us-west-2.rds.amazonaws.com -u my_user_name -P 3306 -p > rdsdump.sql
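
One thing worth noting: if your tables are InnoDB, adding the --single-transaction flag should get you a consistent dump without locking everything up while it runs. A minimal sketch of that variant:

mysqldump --databases my_data_base_name --single-transaction -v -h my_data_base_name.xxxxxx1mqh2a.us-west-2.rds.amazonaws.com -u my_user_name -P 3306 -p > rdsdump.sql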

After I had the dump, I just needed to copy it to my local computer and then re-upload it to the machine I wanted to install it on. I used WinSCP, since I was on my Windows laptop and not my Mac.
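
If you'd rather do the copy from a command line instead of WinSCP, scp does the same job. A quick sketch (the hostnames and key file here are made-up placeholders):

# pull the dump down to the local machine
scp -i my_key.pem ec2-user@my-source-host:~/rdsdump.sql .

# push it back up to the new server
scp -i my_key.pem rdsdump.sql ubuntu@my-new-server:~/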

After installing MySQL on the new server and uploading the rdsdump.sql file onto the new machine, I just had to import the SQL file:

mysql -u my_user_name -p < rdsdump.sql
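
To sanity check that everything imported, you can list the databases and tables (my_data_base_name is the same database name as above):

mysql -u my_user_name -p -e "SHOW DATABASES;"

mysql -u my_user_name -p -e "SHOW TABLES FROM my_data_base_name;"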

And voila, I now have a ready-to-go backup of my database that I can take wherever it needs to go.

How to manually stand up a second ENI on an Ubuntu Instance

So as it turns out, if you manually attach a second ENI to an EC2 Ubuntu instance, it will not work right away. It won't work after a reboot either. In fact, it just won't work at all.

This is because Ubuntu does not recognize additional EC2 ENIs as plug-and-play devices. You can confirm this by trying to ping from the first private IP to the second private IP, or by simply issuing an $ ifconfig command. You will see only eth0, and not eth1.
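
(For the curious: the kernel actually does see the new device, it just never gets configured. Running the command below, which also lists interfaces that are down, should show eth1 sitting there unconfigured.)

ifconfig -a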

So, in order to SSH into the secondary private IP (or the public IP of the ENI), you first need to manually create a config file for the eth1 device in the same directory as the eth0 config, which happens to reside at /etc/network/interfaces.d/

I created the file as eth1.cfg, and this is what I put inside it:

# secondary eth1 interface
auto eth1
iface eth1 inet static
address 10.0.0.49
netmask 255.255.255.0

At this point you can reboot your instance. Upon reboot you should be able to ping from the first private IP to the second, and eth1 should show up when you enter $ ifconfig. If not, you probably set up the config file wrong.
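
Side note: if you'd rather not wait on a full reboot, on ifupdown-based Ubuntu you should be able to bring the interface up in place and get the same result:

sudo ifup eth1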

After confirming the eth1 device is set up and configured correctly, you then need to add an internal route so that traffic can flow properly to and from the new IP. For this, I used the following route add command:

sudo route add -net 10.0.0.0 netmask 255.255.255.0 dev eth1 gw 10.0.0.1
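
You can double check that the route took with either of these (the output format varies a bit between Ubuntu versions):

route -n

ip route show dev eth1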

Now you should be good to go.

EC2 Network bandwidth performance tests!

So using the post I did below, I had some fun testing the network capabilities of some different instance types. For my tests, I first tried micros in the same AZ/region/VPC over private IP, then public IP; then I tried the big-boy r3.8xlarge instances (which offer 10 Gigabit speeds); and finally I bandwidth-tested instances across the ocean (Oregon to Sydney regions).

- all instances are running Amazon Linux
- all instances that support it had enhanced networking enabled (enhanced networking is not offered on micros)

Here are my results! (I encourage you to do your own testing, because (1) it's fun and (2) the internet has many hops, and network traffic can vary wildly from one day to the next.)
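
For reference, this kind of test boils down to a client/server pair, and iperf is a common tool for it. A minimal sketch (the IP is just a placeholder for the other instance's address):

# on the receiving instance, start a server
iperf -s

# on the sending instance, point the client at it and run for 30 seconds
iperf -c 10.0.0.50 -t 30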

Super local, same AZ/region/VPC (best performance you can get)
t2.micro to t2.micro within Oregon private IP: 3.00 Gbits/sec (nice for a micro!)
t2.micro to t2.micro within Oregon public IP: 1.16 Gbits/sec
r3.8xlarge to r3.8xlarge within Oregon private IP: 9.85 Gbits/sec (SICK)
r3.8xlarge to r3.8xlarge within Oregon public IP: 5.09 Gbits/sec

Across the Ocean bro!
t2.micro to t2.micro Oregon to Sydney public IP: 68.1 Mbits/sec
r3.8xlarge to r3.8xlarge Oregon to Sydney public IP: 116 Mbits/sec

What does this show? First off, if you need the best speeds, you are best off putting your instances right next to each other: same AZ, same VPC, same subnet, using private IPs. However, if you need to go across regions, speeds are still decent.
