Passed CISSP! My guide on how I passed the CISSP, and how you can too.

Today (6/23/2017), I successfully passed the CISSP exam! It took a lot of studying and exam prep, and despite all of it, there were still many questions that were tough to answer because they seemed to have multiple 'right' answers. I feel fortunate that I passed on the first try after reading a lot of other people's experiences where they barely failed and had to retake it one or more times. For those wanting to know my process, please read on. My thinking was that this is a really expensive test ($599 at time of writing) and I did not want to take it twice, so I would 'overstudy' to succeed. In the end, I feel I did enough to pass (obviously), but I'm not really sure I over-studied, seeing as how the test was still difficult for me. On most questions, if you know the material well you can eliminate two wrong answers, but you're still left with two pretty convincing choices.

First, I am the type of learner who mainly learns by reading and doing, and thankfully there are a lot of reading materials out there for the CISSP. Books/study materials I engaged with include the Official CISSP Study Guide (no links that I use contain referral URLs or anything like that, they are clean), the CISSP All-in-One Exam Guide by Shon Harris, the 11th Hour CISSP study guide, the CISSP Study Guide 3rd Edition by Conrad, and finally the complete CISSP course offered by Kelly on Cybrary (FREE).

If I had to rate these materials 1-5 on how relevant they were to passing, 1 being crap and 5 being great, I would give 5s to the official study guide (mainly due to the over 1,000 practice test questions, plus great content and organization), the 11th Hour guide, and the 3rd edition by Conrad, as these were all very on point for the exam (the 11th Hour guide is essentially the 3rd edition condensed, though, so if I had to pick one or the other I'd go with the 3rd edition). 4 out of 5 goes to Cybrary; Kelly is great and the course is on point, but I don't feel it is deep enough on its own to help you pass. 3 out of 5 goes to the Shon Harris book; it is long and, in my opinion, not organized well. Some chapters are over 200 pages!! It took me by far the longest to get through; it has the meat you need, but also the stuffing, fries, desserts, and crab cakes that you don't. I would only recommend it if you have a lot of extra time and can persevere through it. I wasn't surprised to read on Reddit that many people quit the book early on; if it weren't for my hatred of wasting money, I would have done the same. I read/watched all of the above materials cover to cover and performed every practice test/end-of-chapter test.

My goal was to consistently score 80% or above on practice exams, and my scores were all 77-83%, so I felt comfortable taking the test. My main strategy was to read all the material and do all the end-of-chapter exams for the majority of my studying, then save the full 250-question practice exams for after I had finished reading all of the content. It took me a few months to get through all the books; then I did a week of non-stop testing (one full practice exam per day, plus studying all my wrong answers) until I finally took the test today. In that last week I also re-read the 11th Hour guide: a great book with (almost) everything you need and nothing you don't.

Random gif to appease the gods

He shoots, he scores! 

Passed the AWS Certified SysOps Administrator – Associate certification today

Hello All,

I decided it was time to get some more certifications under my belt, so I sat the AWS Certified SysOps Administrator – Associate exam today.

Obviously it helps that I work at AWS and get exposure to the platform and services, but I also found it a really big help to study a dedicated exam-prep course. The course for the exam was pretty comprehensive, and in addition it came with a practice exam that had many questions similar to the real exam, so it was great in helping me prepare.

I have also heard great things about other course providers, though I have yet to actually use their courses, so I'm just putting that out there in case you find it helpful.

Overall I got an 83% on the exam, which I am happy with. You find out whether you passed immediately after taking the exam, and shortly afterward you get an email showing what % you got in each subsection of the test. For reference, my highest section was High Availability at 100%, and my lowest was Networking at 71%.

The test had 55 questions and you have 80 minutes to complete it; I only took about 40 minutes, but I felt pretty well prepared. From what I understand, there is no fixed passing score, but I think you should shoot for 70% or higher to ensure a pass.

If any of you have questions about the test please feel free to reply to this post.

Oh no, helicopter going down way too fast!

Great FREE IT Training!

Just wanted to throw this out there as a GREAT and FREE resource for a lot of different IT training, ranging from all of your CompTIA certs up to the Cisco certifications, as well as various Microsoft certifications and many other things. It is called Cybrary.

And did I mention it is free? If you are interested in learning more about IT but do not really want to spend any money doing it, this is a great option. Currently I am going through the Penetration Testing and Ethical Hacking course; Leo does a great job of both going through concepts and then doing labs, which you can follow along with if you have/install Windows and Kali Linux (which I run in a virtual environment).

Go ahead and create an account and get some learning in; whether you're brand new to IT or a seasoned pro, there is info for everyone to learn and get better with.

user@host $ ./random_gif

Using CloudFront and AWS WAF with WordPress

So, for today's post I decided to beef up the AWS cloud environment for my site. I detailed in an earlier post how I turned my site from a one-instance LAMP stack into a distributed, loosely coupled "Enterprise"-class environment.

Well, some new services have come out since then, and one of the coolest is the AWS Web Application Firewall (WAF). At first I thought I would just place it in front of my ELB, but then I learned that you have to use it with CloudFront. So it looked like my site was going to get beefed up in the form of CloudFront in front of my ELB, with an AWS WAF in front of the CloudFront distribution blocking traffic I deem unnecessary (pretty awesome, right?).

Since my site is a WordPress site, it is often targeted by an annoying attack where attackers try to hit /xmlrpc.php on the server. I wanted my WAF to take care of this by blocking the traffic before it even gets to my ELB. To do this, I first created the WAF by going to the WAF console and clicking 'Create web ACL'. From here I named it 'WordPressfilter', then scrolled down to 'String match conditions' and clicked 'Create condition'. I gave it the name 'xmlrpc.php', and then the interesting part was figuring out how I wanted to match the traffic. I settled on: part of the request to filter on = URI, match type = contains, transformation = none, and value to match = '/xmlrpc.php'.
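In plain terms, that condition is just a substring test against the request URI with no text transformation applied. Here is a toy shell sketch of the same logic, purely for illustration (the function name is mine, not anything from AWS):

```shell
# Toy re-implementation of the WAF string match condition:
# field = URI, match type = contains, transformation = none,
# value to match = "/xmlrpc.php".
check_uri() {
  case "$1" in
    */xmlrpc.php*) echo "BLOCK $1" ;;
    *)             echo "ALLOW $1" ;;
  esac
}

check_uri "/xmlrpc.php"       # blocked
check_uri "/index.php"        # allowed
check_uri "/blog/xmlrpc.php"  # blocked: "contains" matches anywhere in the URI
```

Note that because the match type is "contains", the rule also catches the path when it appears deeper in the URI, not just at the root.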

From here I then had to create a rule. After naming it, you select 'does' 'match at least one of the filters in the string match condition' and pick the condition you created before; I chose to block traffic matching the rule, and below that, to allow all requests that don't match any rules. From here you can click create, and then we have to create a CloudFront distribution to use this with. The AWS docs on creating a WAF walk through these same steps in more detail.

After creating the WAF, I had to create a CloudFront distribution that would use my ELB as an origin. Doing this was pretty easy: click CloudFront in the AWS console, then Create Distribution – Web. In the "Origin Domain Name" field, click and select your ELB; it should auto-load in the field. Then I found it necessary to pick the third option under 'Allowed HTTP Methods' so that both GET and POST are allowed. I also changed Forward Headers to ALL and Forward Query Strings to YES, and used the defaults for the rest.

Under 'Distribution Settings' I found it necessary to select the AWS WAF Web ACL I just created, then add the CNAMEs for my site, and finally I turned on logging and fed it to an S3 bucket I own. The CloudFront docs cover the logging setup.

After all this was done, all that was left was to wait for the CloudFront distribution to spin up (like 15 minutes!!!) and then change my Route 53 settings. Namely, I had to delete the A records that were pointing to my ELB and instead create new alias records pointing to the new CloudFront distribution that was spun up. This way, when someone goes to my site, they first hit the WAF, then CloudFront, then the ELB, then the autoscaled EC2 web instances, which finally talk to my RDS MySQL DB containing the data. I'll have to sketch this out in a post, as it is useful to know how data flows through your environment.
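For reference, the same alias swap can be expressed as a Route 53 change batch. The record name and CloudFront domain below are placeholders (Z2FDTNDATAQYW2, however, is the fixed hosted zone ID that Route 53 uses for all CloudFront alias targets). A sketch:

```shell
# Build the change batch JSON. "example.com" and "d111111abcdef8.cloudfront.net"
# are placeholders for your own domain and your distribution's domain name.
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d111111abcdef8.cloudfront.net.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

# Then apply it with the AWS CLI (requires your real hosted zone ID):
# aws route53 change-resource-record-sets \
#   --hosted-zone-id YOUR_ZONE_ID --change-batch file://change-batch.json
```

UPSERT is handy here because it replaces the old record and creates the alias in one change, rather than a separate DELETE plus CREATE.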

Doing all of the above makes my site much, much more resilient against DDoS and other attacks: the first thing attackers now hit is CloudFront, a global network of edge locations that can distribute traffic quickly and efficiently all over the world, and I can cut attackers off at the knees with my WAF by adding new rules on the fly to keep unwanted traffic from even hitting my back-end architecture.

Here is a graph showing the WAF in action. This is really cool, because previously this malicious traffic was free to hit and try to disrupt my site, but now it does not even get past CloudFront, saving load and strain on my back-end servers.

Happy blocking! And to the people attacking other people’s sites for no good reason…

Amazon Elastic File System

So Amazon Web Services recently released a pretty cool service that has been asked for by many different customers. Traditionally in the EC2 environment, shared storage is somewhat of a headache: EBS volumes can only be attached to one instance at a time, and S3 buckets are meant for object storage and are not intended to act as a file system. To fix this, AWS released EFS, the Elastic File System. With EFS, you can have a shared storage file system that can be mounted by multiple instances in the same VPC.

Setting it up is easy: head over to the EFS page in your AWS management console and click to create a new file system (please note, this service is currently in preview, so you have to request access). Once you pick the VPC and AZs you want it to reside in and give it some tags, you are ready to create.

Once the EFS is created, you then need to ready your instances so that the new EFS can be mounted locally within your instance.

  1. Using the Amazon EC2 console, associate your EC2 instance with a VPC security group that enables access to your mount target. For example, if you assigned the “default” security group to your mount target, you should assign the “default” security group to your EC2 instance. (learn more about using VPC security groups with Amazon EFS)
  2. Open an SSH client and connect to your EC2 instance. (find out how to connect)
  3. Install the NFS client on your EC2 instance.
    • On an Amazon Linux, Red Hat Enterprise Linux, or SuSE Linux instance:
      sudo yum install -y nfs-utils
    • On an Ubuntu instance:
      sudo apt-get install nfs-common

Mounting your file system

  1. Open an SSH client and connect to your EC2 instance. (find out how to connect)
  2. Create a new directory on your EC2 instance, such as “efs”.
    • sudo mkdir efs
  3. Mount your file system using the DNS name. The following command looks up your EC2 instance’s Availability Zone (AZ) using the EC2 instance metadata URI, then mounts the file system using the DNS name for that AZ. (what is EC2 instance metadata?)
    • sudo mount -t nfs4 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).file-system-id.efs.aws-region.amazonaws.com:/ efs (substitute your own file system ID and region)

If you are unable to connect, please see our troubleshooting documentation.
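The DNS name in the mount step is just the instance's AZ prepended to the file system's regional endpoint. A sketch of how it is assembled, with a hypothetical file system ID and region (on a real instance, the AZ would come from the metadata lookup shown in the comment rather than being hard-coded):

```shell
# Hypothetical values -- substitute your own file system ID and region.
EFS_ID="fs-12345678"
REGION="us-west-2"

# On a real instance you would fetch the AZ from instance metadata:
#   AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
AZ="us-west-2a"   # hard-coded here so the sketch runs anywhere

# Assemble the AZ-specific mount target DNS name.
TARGET="$AZ.$EFS_ID.efs.$REGION.amazonaws.com"
echo "sudo mount -t nfs4 $TARGET:/ efs"
```

This is why the same mount command works unchanged on every instance: each one resolves the name through its own AZ.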

Rinse and repeat for every other instance that you want to have a shared filesystem with, and voila, you now have easy and scalable shared storage within EC2.

AWS Pop Up Loft!

I just spent the week acting as an architect at the AWS Pop Up Loft in San Francisco. Pretty cool little spot: it's a 3-floor building with wifi and free food/drinks throughout. It even comes equipped with a free keg! You can check out more about it on the AWS Loft page.

It was cool to meet face to face with customers using AWS and to help them with issues they had or to help them better architect their resources on AWS. Questions ranged widely, from using Lambda, to using Cognito with DynamoDB, to setting up streaming services while minimizing data-out costs, to helping people with failed Elastic Beanstalk deployments. I even had an EMR cluster issue, which was interesting to look at. Shout out to Mark, a fellow Solutions Architect who worked behind the bar with me helping people with their concerns.

The highlight of the trip, however, has to be eating at In-N-Out. Pictures incoming.



Manually exporting then importing a MySQL database off RDS

Today I wanted to export my MySQL database off of my RDS instance and place it onto a new instance. Doing this was pretty simple; I just needed to use mysqldump and figure out the right parameters:

mysqldump --databases my_data_base_name -v -h my_rds_endpoint -u my_user_name -P 3306 -p > rdsdump.sql

After I had the dump, I just needed to copy it to my local computer, then re-upload it to the machine I wanted to install it on. I used WinSCP, as I was on my Windows laptop and not my Mac.

After installing MySQL on the new server and uploading the rdsdump.sql file onto my new machine, I then just had to import the sql file so that it would work:

mysql -u my_user_name -p < rdsdump.sql

And voila, I now have a ready-to-go backup of my database and can take it wherever it needs to go.
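For repeat migrations, the two commands above can be wrapped in a small script. The endpoint, user, and database names below are placeholders; this version only prints the commands (a dry run) so you can review them before executing anything:

```shell
# Placeholders -- swap in your own RDS endpoint, user, and database name.
DB_HOST="mydb.abc123.us-west-2.rds.amazonaws.com"
DB_USER="my_user_name"
DB_NAME="my_data_base_name"
DUMP_FILE="rdsdump.sql"

# Dry run: print the export and import commands instead of executing them.
# Drop the echo/quotes to run them for real (each will prompt for a password).
echo "mysqldump --databases $DB_NAME -v -h $DB_HOST -u $DB_USER -P 3306 -p > $DUMP_FILE"
echo "mysql -u $DB_USER -p < $DUMP_FILE"
```

Parameterizing it this way makes it harder to fat-finger the endpoint or dump file name when moving between environments.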

How to manually stand up a second ENI on an Ubuntu Instance

So, as it turns out, if you manually attach a second ENI to an EC2 Ubuntu instance, it will not work right away. Nor will it work if you reboot the instance. In fact, it just won't work at all.

This is because Ubuntu does not recognize additional EC2 ENIs as 'plug-and-play' devices. You can confirm this by trying to ping from the first private IP to the second, or by simply issuing an $ ifconfig command: you will see only eth0, and not eth1.

So, in order to be able to SSH into the secondary private IP (or public IP of the ENI), you first need to manually create a config file for the eth1 device at the same directory location of the eth0 config. This happens to reside at /etc/network/interfaces.d/

I created the file as eth1.cfg, and this is what I put inside it:

# secondary eth1 interface
auto eth1
iface eth1 inet static
# use your ENI's private IP and your subnet's netmask below
address <eni-private-ip>
netmask <subnet-netmask>
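Creating that file can be scripted with a heredoc. The address and netmask below are hypothetical example values (use your ENI's actual private IP and your subnet's netmask), and the sketch writes to the current directory rather than /etc/network/interfaces.d/ so it is safe to run anywhere:

```shell
# Write an eth1.cfg for the secondary ENI. 172.31.17.109 and
# 255.255.240.0 are example values -- substitute your own.
cat > eth1.cfg <<'EOF'
# secondary eth1 interface
auto eth1
iface eth1 inet static
address 172.31.17.109
netmask 255.255.240.0
EOF

# On the instance you would then place it with:
# sudo cp eth1.cfg /etc/network/interfaces.d/eth1.cfg
```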

At this point you can reboot your instance, and upon reboot you should now be able to ping from the first private IP to the second, and the eth1 should show up when you enter $ ifconfig. If not, you probably set up the config file wrong.

After testing to confirm the eth1 device is set up and configured correctly, you then need to add an internal route so that traffic can flow properly to and from the IP. For this, I used the following route add command:

sudo route add -net <subnet-network> netmask <subnet-netmask> dev eth1 gw <vpc-router-ip>

(For example, for a 172.31.16.0/20 subnet: sudo route add -net 172.31.16.0 netmask 255.255.240.0 dev eth1 gw 172.31.16.1; the VPC router always sits at the subnet's base address plus one.)

Now you should be good to go.

EC2 Network bandwidth performance tests!

So, using the post I did below, I had some fun testing the network capabilities of different instance types. For my tests, I first tried micros in the same AZ/region/VPC using private IPs, then public IPs; then I tried the big-boy r3.8xlarge instances (which offer 10 Gigabit speeds); and finally I bandwidth-tested these instances across the ocean (Oregon to Sydney regions).

- all instances were using Amazon Linux
- all instances had enhanced networking enabled where available (enhanced networking is not offered on micros)

Here are my results! (I encourage you to do your own testing, because 1) it's fun and 2) the internet has many hops, and network traffic can vary wildly from one day to the next.)
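These were simple two-instance throughput tests: a server process on one instance and a client pointed at it (assuming an iperf-style tool here; iperf3 in the sketch). The server IP is a placeholder, and the commands are only printed so the sketch is safe to run anywhere:

```shell
# Placeholder for the server instance's private (or public) IP.
SERVER_IP="10.0.0.10"

# On the server instance, start listening:
echo "iperf3 -s"

# On the client instance, run a 30-second throughput test against it:
echo "iperf3 -c $SERVER_IP -t 30"
```

For the private-IP numbers, use the server's private IP and run both instances in the same VPC; for the public-IP and cross-region numbers, use the server's public IP and make sure the security group allows the test port (iperf3 defaults to 5201).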

Super local, same AZ/region/VPC (best performance you can get)
t2.micro to t2.micro within Oregon private IP: 3.00 Gbits/sec (nice for a micro!)
t2.micro to t2.micro within Oregon public IP: 1.16 Gbits/sec
r3.8xlarge to r3.8xlarge within Oregon private IP: 9.85 Gbits/sec (SICK)
r3.8xlarge to r3.8xlarge within Oregon public IP: 5.09 Gbits/sec

Across the Ocean bro!
t2.micro to t2.micro Oregon to Sydney public IP: 68.1 Mbits/sec
r3.8xlarge to r3.8xlarge Oregon to Sydney public IP: 116 Mbits/sec

What does this show/mean? First off, if you need the best speeds, you are best off putting your instances right next to each other: same AZ, same VPC, same subnet, using private IPs. However, if you need to go across regions, speeds are still decent.