Tim MalcomVetter caused a bit of a stir with his Responsible Red Teams post last week, which is not surprising since he was pretty fucking rude about the whole thing.
Responsible red teams don’t throw C2 servers all over virtual private cloud providers. Yes, it’s cool, your neat little red team automation script can deploy Empire and Cobalt Strike servers all over Digital Ocean, Linode, AWS, Azure, Google Cloud, with zero clicks … but why?
Considering I have a 2-part series on automated infrastructure deployment, this comment is more-or-less aimed directly at people like myself, byt3bl33d3r and InvokeThreatGuy.
Are you just an insensitive jerk?
¯\_(ツ)_/¯
The ethos behind what Tim was saying, however, obviously has merit.
Our goal is to decrease risk, not increase it. […] remember that as simulated attackers, we are really still defenders.
I don’t think anybody would argue with this, regardless of which side of the public-cloud-c2 fence you come down on. However, a blanket statement that all public cloud C2 infrastructure is irresponsible, even for attacking production systems, just isn’t fair in my opinion.
If you, as an operator, have carried out reasonable steps to secure and protect your C2 infrastructure, can articulate those control measures to the target organisation, and they’ve approved its use; then you’re good. They are the risk owners (it’s their data) and therefore it’s their risk to accept. If it’s within their tolerance/appetite, they’ll accept it. If not, they’ll reject it. But that should be their choice, do not make it for them - that’s the difference.
In parts 1 and 2, we discussed several security configurations such as limiting network access with cloud firewalls and SSH key management. In this post, we’ll look at how to create encrypted EBS volumes for use with AWS EC2 instances.
AWS doesn’t allow us to encrypt an existing AMI, so we can’t just take one of their quick start images and spin it up in an encrypted form. Instead, we have to go through a one-time process of creating our own encrypted AMI and using that as a base image for later deployments.
So first, create a new instance from an AMI of your choice.
EC2 > Instances > Launch Instance
I’m using Ubuntu Server 14.04 LTS (HVM), SSD Volume Type: ami-3fc8d75b
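If you’d rather not click through the console for this throwaway box, a rough Terraform equivalent looks something like the below. The resource name and instance size are arbitrary, and the key pair reference is just carried over from the earlier parts of this series.

# Throwaway base instance built from the stock Ubuntu AMI.
# It only needs to live long enough to be customised and imaged.
resource "aws_instance" "base" {
  ami           = "ami-3fc8d75b"
  instance_type = "t2.micro"
  key_name      = "${aws_key_pair.rasta.key_name}"
}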
Once the instance has booted, log in and make any customisations you want (install any software you want, make config changes, apply host hardening, etc.), then power the instance off. See here for hardening guidance.
From this instance, create a new image: EC2 > Instances > Actions > Image > Create Image
This will create a new AMI that is available to you in: EC2 > AMIs
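You can keep this step in Terraform too if you prefer; aws_ami_from_instance snapshots an instance into a private AMI. A minimal sketch, assuming the aws_instance.base example above and with a placeholder image name:

# Create a private AMI from the customised instance.
resource "aws_ami_from_instance" "base" {
  name               = "c2-base"
  source_instance_id = "${aws_instance.base.id}"
}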
Now copy the AMI you just created.
EC2 > AMIs > Actions > Copy AMI
Obviously click the Encryption tickbox :)
There is a default Master Key, but you can specify your own in the IAM section of the AWS Console.
I am also copying the AMI to the same region (but you can copy across regions too).
You will now have a new AMI available, in this example ami-d82fcbbf.
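The copy-with-encryption step can also be expressed in Terraform with aws_ami_copy, roughly like this. The image name and region are placeholders, it assumes the aws_ami_from_instance sketch above, and the kms_key_id line (referencing a hypothetical aws_kms_key resource) is only needed if you don’t want the default Master Key.

# Encrypted copy of the custom AMI, in the same region.
resource "aws_ami_copy" "encrypted" {
  name              = "c2-base-encrypted"
  source_ami_id     = "${aws_ami_from_instance.base.id}"
  source_ami_region = "eu-west-1"
  encrypted         = true

  # Uncomment to use your own KMS key instead of the default Master Key.
  # kms_key_id = "${aws_kms_key.c2.arn}"
}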
Whenever you want to deploy a new EC2 instance, you can do so from this customised, encrypted AMI. Deployment options are up to you of course - via the normal GUI or by using the AMI ID in Terraform.
resource "aws_instance" "http-c2" {
ami = "ami-d82fcbbf"
instance_type = "t2.medium"
key_name = "${aws_key_pair.rasta.key_name}"
vpc_security_group_ids = ["${aws_security_group.http-c2.id}"]
subnet_id = "${aws_subnet.default.id}"
associate_public_ip_address = true
}
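If you created the encrypted image with the aws_ami_copy sketch above, you could swap the hard-coded ID for ami = "${aws_ami_copy.encrypted.id}" so the instance definition always picks up the current encrypted copy.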
You can now terminate the original EC2 instance and deregister the unencrypted AMI.
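If the throwaway instance came from the Terraform sketch above, terraform destroy -target=aws_instance.base (or simply removing it from the config and applying) is the scripted equivalent of terminating it in the console.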