Deploying resilient Red Team infrastructure can be quite a time-consuming process. This wiki, maintained by Steve Borosh and Jeff Dimmock, is probably the best public resource I’ve seen for design considerations and hardening tips.
For someone like me, who destroys and stands up fresh infrastructure for each engagement, building everything by hand is a long, laborious process. Anything that can be automated is a good thing.
For the purpose of this post, this is what we’re going to build:
dontgethacked.site & rekt.site are already configured to use Cloudflare DNS, but are currently without records.

- rekt.site will be used for DNS Beacons, for which we’ll need an A & NS record:
  - A record for ns1.rekt.site -> DNS redirector IP.
  - NS record for webdisk.rekt.site -> ns1.rekt.site.
- {support, cpanel}.dontgethacked.site -> HTTP/S redirector.
- static.dontgethacked.site -> HTTP/S C2 server.
- CloudFront distribution -> static.dontgethacked.site.

A few notes before we begin:
1. I’m using the same SSH key across all instances for this post - separate them out as much as you like.
2. Again, I’m leaving things loosey-goosey. You may want to restrict these to something more sensible (e.g. the CIDR range of your victim).
3. You can leave these open for initial installations, then close them afterwards.
To accomplish this, we’ll be using Terraform - an open source tool that codifies APIs into declarative configuration files. It supports many different providers, including AWS, Azure, Bitbucket, Cloudflare, DigitalOcean, Docker, GitHub, Google Cloud, OpenStack, OVH and vSphere, to name a few.
First, we define custom variables for the things we’ll need to refer to in the upcoming configurations. These include API tokens, IP addresses, SSH keys and so on.
variable "aws-akey" {}
variable "aws-skey" {}
variable "do-token" {}
variable "cf-email" {}
variable "cf-token" {}
variable "rasta-key" {}
variable "attacker-ip" {}
variable "dom1" {}
variable "sub1" {}
variable "sub2" {}
variable "sub3" {}
variable "dom2" {}
variable "sub4" {}
The values themselves are supplied separately, e.g. in a terraform.tfvars file, which Terraform loads automatically:
aws-akey = "[removed]"
aws-skey = "[removed]"
do-token = "[removed]"
cf-email = "[removed]"
cf-token = "[removed]"
rasta-key = "rasta.pub"
attacker-ip = "2.31.13.109/32"
dom1 = "dontgethacked.site"
sub1 = "support"
sub2 = "cpanel"
sub3 = "static"
dom2 = "rekt.site"
sub4 = "webdisk"
Here we define the provider parameters that we’re going to use. Each provider is structured slightly differently - Digital Ocean, for instance, allows you to specify a region in the Droplet configuration, whereas AWS requires it here.
provider "aws" {
access_key = "${var.aws-akey}"
secret_key = "${var.aws-skey}"
region = "eu-west-2"
}
provider "digitalocean" {
token = "${var.do-token}"
}
provider "cloudflare" {
email = "${var.cf-email}"
token = "${var.cf-token}"
}
resource "aws_key_pair" "rasta" {
key_name = "rasta"
public_key = "${file("${var.rasta-key}")}"
}
resource "digitalocean_ssh_key" "rasta" {
name = "rasta"
public_key = "${file("${var.rasta-key}")}"
}
I’m storing rasta.pub on disk, but you could also place the entire key within the variable, e.g. rasta-key = "ssh-rsa blahblahblah" (in which case you’d drop the file() interpolation).
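A minimal sketch of that alternative, assuming rasta-key now holds the key material rather than a filename:
# Hypothetical variant: the variable contains the public key itself,
# so it is passed straight through instead of being read from disk.
resource "aws_key_pair" "rasta" {
  key_name   = "rasta"
  public_key = "${var.rasta-key}"
}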
In AWS, we must create a Virtual Private Cloud, Subnet, Internet Gateway and Routing Table. We’re not using private networking, so the ranges are quite inconsequential.
resource "aws_vpc" "default" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
}
resource "aws_subnet" "default" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "10.0.0.0/24"
}
resource "aws_internet_gateway" "default" {
vpc_id = "${aws_vpc.default.id}"
}
resource "aws_route_table" "default" {
vpc_id = "${aws_vpc.default.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.default.id}"
}
}
resource "aws_route_table_association" "default" {
subnet_id = "${aws_subnet.default.id}"
route_table_id = "${aws_route_table.default.id}"
}
Security Groups define the inbound/outbound firewall rules for AWS instances. Notice how we can reference IP variables.
resource "aws_security_group" "dns-rdir" {
name = "dns-redirector"
vpc_id = "${aws_vpc.default.id}"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["${var.attacker-ip}"]
}
ingress {
from_port = 53
to_port = 53
protocol = "udp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 53
to_port = 53
protocol = "udp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "http-rdir" {
name = "http-redirector"
vpc_id = "${aws_vpc.default.id}"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["${var.attacker-ip}"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 53
to_port = 53
protocol = "udp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
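If you want to tighten things up (per note 2 earlier), the wide-open web ports can be scoped down to the target’s egress range. A minimal sketch, assuming a hypothetical victim-cidr variable:
variable "victim-cidr" {}

# Hypothetical: inside aws_security_group.http-rdir, swap the 0.0.0.0/0
# ingress rules for 80/443 with rules scoped to the victim's CIDR range.
ingress {
  from_port   = 80
  to_port     = 80
  protocol    = "tcp"
  cidr_blocks = ["${var.victim-cidr}"]
}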
Here we create the redirector instances.
resource "aws_instance" "dns-rdir" {
ami = "ami-489f8e2c" # Amazon Linux AMI 2017.03.1
instance_type = "t2.micro"
key_name = "${aws_key_pair.rasta.key_name}"
vpc_security_group_ids = ["${aws_security_group.dns-rdir.id}"]
subnet_id = "${aws_subnet.default.id}"
associate_public_ip_address = true
}
resource "aws_instance" "http-rdir" {
ami = "ami-489f8e2c" # Amazon Linux AMI 2017.03.1
instance_type = "t2.micro"
key_name = "${aws_key_pair.rasta.key_name}"
vpc_security_group_ids = ["${aws_security_group.http-rdir.id}"]
subnet_id = "${aws_subnet.default.id}"
associate_public_ip_address = true
}
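As an optional tweak, the hardcoded AMI ID above is specific to eu-west-2, so changing region means hunting down a new one. An aws_ami data source can look it up instead - a sketch, assuming the standard Amazon Linux naming scheme:
# Look up the most recent Amazon Linux HVM AMI in the configured region,
# then reference it in the instances as "${data.aws_ami.amazon-linux.id}".
data "aws_ami" "amazon-linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn-ami-hvm-*-x86_64-gp2"]
  }
}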
Next, the CloudFront distribution that will sit in front of static.dontgethacked.site (the HTTP/S C2 server) for domain fronting. I replicated most of the settings here from Raffi’s demo video.
resource "aws_cloudfront_distribution" "http-c2" {
enabled = true
is_ipv6_enabled = false
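# The origin is the HTTP/S C2 server, reached via static.dontgethacked.site;
# "match-viewer" makes CloudFront use whichever protocol the viewer used.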
origin {
domain_name = "${var.sub3}.${var.dom1}"
origin_id = "domain-front"
custom_origin_config {
http_port = 80
https_port = 443
origin_protocol_policy = "match-viewer"
origin_ssl_protocols = ["TLSv1", "TLSv1.1", "TLSv1.2"]
}
}
default_cache_behavior {
target_origin_id = "domain-front"
allowed_methods = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
cached_methods = ["GET", "HEAD"]
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 86400
max_ttl = 31536000
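# Forward all query strings, headers and cookies so Beacon traffic
# reaches the origin unmodified (this effectively disables caching).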
forwarded_values {
query_string = true
headers = ["*"]
cookies {
forward = "all"
}
}
}
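# Only serve viewers from the whitelisted locations (GB in this example).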
restrictions {
geo_restriction {
restriction_type = "whitelist"
locations = ["GB"]
}
}
viewer_certificate {
cloudfront_default_certificate = true
}
}
resource "digitalocean_droplet" "http-c2" {
image = "ubuntu-14-04-x64"
name = "http-c2"
region = "lon1"
size = "2gb"
ssh_keys = ["${digitalocean_ssh_key.rasta.id}"]
}
resource "digitalocean_droplet" "dns-c2" {
image = "ubuntu-14-04-x64"
name = "dns-c2"
region = "lon1"
size = "2gb"
ssh_keys = ["${digitalocean_ssh_key.rasta.id}"]
}
resource "digitalocean_droplet" "paydel" {
image = "ubuntu-14-04-x64"
name = "payload-delivery"
region = "lon1"
size = "512mb"
ssh_keys = ["${digitalocean_ssh_key.rasta.id}"]
}
Cross-provider configuration is one of my favourite aspects of Terraform. Notice that we refer to the public IPs of our AWS redirector instances within this Digital Ocean configuration.
resource "digitalocean_firewall" "http-c2" {
name = "http-c2"
droplet_ids = ["${digitalocean_droplet.http-c2.id}"]
inbound_rule = [
{
protocol = "tcp"
port_range = "22"
source_addresses = ["${var.attacker-ip}"]
},
{
protocol = "tcp"
port_range = "80"
source_addresses = ["0.0.0.0/0"]
},
{
protocol = "tcp"
port_range = "443"
source_addresses = ["0.0.0.0/0"]
},
{
protocol = "tcp"
port_range = "50050"
source_addresses = ["${var.attacker-ip}"]
}
]
outbound_rule = [
{
protocol = "udp"
port_range = "53"
destination_addresses = ["0.0.0.0/0"]
},
{
protocol = "tcp"
port_range = "80"
destination_addresses = ["0.0.0.0/0"]
},
{
protocol = "tcp"
port_range = "443"
destination_addresses = ["0.0.0.0/0"]
}
]
}
resource "digitalocean_firewall" "c2-dns" {
name = "c2-dns"
droplet_ids = ["${digitalocean_droplet.dns-c2.id}"]
inbound_rule = [
{
protocol = "tcp"
port_range = "22"
source_addresses = ["${var.attacker-ip}"]
},
{
protocol = "udp"
port_range = "53"
source_addresses = ["${aws_instance.dns-rdir.public_ip}"]
},
{
protocol = "tcp"
port_range = "50050"
source_addresses = ["${var.attacker-ip}"]
}
]
outbound_rule = [
{
protocol = "udp"
port_range = "53"
destination_addresses = ["0.0.0.0/0"]
},
{
protocol = "tcp"
port_range = "80"
destination_addresses = ["0.0.0.0/0"]
},
{
protocol = "tcp"
port_range = "443"
destination_addresses = ["0.0.0.0/0"]
}
]
}
resource "digitalocean_firewall" "paydel" {
name = "paydel"
droplet_ids = ["${digitalocean_droplet.paydel.id}"]
inbound_rule = [
{
protocol = "tcp"
port_range = "22"
source_addresses = ["${var.attacker-ip}"]
},
{
protocol = "tcp"
port_range = "80"
source_addresses = ["${aws_instance.http-rdir.public_ip}"]
},
{
protocol = "tcp"
port_range = "443"
source_addresses = ["${aws_instance.http-rdir.public_ip}"]
},
{
protocol = "tcp"
port_range = "50050"
source_addresses = ["${var.attacker-ip}"]
}
]
outbound_rule = [
{
protocol = "udp"
port_range = "53"
destination_addresses = ["0.0.0.0/0"]
},
{
protocol = "tcp"
port_range = "80"
destination_addresses = ["0.0.0.0/0"]
},
{
protocol = "tcp"
port_range = "443"
destination_addresses = ["0.0.0.0/0"]
}
]
}
resource "cloudflare_record" "http-rdir1" {
domain = "${var.dom1}"
name = "${var.sub1}"
value = "${aws_instance.http-rdir.public_ip}"
type = "A"
ttl = 300
}
resource "cloudflare_record" "http-rdir2" {
domain = "${var.dom1}"
name = "${var.sub2}"
value = "${aws_instance.http-rdir.public_ip}"
type = "A"
ttl = 300
}
resource "cloudflare_record" "http-df" {
domain = "${var.dom1}"
name = "${var.sub3}"
value = "${digitalocean_droplet.http-c2.ipv4_address}"
type = "A"
ttl = 300
}
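# A record: ns1.rekt.site -> the DNS redirector's public IP.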
resource "cloudflare_record" "dns-c2-ns1" {
domain = "${var.dom2}"
name = "ns1"
value = "${aws_instance.dns-rdir.public_ip}"
type = "A"
ttl = 300
}
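# NS record: delegate webdisk.rekt.site to ns1.rekt.site, so Beacon DNS
# queries are answered by the DNS redirector (and forwarded to the DNS C2).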
resource "cloudflare_record" "dns-c2-a" {
domain = "${var.dom2}"
name = "${var.sub4}"
value = "ns1.${var.dom2}"
type = "NS"
ttl = 300
}
Outputs are printed at the end of a deployment, so we can see all of the IPs etc. as they get assigned. You can also print them on demand after deployment, e.g. > terraform.exe output dns-rdir-ip.
output "dns-rdir-ip" {
value = "${aws_instance.dns-rdir.public_ip}"
}
output "http-rdir-ip" {
value = "${aws_instance.http-rdir.public_ip}"
}
output "paydel-ip" {
value = "${digitalocean_droplet.paydel.ipv4_address}"
}
output "http-c2-ip" {
value = "${digitalocean_droplet.http-c2.ipv4_address}"
}
output "dns-c2-ip" {
value = "${digitalocean_droplet.dns-c2.ipv4_address}"
}
output "cf-domain" {
value = "${aws_cloudfront_distribution.http-c2.domain_name}"
}
We can finally deploy our infrastructure and test it out (if this is a fresh working directory, run terraform.exe init first so the provider plugins are installed).
> terraform.exe plan -out plan
> terraform.exe apply plan
Apply complete! Resources: 23 added, 0 changed, 0 destroyed.
Outputs:
cf-domain = d2x0m979j4p9ih.cloudfront.net
dns-c2-ip = 138.68.188.159
dns-rdir-ip = 35.177.246.178
http-c2-ip = 138.68.188.160
http-rdir-ip = 35.176.5.164
paydel-ip = 178.62.74.205
Verify that the DNS records were created and resolve to the expected IPs with nslookup.
Name: cpanel.dontgethacked.site
Address: 35.176.5.164
Name: support.dontgethacked.site
Address: 35.176.5.164
Write a test file into the web root of the payload delivery server and grab it with curl.
╰─➤ curl http://cpanel.dontgethacked.site/test
this is my payload delivery server
╰─➤ curl http://support.dontgethacked.site/test
this is my payload delivery server
Verify that the NS record was created and that ns1.rekt.site resolves to the expected IP.
Name: ns1.rekt.site
Address: 35.177.246.178
Create a DNS Beacon listener on the DNS C2 server and test DNS responses from it (an idle DNS listener replies with 0.0.0.0 by default, so these are the responses we want to see).
Name: webdisk.rekt.site
Address: 0.0.0.0
Name: blahblah.webdisk.rekt.site
Address: 0.0.0.0
Again, verify the DNS record.
Name: static.dontgethacked.site
Address: 138.68.188.160
Host a test file on the HTTP/S C2 server and verify that we can read it using the direct CloudFront URL.
╰─➤ curl -A 'notcurl' http://d2x0m979j4p9ih.cloudfront.net/test
this is my http/s c2 server
Finally, verify that we can also read it via a0.awsstatic.com by specifying the CloudFront domain in the Host header - any domain that fronts onto CloudFront will do, as the Host header determines which distribution serves the request.
╰─➤ curl -A 'notcurl' http://a0.awsstatic.com/test -H 'Host: d2x0m979j4p9ih.cloudfront.net'
this is my http/s c2 server
Looks good to me.
In Part 2, we’ll cover the automatic installation of software & tools such as Oracle Java, Cobalt Strike, Apache & Socat.