Infrastructure as Code (Terraform + Ansible)

June 21, 2021

If you’ve any experience with building infrastructure designed to support a red team or adversary simulation exercise, you’ll have likely come across the Red Team Infrastructure Wiki. If not, it’s a curated collection of resources for creating secure and resilient infrastructure – covering everything from high-level design considerations to step-by-step setup instructions.

Just a cursory scroll through the content will give you a feel for the volume of information; and after building several VMs for delivering payloads, performing phishing and redirecting HTTP/DNS a few times, you’ll think “geez, it would be so much better if this could be automated”.

In 2017, I authored a couple of blog posts entitled “Automated Red Team Infrastructure Deployment with Terraform”. If you’re also not familiar with Terraform, it’s an application that allows you to define infrastructure as code and deploy said infrastructure onto a platform such as AWS, Azure or VMware. This post is nothing new – more of a refresh to reflect the newer Terraform HCL syntax.

There are many attractions to using Terraform. Some that come to mind:

  • Design (and test) infrastructure once and re-use that design multiple times.
  • Configuration files can be source controlled.
  • Create and destroy infrastructure with a single command, whilst you drink <insert beverage of choice>.
  • Monitor state changes (and implement design changes on-the-fly).

Once VMs have been deployed, you’ll also want to install applications and/or make other configuration changes to them. Although Terraform can be used (in a limited fashion), it’s not specifically designed for this job. There are much better solutions out there such as Chef, Puppet and Ansible.

I like Ansible because I think it’s by far the easiest to start working with. You can install and run it locally; it doesn’t require a “management server”; it doesn’t require agents on the target VMs (only inbound SSH); and the playbooks are also repeatable (and sharable).

Let’s get started…

Design

For this post, I’m going to deploy 2 Linux VMs. One will run Covenant and the other will act as an Apache HTTP redirector. This is a nice setup for labs, training/learning etc.

The idea is to not make Covenant publicly available. A security group will only allow the home/office IP address of the “attacker” to access ports 22 and 7443 for management.

Port 8080 will only be available from the private IP of the redirector (they will exist in the same subnet).

And the redirector will only be publicly available on port 80.

When the HTTP C2 traffic hits the redirector, it will be proxied over AWS internal networking to the Covenant listener. Even though the Covenant VM will have a public IP address, the very existence of the VM should not be visible from the perspective of the “victim”.

We’ll also add some specific rules so that only traffic matching our C2’s profile will get proxied through. Everything else can be dropped, redirected or proxied to a fake web page.

Terraform

Terraform has many providers (a “provider” is an abstraction of an upstream API), each with its own authentication requirements. I’ll be using AWS, which requires you to download the AWS CLI and complete the initial configuration with your Access and Secret keys.

I’m also cheating slightly by allowing Terraform to deploy into my default VPC. This means that some resources already exist – obviously the VPC itself, as well as several subnets. The only other resource you should create ahead of time is the SSH key to be used with the Linux VMs.
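
If you don’t already have one, a quick way to create the key with the AWS CLI (the key name “deployment” matches the key_name used later, and the output file matches the deployment.pem in the directory listing below) is:

$ aws ec2 create-key-pair --key-name deployment --query "KeyMaterial" --output text > deployment.pem
$ chmod 600 deployment.pem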

Terraform requires one or more configuration files with the *.tf file extension. You can throw all of your configurations into one file, but it’s more manageable to split them out by function. My folder structure looks like this:

~/infrastructure-demo$ ls -l
total 28
drwxr-xr-x 2 rasta rasta 4096 Jun 20 20:49 ansible
-rw------- 1 rasta rasta 1678 Jun 13 18:24 deployment.pem
-rw-r--r-- 1 rasta rasta 1564 Jun 20 20:30 instances.tf
-rw-r--r-- 1 rasta rasta  155 Jun 13 18:24 main.tf
-rw-r--r-- 1 rasta rasta  147 Jun 20 18:32 outputs.tf
-rw-r--r-- 1 rasta rasta  952 Jun 13 18:24 security_groups.tf
-rw-r--r-- 1 rasta rasta  177 Jun 14 10:19 variables.tf

I use main.tf to define the providers I want to use and any configurations that they require. You can build infrastructure with multiple providers at the same time.

Terraform configuration files are written in HCL (HashiCorp Configuration Language).

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "eu-west-2"
}
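
With the provider declared, run terraform init from the working directory so Terraform downloads the AWS provider plugin before anything else:

$ terraform init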

variables.tf contains entries that are used multiple times throughout the other configurations, and are also prone to change.

variable "ami" {
  type    = string
  default = "ami-0194c3e07668a7e36"
}

variable "private_key" {
  type    = string
  default = "deployment.pem"
}

variable "whitelist_cidr" {
  type    = string
  default = "1.2.3.4/32"
}

These variable declarations have “default” values, but can also be overridden on the command line. In my AWS region, this AMI is Ubuntu 20.04. The whitelist_cidr is my home IP address and will be used in security_groups.tf.
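
For example, overriding the whitelisted CIDR at plan time looks like this:

$ terraform plan -out deployment_plan -var 'whitelist_cidr=<your-ip>/32'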

To create an EC2 instance, define an aws_instance resource. At a minimum we should provide an AMI, the instance size and the SSH key name (this is the SSH key name in the AWS console, not the filename on disk).

resource "aws_instance" "covenant" {
  ami           = var.ami
  instance_type = "t2.micro"
  key_name      = "deployment"

  tags = {
    Name = "covenant"
  }
}
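
The redirector instance (referenced by the outputs below) follows the same pattern — a sketch, reusing the same AMI and size for simplicity:

resource "aws_instance" "redirector" {
  ami           = var.ami
  instance_type = "t2.micro"
  key_name      = "deployment"

  tags = {
    Name = "redirector"
  }
}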

outputs.tf is useful for printing information about the deployment. For instance, we’d like to know what the public IP addresses are.

output "covenant_ip" {
  value = aws_instance.covenant.public_ip
}

output "redirector_ip" {
  value = aws_instance.redirector.public_ip
}

You can test deployment at any time:

$ terraform fmt
$ terraform validate
Success! The configuration is valid.

$ terraform plan -out deployment_plan
$ terraform apply deployment_plan

Outputs:
covenant_ip = "3.10.9.247"
redirector_ip = "3.10.138.79"

$ terraform destroy

Defining the security groups is quite straightforward. Each one needs a name, as well as ingress and egress rules.

resource "aws_security_group" "covenant_sg" {
  name = "covenant_sg"

  ingress {
    from_port   = 7443
    to_port     = 7443
    protocol    = "tcp"
    cidr_blocks = [var.whitelist_cidr]
  }

  ingress {
    from_port       = 8080
    to_port         = 8080
    protocol        = "tcp"
    security_groups = [aws_security_group.redirector_sg.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
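
The redirector_sg referenced above isn’t shown in this post but, per the design, it only needs to expose port 80 publicly — a minimal sketch:

resource "aws_security_group" "redirector_sg" {
  name = "redirector_sg"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}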

You will then need to return to instances.tf and assign your security groups to the relevant instances.

vpc_security_group_ids = [aws_security_group.ssh-in_sg.id, aws_security_group.covenant_sg.id]
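
The ssh-in_sg group isn’t shown either; per the design it allows port 22 from the whitelisted CIDR only — again, a sketch:

resource "aws_security_group" "ssh-in_sg" {
  name = "ssh-in_sg"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.whitelist_cidr]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}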

I also made sure to add a subnet_id and private_ip for each VM. This subnet is one of the default ones on my VPC and has the range 172.31.0.0/20.

subnet_id  = "subnet-5f9ffb36"
private_ip = "172.31.15.10"

I need to define the private IP of the Covenant VM so that I can use it in the Apache configuration on the redirector.

Ansible

Like Terraform’s configuration files, Ansible Playbooks are declarative. They’re written in YAML and executed against machines over SSH. There are lots of examples in the ansible-examples repo.

$ ansible-playbook -u <ssh-username> -i '<target-ip>,' --private-key <ssh-private-key> playbook.yaml

(The trailing comma tells Ansible this is an inline host list rather than an inventory file.)

Ansible also has loads of built-in modules that can perform different tasks on a target. For instance, installing a package with the apt module is as simple as:

- name: Install apache2
  apt:
    name: apache2
    state: latest 

Within my ansible directory, I have all the files required to set up Apache on the redirector VM, and Covenant.

~/infrastructure-demo/ansible$ ls -l
total 20
-rw-r--r-- 1 rasta rasta  335 Jun 20 11:01 000-default.conf
-rw-r--r-- 1 rasta rasta 1130 Jun 20 11:38 covenant-setup.yaml
-rw-r--r-- 1 rasta rasta  307 Jun 14 12:35 covenant.service
-rw-r--r-- 1 rasta rasta  257 Jun 20 20:30 htaccess
-rw-r--r-- 1 rasta rasta  900 Jun 21 12:30 redirector-setup.yaml

The covenant-setup playbook will:

  • Install Microsoft’s apt signing key
  • Download and install .NET Core 3.1
  • Clone Covenant from GitHub
  • Build the Release
  • Register a new systemd service for Covenant
  • Start the service and enable it on boot
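
A condensed sketch of what those tasks could look like (the repo path, package name and install directory here are assumptions based on Covenant’s standard install steps; covenant.service is the unit file from the directory listing above):

- hosts: all
  become: yes
  tasks:
    - name: Install Microsoft's apt signing key and repo
      apt:
        deb: https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb

    - name: Install .NET Core 3.1
      apt:
        name: dotnet-sdk-3.1
        state: present
        update_cache: yes

    - name: Clone Covenant from GitHub
      git:
        repo: https://github.com/cobbr/Covenant
        dest: /opt/Covenant

    - name: Build the Release
      command: dotnet build -c Release
      args:
        chdir: /opt/Covenant/Covenant

    - name: Register a new systemd service for Covenant
      copy:
        src: covenant.service
        dest: /etc/systemd/system/covenant.service

    - name: Start the service and enable it on boot
      systemd:
        name: covenant
        state: started
        enabled: yes
        daemon_reload: yes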

The redirector-setup playbook will:

  • Install Apache
  • Enable mod_rewrite, proxy and proxy_http modules
  • Overwrite the default Apache config file
  • Copy a .htaccess file to /var/www/html
  • Restart Apache
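
And a similar sketch for the redirector (the module names are standard Ansible; the source file names match the directory listing above):

- hosts: all
  become: yes
  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: latest
        update_cache: yes

    - name: Enable mod_rewrite, proxy and proxy_http
      apache2_module:
        name: "{{ item }}"
        state: present
      loop:
        - rewrite
        - proxy
        - proxy_http

    - name: Overwrite the default Apache config file
      copy:
        src: 000-default.conf
        dest: /etc/apache2/sites-available/000-default.conf

    - name: Copy the .htaccess file to /var/www/html
      copy:
        src: htaccess
        dest: /var/www/html/.htaccess

    - name: Restart Apache
      service:
        name: apache2
        state: restarted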

We want Terraform to execute the playbooks automatically after the VMs have been deployed. For that, we can use the local-exec provisioner within each aws_instance block.

provisioner "local-exec" {
  command = "sleep 60; ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook -u ubuntu -i '${self.public_ip},' --private-key ${var.private_key} ansible/redirector-setup.yaml"
}

I use sleep 60 to give the VM a chance to finish booting and for SSH to become available; and setting ANSIBLE_HOST_KEY_CHECKING to false prevents Ansible from asking for confirmation of the SSH fingerprint.

Once everything is deployed, you should see the default “It works!” page on port 80 of the redirector, and the Covenant UI on port 7443 of the Covenant VM.

When configuring the Covenant listener, use the private IP of the Covenant VM as the Bind Address; and use the public IP of the redirector as the Connect Address.

Generate and execute a Launcher, and the Grunt should appear.

If the Covenant listener was Internet-facing, you’d be able to browse to a URL defined in the C2 profile and see the page that Covenant serves up.

However, the neat aspect to the redirector is that we can check elements of the incoming HTTP request and if it doesn’t match what we expect from a Grunt (such as the User-Agent string and known Cookie values etc), we can return different content.
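
As a rough illustration of those .htaccess rules (the User-Agent and URI patterns below are placeholders — match them to your own listener profile; 172.31.15.10 is the Covenant VM’s private IP from earlier):

RewriteEngine On
# Proxy only requests that look like Grunt traffic
RewriteCond %{HTTP_USER_AGENT} "Mozilla/5\.0"
RewriteCond %{REQUEST_URI} ^/en-us/(index|docs|test)\.html$
RewriteRule ^.*$ http://172.31.15.10:8080%{REQUEST_URI} [P]
# Everything else falls through to the default Apache content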

This means that Grunt traffic is allowed through to Covenant whilst a casual observer sees something completely different (in this example, just the default Apache page), and the Grunt still functions:

(rasta) > WhoAmI

GHOST-CANYON\Daniel

The full set of Terraform files and Ansible playbooks are available on Patreon.

