Every application has its own specific goals, critical aspects, and needs. So, the logical conclusion would be that every app needs an in-depth manual configuration, right?
Well, here at Wallarm, we’re security experts and developers from the real world, and we know that in many cases time, learning curve, and maintainability are crucial factors. That’s why we continuously try to make things as easy and straightforward as they can be, and that’s why you can include Wallarm WAF in your application packaged as a Helm chart, using the most widespread package manager for Kubernetes.
Highly appreciated for its modular approach, Helm contributes to smoother, more standardized, and reusable application deployments, reducing complexity and enhancing operational readiness. With Wallarm WAF integrated as a sidecar container, keeping your application secure won’t give you a headache.
Zoom in to Helm
There are many solid reasons to package your application as a Helm chart. If you are familiar with apt/yum/brew and their role in different operating systems, you already know how important a package manager is.
When you dig into containerized applications, you soon notice that Kubernetes can become very complex with all the objects you need to handle, from ConfigMaps to Services, from Pods to PersistentVolumes, in addition to the number of releases you need to manage. Helm offers a simple way to package everything into one application.
Especially if you’re new to containerized applications orchestrated by Kubernetes, learning to use them can take a long time, resulting in long lead times to deploy production-ready apps. Helm charts provide a quick way to deploy and update apps, and a more natural integration with third-party solutions you can plug directly into your containerized products, such as CI/CD or blogging platforms.
By sharing Helm charts within an organization or across organizations, you can avoid duplicate efforts, leading to higher efficiency and fewer errors. A central app catalog reduces duplication and spreads best practices by encoding them into charts.
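For example, sharing can be as simple as publishing charts to a chart repository that teams pull from. Here is a minimal sketch, where mycompany, https://charts.example.com, and myapp are placeholder names:
# Add a shared chart repository and browse its catalog
helm repo add mycompany https://charts.example.com
helm search repo mycompany
# Install a shared chart straight from the catalog
helm install my-release mycompany/myapp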
As we recently explained, you can install the Wallarm WAF node as a sidecar container in the same pod as the main application container, whether your app is packaged as a Helm chart or deployed with plain manifest files. The WAF node filters incoming requests and forwards the valid ones to the application container.
Installing Wallarm WAF as a sidecar container is particularly helpful when you need the right balance between the available resources and the number of pods your application consists of. This way, you can route requests more intelligently, with more granular control and supervision, and avoid any choke point at your Ingress.
An application container accepts incoming requests on port 8080/TCP, and the Service object forwards incoming requests to another port (for example, 80/TCP) on the Wallarm sidecar container. The Wallarm sidecar container filters the requests and forwards the valid ones to port 8080/TCP on all healthy pods of the application (the Kubernetes Deployment object).
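Put differently, with the example ports used throughout this post, a request travels like this:
client -> Service (port 80/TCP) -> Wallarm sidecar (containerPort 80, filters) -> app container (localhost:8080, same pod)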
You can directly jump to our documentation, or keep reading!
Hands-on!
The first thing you need to do is install Helm by following the official documentation and its short installation process.
Once you’ve done this, you’re ready to take the first step toward bundling your application as a Helm chart. Let’s start by creating the chart:
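For instance, on macOS Helm is available through Homebrew, while on Linux the official installer script does the job; both commands below come from the official documentation:
# macOS
brew install helm
# Linux
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash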
helm create myapp
Helm will create a directory tree with sample templates and chart definition files:
myapp
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
In order to install Wallarm WAF as a sidecar container in your Helm chart bundled application, we’ll focus on the existing Deployment and Service objects, both located in the templates folder, on the values.yaml configuration file, and on a custom ConfigMap.
Does that sound like too much to configure? Well, the moment you complete your first setup, you’ll notice it takes far less time to do than to explain.
Let’s start from here: create the wallarm-sidecar-configmap.yaml template in the templates folder, with the content Wallarm provides:
apiVersion: v1
kind: ConfigMap
metadata:
  name: wallarm-sidecar-nginx-conf
data:
  default: |
    geo $remote_addr $wallarm_mode_real {
      default {{ .Values.wallarm.mode | quote }};
      # IP addresses and rules for US cloud scanners
      23.239.18.250 off;104.237.155.105 off;45.56.71.221 off;45.79.194.128 off;104.237.151.202 off;45.33.15.249 off;45.33.43.225 off;45.79.10.15 off;45.33.79.18 off;45.79.75.59 off;23.239.30.236 off;50.116.11.251 off;45.56.123.144 off;45.79.143.18 off;172.104.21.210 off;74.207.237.202 off;45.79.186.159 off;45.79.216.187 off;45.33.16.32 off;96.126.127.23 off;172.104.208.113 off;192.81.135.28 off;35.235.101.133 off;34.94.16.235 off;35.236.51.79 off;35.236.87.46 off;35.236.16.246 off;35.236.110.91 off;35.236.61.185 off;35.236.14.198 off;35.236.96.31 off;35.235.124.137 off;35.236.100.176 off;34.94.13.81 off;35.236.55.214 off;35.236.127.211 off;35.236.126.84 off;35.236.3.158 off;35.235.112.188 off;35.236.118.146 off;35.236.1.4 off;35.236.20.89 off;
      # IP addresses and rules for European cloud scanners
      139.162.130.66 off;139.162.144.202 off;139.162.151.10 off;139.162.151.155 off;139.162.156.102 off;139.162.157.131 off;139.162.158.79 off;139.162.159.137 off;139.162.159.244 off;139.162.163.61 off;139.162.164.41 off;139.162.166.202 off;139.162.167.19 off;139.162.167.51 off;139.162.168.17 off;139.162.170.84 off;139.162.171.141 off;139.162.172.35 off;139.162.174.220 off;139.162.174.26 off;139.162.175.71 off;139.162.176.169 off;139.162.178.148 off;139.162.179.214 off;139.162.180.37 off;139.162.182.156 off;139.162.182.20 off;139.162.184.225 off;139.162.185.243 off;139.162.186.136 off;139.162.187.138 off;139.162.188.246 off;139.162.190.22 off;139.162.190.86 off;139.162.191.89 off;85.90.246.120 off;104.200.29.36 off;104.237.151.23 off;173.230.130.253 off;173.230.138.206 off;173.230.156.200 off;173.230.158.207 off;173.255.192.83 off;173.255.193.92 off;173.255.200.80 off;173.255.214.180 off;192.155.82.205 off;23.239.11.21 off;23.92.18.13 off;23.92.30.204 off;45.33.105.35 off;45.33.33.19 off;45.33.41.31 off;45.33.64.71 off;45.33.65.37 off;45.33.72.81 off;45.33.73.43 off;45.33.80.65 off;45.33.81.109 off;45.33.88.42 off;45.33.97.86 off;45.33.98.89 off;45.56.102.9 off;45.56.104.7 off;45.56.113.41 off;45.56.114.24 off;45.56.119.39 off;50.116.35.43 off;50.116.42.181 off;50.116.43.110 off;66.175.222.237 off;66.228.58.101 off;69.164.202.55 off;72.14.181.105 off;72.14.184.100 off;72.14.191.76 off;172.104.150.243 off;139.162.190.165 off;139.162.130.123 off;139.162.132.87 off;139.162.145.238 off;139.162.146.245 off;139.162.162.71 off;139.162.171.208 off;139.162.184.33 off;139.162.186.129 off;172.104.128.103 off;172.104.128.67 off;172.104.139.37 off;172.104.146.90 off;172.104.151.59 off;172.104.152.244 off;172.104.152.96 off;172.104.154.128 off;172.104.229.59 off;172.104.250.27 off;172.104.252.112 off;45.33.115.7 off;45.56.69.211 off;45.79.16.240 off;50.116.23.110 off;85.90.246.49 off;172.104.139.18 off;172.104.152.28 off;139.162.177.83 off;172.104.240.115 off;172.105.64.135 off;139.162.153.16 off;172.104.241.162 off;139.162.167.48 off;172.104.233.100 off;172.104.157.26 off;172.105.65.182 off;178.32.42.221 off;46.105.75.84 off;51.254.85.145 off;188.165.30.182 off;188.165.136.41 off;188.165.137.10 off;54.36.135.252 off;54.36.135.253 off;54.36.135.254 off;54.36.135.255 off;54.36.131.128 off;54.36.131.129 off;
    }
    server {
      listen 80 default_server;
      listen [::]:80 default_server ipv6only=on;
      server_name localhost;
      root /usr/share/nginx/html;
      index index.html index.htm;
      wallarm_mode $wallarm_mode_real;
      # wallarm_instance 1;
      {{ if eq .Values.wallarm.enable_ip_blocking "true" }}
      wallarm_acl default;
      {{ end }}
      set_real_ip_from 0.0.0.0/0;
      real_ip_header X-Forwarded-For;
      location / {
        proxy_pass http://localhost:{{ .Values.wallarm.app_container_port }};
        include proxy_params;
      }
    }
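Before going further, you can render just this template to verify that the Helm placeholders resolve as expected (assuming the chart directory is named myapp, as above):
helm template myapp ./myapp --show-only templates/wallarm-sidecar-configmap.yaml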
Then, we can move on to tuning the existing Deployment and Service objects, both located in the templates folder. As we explain in our documentation, a complex application can have many Deployment objects for different components, so you’ll need to find the object that defines the pods actually exposed to the Internet, such as:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        # Definition of your main app container
        - name: myapp
          image: <Image>
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            # Port on which the application container accepts incoming requests
            - containerPort: 8080
In order to install Wallarm WAF as a sidecar container, you’ll need to edit some parts of the Deployment object.
First of all, the running pods must be updated whenever the ConfigMap we just created changes. To make that happen, add the checksum/config annotation to the pod template’s metadata section (spec.template.metadata.annotations), so that any change to the ConfigMap changes the pod template and triggers a rolling update:
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        # Wallarm element: annotation to update running pods after changing Wallarm ConfigMap
        checksum/config: {{ include (print $.Template.BasePath "/wallarm-sidecar-configmap.yaml") . | sha256sum }}
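To see the annotation at work, render the Deployment with two different Wallarm settings: the checksum changes together with the ConfigMap, which is exactly what triggers the rolling update (again assuming the chart directory is named myapp):
helm template myapp ./myapp --show-only templates/deployment.yaml | grep checksum/config
helm template myapp ./myapp --show-only templates/deployment.yaml --set wallarm.mode=monitoring | grep checksum/config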
Then, you are ready to define the Wallarm sidecar container in the spec.template.spec.containers section, and the wallarm-nginx-conf volume in the spec.template.spec.volumes section:
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        # Wallarm element: definition of Wallarm sidecar container
        - name: wallarm
          image: {{ .Values.wallarm.image.repository }}:{{ .Values.wallarm.image.tag }}
          imagePullPolicy: {{ .Values.wallarm.image.pullPolicy | quote }}
          env:
            - name: WALLARM_API_HOST
              value: {{ .Values.wallarm.wallarm_host_api | quote }}
            - name: DEPLOY_USER
              value: {{ .Values.wallarm.deploy_username | quote }}
            - name: DEPLOY_PASSWORD
              value: {{ .Values.wallarm.deploy_password | quote }}
            - name: DEPLOY_FORCE
              value: "true"
            - name: TARANTOOL_MEMORY_GB
              value: {{ .Values.wallarm.tarantool_memory_gb | quote }}
          ports:
            - name: http
              # Port on which the Wallarm sidecar container accepts requests
              # from the Service object
              containerPort: 80
          volumeMounts:
            - mountPath: /etc/nginx/sites-enabled
              readOnly: true
              name: wallarm-nginx-conf
        # Definition of your main app container
        - name: myapp
          image: <Image>
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            # Port on which the application container accepts incoming requests
            - containerPort: 8080
      volumes:
        # Wallarm element: definition of the wallarm-nginx-conf volume
        - name: wallarm-nginx-conf
          configMap:
            name: wallarm-sidecar-nginx-conf
            items:
              - key: default
                path: default
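Once this Deployment is rolled out, a quick sanity check is to confirm that each pod now runs both containers; here is a sketch using the labels from this example:
kubectl get pods -l app=myapp \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'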
Before moving to the Service object, let’s take a step back to the ports.containerPort value in the sidecar container definition. Change it if you need to, so that it matches your application routing.
Now we’re ready to update the Service object in Kubernetes, located in the templates folder, making sure its ports.targetPort value is identical to ports.containerPort in the definition of the Wallarm sidecar container:
spec:
  selector:
    app: myapp
  ports:
    - port: {{ .Values.service.port }}
      # Wallarm sidecar container port;
      # the value must be identical to ports.containerPort
      # in the definition of the Wallarm sidecar container
      targetPort: 80
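After deployment, you can also double-check that the Service selects the right pods; the endpoints should list your pod IPs on the sidecar port:
kubectl get service myapp
kubectl get endpoints myapp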
Let’s return to the Helm chart directory and open the values.yaml file. In this file, we need to include the wallarm object definition:
wallarm:
  image:
    repository: wallarm/node
    tag: 2.14
    pullPolicy: Always
  wallarm_host_api: "api.wallarm.com"
  deploy_username: "username"
  deploy_password: "password"
  # Must match the containerPort of your main app container (8080 in this example)
  app_container_port: 8080
  # Referenced by the ConfigMap template above; set to "true" to enable the Wallarm ACL
  enable_ip_blocking: "false"
  tarantool_memory_gb: 2
  mode: "block"
There are some parameters we need to update. The first one is wallarm_host_api: it defines the right Wallarm API endpoint, depending on where your Wallarm account is located. If your account is in the EU cloud, use api.wallarm.com; if it is in the US cloud, use us1.api.wallarm.com.
The deploy_username and deploy_password refer to a user with the Deploy role in your Wallarm account. If you haven’t created one yet, it’s time to do it by following our instructions.
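By the way, if you’d rather not keep these credentials in values.yaml in plain text, you can pass them on the command line at deploy time instead (a sketch; storing them in a Kubernetes Secret would be cleaner still):
helm upgrade myapp ./myapp \
  --set wallarm.deploy_username=YOUR_DEPLOY_USER \
  --set wallarm.deploy_password=YOUR_DEPLOY_PASSWORD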
The app_container_port value defines the port on which your application container accepts incoming requests; it must match the ports.containerPort you defined for your main app container (8080 in our example).
Now we reach the real gold of this quick configuration process: how do you want to run the request filtering? By setting a value for mode, you can disable request processing entirely (off) or just monitor it (monitoring). Still, the typical choice is block mode, in which Wallarm WAF processes all requests and blocks the malicious ones using its AI-powered approach. Are you in doubt? Well, let us tell you that 88% of our customers use Wallarm WAF in blocking mode!
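Since mode is just a chart value, you can also switch it without editing values.yaml, for example starting in monitoring mode and moving to blocking once you trust the results:
helm upgrade myapp ./myapp --set wallarm.mode=monitoring
# later, once you are confident
helm upgrade myapp ./myapp --set wallarm.mode=block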
The setup is almost over: with a simple command, you can check that the chart, including your edited values.yaml file, is valid.
helm lint
If everything looks right (i.e., no errors are returned and no chart is marked as failed), then we are ready to deploy the modified Helm chart to the Kubernetes cluster.
helm upgrade RELEASE CHART
where RELEASE is the name of an existing Helm release and CHART is the path to the Helm chart directory. For any fine-tuning, our documentation can provide advice and clear instructions.
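With the names used throughout this post, that would look like the following; the --install flag also covers the case where the release does not exist yet:
helm upgrade --install myapp ./myapp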
The very last step, before sending our upgraded bundle out into the wild, is a real-world simulation, made possible by Wallarm tools.
Let’s send a malicious test attack to our newly bundled application, something like this:
http://<resource_URL>/?id='or+1=1--a-<script>prompt(1)</script>
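From a terminal, you can send the same test with curl; keep the URL quoted so the shell does not interpret the special characters (<resource_URL> is your application’s address):
curl "http://<resource_URL>/?id='or+1=1--a-<script>prompt(1)</script>"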
In the Events section of your Wallarm account dashboard, you should find a new entry on the list, describing the SQLi and XSS attacks. If you see it, your application is now protected by Wallarm WAF installed as a sidecar container in your Helm chart bundled application. And it took just 10 minutes!
Conclusion
If you need to deploy your Kubernetes-orchestrated application in a fast, reliable, and maintainable way, bundling it as a Helm chart is a good option. Installing Wallarm WAF as a sidecar container is the best way to secure it, too, and the configuration process is quick and easy. With less than 10 minutes of setup, your production-grade application will be ready to analyze requests, blocking the malicious ones and routing the good ones to the healthy pods.