How to easily protect any Kubernetes application?
2020-07-24 · lab.wallarm.com

The king of container orchestration needs the best security companion: Wallarm WAF.

When it comes to speed, portability, and the advantages of microservices architecture, no other product can compete with Kubernetes as a container orchestrator. Nevertheless, even the best solutions have challenges, and security is always one of these.

According to the CNCF’s Cloud Native Landscape, the market today offers more than 100 tools to manage containers, yet 89% of the software architects, DevOps managers, and back-end developers who work in containerized environments are using some form of Kubernetes. What’s even more impressive is the growth of Kubernetes-based solutions in production. In 2016, only 23% of the respondents to the CNCF annual survey used it in production. In 2019, that number rocketed to 84%. These figures show that organizations trust containers more every day and are comfortable using them in user-facing applications.

The growing adoption also means increasing trust in large projects relying on containers. In the last edition of its survey, CNCF reports that «the number of respondents using 249 containers or less dropped by 26% since 2018. Conversely, the number of respondents using 250 or more increased by 28%. The most significant change was in those using fewer than 50 containers, which dropped by 43%». Is this good news? Yes, because it shows that Kubernetes is maturing day after day, and its potential for scaling is now clear. At the same time, this evolution brings emerging challenges.

Mind the (last) step

When we jump from development to the deployment/production phase, we have to face real-world security challenges. 

The point is quite simple: web and API applications are criminals’ favorite target. In recent years, we’ve seen an increase in distributed denial-of-service (DDoS) attacks and ransomware, but web application attacks are still the most common cause of data breaches and financial crimes.

With the COVID-19 pandemic, the whole world has started to work remotely, and many popular applications have become crucial in our lives. Web applications today touch every level of our daily routine. This means that attacking web applications gives criminals access to more and more credentials and information that can be used for financial gain, blackmail, or socio-political engineering.

Familiar threats in a new environment

The latest edition of the OWASP Top 10, a standard awareness document for developers and web application security that lists the most critical security risks to web applications, identifies SQL injection flaws, broken authentication, sensitive data exposure, broken access control, cross-site scripting (XSS), and security misconfiguration as the most common web application attack strategies. Nothing new, one could say. But as containerized architectures gain more and more popularity, the challenges evolve, and so does our need for strategies and solutions. The distributed nature of applications relying on containers and orchestrators makes it difficult to quickly investigate which containers might be misconfigured or vulnerable. Traditional solutions are no longer effective.

Generally speaking, there’s good news: the attack surface of a containerized application is significantly smaller than that of a monolithic one. A traditional host contains multiple services, so an attack can compromise the whole system. With a microservices model, by contrast, an attack only impacts the compromised container and the service to which it belongs. Having said that, we still need to secure our production environment.

An evolving approach for an evolving security

The most common solution is to set up a Web Application Firewall (WAF) to enforce protection rules on our software. But there’s a problem: legacy WAFs use a system-scale preventive approach. Because of this, in a containerized architecture they often make bad decisions, producing irrelevant alerts and a high false-positive rate. They are also largely ineffective in containerized environments because they examine each network connection in isolation: where microservices are involved, legacy WAFs have no visibility into the container-to-container traffic that attackers use to persist in the environment.

Attacks on container-based web applications require a container-based approach to security, one that can associate specific behaviors with specific containers, improving accuracy without affecting performance.

Wallarm Advanced Cloud-Native WAF integrates seamlessly with microservices architectures, requires no manual rule configuration, and delivers a false-positive rate so low that 98% of customers use Wallarm Advanced Cloud-Native WAF in blocking mode. The AI-powered core of Wallarm WAF detects malicious behavior by learning from real-world threats and stopping them. Wallarm WAF identifies injection attacks in network traffic and recognizes programming languages, encodings, data types, and credential formats. Preventing and responding to malicious activity early minimizes disruption to other operations.

The last step: a secure and easy way

Speed, portability, and scalability. These Kubernetes qualities are usually paired with another one: the configuration is quick and easy. So, why should securing our application be difficult and time-consuming? 

Wallarm Cloud-Native WAF offers automated application security. It installs directly as a Kubernetes NGINX Ingress controller. Alternatively, Wallarm can be installed as a sidecar Docker container within Kubernetes pods, with support spanning Google GKE, Amazon EKS, Azure AKS, and Kubernetes in a private cloud.

Adding an Ingress controller allows you to quickly implement or modify external access to your internal Kubernetes microservices. For example, through an Ingress controller, end users can access your company’s catalog.

An Ingress controller accepts external web and API calls and processes each inbound request according to rules you define, such as the URI path or the name of the backing service, adding external connectivity to the cluster and providing HTTP(S) load balancing.
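
To make the rule-based routing concrete, here is a minimal Ingress sketch, assuming a cluster that serves the networking.k8s.io/v1 API; the host, service name, and port are hypothetical and only illustrate the path-based rules described above.

```yaml
# Minimal Ingress sketch; the host, service name, and port are illustrative only.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: catalog-ingress
spec:
  ingressClassName: nginx          # handled by the NGINX-based Ingress controller
  rules:
    - host: shop.example.com       # hypothetical external hostname
      http:
        paths:
          - path: /catalog         # route by URI path...
            pathType: Prefix
            backend:
              service:
                name: catalog-service   # ...to the backing service by name
                port:
                  number: 80
```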

As you can see in the documentation, the Wallarm-enabled Ingress controller can be installed with a simple Helm command, adding security to all of this inbound traffic. And there’s more on our GitHub page.
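
As a rough sketch of that install flow, it might look like the commands below; the repository URL, chart name, and value keys are assumptions for illustration, not verbatim from the Wallarm documentation, which remains the authoritative reference.

```sh
# Illustrative only: the chart repository, chart name, and value keys are assumptions.
helm repo add wallarm https://charts.wallarm.com      # assumed repository URL
helm repo update

helm install wallarm-ingress wallarm/wallarm-ingress \
  --namespace wallarm-ingress --create-namespace \
  --set controller.wallarm.enabled=true \
  --set controller.wallarm.token=<YOUR_WALLARM_NODE_TOKEN>   # placeholder credential
```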

While the Ingress controller is the primary deployment method, another approach is gaining more and more popularity: the installation of Wallarm WAF as a sidecar container.

The sidecar installs in the same pod as the main application container and shares its lifecycle, as you can see in our documentation. The WAF node filters incoming requests and forwards the valid ones to the application container.
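
A minimal sketch of such a pod spec is below; the image names, ports, and environment variables are assumptions for illustration rather than the exact values from the Wallarm documentation.

```yaml
# Deployment sketch with a Wallarm sidecar; image names, ports, and env vars are assumed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: application              # the main application container
          image: myapp:1.0               # hypothetical application image
          ports:
            - containerPort: 8080        # the application listens on 8080/TCP
        - name: wallarm-waf              # the Wallarm sidecar shares the pod and its lifecycle
          image: wallarm/node:latest     # assumed image name; check the Wallarm docs
          ports:
            - containerPort: 80          # the sidecar accepts incoming traffic on 80/TCP
          env:
            - name: NGINX_BACKEND        # assumed variable: upstream for filtered requests
              value: "localhost:8080"
            - name: WALLARM_API_TOKEN    # assumed variable: node credentials from a Secret
              valueFrom:
                secretKeyRef:
                  name: wallarm-secret
                  key: token
```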

Many companies are encouraging this approach. Does this mean that the sidecar method is better than the Ingress one? Of course not: it depends on what we need to achieve and on the size of our application. In short, the main disadvantage of using an Ingress controller is the continuous need for precise configuration to prevent it from becoming a bottleneck. Given its intermediary nature between external access and the services within the cluster, an Ingress controller has to manage load balancing, SSL termination, and name-based virtual hosting. With a growing traffic load, you may experience performance loss and an ever-increasing number of connection resets.

Today, Kubernetes is an industry standard, and developers use it for larger products every day. Thus, as you can see on the official Kubernetes GitHub page, the “bottleneck” issue has become more and more common. On the other hand, using Wallarm WAF as a sidecar container can be quite demanding in terms of memory if you need to run a high number of services and applications. As usual, the best solution is the one most suited to your needs, and that’s why Wallarm offers both methods.

Let’s focus on the sidecar container approach: the change in our architecture is quite noticeable. Without the sidecar, an application is commonly exposed directly to the Internet or to other Kubernetes applications through a Service object of type ClusterIP or NodePort.

[Figure: Scheme of the traffic flow without Wallarm sidecar container]

Now, let’s see how the Wallarm sidecar container transforms the data flow. The application container still listens on port 8080/TCP, but the Service object now forwards incoming requests to a different port (for example, 80/TCP), where the Wallarm sidecar container accepts them. Wallarm WAF then filters the requests and delivers the valid ones back to port 8080/TCP on all the healthy pods of the application.
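
A sketch of the corresponding Service, reusing the hypothetical labels and ports from the Deployment sketch above, shows the key change: the Service now targets the sidecar’s 80/TCP instead of the application’s 8080/TCP.

```yaml
# Service sketch: external traffic lands on the Wallarm sidecar (80/TCP),
# which forwards only the filtered, valid requests to the application on 8080/TCP.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: ClusterIP              # or NodePort, as in the original setup
  selector:
    app: myapp                 # matches the pods from the Deployment sketch
  ports:
    - name: http
      port: 80                 # port exposed by the Service
      targetPort: 80           # the Wallarm sidecar port, not the app's 8080/TCP
```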

You don’t need to reinvent the wheel

Wallarm WAF’s AI engine solves these security challenges, including in containerized environments, by performing three essential tasks. First of all, it applies machine learning to pattern recognition, identifying application functions from features in the application traffic and deriving a full profile of the application’s business logic.

Then, for each application function, Wallarm creates a behavior profile. This profile consists of two different machine learning models: a data format model and a user behavior model. When a request falls outside the regular behavior model, the attack detection phase comes into play. Once the anomaly is detected, Wallarm classifies the attack type, as described in “Evolution of Real-Time Attack Detection”, securing the application without disrupting performance.

Conclusion

Web application architectures evolve, and so do the security challenges. When it comes to containers, the traditional approach to protecting our production environment is ineffective, so it’s critical to choose a dedicated container security solution with in-depth monitoring capabilities and robust machine learning.


Source: https://lab.wallarm.com/how-to-easily-protect-any-kubernetes-application/