tl;dr You can now have Scout Suite scan not only your cloud environments, but also your Kubernetes clusters. Just have your kubeconfig ready and run the following commands:

$ pip3 install --user https://github.com/nccgroup/ScoutSuite/archive/develop.zip
$ scout kubernetes
NCC Group’s Container Orchestration Security Service (COSS) practice regularly conducts Kubernetes cluster configuration reviews spanning platform-managed Kubernetes clusters across different cloud platforms and self-hosted clusters.
As a first step, consultants delivering these assessments generally
download target cluster resources for offline static analysis. To
automate some of the more rote steps, we have several scripts and tools
to batch together certain
kubectl configuration gathering
and analysis steps. These types of automations greatly increase the
efficiency of an assessment, leaving more time for deeper manual review
(and custom scripting), enabling overall greater depth and quality of
coverage when assessing a cluster.
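As a rough illustration of the kind of batching these scripts perform, a helper like the following could assemble the kubectl invocations needed to snapshot resources as JSON for offline review. This is a hypothetical sketch (the function name and structure are not from our internal tooling or Scout Suite); only the kubectl flags themselves are standard.

```python
# Hypothetical sketch of batching `kubectl get` invocations for offline
# analysis; illustrative, not part of Scout Suite or our internal scripts.

def build_gather_commands(resource_types):
    """Build one `kubectl get` command per resource type, emitting JSON
    suitable for saving to disk and analyzing offline."""
    return [
        ["kubectl", "get", rtype, "--all-namespaces", "-o", "json"]
        for rtype in resource_types
    ]

# Each command can then be run with subprocess and its stdout written to
# a per-resource JSON file for later static analysis.
cmds = build_gather_commands(["pods", "configmaps", "roles"])
```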
kubectl and its raw output are generally not that great to work with by themselves. Additionally, from our use of open-source Kubernetes security tooling, we have found the current tooling landscape to be non-ideal, with most tools printing text-based output to stdout and/or emitting dot files for Graphviz that must be rendered manually. To remedy this, we have been working to integrate our tooling and methodologies into Scout Suite, our open-source cloud environment scanner.
Overall, the static analysis phase of a Kubernetes cluster configuration review is similar to that of a cloud configuration review (e.g. for AWS, Azure, or GCP), and Scout Suite already has a mature user interface for displaying most, if not all, of the resources pulled from a platform.
Thus the birth of Kubescout, a project to develop a Kubernetes cluster auditing feature integrated into Scout Suite.
To audit a cluster, a kubeconfig file must be present on the file system of the host running Scout Suite. On a Linux host, the location is typically
Using the cluster credentials, Kubescout first determines the cluster context and downloads all cluster resources from the cluster’s API endpoint; however, Kubescout will ensure that the actual values of Secrets are redacted before they are stored on disk. Additionally, if a supported cluster provider (currently EKS, GKE, and AKS) is given, it will also attempt to use the relevant platform credentials, if available, to download resources relevant to the cluster configuration review, such as control plane logging configurations.
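The redaction step can be pictured as a simple transformation applied to each Secret manifest before it is written to disk. The sketch below is illustrative only; the function name and placeholder value are assumptions, not Scout Suite's actual implementation.

```python
def redact_secret(secret_manifest, placeholder="<REDACTED>"):
    """Illustrative sketch (not Scout Suite's real code): return a copy
    of a Kubernetes Secret manifest with every value in its data and
    stringData maps replaced by a placeholder, so keys stay visible for
    review but the sensitive values never touch disk."""
    redacted = dict(secret_manifest)
    for field in ("data", "stringData"):
        if field in redacted:
            redacted[field] = {key: placeholder for key in redacted[field]}
    return redacted
```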
After the relevant data is retrieved, it is aggregated and processed to be consumed by Scout Suite’s ruleset engine for finding generation and subsequently the user interface, which eventually becomes a static HTML page powered by custom Handlebars templates. No local web server is required to properly view the HTML page, although the addition of such functionality is part of Scout Suite’s own roadmap for improved performance and development flows.
With a graphical user interface, one can more easily navigate resources to identify issues and reduce the rate of false positives. For example, hard-coded secrets in ConfigMap objects are easier to find, and unnecessarily privileged subjects are easier to detect (courtesy of Iain Smart, the COSS practice lead).
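For instance, a first-pass check for hard-coded secrets in ConfigMaps can be as simple as flagging credential-looking keys for manual review. The heuristic below is illustrative only and is not Kubescout's actual rule logic.

```python
import re

# Illustrative pattern only; real rules would be broader and tuned to
# limit false positives.
CREDENTIAL_KEY = re.compile(r"passw(or)?d|secret|token|api[_-]?key", re.IGNORECASE)

def find_suspect_entries(configmap):
    """Return (key, value) pairs from a ConfigMap manifest whose key
    looks credential-like and therefore deserves manual review."""
    return [
        (key, value)
        for key, value in configmap.get("data", {}).items()
        if CREDENTIAL_KEY.search(key)
    ]

hits = find_suspect_entries(
    {"data": {"DB_PASSWORD": "hunter2", "log_level": "info"}}
)
```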
Kubescout additionally provides full support for custom resources, enabling review not only of their definitions (CRDs), but also of the objects themselves, including for rule processing. This is important because the absence of obvious admission webhooks may belie the existence of an admission controller that might otherwise be identified from the presence of custom resources.
Kubescout is currently enabled within the develop branch of the main Scout Suite repository.
Users can clone and install the specific branch using the following
commands. Installing the
develop branch of Scout Suite in a
virtual environment (e.g. virtualenv) is
recommended as the branch is under active development.
$ # optionally use a virtualenv
$ virtualenv scoutsuite-develop
$ source scoutsuite-develop/bin/activate
$ # Scout Suite installation
$ git clone -b develop https://github.com/nccgroup/ScoutSuite.git
$ cd ScoutSuite
$ pip3 install .
$ scout kubernetes
Alternatively, you can pip install the develop branch zip URL directly:
$ # optionally use a virtualenv
$ virtualenv scoutsuite-develop
$ source scoutsuite-develop/bin/activate
$ # Scout Suite installation
$ pip3 install https://github.com/nccgroup/ScoutSuite/archive/develop.zip
$ scout kubernetes
Kubescout uses several options to determine the cluster context for scanning:
| --config-file KUBERNETES_CONFIG_FILE | Name of the kube-config file. By default, Kubernetes' default directory will be used. |
| --context KUBERNETES_CONTEXT | Cluster context to scan. By default, current_context from the config file will be used. |
| --do-not-persist-config | If specified, the config file will NOT be updated when changed (e.g. GCP token refresh). |
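The context-selection behavior these options describe can be sketched as follows. This is a simplified illustration under stated assumptions (the function name is hypothetical, and the kubeconfig is assumed to be already parsed from YAML into a dict); the real logic lives in Scout Suite's Kubernetes provider code.

```python
def resolve_context(kubeconfig, context_override=None):
    """Hypothetical sketch: choose the cluster context to scan. An
    explicit --context value wins; otherwise fall back to the
    current-context recorded in the kubeconfig."""
    if context_override:
        return context_override
    return kubeconfig.get("current-context")

# Example kubeconfig fragment, assumed already parsed from YAML.
config = {"current-context": "prod-cluster", "contexts": [{"name": "prod-cluster"}]}
```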
Specifying the cluster provider can be done through
--cluster-provider. The following options are supported
at the moment:
To scan the cluster, use the kubernetes subcommand, for example:

$ scout kubernetes
This initial release of Kubernetes support for Scout Suite is a feature preview providing a base subset of rules, including CIS Benchmark rules, and core integrations for building out further Kubernetes security analyses and analysis user experiences. We plan to continue our work on Kubescout and hope to introduce the following features in the future:
With this new Scout Suite functionality, we hope to ease the pain of anyone looking to gain some insight into the security posture of their cluster, or who simply wants to learn more about Kubernetes (and may be surprised to see what is in their cluster ;).
Scout Suite welcomes GitHub issues and pull requests. The
--debug option can be used to print exceptions in detail
during development. The
-l option can be used to test
custom Handlebars templates.
The project repository can be found at https://github.com/nccgroup/ScoutSuite.