Manual security testing services and automated AppSec tools have their place in DevOps. Knowing which to use will make your security efforts more effective.
AppSec tools that can quickly identify secrets or sensitive data accidentally (or intentionally) inserted in source code are crucial in automatically scanning millions of lines of code to find critical security issues. But even the best automated AppSec tools can’t find all security vulnerabilities, especially the ones that require hacking into a website or system architecture. This is where manual testing of business logic flaws in web apps and threat modeling of system designs is necessary.
Business logic, or application logic, is the set of rules that defines how an application operates and functions. Static application security testing (SAST) tools find issues by examining static code, but they can’t easily identify business logic weaknesses—flaws in the design or implementation of an application that allow an attacker to elicit unintended behavior by interacting with the application in ways that developers never intended.
Flaws in application logic can allow attackers to circumvent these rules. An attacker passing unexpected values into server-side logic could potentially cause, for example, a transaction to complete outside of a normal purchase workflow. A deep understanding of an application’s business logic—and the ways attackers can interact with it—helps testers model potential attack vectors and perform a more focused code review.
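As a minimal, hypothetical illustration of this kind of flaw, consider a checkout handler that trusts a client-supplied price instead of looking it up server-side. The function names, SKU, and prices below are invented for the example; the pattern itself—trusting client input for a business decision—is what manual testers probe for.

```python
# Hypothetical checkout handlers illustrating a business logic flaw.

CATALOG = {"sku-123": 49.99}  # authoritative server-side prices

def checkout_vulnerable(sku, client_price):
    # Flaw: the server charges whatever price the client sends, so an
    # attacker can submit client_price=0.01 and complete the purchase
    # outside the intended pricing rules.
    return {"sku": sku, "charged": client_price}

def checkout_fixed(sku, client_price):
    # Fix: ignore the client-supplied value and enforce the
    # server-side rule (the catalog price).
    price = CATALOG.get(sku)
    if price is None:
        raise ValueError("unknown SKU")
    return {"sku": sku, "charged": price}
```

Note that both versions are syntactically valid and pass a naive functional test for a well-behaved user—which is why SAST tools and functional testing alone tend to miss this class of weakness.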
OWASP states on its wiki, “testing of business logic flaws is similar to the test types used by functional testers that focus on logical or finite state testing. These types of tests require that security professionals think a bit differently, develop abuse and misuse cases, and use many of the testing techniques embraced by functional testers. Automation of business logic abuse cases is not possible and remains a manual art relying on the skills of the tester and their knowledge of the complete business process and its rules.”
Creating a threat model helps ensure that business requirements are protected against malicious actors, accidents, or other causes. Performing threat modeling to identify and prioritize potential threats and security mitigations early in the design phase helps prevent security vulnerabilities and weaknesses from being introduced, even before any coding happens.
A threat model usually includes a description, design, or model of the primary concerns; a list of assumptions that can be checked against; potential system threats and actions to be taken for each threat; and a way of validating the model and threats and verifying the success of actions taken. The scope of the threat model needs to be defined, and the model must be based on a deep understanding of what the application does. Architecture diagrams, dataflow transitions, and data classifications are also very useful in the process.
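The elements listed above—a description of each concern, candidate mitigations, and a way to verify that actions were taken—can be captured in something as simple as a structured record per threat. The sketch below is an illustrative data shape, not a prescribed format; the example threats and severity labels are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    description: str
    severity: str                 # e.g., "high", "medium", "low"
    mitigations: list = field(default_factory=list)
    validated: bool = False       # has the mitigation been verified?

# Illustrative entries; a real model is derived from architecture
# diagrams, dataflows, and data classifications.
model = [
    Threat("SQL injection via search endpoint", "high",
           ["parameterized queries"], validated=True),
    Threat("Credential stuffing on login", "medium",
           ["rate limiting", "MFA"]),
    Threat("Secrets committed to repository", "high", []),
]

def open_items(threats):
    # Threats with no mitigation, or with unverified mitigations,
    # still need action before release.
    return [t for t in threats if not t.mitigations or not t.validated]
```

Tracking threats this way makes the validation step concrete: the exercise isn’t done while `open_items` is non-empty.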
Threat modeling requires close collaboration between people who perform different roles (e.g., security architects, DevOps engineers, and development leads) and who have sufficient technical and risk awareness to agree on the framework to be used during the threat modeling exercise. The variety of perspectives and expertise helps bring awareness of current threats to development teams that may not necessarily have deep security knowledge.
In the context of cyber security and IT, secrets are private data or credentials that must be stored securely with tight access control. Typical secrets include passwords, API keys and access tokens, SSH and other private keys, certificates, and encryption keys.
Secrets provide users and applications with access to resources (e.g., sensitive data, systems, and services) and enable authorized developers to make changes to source code in application development.
There are tools and services (e.g., HashiCorp Vault and AWS Secrets Manager) that enable central management of secrets, manage access control lists of people and machines and what they can access, handle dynamic rotation of credentials, encrypt data at rest and in transit, and generate audit logs. Secrets management is crucial for organizations regardless of their usage of DevOps, because almost all organizations use digital secrets to some extent.
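To make the core ideas concrete—access control lists per secret plus an audit trail of every read—here is a toy sketch of a secrets store. It is not a substitute for a real tool like Vault (there is no encryption, rotation, or durable storage here); all names are invented for illustration.

```python
import time

class SecretsStore:
    """Toy illustration of central secrets management:
    per-secret access control lists plus an audit log.
    Deliberately omits encryption, rotation, and persistence."""

    def __init__(self):
        self._secrets = {}   # name -> value
        self._acl = {}       # name -> set of allowed principals
        self.audit_log = []  # (timestamp, principal, name, allowed)

    def put(self, name, value, allowed_principals):
        self._secrets[name] = value
        self._acl[name] = set(allowed_principals)

    def get(self, name, principal):
        # Every access attempt is logged, allowed or not.
        allowed = principal in self._acl.get(name, set())
        self.audit_log.append((time.time(), principal, name, allowed))
        if not allowed:
            raise PermissionError(f"{principal} may not read {name}")
        return self._secrets[name]
```

Even denied requests land in the audit log, which is what makes centralized secrets management useful for detecting misuse, not just preventing it.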
Secrets such as private keys should not be stored in public code repositories, although they often are. A recent study from researchers at North Carolina State University found that over 100,000 public GitHub repositories were leaking secret keys, and “thousands of new, unique secrets are leaked every day.” There is also the horror story of the developer who accidentally committed his AWS key to GitHub. Although he removed it within five minutes, he still racked up a large bill from the bots that crawl open source sites looking for secrets.
Secrets often consist of sensitive information that a continuous integration (CI) build server (or job) needs in order to complete work. What secrets management tools don’t do is scan code to identify secrets or back doors that have been accidentally left in source code. If discovered by hackers, those secrets and back doors could lead to data breaches. Some companies, such as GitHub, scan public and private repositories for secrets (e.g., access tokens, API keys, private keys, etc.) and notify the service providers of credentials that have been committed accidentally, to prevent fraudulent use.
Organizations can also use AppSec tools (e.g., SAST, SCA, IAST, and DAST) that use dedicated security checkers or rulesets to scan their own private code repositories and web apps for secrets and other forms of sensitive data. Application security tools can identify CI workflow-centric secrets that have been accidentally left in code, and they can also automate the detection of other types of sensitive data, such as credit card numbers and user credentials, flagging data that is unencrypted or inadequately encrypted. These security weaknesses can then be fixed before an application is released to production.
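At their simplest, the secret-detection checkers described above match source lines against known credential signatures. The sketch below shows the idea with a few simplified patterns; production scanners use far larger signature sets plus entropy analysis to reduce both false negatives and false positives.

```python
import re

# Simplified signatures for illustration only. The AWS key format
# (AKIA + 16 chars) is documented; the "generic_secret" pattern is a
# rough heuristic, not a production rule.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_secret": re.compile(
        r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]"),
}

def scan(text):
    """Return (line_number, checker_name) for every match."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Running `scan` over a repository’s files before commit (e.g., in a pre-commit hook or CI job) catches the CI-workflow secrets discussed above before they ever reach a remote repository.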
System back doors that allow access to data and processes at the system level include malware, such as remote access Trojans. Application back doors are versions of legitimate software that can compromise data, transactions, and whole systems.
Static analysis tools examine applications in a similar way to how attackers look at them. They create a detailed model of an application’s data and control flows. Although application back doors are often obfuscated and difficult to detect, static analysis of source code or binaries can identify them as part of a malicious code detection review process. For compiled software or a subverted development tool chain, back door detection may require static binary analysis since the back door only exists after compilation or linking. Frameworks and libraries that are available as binaries also need to be scanned for security vulnerabilities and weaknesses.
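A small taste of how source-level malicious-code review works: walk the program’s syntax tree and flag calls that are commonly abused in back doors, such as dynamic code execution or shell execution. This is a deliberately shallow sketch—real detection builds the full data- and control-flow model described above, and binary back doors require binary analysis—but it shows the structural (rather than textual) nature of the approach.

```python
import ast

# Calls frequently abused in back doors; a real ruleset is much
# larger and considers data flow into these calls, not just names.
SUSPICIOUS_CALLS = {"eval", "exec", "compile", "system"}

def find_suspicious_calls(source):
    """Return (line_number, call_name) for suspicious calls,
    found by walking the AST rather than matching text."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handles both bare names (eval) and attributes (os.system).
            name = getattr(func, "id", getattr(func, "attr", None))
            if name in SUSPICIOUS_CALLS:
                findings.append((node.lineno, name))
    return findings
```

Because the check operates on the syntax tree, it still fires when the call is buried in otherwise ordinary-looking code, which is how obfuscated application back doors are often surfaced for human review.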