Many organizations have suffered significant data breaches after inadvertently exposing secrets such as tokens, API keys, digital certificates, and user credentials, giving attackers a direct path into their systems. Avoiding such exposure has become harder for many reasons: the adoption of cloud services and DevOps practices, the growth of distributed work environments, the rise of automated workflows, and the use of unmonitored tools and external services.
Cybernari researchers recently found that public code and image repositories are the most common places where organizations leave secrets unprotected. Other likely trouble spots include FTP servers and other self-managed public services, SaaS environments, package managers such as npm and PyPI, and cloud storage buckets such as AWS S3 and Google Cloud Storage.
At a strategic level, organizations need to adopt a multilayered approach to prevent accidental exposure of sensitive information via such sources. To do that, they must implement strict policies for secrets, conduct code reviews, and carry out a final check on all software before it goes out the door.
Here are five essential tips for ensuring that your organization’s development secrets stay secret.
[ Take a deep dive with our Special Report: Secrets Exposed ]
The key to keeping secrets from being exposed in public code repositories is to ensure they don’t end up there in the first place and, failing that, to remove them quickly when they do. Eric Schwake, director of cybersecurity strategy at API security vendor Salt Security, said, “It’s important to prioritize [approaches] that centralize storage, enforce access controls, and allow for secrets rotation.”
“A unified secrets management strategy should seamlessly integrate with diverse environments, providing a consistent approach to storing, controlling access, and rotating secrets across on-premises, cloud, and hybrid deployments.”
—Eric Schwake
Liav Caspi, CTO at Legit Security, said every code change submitted to a repository stays there forever as code history, so it’s vital to use preventive scanning at the developer endpoint or integrated development environment to catch secrets before they end up in code that’s stored on public repositories. Equally important are regular repository audits and scanning for secrets that might find their way into a public repository anyway.
“The combination of both is critical. It is important to note that because secrets can exist in code history and are not easily visible when reviewing code, the secrets scanner must be able to scan the historical changes in addition to the most recent code.”
—Liav Caspi
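To make that point concrete, here is a minimal sketch, in Python, of a history-aware scan of the kind Caspi describes: it walks every patch ever committed rather than just the current checkout. The regex patterns are illustrative assumptions only; production scanners ship far richer rule sets, entropy checks, and verification of findings.

```python
import re
import subprocess

# Illustrative patterns only -- real scanners use far more comprehensive rule sets.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_git_history(repo_path: str = ".") -> list[tuple[str, str]]:
    """Scan every change ever committed, not just the current checkout."""
    # `git log -p` emits the full patch for each commit, so secrets that were
    # later deleted but survive in history are still visible to the scan.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

    findings = []
    for line in log.splitlines():
        # Only added lines ("+...") matter; context and file headers are noise here.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, line.strip()))
    return findings

if __name__ == "__main__":
    hits = scan_git_history()
    for name, line in hits:
        print(f"[{name}] {line}")
    raise SystemExit(1 if hits else 0)
```

The nonzero exit code makes it straightforward to run a check like this as part of a scheduled repository audit.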
Educating developers about not inadvertently including secrets in the code they commit to public repositories is also essential, said Tim Erlin, a security strategist at the API security company Wallarm. Organizations should enforce the use of environment variables or secure secrets management systems instead of hardcoding sensitive information. Regular training for developers on best practices for handling secrets is crucial, he said.
“Developers must be trained in secure coding practices, such as avoiding hardcoding API keys, tokens, and other secrets. Mandatory code reviews are necessary to identify and fix any accidental exposure of secrets.”
—Tim Erlin
Scanning tools need to be integrated into the CI/CD pipeline to automatically analyze code for hardcoded secrets, identify vulnerabilities in real time, and provide actionable insights for remediation.
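What those policies and pipeline checks are meant to enforce is simple: sensitive values never appear as literals in source. Below is a minimal sketch, assuming a hypothetical PAYMENTS_API_KEY variable injected at deploy time by the CI/CD system or a secrets manager integration.

```python
import os

# Hypothetical variable name; the value is injected at deploy time by the
# CI/CD system or a secrets manager integration, never committed to the repo.
API_KEY_ENV_VAR = "PAYMENTS_API_KEY"

def load_api_key() -> str:
    """Fetch the key from the environment instead of hardcoding it."""
    key = os.environ.get(API_KEY_ENV_VAR)
    if not key:
        # Failing fast beats silently falling back to a baked-in default.
        raise RuntimeError(f"{API_KEY_ENV_VAR} is not set; refusing to start")
    return key
```

When code follows this pattern, any string literal that looks like a key is, by definition, a policy violation, which makes the pipeline scanner's job much easier.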
Many of the breaches that organizations have experienced in recent years via AWS S3 buckets and other cloud storage environments have resulted from misconfigurations, poor visibility, and improper controls for data access.
Preventing secrets exposure in cloud storage buckets involves setting strict access policies, enabling encryption at rest and in transit, and regularly auditing permissions, said Jason Soroko, senior fellow at certificate lifecycle management firm Sectigo. Organizations should also keep up with the state of the art in diagnosing and detecting cloud misconfigurations and adopt those capabilities as required.
“Utilizing tools like AWS Config Rules or Azure Policy can automate compliance checks and enforce best practices for bucket configurations.”
—Jason Soroko
Cloud storage bucket policies should enforce restrictions, such as blocking public access and using object-level permissions to limit exposure. Similarly, enabling multifactor authentication (MFA) for access to sensitive resources, configuring logging and monitoring with services such as AWS CloudTrail to track access patterns, and setting up lifecycle policies to manage the retention and deletion of sensitive data can all contribute to stronger security. Because misconfigurations are a common problem, organizations should enforce least-privilege access controls to ensure that only authorized users, applications, and services can access sensitive data in cloud storage.
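As one hedged example of auditing those controls programmatically, the sketch below uses boto3 to flag S3 buckets whose Block Public Access settings are missing or incomplete. It assumes read-only AWS credentials are already configured, and it is a starting point rather than a substitute for AWS Config rules or a cloud security posture management tool.

```python
import boto3
from botocore.exceptions import ClientError

# Assumes read-only S3 credentials are already configured (env vars, profile, or role).
s3 = boto3.client("s3")

def buckets_missing_public_access_block() -> list[str]:
    """Return buckets where the S3 Block Public Access settings are absent or incomplete."""
    risky = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            # All four flags should be enabled for a private bucket.
            if not all(config.values()):
                risky.append(name)
        except ClientError:
            # No Public Access Block configuration at all -- treat as risky.
            risky.append(name)
    return risky

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"Bucket without full Block Public Access settings: {name}")
```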
Thorough vendor assessments are critical, since organizations often rely on third-party providers to host their infrastructure, said Patrick Tiquet, vice president for security and architecture at Keeper Security.
“Ensuring that cloud vendors have robust security controls and compliance with data privacy regulations helps reduce the risks associated with using external cloud services.”
—Patrick Tiquet
The rising use of third-party SaaS applications has exposed organizations to several new risks, including those stemming from misconfigurations, third-party security lapses, shadow IT, API abuse, and lack of visibility.
Mitigating these issues requires measures such as strong access control, encryption, real-time monitoring, and regular security audits of all third-party vendors. “Secrets management is really important when dealing with SaaS environments. Every key generated and provided to a service should be named, have an expiration date, and have its usage monitored,” said Legit Security’s Caspi. “Organizations should be in control of the credentials they generate and constantly scan for forgotten/misplaced secrets.”
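One lightweight way to follow Caspi's advice is to keep an inventory record for every key the organization issues to a SaaS service. The sketch below is a hypothetical, minimal data model, not a product feature; the field names and thresholds are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class IssuedKey:
    """Inventory record for a credential handed to a third-party SaaS service."""
    name: str                # human-readable purpose, e.g. "crm-sync-readonly"
    service: str             # which SaaS product holds it
    owner: str               # team or person accountable for it
    expires_at: datetime     # hard expiry; rotate before this date
    last_used_at: datetime | None = None  # filled in from usage logs

    def is_expired(self, now: datetime | None = None) -> bool:
        return (now or datetime.now(timezone.utc)) >= self.expires_at

    def is_stale(self, max_idle: timedelta = timedelta(days=90)) -> bool:
        """Flag keys that have not been used recently -- candidates for revocation."""
        if self.last_used_at is None:
            return True
        return datetime.now(timezone.utc) - self.last_used_at > max_idle

# Example: a key issued for 90 days that has never appeared in usage logs.
key = IssuedKey(
    name="crm-sync-readonly",
    service="ExampleCRM",
    owner="integrations-team",
    expires_at=datetime.now(timezone.utc) + timedelta(days=90),
)
print(key.is_expired(), key.is_stale())
```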
Many SaaS applications use APIs for integration, and abuse of poorly secured APIs is a particular concern, said Tiquet.
“Secrets like API keys, tokens, and credentials are highly valuable to cybercriminals because they often provide direct access to sensitive data and systems.”
—Patrick Tiquet
One big challenge in SaaS environments is that these credentials can become scattered across different platforms, environments, and teams. Without a centralized system for managing these keys, secrets could end up stored in unsafe places such as plaintext config files or on developer workstations, or could be hardcoded into applications, Tiquet said.
A centralized system ensures that all API keys, tokens, and credentials are stored securely, that access is controlled, and that usage is monitored, he said. Organizations should enforce least-privilege access to ensure only authorized users and systems can interact with secrets.
“Centralized systems also make it easier to rotate and update secrets without risking operational downtime.”
—Patrick Tiquet
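For illustration, the sketch below fetches a credential from AWS Secrets Manager at runtime, which is one common way to realize the centralized model Tiquet describes. The secret name is a hypothetical placeholder, and equivalent services exist on other clouds and in dedicated vault products.

```python
import json
import boto3

# Hypothetical secret name; Secrets Manager (or an equivalent vault) is the
# single place the credential lives, so rotation happens there, not in code.
SECRET_ID = "prod/payments/api-key"

def fetch_secret(secret_id: str = SECRET_ID) -> dict:
    """Pull the current version of a secret at runtime instead of baking it in."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    # SecretString typically holds a small JSON document of key/value pairs.
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    credentials = fetch_secret()
    print(f"Loaded keys: {sorted(credentials)}")  # never print the values themselves
```

Because every consumer fetches the latest version at runtime, the secret can be rotated centrally without redeploying the applications that use it.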
For legacy systems that still use passwords, enforcing MFA and single sign-on can enhance security for user credentials and access tokens within SaaS applications, Sectigo’s Soroko said.
A common misconception enterprise organizations have about cloud SaaS environments is that the cloud provider fully manages all aspects of security. In reality, SaaS providers typically are responsible only for securing the infrastructure and platform itself, which means organizations still need to secure their data, user access, and secrets within the application.
Package manager environments such as npm and PyPI have become a common target of attackers trying to infiltrate enterprise environments. Common risks include typosquatting attacks, where a threat actor deliberately creates packages with names nearly identical to those of popular packages; dependency confusion attacks, where a package on a public repository shares its name with an internal package; and abandoned or over-permissioned packages.
To alleviate such risks, organizations should implement strict vetting processes for packages, use private registries where possible, regularly audit dependencies, and employ automated scanning tools, Legit Security’s Caspi said.
“Package supply chain attack risk is one of the most devastating risks. Attackers gaining access to artifact storage can carry out a supply chain attack either by tampering with the organization’s deliverables or replacing some package the organization consumes with a malicious version.”
—Liav Caspi
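As a hedged sketch of one such vetting check, the code below compares a requested dependency name against an allowlist of approved packages and flags near-misses that could indicate typosquatting. The allowlist contents and threshold are illustrative; in practice this would sit alongside a private registry and hash pinning.

```python
from difflib import SequenceMatcher

# Illustrative allowlist; in practice this would come from the private registry
# or an internal approved-dependencies manifest.
APPROVED = {"requests", "urllib3", "boto3", "cryptography", "numpy"}

def vet_dependency(name: str, threshold: float = 0.85) -> str:
    """Classify a requested package name as approved, suspicious, or unknown."""
    name = name.lower()
    if name in APPROVED:
        return "approved"
    for good in APPROVED:
        # High similarity to an approved name without an exact match is the
        # classic typosquatting signature (e.g. "reqeusts" vs. "requests").
        if SequenceMatcher(None, name, good).ratio() >= threshold:
            return f"suspicious: looks like a typo of '{good}'"
    return "unknown: requires manual review before use"

for candidate in ["requests", "reqeusts", "left-pad"]:
    print(candidate, "->", vet_dependency(candidate))
```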
The authentication tokens for package upload and download should be protected, have the least privilege needed for developers, and never be stored in an unsafe location, Caspi said.
“Another risk that should be taken into account is distributing a package, like a container image, with keys and passwords inside. Advanced secret scanning tools can detect these before the package is distributed publicly.”
—Liav Caspi
Sectigo’s Soroko said one risk that organizations need to be mindful of is developers accidentally publishing secrets within packages or using dependencies containing malicious code. Developers should thoroughly review code before publishing, use automated tools to scan for embedded secrets, and verify the integrity of third-party packages. “Implementing dependency management practices and using package signing can mitigate these risks,” Soroko said.
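One concrete, if minimal, way to act on that integrity advice is to pin and verify artifact hashes before installation. The sketch below checks a downloaded file against an expected SHA-256 digest; the filename and digest are placeholders.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large artifacts don't have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> None:
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        # Refuse to install anything whose digest does not match the pinned value.
        raise ValueError(f"Integrity check failed for {path}: {actual}")

# Placeholder values -- in practice the pinned digest would come from a lock file
# or a requirements file with hash entries.
# verify_artifact(Path("example_pkg-1.2.3-py3-none-any.whl"), "<pinned sha256 digest>")
```

Most package managers offer native hash-checking modes, which are preferable to a hand-rolled check; the sketch simply shows what such a check is doing under the hood.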
FTP servers and other public-facing, self-managed services such as DNS servers and VPN servers can often leak secrets and other sensitive information. Common causes include misconfigurations, use of cleartext and hardcoded passwords to access these services, insufficient or overly permissive access controls, incorrectly set file permissions, and exposed admin interfaces.
Salt Security’s Schwake said legacy systems such as FTP servers often lack modern security features such as encryption and MFA, making them particularly vulnerable to breaches and credential theft. The safest recourse is to upgrade where possible to technologies that offer better security natively. If upgrading is not feasible, isolate these systems and implement strong perimeter security and network segmentation. “Utilizing a secrets management solution with proxy capabilities may enable secure access to secrets for legacy systems,” Schwake said.
Additionally, organizations should adopt a zero-trust model for legacy systems, where all access is verified continuously and strict segmentation isolates these systems from the broader network, Schwake said.
“Regular patching — even for legacy systems — is critical, and where patches are unavailable, virtual patching or compensating controls, like intrusion detection systems, can offer additional protection. Continuous monitoring of both legacy and modern systems ensures that any suspicious activity is quickly identified and addressed.”
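Picking up the earlier point about cleartext FTP credentials: where an FTP workflow cannot be retired yet, one small, hedged example of reducing its exposure is to use the TLS-wrapped variant and pull the credentials from the environment or a secrets manager at connect time. The host and variable names below are placeholders.

```python
import os
from ftplib import FTP_TLS

# Placeholder host name.
FTP_HOST = "legacy-ftp.example.internal"

def open_ftp_session() -> FTP_TLS:
    """Connect over explicit FTPS with credentials injected at runtime."""
    user = os.environ["LEGACY_FTP_USER"]
    password = os.environ["LEGACY_FTP_PASSWORD"]  # supplied by a secrets manager, not a config file
    ftps = FTP_TLS(FTP_HOST)
    ftps.login(user, password)
    ftps.prot_p()  # encrypt the data channel as well as the control channel
    return ftps
```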
Follow these key steps for securing secrets, and your team will be on the right track. But with the rise in software supply chain attacks, application security testing (AST) tools that provide a final check are critical. The CircleCI secrets exposure incident was widely seen as a wake-up call for security teams to update their approach.
The reality is that traditional AST tooling is out of sync with the modern threat landscape, and that means secrets exposures can go undetected, said Matt Rose, former evangelist for ReversingLabs.
“A lot of AppSec technologies are just too slow and can’t keep up with the speed of DevOps.”
—Matt Rose
Take a deeper dive — and learn what your organization can do about it — in our Special Report, “Secrets Exposed.”
*** This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by Jai Vijayan. Read the original post at: https://www.reversinglabs.com/blog/keep-secrets-secret-5-essential-tips