Microsoft executives said the IT giant has taken numerous steps to bulk up the security of its products and services in the almost 11 months since it unveiled the company-wide undertaking, which followed high-profile cybersecurity incidents over the past couple of years.
The Secure Future Initiative (SFI) was launched in November 2023, months after a hack by the Chinese threat group Storm-0558 that resulted in hundreds of thousands of emails from top US officials being stolen. The attackers stole a Microsoft signing key and used it to break into Microsoft 365 and Exchange Online accounts, exfiltrating email from government and corporate accounts.
US lawmakers said faulty security practices by Microsoft led to the hack. Months later, the US Cyber Safety Review Board pummeled Microsoft in a report that said the company’s “security culture was inadequate and requires an overhaul, particularly in light of the company’s centrality in the technology ecosystem and the level of trust customers place in the company to protect their data and operations.”
Earlier this year, another threat group – Russian-backed Midnight Blizzard, which is also known as Cozy Bear, Nobelium, and APT29 – hacked into the Microsoft corporate email accounts of some employees, including senior leadership, via a password spray attack that compromised a legacy non-production test tenant account. CEO Satya Nadella told employees in a memo in May that security had become the company’s top priority.
“At Microsoft, we recognize our unique responsibility in safeguarding the future for our customers and community,” Charlie Bell, executive vice president of Microsoft Security, wrote in a blog post this week. “As a result, every individual at Microsoft plays a pivotal role to ‘prioritize security above all else.’”
Bell’s post accompanies Microsoft’s new SFI progress report, which notes that the company has dedicated the equivalent of 34,000 full-time engineers to the effort, created a Cybersecurity Governance Council to help define what needs to be done and plan for future initiatives, and appointed 13 deputy chief information security officers (CISOs) who are accountable for specific domains, such as AI, Azure, Microsoft 365, government, identity, and regulated industries.
In addition, Microsoft’s senior leaders review SFI progress every week, and updates are given to the company’s board; the company’s security performance is also linked to those leaders’ compensation. There is also a Security Skilling Academy for employees, enabling them to “prioritize security in their daily work and identify the direct part they have in securing Microsoft,” according to the report.
The vendor’s Entra ID and Microsoft Account services for its public and US government clouds will generate, store, and automatically rotate access token signing keys using the Azure Managed Hardware Security Module. More than 73% of tokens issued by Entra ID for Microsoft-owned applications are now covered by standard identity SDKs. The company is enforcing the use of phishing-resistant credentials in its production environments and is using video-based user verification for 95% of the internal users in those environments.
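The “standard identity SDKs” the report refers to correspond in practice to Microsoft’s public identity libraries such as MSAL. As a rough illustration only (not Microsoft’s internal code, and with placeholder tenant, client, and certificate values), the sketch below shows the general pattern of letting such an SDK acquire an Entra ID token with a certificate credential rather than a password-style secret:

```python
# Minimal sketch using the public MSAL Python SDK (pip install msal).
# Tenant ID, client ID, and certificate values below are placeholders.
import msal

TENANT_ID = "<tenant-id>"       # placeholder
CLIENT_ID = "<app-client-id>"   # placeholder

# Certificate-based credential instead of a shared secret.
with open("cert_private_key.pem") as f:  # placeholder path
    private_key = f.read()

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential={
        "private_key": private_key,
        "thumbprint": "<certificate-thumbprint>",  # placeholder
    },
)

# The SDK handles token caching and protocol details; the app never
# touches signing keys directly.
result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)
if "access_token" in result:
    print("Token acquired; expires in", result.get("expires_in"), "seconds")
else:
    print("Token request failed:", result.get("error_description"))
```

The point of funneling token handling through a common SDK is that fixes such as key rotation or validation hardening land in one library rather than in every application’s custom code.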
Other measures included eliminating 730,000 unused apps and 5.75 million inactive cloud tenants and deploying more than 15,000 new production-ready locked-down devices. Almost all physical assets on the production network are recorded in a central inventory system; virtual networks with backend connectivity are isolated from Microsoft’s corporate network; 85% of production build pipelines for the commercial cloud use governed pipeline templates for more consistent deployments; and production infrastructure and services are adopting standard libraries for security audit logs.
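The report does not describe what those standard audit-logging libraries look like internally. Purely as a hypothetical sketch of the pattern (every name here is invented for illustration), a shared helper that forces consistently structured security events might look like this:

```python
# Hypothetical sketch only: a shared helper that standardizes security audit
# records so every service emits the same fields. Not Microsoft's library.
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("security_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.StreamHandler())

def log_security_event(action: str, actor: str, resource: str, outcome: str) -> None:
    """Emit one structured audit record with a fixed schema."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,      # e.g. "key_rotation", "role_assignment"
        "actor": actor,        # service principal or user performing the action
        "resource": resource,  # the asset acted upon
        "outcome": outcome,    # "success" or "failure"
    }
    audit_logger.info(json.dumps(record))

# Example: record a configuration change on a production host.
log_security_event("config_change", "svc-deploy", "prod-host-001", "success")
```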
These measures build on other steps Microsoft has taken, such as mandating multi-factor authentication (MFA) for all sign-ins to Azure cloud accounts, as well as the new AI security and safety capabilities unveiled Tuesday: corrections in Azure AI to help fix hallucination issues in real time, the ability for customers to embed Azure AI Content Safety features on devices, evaluations in Azure AI Studio that make it easier for organizations to assess the quality and relevance of model outputs, and a preview feature in Azure AI Content Safety that detects pre-existing content and code.
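For context, Azure AI Content Safety is already exposed through a public SDK. As a hedged illustration, separate from the on-device and preview features described in the announcement, a basic text-screening call looks roughly like the following, with placeholder endpoint and key values:

```python
# Minimal sketch using the public azure-ai-contentsafety Python SDK
# (pip install azure-ai-contentsafety); endpoint and key are placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("<content-safety-key>"),           # placeholder
)

# Screen a piece of model output before returning it to a user.
response = client.analyze_text(
    AnalyzeTextOptions(text="Example model output to screen.")
)
for item in response.categories_analysis:
    print(item.category, item.severity)
```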
The SFI is being pushed at a company with more than 100,000 engineers, project managers, and designers, over 500,000 work items modified per day, and 5 million builds per month. Scaling the initiative is “an enormous task that requires significant alignment and coordination” enabled by Microsoft’s platform engineering practices, according to the progress report’s authors.