To Breach or Not to Breach
April 19, 2024

The rapid adoption of cloud computing was yesterday's news five years ago. Today's news is that one of the most critical cloud security technologies is woefully ineffective. In addition to efficacy, it is critical to measure operational efficiency. In other words, it doesn't matter how effective a solution can be if you can't manage it.

According to the Harvard Business Review, more than 60% of the world's corporate data is estimated to be stored in the cloud. There you have it, the case for the advanced cloud firewall (ACFW). For many readers this is a "no ship, Sherlock" (sic) statement. As we all know, the cloud is a candy store for cybercriminals, simply because that is where the ultimate selection of tasty data treats resides. This in turn means that advanced security solutions are needed to protect data stored in the cloud.

Now let's talk about marketing efficiency, or not. When it comes to AI, I can be a bit of a cynic, as is displayed in my 2024 cybersecurity predictions blog. But AI is not marketing hype for cybercriminals, and as such it is becoming essential for security products. In the realm of the ACFW, some form of AI is pretty much table stakes. Machine learning (ML) models are only as good as the teacher, and not all teachers are equally skilled.

We encountered a handful of great ML teachers in SecureIQLab's 2024 Advanced Cloud Firewall comparative validation testing, but as a field, both efficacy and operational efficiency scores were concerning. In alphabetical order, Check Point, Forcepoint, Fortinet, and Palo Alto Networks lead the field in both security efficacy and operational efficiency. We'll take a deeper dive into efficacy, but first let's talk about operational efficiency. Operational efficiency measures how complex a product is to use, and can be a tiebreaker or a deal breaker when selecting an ACFW. SecureIQLab is the only test lab that had the enterprise perspective in mind when it decided that operational efficiency is a critical metric. How critical? Just ask the NSA.

According to the NSA, in 2023 over 80% of data breaches involved data stored in the cloud, and cloud misconfigurations are the most prevalent cloud vulnerability, exploited by hackers to access cloud data and services. The more complex a product is to manage, the more likely configuration and management errors that result in costly breaches are to occur. Excessive complexity in the deployment, configuration, and maintenance of ACFWs creates serious security lapses that result in acute CISO Scapegoat Syndrome.
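What does such a misconfiguration look like in practice? Here is a minimal, hypothetical sketch: the policy document and checker below are illustrative, modeled on the common S3-style bucket-policy format, and are not taken from the NSA guidance or the SecureIQLab report. A single overly permissive statement is enough to expose a data store to the entire internet.

```python
# Hypothetical example of a common cloud misconfiguration: a storage bucket
# policy that quietly grants public read access. Illustrative only.

def find_public_statements(policy: dict) -> list[dict]:
    """Return Allow statements whose principal is the anonymous wildcard '*'."""
    risky = []
    for statement in policy.get("Statement", []):
        principal = statement.get("Principal", "")
        # Principals may be a bare "*" or a mapping such as {"AWS": "*"} or {"AWS": ["*", ...]}.
        if isinstance(principal, str):
            values = {principal}
        else:
            values = set()
            for v in principal.values():
                values.update(v if isinstance(v, list) else [v])
        if statement.get("Effect") == "Allow" and "*" in values:
            risky.append(statement)
    return risky

# One permissive statement slipped into an otherwise sane policy.
bucket_policy = {
    "Statement": [
        {"Effect": "Allow", "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example-bucket/*"},
        {"Effect": "Allow", "Principal": "*",
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example-bucket/*"},
    ]
}

for stmt in find_public_statements(bucket_policy):
    print("Publicly readable:", stmt["Resource"])
```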


SecureIQLab assessed operational efficiency scores based on each product's performance in 12 critical categories. This includes the all-important business continuity management metric, which is unique to SecureIQLab's security validation testing. Objective real-world operational efficiency metrics are recorded in real time during security efficacy testing. The inclusion of operational efficiency metrics in conjunction with security efficacy validation sets SecureIQLab apart from others in the breadth and depth of security product validation. Security efficacy metrics are highly theoretical in the absence of operational efficiency assessment. The metrics used to evaluate the operational efficiency of a product can be found in the SecureIQLab Advanced Cloud Firewall Methodology V1.6.
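As a rough sketch of how such a composite score rolls up: the category names and equal weighting below are assumptions made for illustration, not SecureIQLab's actual formula, which is defined in the methodology document.

```python
# Hypothetical roll-up of an operational efficiency score from per-category
# results. Category names and equal weighting are illustrative assumptions;
# the real 12 categories are defined in the ACFW Methodology V1.6.

category_scores = {
    "business_continuity_management": 0.90,  # the metric called out above
    "security_policy_management": 0.78,
    "risk_assessment_and_mitigation": 0.72,
    # ...nine further categories in the full methodology
}

def operational_efficiency(scores: dict[str, float]) -> float:
    """Equal-weight average of per-category scores, expressed as a percentage."""
    return 100 * sum(scores.values()) / len(scores)

print(f"Composite operational efficiency: {operational_efficiency(category_scores):.1f}%")
```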

Of course, efficacy is extremely important, and efficacy is commonly tested, but real-world efficacy alone is insufficient. Efficacy is useless if you can't manage the product in a way that is cost-effective and not overly complex. We at SecureIQLab have hands-on enterprise and government IT experience, which is why we understand the importance of measuring operational efficiency.

I’ve got good news and bad news. Shall I start with the good news or the bad news? Yeah, I knew you would choose the bad news.

The overall state of affairs in the ACFW space is, how to put this nicely, YIKES!!!

When it comes to operational efficiency, the average score was 75.22%, which sounds much better than the 63.9% average that results from removing the top four performers from the field. In measuring operational efficiency, the most critical deficiencies were with respect to business continuity management and to risk assessment and mitigation. There were also significant challenges when it comes to security policy management. Remember what the NSA said about misconfiguration? The good news is that the top four performers turned in a 92.2% average operational efficiency score.
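Those three figures hang together as a simple weighted average. A quick back-of-the-envelope check, purely illustrative: the field size is not stated in this article and is inferred here, and 92.2% is treated as the top-four average.

```python
# Sanity check: the overall average should be the size-weighted combination of
# the top-four average and the rest-of-field average. Field size is inferred.

top_avg, rest_avg, overall_avg = 92.2, 63.9, 75.22
top_n = 4

for field_size in range(top_n + 1, 21):
    rest_n = field_size - top_n
    combined = (top_n * top_avg + rest_n * rest_avg) / field_size
    if round(combined, 2) == overall_avg:
        print(f"Consistent with a field of {field_size} products: "
              f"({top_n} x {top_avg}% + {rest_n} x {rest_avg}%) / {field_size} = {combined:.2f}%")
# Within this range, only a ten-product field reconciles all three averages exactly.
```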

Other labs tie false positive testing to efficacy testing. If you don't test for false positives, then a vendor can set everything to detect or block and get a 100% score. In addition, those tests use default configurations, and that isn't how enterprises usually work.

In the realm of the enterprise, default is not the norm. Tested vendors were allowed to tune their products as recommended for enterprise deployments. Security efficacy and false positive rates can be quite different for a tuned product than for a default configuration. False positives are really more aligned with operational efficiency than with efficacy, but they are such an important metric that they are being called out separately. There was a time when the impact of false positives on business continuity was such that enterprise security professionals were far more tolerant of missed threats than of false positives. The threats have gotten worse, but the impact of false positives on business continuity has not decreased.

I’ve got good news, bad news, and better news. Of course you want the better news first. The better news is that the top performers in operational efficiency were also the top performers in security efficacy.

SecureIQLab tested security efficacy based upon 24 different types of attack vectors across four different categories. The bad news you were waiting for is that, on average, security efficacy was just over 67%. The average for operational efficiency is a bit better than 75%. Gotta love statistics. As it turns out, there wasn't a single average product. Palo Alto Networks, Forcepoint, Fortinet, and Check Point posted the highest security efficacy scores, and the highest operational efficiency scores as well. If we remove the leaders from the equation, then we find that the average security efficacy score is 51%, and the average operational efficiency score is just below 64%.

The good news is that only two products exceeded a 0.4% false positive rate, and four products completely avoided false positives.

Each ACFW solution evaluated in this test underwent scrutiny across more than 1,000 real-world operational scenarios spanning four distinct enterprise-centric categories of attack vectors. These scenarios used real-world attacks, such as cross-site scripting, threats delivered via HTTPS, obfuscation, and advanced evasive techniques, that have targeted small-to-medium-sized businesses, enterprises, and other organizations.
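As a concrete illustration of one such scenario, here is a minimal sketch of a reflected cross-site scripting probe delivered over HTTPS, the kind of request an ACFW in blocking mode should reject. The endpoint and parameter name are hypothetical, this is not SecureIQLab's actual test harness, and it should only be pointed at systems you are authorized to test.

```python
# Minimal illustrative XSS probe against a hypothetical application protected by
# an ACFW. Run only against systems you are authorized to test.

import requests

TARGET = "https://test-app.example.com/search"   # placeholder endpoint
XSS_PAYLOAD = "<script>alert(document.cookie)</script>"

response = requests.get(TARGET, params={"q": XSS_PAYLOAD}, timeout=10)

# A firewall that catches the attack typically returns a block page or HTTP 403;
# a miss would reflect the payload back unmodified in the response body.
if response.status_code == 403 or XSS_PAYLOAD not in response.text:
    print("Blocked or sanitized -- counts as a detection.")
else:
    print("Payload reflected unmodified -- counts as a miss.")
```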

You can read the SecureIQLab 2024 ACFW CyberRisk Validation reports for all of the efficacy and operational efficiency scores and details, but here's part of what you will find with respect to security efficacy. The categories of attacks included standard threats and advanced threats, covering Application-based Threats, Malware & Botnets, Browser-based Threats, and Data-loss & Leakage.

One last point. Not only are security validation tests important in helping vendors improve their offerings, they must also be relevant to you, the consumer. If there are any metrics that you would like to see that are not included in this test, please reach out to us with your requests. Additionally, if there are specific vendors whose products you would like to see tested, let us know that as well. So, drop us a line at https://secureiqlab.com/contact-us-page/

Randy Abrams

Senior Security Analyst

SecureIQLab

