Carsten: For 10 years now, since the inception of VMRay, we’ve been talking about full visibility, and I really like that term. But just over the last year or so I’ve learned that visibility is one thing: it means the data point is there. Perception is another thing: it means I see the data point amongst all the other data points. And comprehension is what we really want to achieve: that I understand the data point and can act on it, respond to it, and use it.
Alexandra: So, there are a lot of large platform vendors out there. And of course, government entities tend to turn to them because the homogeneity of the platform maybe makes it easier to roll out. Do you see that as a good idea or a bad idea?
Carlos: For me, it depends on who you’re talking to and what the requirements of the organization are.
Because you need to have a balance of both worlds. There are parts of your security stack where it makes sense to have a platform offering. But if it makes more sense in terms of comfort, skill, and comprehensiveness for your workforce, does it make sense to have a best-of-breed solution in a particular area, especially if it’s to fill a gap?
All these platforms have multiple offerings, but they’re always going to have a gap in coverage. How are you going to fill that gap? That’s where these best-of-breed solutions can come into play. You may have a platform that offers a form of advanced malware analysis or sandboxing capability, but if it’s not to the degree or up to the standard you prefer, and you want something that’s best of breed, that’s what you’re going to start looking at.
You start looking at that balance, and at the integration and interoperability of a best-of-breed solution with a platform offering.
Carsten: My view is clear. What we typically see and what we recommend is combining platforms with best-of-breed point products for mission-critical use cases. I mean, I hate complexity, I love simplicity. So, I totally get why a platform makes a lot of sense for different reasons.
However, a platform vendor can never be laser-focused on one thing because they do 20 things. And also, security is a team play, that’s for sure. Each vendor, no matter how large, how experienced, how old, whatever that is, each vendor has its strengths and its weaknesses, its focus areas and its blind spots.
And the only way we can improve and get to the best security posture is by combining different technologies and different approaches. I mean, we don’t want to have a single point of failure.
Alexandra: How can we find the balance between being proactive and being reactive? Incident response is, by definition, reactive. And zero trust has always propagated the idea of assume breach, right? You’re going to have to assume breach no matter which platform you have or how much money you’ve invested in it. So, what’s the role of incident response? Where does it sit? And where do sandboxing or advanced malware analysis need to sit in the architecture?
Carlos: I believe that Advanced Malware Analysis (AMA) is still going to be a SOC capability. You have those preventative measures, and firewalls are a preventative type of tool from the get-go. They have that capability, but it’s informed through other analytics such as AMA, right? They’re going to be informed by the machine learning, information, and telemetry that they collect to make better decisions.
But I believe that AMA can exist in both, in that sense, though most primarily in the SOC. Like I mentioned before, zero trust is primarily going to be a very preventative type of approach to your architecture. And AMA is about taking advantage of the visibility that’s going to be gained through all those various controls and enforcement points, to start making sense of that information and turning it into human-readable information, making it more comprehensible, to Carsten’s point.
But ultimately, it’s to enhance incident response. It’s to improve the overall experience for the analyst. When you have this information, you’re getting all this noise. Where does it make sense to automate, so the firewall can respond to and address a basic, more known kind of threat that is detected, versus letting an analyst actually focus on a zero-day type of incident and invest that time and effort in doing that investigation?
Carsten: Sandboxing was invented in the mid-2000s, and the main idea was incident response. So, incident response was the main use case. Then it started being used for generating threat intelligence. But at least with our solutions, we see more and more use cases in the SOC.
And also with other security functions, for work that in the past only analysts could have done manually. Either you put those analysts there, which was super expensive, or you said it’s too expensive and the job didn’t get done. Now, with sandboxing capabilities that produce much less noise, that are much more trustworthy, and that are much faster, we see that our solutions at least are being used for use cases that were not on our radar years ago, like threat hunting, detection engineering, and user-reported phishing.
But you started the question with being proactive. The inherent problem and challenge of security is that it’s normally reactive: the attacker only needs to find one way in, while the defender needs to protect all possible ways, so the defender is typically one step behind.
However, using a solution like VMRay, advanced threat detection already shifts a bit from reactive in the direction of proactive, because now you detect something that has never been seen before on Day 0… and not only after a week, a month, or a year. And what I already mentioned before, which I think is even more important, is the usage of threat intelligence. Organizations use the IOCs for detection engineering, for threat hunting, for blocking, and they’re also using threat intelligence on a more strategic level to assess their defense capabilities against the techniques that are being used in the wild right now.
So I believe leveraging threat intelligence really puts you in a position where you can predict and preempt, if that works.
And last but not least, the number of attacks is increasing, the number of victims or targets is increasing, the number of attackers is increasing, but the number of defenders is not increasing at the same speed. So, in order to get rid of that backlog and this endless incoming stream of alerts, we need other ways: we need automation, we need powerful tools, and we need to make better use of our very scarce security staff, to put them in a position to become proactive. Otherwise, they will always be reactive.
So I believe it’s a combination of different things that is needed here. And sandboxing, or advanced threat detection, can play a role in different pieces. It’s not a silver bullet, nothing out there is, but it’s a very critical capability that is needed in so many different places to automate, to accelerate experts, and to enable juniors.
And that is what we need.
A question from the audience: How do you think the balance between on-premises and cloud capabilities is working out now? Obviously, there’s a push in various government entities, for example in the United States, to go cloud. What is a smart way to approach that?
Carsten: There’s a simplistic view that the Americans are in the cloud and everyone else is not. But obviously, that is not the case. We also see a lot of shift to the cloud in Europe already, and we see that a bit in Asia and the Middle East as well.
But we also have a lot of US government customers who still want to remain on-prem, and we even have one customer who moved back from the cloud to on-prem. I mean, there are compliance things and stuff like that, but mostly it’s for very sensitive and very critical use cases, which we find more with governments than with the private sector.
Carlos: I think that for a while, earlier on, there was such a huge drive globally to be in the cloud, right? Because there were a lot of good selling points and benefits to migrating to the cloud. And especially with the public sector, it was cheaper. Allegedly, it was cheaper.
But we’re starting to see that it’s kind of balancing out, costing about the same. The thing is, especially with the public sector, and at least in my experience, there’s a portion where you could probably say: this is going to be cloud. We can migrate this portion of our environment to the cloud because it’s unclassified in nature and public facing; we just have to make sure we have some controls around it.
But there are other aspects, such as critically sensitive, secret, or top-secret type information. Those entities are not going to say, “let me throw that in Azure, let me throw that in Google Cloud Platform, throw that into whatever public cloud service provider is going to host my services.”
For me, essentially, trust is a big factor here. I know we’re talking about zero trust, but there’s a level of trust involved in the government letting its data exist in an environment it doesn’t have control over. And that’s pretty much where we’re at.
We’re going to live in a world of hybrid architecture, meaning we’re going to have a foot in both doors: some portion of our environment is going to be cloud, and some portion is going to be on-prem. Looking at manufacturing, utilities, services, and government, those are probably going to be the more prominent verticals and use cases where there’s going to be an on-prem need. There’s going to be a need for these controls to be deployed on-prem, but they’re also going to have a percentage of cloud that needs to be addressed as well.
And that’s why I always say: invest in a solution that is interoperable, flexible, and adaptable based on your environment, where you’re deploying, and where you need to have coverage.