Introducing Qwiet AI AutoFix! Reduce the time to secure code by 95% Read More

Season 1  |  Episode 1  |  Part 2

Welcome back for part 2 of the first episode of Hacking Exposed, Qwiet Edition!

Our conversation continues with Stu McClure, Chris Hatter, Chetan Conikee, and Ben Denkers.

In Part 2, our hosts give their takes on a variety of recent developments in Cybersecurity.

The discussion touches on:

  • clever ways to trick security professionals, and why you should always check code before deploying it
  • a modest proposal for cleaning up open source
  • why credit card theft is so hot right now
  • how ChatGPT has a soft spot for grandmas
  • how much Spanish you need to know for your next vacation

 


Resources for this episode:

Bleeping Computer on the GitHub impersonation attack.

The inimitable Hacksplaining.com. (Thank us later.)

The Verizon DBIR.

 


Show Notes:

Segment 1: GitHub Gets Got

[00:02:04] On the cleverness of tricking security-minded professionals

[00:03:30] The “macro problem” with GitHub

[00:04:09] Discord + Chrome = Crypto?

[00:05:00] Always look at your code before you deploy it, for cryin’ out loud

Segment 2: Supply Chain Risks and Open Source Software

[00:08:00] No, seriously, look at code before you deploy it

[00:08:11] Attributes of the supply chain that make it susceptible to attack

[00:09:13] Importance of code signing and vetting open source software for risk assessment

[00:09:50] Education is the key

[00:11:10] The solution has to scale

[00:12:10] A modest proposal: let open source developers implement controls for their own software

[00:13:38] Stu is hopeful but he’s not holding his breath

[00:15:15] One team’s exploit is another team’s feature

[00:16:00] The first of what are sure to be many shoutouts for hacksplaining.com over the life of this show

[00:17:18] How to learn Spanish, and how to teach Cybersecurity

Segment 3: Exploiting AI and ML in Cybersecurity

[00:18:40] Credential harvesting can be an effective way to exploit ChatGPT

[00:20:04] ChatGPT hopes that software piracy will help your grandma feel better

[00:23:15] The wisdom of clouds

Segment 4: Highlights from the Verizon DBIR

[00:25:52] Credit card theft is back

[00:26:47] People are the weak link in living-off-the-land attacks

[00:27:50] Thoughts on anomaly detection

[00:28:32] Alarming statistics on the time it takes to patch vulnerabilities

[00:30:10] Sometimes mitigation is the best you can do

 


Episode Transcript

[00:00:00] Stu McClure: Hey Hacking Exposed fans. This is Stu McClure again. Thanks so much for listening to our new podcast, hacking Exposed Quiet AI edition. This next episode is a continuation of the discussion that started in our first episode. So you may wanna enjoy episode one before you dive in here, if you haven’t already.

[00:00:32] Let’s transition if we can. Let’s talk a little bit about some of the other sort of interesting stuff that’s been coming out. I mean, you know, we talk about open platforms like GitHub, you know, GitLab, sort of any of these environments and the ability for hackers to create these fake accounts. To, uh, you know, push malicious code that can be run and executed on these developers systems.

[00:00:58] We just saw [00:01:00] this last week or two on, in and around GitHub. It just seems to go on and on and on and on and on. I mean, are, is there any way to mitigate this in any way, shape or form? And, you know, how, how are these guys managing it other than just whack-a-mole, you know, like, ah, new one comes up, knock it down, knock it down.

[00:01:18] How, how are you guys thinking about it? 

[00:01:22] Ben Denkers: Yeah, I mean, I would say like you just, you can’t trust anybody these days, right? It seems like is the unfortunate thing. And so, you know, it’s the old adage, trust, but verify. And so you have all of these projects or potential projects that are, are being, um, you know, crafted maliciously.

[00:01:38] And, and if you know it, it comes down to the human element of being able to ascertain whether or not what you’re working on, or what you’re installing or implementing, is it actually malicious? And so, you know, again, it goes back to training, and being able to recognize when something just feels off. But it’s certainly an interesting dynamic as relates, you know, from my perspective of [00:02:00] who they’re actually targeting, specifically from a developer perspective.

[00:02:04] Which generally are gonna be more technical, technically savvy than, you know, an average user. And so where I look at it from a risk perspective is, you know, depending upon what those developers are doing or what privileges they may have, you actually have a lot more exposure as well.

[00:02:24] So, you know, high risk, you know, and reward in that sense. 

[00:02:31] Chris Hatter: Just to add, I think, a little bit of context in this conversation, we talk about the open platforms. Read an article that there was a threat actor group that built what amounts to fake profiles releasing malicious code on these open platforms under the guise that these malicious pieces of software or POCs or exploit code for zero days on [00:03:00] Discord and Chrome. Now, I think about this in a couple of ways. One is in large enterprise you have engineering teams and, and security practitioners are constantly trying to figure out, well, how do we make sure that we know and have vetted the components that software engineers are going to use?

[00:03:21] Right. And in this particular case, we again go back to the incentive structure. Those engineers wanna move fast so they get components that help them build, right? So that’s like the macro problem with GitHub. In this specific case, I find it very interesting, that basically these people created fake security company profiles attached real world security practitioners pictures to the profile, and they were releasing fake zero day exploit code.

[00:03:48] And I, I immediately noticed that they were doing it for, you know, like I said, discord and chrome. And I’m wondering, who are the personalities that are very interested in zero day exploit code? [00:04:00] Right. And so it typically tends to be security research groups, other malicious actors who are interested in using it.

[00:04:08] And of those malicious actors, Chrome and Discord to me screams crypto. And crypto is a topic that we should eventually get to as a part of this conversation. The amount of cyber attacks that are happening in that space and the amount of money that’s being lost is incredibly high with very little coverage, right?

[00:04:26] And so I just found this particular case to be quite interesting. It’s like who are they going after and what type of success are they seeing, right? Because it’s fake exploit code. And who downloads? 

[00:04:39] Stu McClure: Oh, it’s, it’s very targeted to security researchers and to, other hackers, other hacker groups, things like this.

[00:04:46] And usually the, you know, I won’t say ankle biters, but the ones that are very simple, almost like research teams inside, like corporations. You know, I’m just thinking, trying to take advantage of anybody [00:05:00] that wouldn’t actually look at the code before they run it, thinking it’s gonna do one thing, which is proof of concept, an exploit.

[00:05:08] But it does something entirely different and so, yeah, they went to great lengths, like you said. I mean, found real researchers, put ’em up there on a website, you know, they had a whole bunch of accounts. I mean, I, it’s just exhausting how much, I mean, if you, I wish they could just put their energies onto something else.

[00:05:25] Like it’s just, yeah. 

[00:05:27] Chris Hatter: This, this example that I’m using, I looked at the executable. I mean, it was very obviously malware, right? It was, it was in an enterprise world, first of all, like Discord generally would not be permitted if you think about this. Most organizations are not using Discord as their primary method of engaging with each other.

[00:05:51] And it was like, like I said, fairly obviously malicious and so I don’t know how far it would’ve gotten and you know, against any security [00:06:00] organization that knows what they’re doing. And that’s again, what brought my mind to, well, it could be the crypto space. It could be targeting specific research groups, or it could be going after other malicious actors wanting to gain a foothold with.

[00:06:13] You named the threat actor who’s interested in Discord and . . .

[00:06:16] Ben Denkers: Yeah, I, you know, I, what I thought about this was really the concept of what’s happening, right? So in, in the execution and the, the great lengths that they, they took, uh, you know, in terms of the fake profiles and making it seem legitimate, I think more conceptually, Uh, I thought was interesting because, you know, again, if you took, if you were to, if you were able to compromise, you know, or, or embed a project with, with something a little bit more subtle that would be used as part of, you know, a larger project, think about the impact that you could have and, and kind of that Trojan horse mindset of, the more stealthy, long approach that an attacker could [00:07:00] potentially take.

[00:07:00] Right. And so, you know, I think this is just kind of this, a new wave of potential attacks and risks that organizations have to be very mindful of that not everything, is as it seems, and to start to have processes in place or be able to validate that things are actually valid.

[00:07:18] Stu McClure: This does bring up the supply chain issues too. Exactly. A lot of attacks are starting to attack the developer themselves. You know, in this case it was security researchers or sort of, you know, a specific target of developer perhaps, or maybe just researcher. But a lot of those new supply chains are coming right on through the GitHubs and GitLabs of the world to exploit the developer themselves, or the package managers out there that aren’t vetted, et cetera, et cetera.

[00:07:47] I mean, I know we’re seeing more of it, but it seems like we should be seeing a lot more of it because it seems like such an easy low hanging fruit attack vector. I mean, are you seeing anything on that [00:08:00] from a developer perspective? I mean, they aren’t looking at packages and thinking, oh, is this a malicious package? They’re just installing a package, you know that. Right.

[00:08:09] Chetan Conikee: Exactly. I mean, like all three of you stated, right? First of all, the question is the supply chain is broad and wide. The fact is a small package can be compromised, impersonated, and it’s gonna proliferate and make its way into your enterprise software.

[00:08:26] So the two questions, detection and prevention. But let’s kind of look at this with the sense making from our own perspectives. Now, uh, I might be working for a very small organization, a startup that, or startup that is at mid stage or, or early stage low revenue. So my focus is move fast and break things, whereas at the enterprise, I wanna make sure that I write secure software.

[00:08:51] So we cannot have controls embedded that is a one-size-fits-all across this whole realm. So the important thing is how do we do [00:09:00] the right thing at the right time to ensure, first of all, that no one could proliferate into our assembly line and place these, you know, I would say bad actors, bad software in that space.

[00:09:13] So, first step is code signing. You gotta make sure that you know, you sign your code so that you know your code is your code. Second is when you are herding the cats, which is when you’re bringing open source into your ecosystem, you gotta make sure that that open source fits a risk profile. What is the bus factor of developers?

[00:09:34] How many developers have moved in and out of that software? Are there any symptoms in that software that could present threats to you in the future? So these are small checks. They kind of fit the realm of early startups to enterprises. What a good start.

[00:09:50] Stu McClure: It’s simple. It’s simple education, but we just aren’t doing it. That’s what’s killing me.

[00:09:55] Chris Hatter: It’s education. But the other issue, I agree with everything Chetan is [00:10:00] saying. The pushback that I would have at the enterprise level is how do you scale those checks? You know, if you got a couple thousand engineers, tens of thousands of apps, whether they’re internal or external, you’ve got software development flying around, happening all day all the time, 24 by seven.

[00:10:19] You have only a few mechanisms right now at your disposal, right? One would be you can vet all of the open source software that is going to be used. And basically deny all other packages. Right. And you go to a pre-vetted security

[00:10:36] Stu McClure: Yeah. White, white listed solution that you gotta stand the whole team to go through.

[00:10:40] Chris Hatter: Exactly. Right. And so how much does that cost to maintain and can you move as fast as the world wants to move from an engineering perspective with that model? The alternative is it’s much more of a floodgate. You rely on education and say, developers, here’s what you need to think of. Here are all the things that you need to do to check, and then you implement a model that’s more permissive.

[00:11:02] What is the way of scaling and how, I think, what does the future look like from technology to help security teams scale here?

[00:11:12] Chetan Conikee: Let me add a quick point to this. Um, you know, unfortunately the press is painting a bad picture. You know, we mostly say open source is bad. That’s how  we are portraying the situation for, and all transfer of risk mitigation and controls is pushed to those that consume open source.

[00:11:33] I would say that we have to switch the perspective a bit. Today we have many platforms that enable fostering community of open source developers to create value which is consumed at layers above. So rather than transferring risk and control to those that use open source, we should push those controls to those that define, create open source so that they have effective measures and practice.[00:12:00]

[00:12:00] And I’d emphasize on this because open source developers are best at what they do, creating value. And when you define controls, they’re also really good at implementing those controls, unlike the enterprises and those that,

[00:12:15] Stu McClure: So what, what would that look like then? Chetan? So, so GitHub would say you have to meet these standards before you can push code into GitHub. Is that it?

[00:12:25] Chetan Conikee: Yes. It’s something like that. Or, The typical act of open source is you invite developers, co-developers to contribute to your project. Now, when you invite someone, you gotta have an effective way of vetting that someone where you bring them on board, you help them patch. But until they build a reputation in your ecosystem, you do not let them commit to your production quality code.

[00:12:50] So they’re small controls, but they’re good controls where you help someone build a reputation. Yeah. And continue to add value rather than shunt them off saying, you know, we, we [00:13:00] gotta treat everyone as bad actors.

[00:13:02] Stu McClure: Yeah.

[00:13:02] Chris Hatter: Are you seeing this happen in practice yet?

[00:13:07] Chetan Conikee: Yes. Code signing is absolutely, code signing is seeping into open source.

[00:13:12] Effective controls are seeping in. And, uh, you know, the biggest issue with open source, again, boils down to incentive. Chris, they are almost working pro bono. They get fatigued and tired. And it’s obvious when they get fatigued and tired, someone’s taking advantage of that fatigue. So how do we prevent that, which means incentives have to flow down into the ecosystems as well?

[00:13:38] Stu McClure: Yeah. I just don’t, uh, I’m not holding my breath for the code platforms to ensure and require security, but, I, I’m hopeful. I mean, I guess I’ll say that.

[00:13:52] Chris Hatter: yeah, I think it’s a great model. I just think that if I throw the CISO hat on, I feel an intense obligation [00:14:00] for the safety and security of my environment.

[00:14:03] And, and I can’t just assume that someone else has it covered. Like I have to have positive control over the situation. Right. Absolutely. And so that’s my only reservation. I think it’s an intelligent way of starting to solve the problem. It’s just, for me, in this exact situation, I didn’t wanna say open source was bad.

[00:14:22] Me and my CTOs were very aligned that it helps us build software fast. And oftentimes it was secure.

[00:14:29] Stu McClure: Maybe it’s a belt and suspenders model. You know, the belt is, you gotta handle it yourself. You gotta manage it somehow with the process and the team. But the suspenders are, well, we really need to ask the Microsofts of the world and the GitHubs and the GitLabs, et cetera, to be able to implement those security controls inside of the code mix. It’s multi–

[00:14:51] Ben Denkers: I think it’s perspective too, right? I mean, because it depends, right? I mean, you have the concept of it, it may be a feature to somebody, right? And a vulnerability, [00:15:00] depending upon how you leverage it as part of the code, I think is also an interesting kind of concept to think about, right?

[00:15:05] So as the organization’s learning how to deal with the potential vulnerabilities, you know, of the individual applications that we’re talking about here. You know, they may have been developed with this concept of an actual feature for usability requirements or, or something along those lines.

[00:15:27] And so I think that kind of plays into effect as well, just better understanding, what was the original intent? And so everyone has a different perspective, I think.

[00:15:38] Stu McClure: Well, yeah, I mean, I might be overly biased, but I just think educate, educate the individual developer because you’re gonna kill not just the 80:20, but probably the 99:1 of attacks if you just educate them on, you know, the four core attacks and the four core fixes, and just get them to really be at least [00:16:00] knowledgeable and catch this stuff. But that just doesn’t seem to be a big priority. I mean, I will sort of throw out, I’ll do a shout out to hacksplaining, by the way, cause I love that website.

[00:16:09] Hacksplaining.com. If you guys, we have no affiliation, but if you guys, wanna point your developers into one place to learn real quickly about cyber attacks. On the web and web applications like that, that would be a great place to go.

[00:16:24] Chetan Conikee: Stu. Given you mentioned education, I just wanna challenge you on one thing very quickly, right?

[00:16:30] Uh, do you think education happens best when the stakes are high? Like, what I mean by that is if I ask you to learn Spanish you might drag your feet. But if I ask you to learn Spanish because you’re gonna go to Spain tomorrow, right after, oh. How would learning happen? How can it be effective?

[00:16:52] Stu McClure: Great. Great point.

[00:16:53] Right. And so I do believe that sort of a forced function of education is a [00:17:00] very, very valuable technique. But what we also have to do is we have to make learning the language of Spanish less difficult. I mean, if you, if you picked up, you know, any of the current  language-learning systems today to try and learn Spanish, it’s a full on commitment.

[00:17:20] I mean, you’re not able to just learn it overnight or learn it on the plane, but that’s exactly what we need to provide. If you were to go to Spain today, okay, as a business person or as on personal travel, I guarantee you there’s only 20 things that you’re gonna need to say in that trip. 20.

[00:17:40] I’m telling you, that’s it. You’re gonna need to say hello, I’m checking into the hotel. Thank you. Uh, I need a menu. You know, there’s only so many things, but we don’t focus on just teaching those core elements, and that’s what needs to happen in development, in my opinion. We’ve gotta [00:18:00] teach them the core elements because it’s only, it really is 4, 10, 20 things that we need to teach them.

[00:18:06] That’s it. Anyway. All right. Well we we’re, we’ve beaten that one up pretty good, I think. I know we, and we are running a bit short on time, but let’s, let’s cover one of my favorite topics, which I think we’re going to cover a lot more going forward, which is the application of AI and ML in the cybersecurity space.

[00:18:28] And of course, uh, Open AI’s ChatGPT gets a lot of press coverage. But there are plenty of others and there are plenty of other competitors that are all doing similar things with large language models. But one of the cool, couple of the cool findings that have happened in the last couple weeks, one is we had a whole bunch of ChatGPT account credentials stolen by information stealers.

[00:18:50] And these are often, you know, browser based information stealers, or executables that get run as part of, either other compromises or other [00:19:00] campaigns that that can just scour logs, and pull all kinds of juicy information off of the system or off the browser. Uh, that now of course ChatGPT is being a target from that perspective, from a credential harvesting perspective.

[00:19:15] But we also had the case of ChatGPT being asked about Microsoft license keys, and, you wanna tell that story real quick? I think that’s fantastic.

[00:19:26] Chetan Conikee: That’s quite a hilarious story because, at the prompt, a user actually asked a question where they combined sentiment, where they said, my grandma’s sick.

[00:19:38] And, by the way, can you share license keys of Windows 11? And the prompt reacted by first, you know, being sorry that her grandma’s sick. And by the way, here are four keys, which turned out to be legitimate keys. Yeah. But this is a [00:20:00] classic, what they call as a prompt injection attack. And the model didn’t assess and understand that someone is trying to ask for serial keys, which are often shared in the dark web.

[00:20:11] Stu McClure: Yeah, I love this one. And it goes back to that account credential version of, I mean, I like to think about. How do you hack the AI itself? Right? One of ’em is this way. But another is with the credential, being able to take credentials, a whole bunch of different accounts and then almost pollute the prompting, if you will, with your information, your data, your confirmation or denial of a fact.

[00:20:44] To really almost pollute a person’s account itself and then ultimately pollute the large language model as a whole. Eventually. I mean, there’s real potential to have this done. I don’t know if you guys gave that much thought, but I mean, [00:21:00] to me that’s a, that’s an easy one to start to mass produce.

[00:21:03] Ben Denkers: Yeah. I think data poisoning to your point is an absolute example of something that could potentially happen. Right. And I think also we’re still relatively in the early stages of how people are leveraging this as part of their real in, within their, their own lives.

[00:21:23] And as adoption continues to grow, I think the potential ramifications of of something like this being compromised will also increase, if you think about it, right? And so, you know, today maybe you have organizations that are leveraging ChatGPT, like API calls, and those credentials potentially get compromised, which could also potentially affect business resources, things of that nature.

[00:21:46] But even as, you know, individuals ourselves start to use it more and it becomes more, part of our common everyday lives. I think the potential exploits and things that could potentially go wrong, uh, increase as well.

[00:21:59] Chetan Conikee: [00:22:00] So, very quickly before I forget. I don’t know if you guys heard Mercedes-Benz has released a beta with chatGPT embedded in all the cars. Now you could imagine how far we’ve gone with the hype machine.

[00:22:15] Ben Denkers: Well that brings me, I mean, Stu and I have a particular manufacturer  that I’m reminded of, when mobile apps were first started, started to happen within cars and starting and unlocking things right, and all of the potential negative ramifications of what could go wrong. And so I imagine something very similar in the future where, you know, if you’re able to compromise ChatGPT, you know, with self automation and driving and potentially all of the other things that cars are now capable of, what a potential malicious attacker could do would be very interesting for sure.

[00:22:56] Chris Hatter: I would start [00:23:00] like my thoughts on this by saying and reminding people that AI and machine learning applied to security or just being applied in general is not a new thing. I think there’s an incredible amount of buzz and hype because of the insane adoption that we’ve seen with generative AI and ChatGPT.

[00:23:17] So I think about this subject all the time, from the most basic of use cases. I mean those credentials that were stolen, just think of something so basic. Not, not model poisoning or data, you know, data injection. Just think about the fact that you could go buy credentials off of the dark web, and I could go figure out what Stu was asking ChatGPT, not only does he probably–he may not want me to know that, he might put sensitive information into it and I think what’s what’s happening right now is actually quite similar to what happened with public cloud adoption. It was a runaway train. People are going to adopt AI and ML generative and predictive.

[00:23:56] It’s not gonna stop. When public cloud was first introduced to the [00:24:00] world. Everyone’s like, oh my God, it’s so vulnerable. Don’t do anything with it. Don’t put your data in there. And that evolved to being the defacto standard for infrastructure. And I think with the large language models in ChatGPT, or Bard, what you’re seeing is the application of control being implemented.

[00:24:17] We’re going to see a string of issues. We are also going to see the people building these models and the software that supports them evolve. You’ve already seen OpenAI basically prevent the, unless you want, you permit it to the prompts not being part of the model training. Right. So you’ve started to see some of those things.

[00:24:37] I’ve seen a lot of reactions by security organizations banning large language models. And the way I am coaching people to think about this problem is that models are software and data is data. You have data security policies, you have software security policies, apply them. It’s not something wildly and crazy new.

[00:24:55] We’re just gonna see the people building these softwares evolve and add control [00:25:00] and give you the controls to be able to manage risk on them. 

[00:25:04] Stu McClure: We can talk all day on this one. Let’s, let’s transition to the last topic. The Verizon DBIR, right? The annual report on data breaches. What were some of the takeaways you guys got from it? Is it more of the same or is there anything new that you’re, that you saw in the report?

[00:25:24] Ben Denkers: You know, one of the things that I saw that I thought was interesting,  is, I mean, obviously we expect ransomware to still play a big, big part, especially given the current economics of the environment. Right. But one of the points that the report highlighted was credit cards–stealing credit cards is coming back, which hasn’t been a trend for a very long time.

[00:25:48] Stu McClure: it, it did slow. Yeah. It slowed for a bit, hasn’t it? Yeah. 

[00:25:52] Ben Denkers: Which, which I thought that was interesting because it’s, maybe we’re getting a better job at endpoint protection. Right. And so, [00:26:00] you know, attackers are having to now revert or change their process. But it kind of highlights this point that, nothing is really stagnant and in terms of attackers and motivations, it will continue to change. And I thought that was pretty interesting. 

[00:26:16] Chris Hatter: Yeah, there was one more of the same. I do give a lot of credit to Verizon. I mean, this is like a rite of passage in security. You gotta read this every year. But the focus on humans as an attack vector is what always speaks to me. Someone who’s kind of lived through some of these types of attacks, the human element and social engineering was always incredibly successful.

[00:26:47] Almost regardless of what type of controls you put in place, there are creative ways to trick people, right? And when you trick people and you obtain legitimate credentials, it gives the attacker the ability to live off the land. [00:27:00] And when they live off the land, that was really one of the harder things that you had to be able to try to figure out within an organization.

[00:27:06] You had to get really as creative as the attacker. If Stu was in finance, why wouldn’t anyone in finance SSH into a machine? What’s the use case for that? 

[00:27:17] Stu McClure: Yeah, it’s basically anomaly detection, you know, it’s trying to understand the roles of each individual and understand their normal, typical behavior patterns, and then trying to find deviations and that’ll, that’ll sort of spark a concern.

[00:27:30] I do know a company that’s actually building something like this. I can’t release it yet, but it is quite interesting taking that tactic of trying to piece together a lot of different pieces of information to, to build a role or a function that then you deviate from. That might be on the horizon with some new tactics coming.

[00:27:51] Chris Hatter: Yeah, I mean, I think people have attempted UEBA platforms and stuff like that. It was never clean and super scalable. But yeah, [00:28:00] the behavioral data that you get about what people should be doing, the anomalous behaviors, you know, you should think about a data scientist or two within your security team.

[00:28:11] Even basic techniques like cluster analysis for network traffic outbound is an example. I mean, there’s a lot of different ways to try to crack this nut, but the humans gave rise to live off the land, in most of the scenarios that I saw. And it’s incredibly challenging to manage at that point.

[00:28:32] Chetan Conikee: I have one quick takeaway, which was sort of alarming, right? There was a statistic that was published in the report, that said the average time from announcement to enumeration of a particular export is 17 days.

[00:28:50] To fix from announcement to patch is 49 days, and I say it’s alarming because weaponizing particular exploit is [00:29:00] being commoditized today, which means in the next report, perhaps that 17 will further drop to 10, to five, to one to one hour. There’s an increasing trend in terms of announcement to patch, which goes to show that companies are getting slow in assessing, identifying and fixing maybe cause of organization problems, priorities, lack of incentives. So gotta watch this out. 

[00:29:28] Stu McClure: Yeah, I’d love to know the, the details on how they measured that, but if that is true, as a blended average, um, yeah, it’s not pretty because all the exploits are coming out faster than you could ever patch for, and so that “window of exposure,” we used to call it–man, that window of exposure is really dialed in.

[00:29:46] Chris Hatter: Now, in your response playbooks, you have to be comfortable with mitigating, not totally eliminating sometimes. And so when you think about security architecture, you should think about compartmentalizing, segmenting [00:30:00] all of those things, but be prepared. Whenever these vulnerabilities and exploits come out, it’s typically like we either have a patch or we don’t, and then there’s a set of mitigating steps that you can take.

[00:30:10] Your playbooks and your team need to become very good at the mitigation components while maintaining a healthy and available production stack. So, you know, that’s something that you gotta just gotta be versed in. 

[00:30:23] Stu McClure: All right, gang. Well I think that’ll wrap it up for us. I really appreciate everybody joining the podcast today, and thanks so much to the speakers. We’re gonna have the same crew or close therein, pretty much every couple weeks. So thanks again for participating everybody and, uh, we’ll, we’ll talk again next time. Thanks.[00:31:00] 

Share