Season 1  |  Episode 5

Episode Transcript

Stuart McClure (00:08.673)

Alright, welcome everybody. We’re back with the Hacking Exposed podcast, Qwiet Edition, where we’ve got a whole bunch of incredibly interesting attacks. Some are old tricks, boring even, but interesting because they’ve been reapplied, and some are just huge attacks that have been in the news in the last couple of weeks. So with that, welcome to my team here, my cohorts: Chetan, Gabe, and Ben. Hello, gents, how’s it going today?

 

Chetan Conikee (00:36.288)

Hello, how are you?

 

Stuart McClure (00:37.049)

All looking so darn good. Gosh. Yeah, thanks for making it. Look, I think the biggest topic we’re gonna have, and you’ll probably see a trend of this in a lot of the attacks we talk about, is social engineering attacks. Good old-fashioned, you know, Kevin Mitnick-style, old-school calling up somebody and getting them to do something that they really shouldn’t be doing. But let’s kick it off with the MGM hacks.

 

Ben Denkers (00:37.294)

Doing great.

 

Gabe (00:39.016)

Hey y’all, yeah it’s great to be here.

 

Stuart McClure (01:04.809)

And of course there have been some peripheral hacks too, with Caesars and others, but MGM has been such a dominant force in the headlines. My phone was blowing up for weeks after that first event; I thought the whole world was melting down with how many inbounds I was getting. But starting with the social engineering attack, to me it’s one of the oldest tricks in the book, and it just seems to work over and over and over again, and we can’t really fight it. You know, back at Cylance we had built a pretty interesting tool to try to thwart this with our identity solution, but other than that we’re old-school: username, password, username, password, or some kind of key or token, but those are all effectively passwords, just hidden and embedded.

 

So what do you guys think about this? Now, I know they’ve been back up for five days. It’s September 26th right now, and they were back up on the 21st. All the reports put it at roughly eight million dollars a day in losses. Obviously, just a drop in the bucket. I think the total loss or cost was something like 80-plus million for the 10 or so days they were down. I think it’s more the…

 

Stuart McClure (02:26.213)

And they had other properties that were impacted too, not just the MGM Grand proper: Mandalay Bay, Bellagio, and a whole bunch of other properties. So clearly there’s going to be an impact in the market, with people questioning, well, are their systems secure? Are they keeping my data secure, et cetera, even more so than before. But what do you think it’s doing? What do you think it’s done to the industry?

 

Gabe (02:33.939)

The light’s on, yeah.

 

Stuart McClure (02:55.893)

And I actually felt really bad for them. I know a lot of the folks at MGM, and I just felt super bad. You know, all it takes is one crack in the dam for a bad guy to get in. You could have a billion and one preventative solutions in front of a billion different vulnerabilities, but just this one crack gets exploited and the whole thing breaks open. And now it involves Okta and everything else. So I don’t know, what was your take on all this, guys?

 

Gabe (03:10.931)

Right?

 

Ben Denkers (03:25.058)

You know, my thought is, again, we’re talking about an attack vector that’s been around since essentially the age of… But more importantly, look at how effective it still was. And what do you do in those situations, right, where you have a lot of technical controls in place, whether it’s technology or the equivalent, to help identify those attacks, but ultimately you still fall victim to somebody having the right set of answers to get access to a VPN, in this case? And for me, if I’m an attacker, I’m looking at how effective that was. I’m going to double down and try my luck with the next MGM or equivalent, for sure.

 

Gabe (04:12.135)

Yeah, I saw this conversation taking place on Twitter, and sometimes we’re a little bit too quick to go and blame the user: oh, this person shouldn’t have had access, or they’re not trained, they don’t understand. But at the end of the day, all we can do is mitigate, right? Reduce risk. And we take different approaches to do that. But we cannot depend on just…

 

Gabe (04:37.543)

…this one control or this one tool to prevent this attack. It takes a village, really, to reduce that risk and keep the organization, and its customers, as safe as possible.

 

Stuart McClure (04:51.632)

Oh.

 

Chetan Conikee (04:52.012)

I’m sort of curious what’s going to play out after this, because there is a term in statistics called stationarity, which means something fundamentally never changes, decade after decade, and that’s what makes it conducive for an attacker to keep attacking. If I just play out what’s going to happen in the next few days: perhaps we’ll have a new standard set, we’ll have a set of rules, then all employees will be put through training.

 

And maybe they’ll go procure another four vendor tools. But is this really going to solve the problem?

 

Stuart McClure (05:26.745)

Well, I think that’s it. And you know, you guys know how I think all the time. It’s like, how do I prevent this silly thing? So social engineering is just a pathway. It’s just a pathway to get to the core feature that somebody is exploiting. The core feature is authenticated access onto a network, right? To take over the persona of another individual, act in their capacity, and then exploit their privilege inside of an organization. And so I have to look to Okta, I have to look to the technologies providing that final last-mile authentication, and say, can we do better here? Can we build some sort of an algorithm? So much of what AI is great at is this: if a human being can tell that Gabe is Gabe from video, audio, mannerisms, gestures, speaking, accent, and so on, why can’t a computer? I don’t understand. We covered this back in the Cylance days. It’s doable, behaviorally. How you type on a keyboard, how you use your mouse, what programs you use, how you use them. These are all highly unique to an individual. Why can’t we do that? And I just beg the Oktas of the world and others to get more innovative, to drive AI into the solution and not be part of the problem. Because guess what, and you saw it in a couple of other attacks, deep-faking voices and deep-faking videos, this is going to happen. Sure, somebody can call up an IT admin posing as an employee who’s asking for a reset of their MFA.

 

Gabe (07:10.248)

Hmph.

 

Stuart McClure (07:20.373)

And you should be able to tell that that’s a deep fake, that it’s not the real individual. But we don’t have a unified, trusted system of identity. That’s the problem. And that’s also why all these phishing attacks, smishing, and all the social engineering work: because at the core of it, we cannot trust that the identity of the individual matches what we’ve stored for that individual. And I just think AI is a perfect, perfect place to apply here. But, sorry, I’ll get off my soapbox now.
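
[Editor’s note: below is a minimal sketch of the kind of behavioral signal Stuart is describing, using only typing rhythm. The timing values, feature choice, and threshold are illustrative assumptions, not a production biometric system.]

    import statistics

    def timing_profile(key_press_times):
        """Summarize typing rhythm as the mean and stdev of the gaps
        between consecutive key presses (timestamps in seconds)."""
        gaps = [b - a for a, b in zip(key_press_times, key_press_times[1:])]
        return statistics.mean(gaps), statistics.stdev(gaps)

    def looks_like_same_user(enrolled, observed, tolerance=0.5):
        """Crude check: how far has the observed rhythm drifted from the
        enrolled profile? The tolerance is an arbitrary illustration."""
        (mu_e, sd_e), (mu_o, sd_o) = enrolled, observed
        drift = abs(mu_o - mu_e) / mu_e + abs(sd_o - sd_e) / max(sd_e, 1e-6)
        return drift < tolerance

    # Hypothetical timestamps captured at enrollment vs. during a live session.
    enrolled = timing_profile([0.00, 0.18, 0.35, 0.55, 0.71, 0.90])
    observed = timing_profile([0.00, 0.42, 0.95, 1.30, 1.90, 2.60])
    print(looks_like_same_user(enrolled, observed))  # False -> challenge the session

A real system would combine many more signals (key hold times, mouse dynamics, device and application usage) with a trained model rather than a fixed threshold, which is exactly the kind of identity work Stuart is arguing the industry should invest in.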

 

Gabe (07:49.551)

Yeah, there is a challenge. And just playing devil’s advocate here: there’s data collection, right? Do we really want our biometrics to be collected and used for authentication? I think at Cylance we looked at things like, hey, what devices are near you? Is your phone near you? Is it where it’s usually supposed to be, things like that. All of that requires data collection, right? And then understanding the traits of different individuals. But then how do you feel about privacy? So it’s hard to balance. We need to be careful not to go to extremes, but at the same time we also want to make sure we establish some kind of baseline. Anyway, I don’t have a solution for that, but it’s a problem.

 

Chetan Conikee (08:37.385)

Hmm.

 

Ben Denkers (08:40.098)

Two-factor authentication using DNA, right? Like that’s, you know.

 

Gabe (08:43.25)

Hahaha, yeah.

 

Chetan Conikee (08:43.971)

Be-

 

Stuart McClure (08:44.877)

2FA with DNA, well, you know, I think there’s something to it. And you know, us older folks, I can tell, yes, we find this stuff creepy, right? But I’m telling you, my kids couldn’t care less. Like, they just couldn’t care less. My 19-year-old, my 20-year-old, my 22-year-old, my 23-year-old, they just couldn’t care less. So I think it’s a generational thing. As long as it’s being used for good.

 

Gabe (08:59.943)

That’s true, yep.

 

Stuart McClure (09:13.977)

As long as it’s being used to prevent the bad guys from exploiting you, I don’t see the future generation caring. I don’t. I could be wrong.

 

Gabe (09:21.691)

Yeah, I do think that we are making some improvements, like moving away from SMS-based two-factor authentication to things like passkeys or a YubiKey. That helps. So it’s not just exchanging codes over the phone, but rather having a better experience.

 

Chetan Conikee (09:23.63)

Oh, you are-

 

Gabe (09:42.791)

Devices that ensure you actually have access, getting rid of usernames and passwords in the case of passkeys, for example. That helps. I think it’s just a matter of how do we put all of that into practice. Because transitioning from having the technology and the theory to then actually implementing that in an organization, we know how difficult that can be. It could take years because it’s a business, right? And they have more on their plate than just security. It’s complicated, but we’re here to help, I guess.

 

Chetan Conikee (10:21.264)

To touch on one point which has nothing to do with security: I was recently at a hospital observing a nurse. Nurses record critical PII on their terminals into the system, but unfortunately, purely from a usability standpoint, security is seen as an impediment. The system is constantly expiring the session, and the nurse needs to quickly enter patient data and move on to the next patient.

 

Stuart McClure (10:21.638)

Yeah.

 

Gabe (10:43.782)

Hmm.

 

Chetan Conikee (10:51.516)

Now, in the design world, they refer to this as an anti-affordance, because if these interruptions keep happening, nurses will resort to a notepad and start recording there. So my takeaway is: when can security be usable? That’s key.

 

Gabe (10:59.515)

Hmm.

 

Stuart McClure (11:05.121)

I’ll tell you when. I’ll share a story. When I went to Kaiser Permanente in 2008 to help build the cybersecurity practice there with Patrick Heim, I saw this exact scenario. I talked to the regional head, the one who represented all the doctors in the region. I won’t name names, but he and I discussed the challenges. I was new to the organization and he said, “Look, if you get in the way of patient data access, you kill people. It’s that simple.” So when you ask him to put in a Wi-Fi password just to get access to the Wi-Fi, that could have serious consequences.

 

Gabe (11:39.373)

Hehehe

 

Gabe (12:02.836)

Hmm.

 

Stuart McClure (12:03.377)

That was the first time I truly realized the gravity of the situation. Cybersecurity must accomplish two things: it must genuinely secure systems, and it must do so invisibly. Most importantly, it should enhance the user experience; it must be efficient. And if you can make the user’s work experience better, they’ll be more receptive. In the configuration of all their Wi-Fi we discovered a misconfiguration of the DNS servers: both the primary and secondary DNS servers weren’t functioning correctly, causing delays. By fixing this, we improved the Wi-Fi connection speed.

 

Chetan Conikee (13:22.397)

Yeah.

 

Stuart McClure (13:30.637)

And that’s how we gained trust. I understood very quickly the importance of prioritizing patient care. Any hindrance in accessing patient data can have a direct impact on patient health.
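
[Editor’s note: a rough sketch of the kind of check that surfaces the problem Stuart describes, timing lookups against each configured resolver. It assumes the third-party dnspython package and made-up resolver addresses; it is not the tooling actually used at Kaiser.]

    import time
    import dns.resolver  # third-party: pip install dnspython

    # Hypothetical primary and secondary resolvers pulled from the Wi-Fi config.
    RESOLVERS = ["10.0.0.53", "10.0.1.53"]

    for server in RESOLVERS:
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [server]
        r.timeout = r.lifetime = 3.0  # fail fast instead of hanging
        start = time.monotonic()
        try:
            r.resolve("example.org", "A")
            print(f"{server}: answered in {time.monotonic() - start:.2f}s")
        except Exception as exc:  # timeouts, SERVFAILs, unreachable servers
            print(f"{server}: failed after {time.monotonic() - start:.2f}s ({exc})")

If the primary resolver consistently times out and every lookup falls back to a slow secondary, each login and page load pays that penalty, which is exactly the kind of delay users end up blaming on “security.”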

 

Chetan Conikee (13:51.924)

And to close out this discussion, you’ve hit the nail on the head. There are both demographic and usability factors at play. If someone decides to phish or run a phishing campaign against Stuart McClure, they’re not going to target Ben Denkers and Gabe. They’ll target someone like Stuart’s executive assistant who needs to move quickly and efficiently.

 

Ben Denkers (13:51.987)

And there have been real…

 

Chetan Conikee (14:19.72)

…implications from these breaches. Sorry, Ben, go ahead.

 

Stuart McClure (14:24.305)

Yeah.

 

Ben Denkers (14:25.07)

No, I was just going to elaborate on what Stuart mentioned, especially in healthcare. Patient safety is paramount. We’ve seen examples in recent years of the effects of inadequate security in healthcare settings. We must find a balance. Cybersecurity should be both efficient and effective. 

 

Gabe (14:53.495)

Exactly.

 

Ben Denkers (14:55.114)

It’s an ongoing challenge for the industry.

 

Stuart McClure (15:05.361)

Alright, so we could probably discuss this all day, but we should move forward. I want to shift gears from social engineering, but we’ll revisit that topic shortly. I’d like to bring attention to the recent Apple Zero-Day vulnerabilities. They often don’t get the attention they deserve. These vulnerabilities, especially the ones discovered in September, have been extremely severe. I’m not sure when Apple was notified about them, but we know when they patched them, and they urged everyone to update immediately.

 

Gabe (15:39.035)

I…

 

Gabe (15:42.395)

Yes, a small correction: the known attacks have been few and far between.

 

Stuart McClure (15:48.293)

Right, there haven’t been many known attacks. I’m sure Apple is aware of some issues, but they might not be sharing them until a patch is ready. This drives me nuts because everyone asks me about Windows being insecure and how easy it is to hack. Yet, I’ve demonstrated on stage how to hack iPhones and MacBooks. Gabe, you helped me with a demo at McAfee. We could hack almost every phone in the audience, of course with their consent for the demo. But I’m over this idea that Apple is inherently secure. Not saying Apple is terrible; they just have a vast amount of code to manage.

 

Gabe (16:22.584)

Hehehehe…

 

Stuart McClure (16:42.557)

It’s challenging to ensure full security with an expanding code base and many developers. But these recent vulnerabilities have been alarming. The exploit allowed a maliciously crafted image to run active scripting in the background, executing commands with admin privileges, right?

 

Ben Denkers (17:19.419)

Exactly, it’s a dream exploit. Just send a user something, and bam, you have control.

 

Stuart McClure (17:26.889)

You send it, and the user doesn’t even need to click or open it.

 

Ben Denkers (17:31.894)

It’s invaluable, especially if you’re targeting high-profile individuals. The potential impact is massive.

 

Gabe (17:53.72)

Apple’s marketing often overshadows these issues. They have made some real security improvements and are great at promoting them, which can distract from the problems that remain. People often think, “Who’s going to target me?” or “Who wants my data?” but the vulnerabilities are real.

 

Chetan Conikee (18:00.788)

Exactly.

 

Gabe (18:20.503)

Apple’s good at making it seem like everything is fine. However, any device, any computer, if it runs on power, can be hacked.

 

Stuart McClure (19:05.589)

Absolutely. The vulnerabilities we’re discussing are CVE-2023-41061, 41064, 41991, 41992, and 41993. They’re severe, and if you haven’t patched, they can easily be exploited. They might have already been used in targeted attacks. Zero-day vulnerabilities aren’t used recklessly. Attackers use them strategically to ensure they remain undetected.

 

Gabe (19:45.487)

Once a zero-day is public, though, it’s fair game.

 

Stuart McClure (19:47.513)

Right, it’s out in the open.

 

Ben Denkers (19:48.85)

If it happened once, it can happen again. We’ve seen it before, and with the visibility these exploits bring, it’s not typically a one-time occurrence.

 

Stuart McClure (20:04.465)

The black market for these exploits is thriving. An iOS or Mac zero-day can fetch millions. The narrative that Apple is impervious needs to stop. Nothing is perfect.

 

Gabe (20:58.195)

Exactly.

 

Gabe (21:03.975)

If you’re listening to this and haven’t updated, please do so now.

 

Stuart McClure (21:08.205)

Absolutely. Update your devices.

 

Gabe (21:20.713)

The latest vulnerabilities impact all Apple devices: Watch, phone, computer, Apple TV, and iPads.

 

Stuart McClure (21:24.453)

If a similar vulnerability were discovered in a Microsoft product, it would be major news. Now, returning to the social engineering theme: the Google account sync vulnerability was recently exploited to steal $15 million from Fortress Trust. What’s interesting is that it was a social engineering attack. In this case it was smishing, using SMS messages. The messages told the user they needed a reset and had to acquire a new MFA token. What’s more, the attackers supposedly employed deep fake technology to sound like the targeted individual.

 

Gabe (22:14.113)

Mmm.

 

Stuart McClure (22:19.365)

If it were a smaller operation, you might think it’s typical because you’d recognize a colleague’s name, like Jim Smith, and wouldn’t hesitate to verify him. But it’s intriguing to ponder how far adversaries might push this technology. If we know anything about cybercriminals, they’ll maximize any tech’s potential. So, considering this incident involves Google’s Authenticator, which they managed to bypass, do any of you want to delve deeper into this, or should I explain?

 

Gabe (23:03.579)

The attackers capitalized on Google Authenticator’s feature that syncs all entries to the cloud. It’s a user-friendly feature. In the past, I’ve struggled with migrating my codes to a new phone, sometimes getting locked out of accounts. Apple has a similar mechanism with its keychain. It’s a convenience, but due to this feature, the attackers were able to call and likely employed AI to mimic the voice of their target, thereby bypassing security measures.
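
[Editor’s note: to make the risk concrete, this is roughly how a TOTP code is derived (per RFC 6238) from the shared secret that Google Authenticator can now sync to the cloud. Whoever holds that seed can mint valid codes; the secret below is a made-up example.]

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """Standard TOTP: HMAC-SHA1 over the current 30-second time counter."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    # Example seed only. In an incident like this one, seeds become reachable
    # once the victim's cloud account (and its Authenticator sync) is compromised.
    print(totp("JBSWY3DPEHPK3PXP"))

The convenience that rescues you during a phone migration also means the second factor is only as strong as the account the seeds are synced to.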

 

Gabe (24:18.163)

It’s a double-edged sword. AI can be beneficial, but it can also be weaponized for nefarious purposes.

 

Stuart McClure (24:31.193)

Indeed, Gabe. How can we be sure that you’re genuine in this video? Could you be using deep fake technology to mask your appearance and voice? This is something we need to be prepared for, as its prevalence is bound to increase.

 

Gabe (24:39.952)

The background blur in my video feed is a simulated effect by Apple’s Center Stage. It follows me and creates an artificial depth of field.

 

Ben Denkers (24:50.847)

Considering the topic of deepfakes, the real concern is the potential for scaling attacks. In the past, social engineering often required one-on-one conversations, but with AI, attackers can automate these interactions. Instead of targeting a handful of employees, they could potentially target hundreds or thousands. This scalability is bound to increase the success rate of cyber attacks.

 

Gabe (26:00.953)

Previously, attackers needed extensive research to execute sophisticated attacks. But with technology, much of this can now be automated, enabling them to target numerous individuals without expending significant resources.

 

Stuart McClure (26:56.601)

It’s a perpetual challenge. Switching gears a bit, Chetan, can you shed some light on the recent vulnerability concerning GitHub?

 

Chetan Conikee (27:23.697)

Certainly. This is a classic example of repojacking. In GitHub, when you rename a repository or change your username, it creates potential vulnerabilities. Attackers exploit these changes, recreating accounts and repositories to inject malicious code. While GitHub has implemented controls to address these vulnerabilities in their main platform, it seems they overlooked applying the same constraints to GitHub Actions, allowing attackers to exploit the new service.

 

Stuart McClure (29:20.645)

So, to clarify, the attack sequence involves repojacking, then hijacking the NPM package maintainer email?

 

Chetan Conikee (29:36.192)

Exactly. The attacker changes the name, creates a new account that mimics the old one, and sets up repositories. They then plant a worm, and if the repo is popular and widely used, the worm spreads through the software supply chain.
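
[Editor’s note: one common defense against this class of hijacking is pinning workflow actions to a full commit SHA rather than a floating tag or branch, so a renamed or recreated repository can’t silently swap in new code. The action name and SHA below are placeholders, not real projects.]

    # .github/workflows/build.yml (illustrative)
    on: push
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          # Risky: a tag like @v1 follows whoever controls the name some-org/some-action.
          # - uses: some-org/some-action@v1

          # Safer: pin to an immutable commit SHA you have reviewed.
          - uses: some-org/some-action@2f4a1c0d9b8e7a6c5d4e3f2a1b0c9d8e7f6a5b4c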

 

Gabe (30:17.722)

For the developers listening: ensure you pin your dependencies. Always know what you’re installing. When you update, review the content to ensure it’s legitimate.

 

Stuart McClure (30:31.289)

So, by “pin,” you mean specifying a particular version?

 

Gabe (30:32.627)

Yes, exactly. P-I-N. Be explicit about which version you want to install rather than automatically fetching the latest one.
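
[Editor’s note: in the npm ecosystem being discussed, “pinning” looks roughly like the commands below: exact versions plus a committed lockfile so installs are reproducible. The package name and version are examples, not recommendations.]

    # Record an exact version instead of a ^range:
    npm install left-pad@1.3.0 --save-exact

    # In CI, install strictly from the committed package-lock.json:
    npm ci

When a dependency does need updating, bump the version deliberately and review the diff or changelog before committing the new lockfile, which is the review step Gabe mentions.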

 

Stuart McClure (30:49.145)

It’s akin to the debate of auto-patching versus manual patching. Any other insights on the GitHub vulnerabilities?

 

Gabe (31:05.987)

The thing that’s really concerning now is that with all this social engineering and open source under attack, it’s becoming hard to trust people or organizations. Networking is getting more complicated. And, I mean, there’s that North Korean attack too.

 

Stuart McClure (31:33.779)

Right.

 

Gabe (31:35.619)

Engaging with people has become riskier. Before, you might’ve just tried out some software because it looked cool. But now, you have to worry about hidden threats. How do we restore trust? 

 

Stuart McClure (31:51.581)

Have you seen platforms like Twitter or LinkedIn implementing government ID validation to restore trust? 

 

Gabe (32:05.307)

LinkedIn does it, but GitHub? I don’t think so. Many people in our community, especially those attending events like Defcon or Black Hat, use nicknames. They’ve earned trust over the years, but they’re cautious about sharing too much personal info.

 

Chetan Conikee (33:03.268)

It’s tricky. By providing all this personal info, you can bolster security. But companies also want to grow. They can’t always do both.

 

Stuart McClure (33:18.309)

Before we finish, I’d like to touch on the Kubernetes command injection flaw. This vulnerability affects Windows endpoints inside a Kubernetes cluster and can escalate privileges. Essentially, the combined vulnerabilities in Kubernetes and Windows could compromise clusters. We need to address these issues.
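
[Editor’s note: the specifics of the Kubernetes/Windows flaw aren’t reproduced here, but the underlying pattern is classic command injection: untrusted input interpolated into a shell command. A generic Python sketch of the pattern and its fix, not the actual Kubernetes code:]

    import subprocess

    # Hypothetical attacker-controlled value (think: a path field in a submitted spec).
    user_value = 'harmless"; echo INJECTED; "'

    # Dangerous pattern: building a shell string from untrusted input.
    subprocess.run(f'echo "{user_value}"', shell=True)  # the embedded command also runs

    # Safer pattern: pass arguments as a list so nothing is parsed by a shell.
    subprocess.run(["echo", user_value])  # printed as one literal string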

 

Gabe (34:57.384)

On the North Korean situation, they’ve been targeting security researchers using a mix of social engineering and zero-days. They offered a seemingly helpful tool, which, in reality, was malicious. It’s a long con—building trust and then exploiting it.

 

Stuart McClure (37:08.025)

Their exploitation of trust in the open-source community using their “GetSymbol” project is concerning. We have to be vigilant, even with open source.

 

Ben Denkers (37:32.266)

It just shows that even those who might be more tech-savvy can fall for social engineering, not just the average person.

 

Gabe (38:03.611)

For those who’ve used ‘get symbol’ or ‘debug symbol’, ensure your system’s clean. Given the nature of the attack, starting afresh might be best.

 

Chetan Conikee (38:29.332)

Also, always keep an eye on business logic flaws. They’re frequently overlooked, but crucial.

 

Stuart McClure (38:36.069)

We’ll dive deeper into that in future podcasts. It’s a significant issue in the AppSec world. Thanks for today’s discussion, everyone.

 

Gabe (38:56.467)

Definitely. Great points today.

 

Stuart McClure (39:07.719)

Till next time. Stay safe out there.

 

Gabe (39:11.995)

Thanks, everyone. Take care.

 

Ben Denkers (39:12.962)

Goodbye, everyone.
