Season 1  |  Episode 8

Stu McClure, Chetan Conikee, Chris Hatter, and Ben Denkers return for Episode 8 of Hacking Exposed, Qwiet Edition! The focal point of the conversation is the unexpected firing of Sam Altman from OpenAI, the subsequent departures of high-profile co-founders and researchers, and the contentious atmosphere that ensued. The panel explores the implications of this leadership upheaval and delves into conspiracy theories versus facts about the potential in-play chess game involving large tech enterprises.

In this episode:

(00:35) The Drama of AI: Leadership Shifts at OpenAI

  • [00:00:35] Stu reveals the startling news about Sam Altman’s departure from OpenAI.
  • [00:02:21] Pondering unprecedented board decisions and their impact on the tech giant.
  • [00:03:23] Analyzing the new structural setup after key resignations.
  • [00:07:00] Discussion of ‘capped profits’ and the unforeseen effects on OpenAI’s financial and operational strategy.
  • [00:08:00] Speculation on Microsoft’s involvement and the future of AI leadership.

(19:00) Cybersecurity Concerns: The Sunburst SolarWinds SEC Suit

  • [00:19:00] The panel addresses the SEC lawsuit against the former SolarWinds CISO and the implications for the industry.
  • [00:20:29] Chris Hatter offers insights on how the SEC’s decisions may affect the CISO role.
  • [00:25:00] The conversation shifts to the newly outlined four-business-day breach disclosure rule and its implications.

Resources for this episode:

The Firing of Sam Altman from OpenAI:

  1. Interview: Sam Altman on being fired and rehired by OpenAI (The Verge)
  2. A timeline of Sam Altman’s firing from OpenAI — and the fallout (TechCrunch)

Cybersecurity Concerns: The SEC Suit and Disclosure Rules

  1. SEC Press Release on Charges Against SolarWinds and CISO (U.S. Securities and Exchange Commission)
  2. SEC Charges Against SolarWinds CISO Send Shockwaves Through Security Ranks (Dark Reading)
  3. SEC Litigation Complaint (PDF) (U.S. Securities and Exchange Commission)


Episode Transcript

[00:00:00] Stu McClure: Well, welcome everybody. This is the Hacking Exposed podcast. And, uh, this is the Qwiet edition. I have my trusted cohorts with me today. We’ve got Ben, we’ve got Chris, and Chetan. Thanks, guys, for joining. Man, I don’t know if anybody got any sleep this weekend, but, uh, we, we, we certainly had a lot of activity in and around a really hot, hot topic here in the AI space, at least.

In and around OpenAI and Sam Altman. Let me just, uh, recap it for everybody real quick. So on Friday, I was actually chatting with some folks internally, and they had leaked the information to me in that chat, uh, that Sam Altman had been fired. And I thought it was some sort of a joke or a [00:01:00] meme of some sort, so I jumped on quick.

Sure enough, I mean, within the hour, a news article had come out that the board had, um, fired Sam Altman. Now, let me read this because I think it’s important what they said here. So, it was a deliberate review process by the board which concluded that he was not consistently candid in his communication with the board, hindering its ability to exercise its responsibilities.

The board no longer has confidence in his ability to continue leading OpenAI. Now, I mean, this is really interesting, because I don’t know if this is, if we have a precedent here. Certainly people get ousted as CEOs all the time by boards, but the stated reasons tend to be, you know, rather innocuous, you know, they’re retiring, they’re spending more time with their kids.

Uh, I don’t know. There’s always some storyline that you can, you can tee off on and sort of follow on. Now, what was really interesting, of course, was all the [00:02:00] scuttlebutt afterwards. So almost immediately, uh, one of the other co-founders, Greg Brockman, announced his departure. Um, then three of their senior researchers resigned.

Uh, that’s on Saturday. Of course, later on Saturday, basically a deadline was set to reinstate Altman and Brockman. Uh, that sort of came and went. I love the picture, I don’t know if you all saw it, of Sam on site going back to talk to the board and having to get a guest pass in the lobby, because his credentials obviously didn’t work anymore.

And then, um, really, leading up to today, which now has Sam and Greg at Microsoft. And it feels like a sort of special-projects sort of scenario, where, you know, there’s some problem children or something that go into the special projects. Or it could have been a grand conspiracy designed and planned from day one by Microsoft, with their investment in OpenAI, or somewhere in between.

I mean, where do [00:03:00] we, where do we go here, guys? I mean, there’s so many different places to unpack this. A lot of other details, obviously I could have involved. I’m sure Elon Musk is just sitting there laughing, laughing all day long. So. So what do you think guys? What’s going on here?

[00:03:14] Chris Hatter: This whole thing has been like, uh, a reality TV show for Silicon Valley.

[00:03:22] Stu McClure: Drama. Yeah.

[00:03:23] Chris Hatter: Lots of drama. It’s in a cluster of, uh, it’s just been a, a, a disaster. It’s literally been like a reality show playing out through, uh, X accounts and, uh, and news articles and


um, you know, after Sam gets fired, the next thing you know, he’s going to get rehired. The next thing you know, they hire another, uh, CEO, a former Quich CEO.

So It’s been chaos. It’s been hard. I’ve been interested, but it’s been really hard to understand the ground truth and what’s actually going on here.

[00:03:54] Stu McClure: I get really confused, because Greg, you know, Greg… just to get into some details here, let me just recap. So the [00:04:00] board: there are apparently five board members, best I can tell.

So there’s Greg Brockman, who’s one of the co-founders, and he was the chairman of the board, by the way. I don’t know if you knew this. Ilya was a co-founder, is a co-founder. Then there’s Adam D’Angelo, who’s CEO of Quora and, uh, used to be Facebook CTO. Uh, Tasha McCauley, who’s an Adjunct Senior Management Scientist at RAND.

And then Helen Toner, who’s with the Oxford Centre for the Governance of AI. That’s it. No board seat for Microsoft, it’s well known. Um, all right, so you’ve got two founders on a board of five. And Greg… is clearly a part of it. He’s the chairman of the board. I mean, it’s not like he’s in a passive observer position on the board. He somehow gets, I don’t know, convinced to let him go.

I don’t know how.

[00:04:54] Chetan Conikee: If you conduct a retrospective, right, it was all written in the tea leaves. Let’s go back to when [00:05:00] OpenAI was conceived, right? It was when Elon was a part of the crew. Their mission was: not for profit, not to be controlled by corporations. And at some point, compute went out of control. They couldn’t bear the costs.

And then, for the first time in history, they created a new structure called ‘capped profits.’ And if you read the docs that they framed for capped profit, it has never existed before. So what we’re seeing play out is literally a tail-risk, black swan event, where, um, often you have the situation where you think everything works, and you design for everything working, and then something just falls apart.

And here’s the interesting thing with capped profits. If you read the docs, the doc says that a board is deputed, the board can take action without consent of the chairman, and the board can fire the chairman. And every investor… [00:06:00] Microsoft just came in to fill that gap of compute. So every investor is like a donor, not a seat member who can influence and wield power.

[00:06:12] Stu McClure: So explain what that is, though, Chetan. So talk, what is this capped profits thing?

[00:06:18] Chetan Conikee: Capped… they found themselves in this odd situation. Like, uh, a not-for-profit almost works great when it can self-sustain. But now you’re running this near-to-sentient machine that is going to chew up compute, like gobble up compute, not just chew up.

They ran into that, and at some point they realized that they have to bring on board someone who owns compute. And likely there are only three or four gravity centers: you have Google, you have Facebook, and you have, uh, you know, Microsoft, and Nvidia as well. And for some reason, they’ll probably gravitate towards, uh, you know, I [00:07:00] would say, safe controls. And maybe they picked Microsoft at that point. Now, if Microsoft comes on board and commits maybe a billion dollars and more, uh, some structure should be created, because, you know, Microsoft’s going to look for some, uh, leverage in this relationship. So, for the first time, literally, this is like a blurred line between, um, you know, being not-for-profit and also being capitalist.

And I feel like this capped profit is somewhat of a structure of that sort, not tested before.

[00:07:35] Stu McClure: That’s interesting. I have to dig into that a little bit more. But if you, if you read the blog post from Satya, it was as if they were completely unaware that all of this, this dynamic was happening until minutes before the public announcement.

So, I don’t know, this is either a super elaborate Illuminati sort of exercise, uh, or, honestly, it’s just bumbling buffoonery, or, I don’t know, [00:08:00] maybe something in between.

[00:08:01] Chetan Conikee: Stu, it’s kind of true to a certain extent, pleading the fifth, because the board took the decision of nixing the chairman, firing the CEO, and Microsoft was not an observer, so they just, like, maybe heard of it at the nth minute.

[00:08:16] Ben Denkers: I just don’t think they expected the ramifications of, like, everybody, everybody quitting, right, and then tried to walk back that decision, or at least talk about what that possibility would be. I mean, to me, that’s, you know, that’s telling in and of itself. And then, you know, ‘not being candid with the board’… like, I really want to know what that conversation was. Right? Like, you know, I mean, there’s, yeah, to me, that could mean a whole…

[00:08:39] Stu McClure: There’s a lot of different things. Let’s just start with details.

[00:08:42] Chris Hatter: And this is what we’re talking about. Why, why did this happen? Why? Right? So even if you nixed the chair and Sam… I mean, that’s multiple co-founders out. You still have a co-founder who was apparently involved in the decision. Uh, the, the [00:09:00] release said ‘not candid with the board’ or something along those lines. But then there’s tons of speculation out there, uh, around, well, you know, maybe they’ve invented something that is unsafe or potentially unsafe. Um, you know, there’s all kinds of speculation out there. This just seems too… this seems bizarre, right?

Because OpenAI has such tremendous momentum. Uh, you know, they grab so many headlines. They’ve become one of the most consequential companies in the world. And then to just get rid of the…

[00:09:34] Ben Denkers: It’s like that meme with the bike, where the guy’s riding the bike in one frame, and in the next frame he’s taking a stick and putting it through the spokes, right? And then obviously in the third frame the guy’s falling over on the bike. That’s what this whole situation reminds me of. It’s like, well, we don’t like what he’s saying, telling us or not telling us, we’re firing the CEO, right? And then, you know, they realize, wait, that was a terrible decision, so let’s try to [00:10:00] want him back, let’s, you know…

[00:10:01] Stu McClure: But, but again, I mean, that’s, like, boom, bumbling buffoonery. Then you’re leaning towards bumbling buffoonery, because, I don’t know, look, unless it’s abject fraud, financial fraud that, that they are getting non-participation in from the, uh, the CEO and the chairman and the other founder, I don’t know why you would ever bypass that whole structure that you put in place as a company to manage the health of the organization. So for, for this to happen, you either have to have foundational, fundamental fraud, or you have bumbling buffoonery, because anything else you can manage.

[00:10:41] Ben Denkers: Yeah. But if you had fraud, would you then reverse what you just said or the action that you just tried to do to bring back the CEO?

[00:10:49] Stu McClure: Probably not. No, that’s right.

That would sort of eliminate that as a potential, uh, explanation and outcome.

[00:10:58] Chetan Conikee: Yeah, I mean, to your point, I feel it’s [00:11:00] bumbling buffoonery, like you put it, because the situation is, I think, something was brewing. There was Dev Day, where, uh, you know, Sam Altman went on stage. He probably… maybe there was some internal conflict.

Conflict of ego, conflict of ethics, where Ilya is all about, you know, okay, ethical AI and AGI, right? And right from the beginning, he’s been sounding the horn. Like, very honest guy. If you go back to the history of Ilya, right, he’s always been on point, saying: you abuse AI, then it could lead to all sorts of, you know, catastrophes.

And then, on the flip side, you have Sam, who’s charismatic and possibly looking at capitalizing on, on being the first mover in the market. So something was brewing long enough that, you know, the ethical person woke up and said: let’s do it. All right, well, let’s play.

[00:11:52] Chris Hatter: All right. So let’s just say it was an AI ethics or AI safety concern. You know, [00:12:00] that leads me down the AGI path. I mean, how, how close are we actually to sentient AI? How likely would, um, internal conflict over a safety issue like that have been?

[00:12:12] Stu McClure: I think it’s very likely… I, I don’t know if it’s probable, but it’s very likely that GPT-5, or some variant that they have as the next version, or maybe a version beyond, is, um, what you would call, by all, uh, true measures of AGI… is, is sentient, if you will. And I think it’s potentially, probably likely, that, um, somebody like Sam was at a place of wanting to continue to advance, perhaps faster than the board was comfortable with.

I, I, I accept all of that actually as potential. What I don’t accept is how you go about it like this. I mean, I, I just don’t, I can’t possibly fathom the logic chain here unless there’s, you know, obviously there’s probably tons of stuff we have no clue about and that they’re trying to [00:13:00] manage without sharing it.

For all kinds of legal reasons. So we can’t really interpret too much beyond this, but to me, this is a clear case of, wow, uh, this could have been handled a lot better and okay, now that they’re over in, in Microsoft. The question is, well, is there some of this that was sort of quasi planned or at least anticipated and, uh, you know, what, how much, uh, conspiracy theory are we into now?

[00:13:29] Chris Hatter: Well, we had one before the show, uh, we were talking about this. Like, if you… in a nutshell, whatever Microsoft invested, they might find themselves in a place where that’s what they paid for OpenAI, because you have Altman and Greg over at Microsoft now. You have the entire company… well, the vast majority of the 500 people… and guess where they would go? Right? Probably Microsoft. Maybe over at xAI, but you’ll see power [00:14:00] concentrated under Sam and Greg at Microsoft.

[00:14:02] Stu McClure: And what will happen to OpenAI? That’s a really big question. I mean, they put, what, 10 billion into it? Uh, and, uh, I know others have invested a lot of money, Sequoia, for example. I mean, this is going to either be a huge egg on the face, or, um, they’re going to have to figure out a pathway forward to healing and, and health. I mean, I know that there’s the structure of this nonprofit scenario, which I don’t quite understand just yet. Do you guys understand that model?

[00:14:31] Chetan Conikee: It’s a first-time model. I think they just reacted to an event and created it. It’s not regulated, it’s never been tried and tested. So think of a new insurance structure being created, and you have your first hurricane, and you’re figuring it out, right? Who’s liable, who’s not? This is exactly what’s playing out, and I feel someone’s going to step in, some government body, and then put some policies and constraints on such structures. But this is almost a black swan event, [00:15:00] to summarize.

And to one point, uh, Stu, uh, if you have not caught the other set of news… this is, unfortunately, they’re all unfolding every minute. Apparently over the weekend, some of the top OpenAI customers literally were calling, uh, Anthropic and Cohere, who are the two companies that spun out of OpenAI because they had conflicts with Sam, uh, to move, because they’re nervous right now about whether OpenAI can sustain.

[00:15:28] Chris Hatter: Well, Microsoft has come out publicly and stated that they plan to be a partner of OpenAI for the, for the future, and that the relationship will change, et cetera, et cetera. But it’ll just be interesting to see how it plays out, or how they plan it out, in real time in front of us.

[00:15:43] Stu McClure: Well, then there’s xAI, right? I would imagine, uh, just chomping at the bit with Elon, um, to get into the ring. And a lot of his banter on, um, on X, if you will… he’s always involved somehow. It, it, it amazes me.

[00:15:59] Chris Hatter: It’s, it’s [00:16:00] quite hilarious. Elon’s a beneficiary, there’s no question. The, the… I mean, if everything plays out the way it looks like it’s going to play out, you’re going to have some of the, the world’s best AI engineers on the market. You better believe that Elon is going to be putting in bids. So I think he’s a beneficiary, as is Microsoft.

[00:16:18] Stu McClure: Yeah. It’s going to be really interesting. Of course. So this is it. This is a lot of real time happening here. So we’ll, uh, we’ll just see how it flushes out over the next couple of days and weeks with Thanksgiving coming up here. So, um, all right. Any other thoughts on, on poor Sam?

[00:16:35] Chetan Conikee: Good luck and best wishes.

[00:16:38] Stu McClure: Yeah. Hashtag save Sam and Greg. Although something tells me, I think they’ll be fine. All right, let’s move on to something a little more specific inside of cybersecurity. I’ve been hearing a lot of scuttlebutt around this stuff. In fact, I was at a, uh, an event last week. I was speaking up in the Bay [00:17:00] Area at an investor conference, and, uh, it was full of CISOs, uh, most of which, uh, I know or have had a long experience with, and each one of them came up to me and started talking about how nervous they are about, uh, the, uh, Sunburst SolarWinds hack SEC, uh, scenario, and what’s really happening there. And so let me catch everybody up to speed. Um, so the SEC filed, uh, just about a month ago, I want to say, a, uh, a suit against Timothy Brown, the, uh, CISO at the time. Uh, I don’t think he’s still there, is he? Uh, now that I… I just read something.

But at any rate… so, for fraud and internal controls failure. Okay, just to get the gist of it. So the parent company, SolarWinds, is a public company, and it went public right before a lot of this hack, the Sunburst hack, really occurred. [00:18:00] And as, um, they started to go on the road and pitch SolarWinds, of course, you have to disclose your current state of risk, for the whole company, forget about cyber. But cyber is also involved, obviously. So you have to state all this. And according to the documents, there was incomplete disclosure about the Sunburst attack in their 8-K filing, which was December 14th, 2020. All right. Uh, following which the stock price dropped 24 percent over the next two days and 35 percent by the end of the month.

So let me just read this, because I think it’s important for the audience. Okay, here’s the sound bite, here’s the quote from the, uh, from the filing: “Rather than address these vulnerabilities, SolarWinds and Brown engaged in a campaign to paint a false picture of the company’s cyber controls environment, thereby depriving investors of accurate material [00:19:00] information. Today’s enforcement action not only charges SolarWinds and Brown for misleading the investing public and failing to protect the company’s crown jewel assets, but also underscores our message to issuers: implement strong controls calibrated to your risk environments, and level with investors about known concerns.”

So at the end of the day… and Chris, I’m really curious to know your, your input on this, being a long-time CISO. So, um, at the end of the day, he said things weren’t all that bad, and they were far worse, right? Am I reading this right, Chris, in general? Uh, that’s the way I think about it.

[00:19:39] Chris Hatter: I mean, when I, when I first looked at that, my initial reaction was, this is absolutely concerning to me. And that’s how I would have… you know, uh, I was in a large, global, publicly traded company, and that would have been my, my initial reaction. But then I took a step back to think about this a little bit more.

Um, I think there’s two, two sides of this coin. On one side, the [00:20:00] SEC may have given, uh, CISOs a golden ticket to a much more senior role. Um, because I think, I think what this permits CISOs to do is now go to their legal department, go to their CEO, ask them questions about the reporting structure, uh, ask them questions about their ability to be protected. Um, really put CISOs in a totally different executive class, which would then elevate the second-in-command from a security standpoint at all, all of these organizations.

[00:20:29] Ben Denkers: That, that’s the way it absolutely has to go, just from a, from a cost perspective, right? I mean, because of all that risk you guys are incurring, you know, as a CISO… like, to me, that is really a golden ticket.

[00:20:41] Chris Hatter: I think, with my more pessimistic hat on, when I look at this… the pessimist’s, uh, view is that it’s severalfold. Um, I think every CISO of a large organization is aware of vulnerabilities at their [00:21:00] organization, and probably, you know, absolutely more aware… the most aware person in the organization. Um, because that’s what they’re charged to do.

Now, when you think about 8-K filings and you think about reporting, at the end of the day, the, the CISO rarely has final approval on what actually gets sent out. They go through, um, uh, this opportunity to kind of submit what, what the risks are, what they should say about the statement, and ultimately that goes through legal review and all these other reviews, and ultimately gets to the CEO, et cetera, et cetera. So they don’t have the direct capability to personally make a statement in public filings. Um, the other thing that needs to be considered here is the CISO’s ability to effect change in an organization. The best CISOs are most influential, um, when they don’t have people that report to them and can still execute.

The vast majority of CISOs in large organizations don’t have the execution people in their reporting line. They have to collaborate with the [00:22:00] CTO, the COO, business leaders. They have to get the buy-in, and they have to drive change without direct authority. So when this SEC filing came out, I started thinking, okay, well, how would a CISO be directly held accountable if they can’t have a more formal and direct influence over the work that gets done?

And that’s going to be interesting to see how it plays out. I think you’ll see, like, CIO-level jobs.

[00:22:29] Stu McClure: Yeah, no, I, I agree with the consequence. I mean, this could be very positive, right? Could be… you’re optimistic, and I love your optimism. Um, it, it could easily elevate the CISO position into the board, effectively. I mean, it’d be almost a mandatory position inside the board. But at a minimum, it can now cross departments, boundaries, uh, divisions, all that kind of stuff. I mean, if you’re going after the CISO here… okay, this is unprecedented, to the best of [00:23:00] my knowledge, right, um, by the SEC, um, to go after the CISO for, for not responsibly and accurately presenting the risks of the business. I mean, you’re talking about every single company out there. I mean, no CISO is, no CISO is knowledgeable about every risk in the organization, right? Full stop. Of the ones that they are aware of, there’s always context around it. Like, yes, but it depends on this and this and this working, or yes, but, uh, it depends on that and this. And so when you present all of the facts that you know of, you tend to present them in the light of the full context, at best. And sometimes you simply say, look, yes, but we got it mitigated or managed, or this, that, the other. So you’re not really going full open kimono every time you start to talk about your risk. And that, that I think is going to change the dynamic here. Our CISOs are going to have to get really comfortable with, [00:24:00] really, like, airing their dirty laundry and getting it out there.

I mean, I’ll tell you a funny story from back in the day, when I was, um, building the cyber practice at a major healthcare provider. And I was brought into the board, and I was brand new, fresh. And they asked me point blank: hey, okay, you know, you’re the cybersecurity expert, you got the cool books, talk about hacking, and you know how to prevent all the hackers.

So tell us how you’re going to protect us. And I came out point blank and just said: I can’t. There’s no way. Uh, it’s just, there’s too many problems, too many vectors of attack, too many layers, too many humans in the mix, and not enough technology, not enough process, everything else. What I can tell you is we’ll get better. I can track it; we’ll get better over time, based on a measure that we can all agree to. But we all know this problem is just never going to go away. So now what? What could we do? All right. So, you know, I think, for CISOs that are listening, you want to talk to your leadership about D&O insurance. If you [00:25:00] are an officer of the company, especially, that’s a no-brainer: you should be covered as part of the D&O insurance policy. Okay. Now, if you’re not effectively an officer of the company, then you should ask: what kind of coverage, what kind of protections do I have? I think that’s an inevitability.

Insurance. If you [00:25:00] are an officer of the company, especially that’s a no brainer. You should be considered as part of the D. N. O. Insurance policy. Okay. Now, if you’re not an officer effectively, you Of the company, then you should request what kind of coverage, what kind of protections do I have? I think that’s an inevitability.

Now, the other element of it: cyber insurance really doesn’t help you all that much here, um, because that’s usually in and around breach, um, uh, actual breach response and the impact to the business from a particular breach, not so much these claims of fraud or internal control failures, things like this.

I think these are the things, actions, that, you know, the CISOs can take out there. But I mean, this is inevitable, I guess. I mean, we’ve seen this how many hundreds of times?

[00:25:50] Chetan Conikee: Actually, you know, from a non-CISO perspective, uh, you know, somewhat counter to, to what Chris and you are mentioning, Stu: it, it [00:26:00] scares the pants off of me, this whole decision. And here are the reasons, right? I mean, some few CISOs that I know of, you know, they have, uh, they’ve chosen different roles now, because they’re nervous. Um, you know, they’ve moved on to a VP or another role, because they feel that the arc of liability is significantly high, uh, could lead to serving time, which is something no one wants to choose.

Um, the one thing that I just feel is, like, why not look at an adjacent field and model a similar structure? Which is: uh, you have doctors, doctors working in hospitals, and doctors having their liability insurance. And then, if there is a situation where someone’s going to be held responsible, um, you look at: is it the machinery or the surgical equipment that led to this, or did the doctor’s, you know, mistake lead to this? And then you take the decision.

I feel like, um, the CISO is not the doctor, because they don’t [00:27:00] have full control and leverage to look at how applications are created. At most, they get pushback when they ask for tools, they ask for things; you know, you have the CTO pushing back, then the, uh, the board pushing back, saying, oh, cut costs at all costs, right? And then, when the catastrophe plays out, everyone gets to just react to that tactical situation. And I feel like we are not fixing the fundamental problem with this. The fundamental problem is: the board is not treating security seriously, that’s problem one. Problem two is that the CTOs and the other staff are not cooperating and giving fair play to the CISO, and the CEO is not paying attention to security. I feel we’ll keep playing this game without solving it.

[00:27:49] Stu McClure: But yeah, listen, I can’t tell you how many dozens and dozens of boards I’ve presented to. You know, they’ve brought me in asking, well, what’s their responsibility, what do they need to know? And [00:28:00] there’s just not enough education of boards out there. I mean, it’s just, full stop. They all want to be responsible. I mean, no one wants to be on a board and be irresponsible or, you know, negligent in their, in their work and behavior. So they want to be, but we’re not giving… we’re not empowering them with what they need to know to be able to push back and ask and challenge. And, and so I think there’s a huge board education opportunity here as well. Not just, you know, look how the whole field works… like, how does a hack work? It’s really simple. I mean, you can explain it in 10 bloody seconds, every single attack out there, but we don’t do this. And, and we, we think that they’re just not sophisticated enough, but they’re super smart; they can figure this stuff out. Anyway, that’s my, that’s my appeal on it all. But listen, I know we’re running up on time. But yeah, go ahead.

[00:28:47] Ben Denkers: I was just going to say, you know, one thing that will be interesting to see as this unfolds, right, is how many times that message changed before the filing, right? So as they’re going through the lawsuit, I would really like to understand: you know, was the message, you [00:29:00] know, kind of to Chris’s earlier point, softened, right, relative to what was actually filed? And will you be able to see that, um, you know, throughout, throughout this process, and see how that unfolds?

[00:29:11] Stu McClure: Yeah, I agree. I feel so bad, though, for Tim Brown. I just want it publicly known that we all feel for him. He’s clearly simply the, the, the guinea pig test case for the SEC on crackdowns with all this stuff. And I feel for him. He’s a brother in arms with a lot of other people that have had to deal with this or are going to be dealing with it shortly.

And so I, we just feel for him. All right, listen, we’ve got a couple more minutes. I just want to really quickly catch up, since we’re on the SEC topic. There were new data breach disclosure rules that came out recently that I think are important to talk about. It’s this four-day, four-business-day mention. Um, I really want to get your thoughts on it.

[00:30:00] So, Item 1.05 on the Form 8-K will be due four business days after a registrant determines that a cybersecurity incident is material. And I love that one: “is material.” What the heck does that mean, right? Um, the new rules also add Regulation S-K Item 106, which will require registrants to describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats, as well as, uh, the material effects or reasonably likely material effects of risks from cybersecurity threats and previous cybersecurity incidents. So both. So, Chris, walk me through this, man. Like, what, what, what is this going to mean for CISOs, do you think?
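[Editor’s note] One mechanical detail worth pinning down: the Item 1.05 clock runs in business days from the materiality *determination*, not from discovery of the incident. A minimal sketch of that deadline arithmetic in Python (the function name is ours, and federal holidays are ignored for simplicity):

```python
from datetime import date, timedelta

def disclosure_deadline(determination: date, business_days: int = 4) -> date:
    """Rough sketch of the Form 8-K Item 1.05 clock: count forward
    the given number of business days from the date the incident was
    determined to be material. Weekends are skipped; U.S. federal
    holidays are ignored here for simplicity."""
    d = determination
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

# A determination made on a Friday pushes the filing deadline
# to the following Thursday, since the weekend doesn't count.
print(disclosure_deadline(date(2023, 12, 1)))  # 2023-12-07
```

The point of the sketch is simply that “four business days” can stretch to six calendar days across a weekend, which is still a very short window in an active incident.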

[00:30:46] Chris Hatter: I think probably not as much as your initial reaction to it. At the end of the day, CISOs at large organizations have a breach response process, they have a notification process. They exercise those [00:31:00] routinely, whether it’s with their insurance provider, with privacy regulators, you name it. We’ve had reporting requirements for a long time.

Uh, that’s one thing. A demonstration of your processes, that too is something people should already have in terms of how they manage risk. If you’re doing your job effectively, you’re communicating risk and working through risk treatment plans. These are things in your repertoire as a CISO and a risk function.

So I think it’s probably less impactful than most people think.

[00:31:33] Stu McClure: Well, four days, though, Chris. I mean, you used to just do it within the quarter that you had the event. Four days, that’s it now. I think that’s going to be really difficult.

[00:31:42] Chris Hatter: Four days is difficult, because in any major breach scenario, the amount of information that you’re receiving, triaging, and working makes it very difficult to report quickly.

[00:31:56] Stu McClure: I mean, in the fog [00:32:00] of war of these incidents, you can barely spell your own name in four days. You don’t know exactly what’s happened, much less how you’re supposed to be ready and prepared to reveal or disclose the event, or even determine if it’s material.

[00:32:14] Chetan Conikee: That’s an important point, Stuart. I just wanted to touch on what you and Chris were discussing. Now, an event happens, but you don’t have confirmed certainty, right? There’s some likelihood. So as a CISO, Chris might say, you know, I’m still waiting, it’s not confirmed. I’m seeing event one, event two, right? And at some point, he receives enough events to say, now is the time, right? So if it’s written so loosely, what comes to my mind is Charles Goodhart’s observation, what they call Goodhart’s law: when the measure becomes the target, it ceases to be a good measure.

And what I feel is you can game this darn thing. Suppose my earnings call is four days from now. [00:33:00] I’ll say it’s not certain as yet, I’ll let the earnings play out, and I’ll announce it three days after. So the situation is kind of not clear, and I’m not sure this rule is the solution.

[00:33:12] Chris Hatter: If I’m a CISO working through this problem right now: if you don’t have a definition of materiality, and you don’t have a designated executive or officer of the company who makes the final determination, you need to go work on that. You need to go work on that, because especially when you connect this issue to the SolarWinds one, the conversation Chetan was just having, around kind of punting, “nothing’s confirmed,” yada yada, that type of language gets used in security all the time. You want the maximum degree of certainty before really, you know, communicating about it.

You don’t have that luxury anymore. You need to have a way of determining materiality, and you’re going to have to assume a little bit of risk and be more open to communicating about some of that dirty laundry we’ve been discussing.

[00:33:55] Stu McClure: Yeah. All right. Well, thanks guys. I, uh, I think it was important to cover that a little bit.[00:34:00]

Um, all right. Well with that, I think that’s the end of our podcast this week. We’ll be back, uh, very shortly. Have a great Thanksgiving all. And, uh, enjoy a little downtime. Not too much, of course. And, uh, we’ll catch y’all later. Thanks!

About ShiftLeft

ShiftLeft empowers developers and AppSec teams to dramatically reduce risk by quickly finding and fixing the vulnerabilities most likely to reach their applications and ignoring reported vulnerabilities that pose little risk. Industry-leading accuracy allows developers to focus on security fixes that matter and improve code velocity while enabling AppSec engineers to shift security left.

A unified code security platform, ShiftLeft CORE scans for attack context across custom code, APIs, OSS, containers, internal microservices, and first-party business logic by combining the results of the company’s static analysis with Intelligent Software Composition Analysis (SCA). Using its unique graph database that combines code attributes and analyzes actual attack paths based on real application architecture, ShiftLeft then provides detailed guidance on risk remediation within existing development workflows and tooling. Teams that use ShiftLeft ship more secure code, faster. Backed by SYN Ventures, Bain Capital Ventures, Blackstone, Mayfield, Thomvest Ventures, and SineWave Ventures, ShiftLeft is based in Santa Clara, California. For information, visit:
