The Cybersecurity Seatbelt: A Cyberpsychologist’s Take on Moving from Awareness to Preparedness
Season Three · Episode 2


What if the biggest vulnerability in cybersecurity isn't the technology—but the people behind it? Dr. Erik Huffman, a pioneer in cyberpsychology, joins The Segment to break down the human factors behind digital attacks. 

Transcript

Raghu Nandakumara  00:11

Welcome back, everyone, to another episode of The Segment. It is a huge honor and privilege to be joined today by an award-winning educator, entrepreneur, speaker, and cybersecurity researcher. Dr. Erik Huffman, welcome to The Segment!

Dr. Erik Huffman  00:28

Hey. Thank you for having me. I appreciate it. 

Raghu Nandakumara  00:30

So, I'm not going to be able to do justice to your wonderful background. All I'm going to say is that this is the first time I've ever had a conversation with a cyberpsychologist. I'm intrigued as to what that is about. I have a small idea, having watched some of your YouTube videos. So, Dr. Huffman, why don't you give us a bit of your background?

Dr. Erik Huffman  00:52

Yeah, so am I starting all the way from the beginning, or, you know, starting from the cyberpsychology, the interesting stuff?

Raghu Nandakumara  00:59

You know what? I think it's important to start from the beginning because that's how we figure out how you got to cyberpsychology, right?

Dr. Erik Huffman  01:05

The quick abbreviated version is, after I received my bachelor's degree in computer science, I started working for Walgreens, driving from store to store, fixing printers, fixing networks, fixing computers, and things like that. Enjoyed it! You know, the reason I got into computer science is because of my dad, to be honest. When I grew up, just like a lot of people, you don't have a lot, and I got to see my dad go to school, and he really dug into these computers. Then I saw our life change. I saw us go from a really small apartment, a small house, to a large house. So I'm like, "Hey, this is pretty cool," and I decided to follow that same path. So I got my bachelor's degree in computer science, started working for Walgreens, and things like that. Then I received my master's degree in management with a concentration in IT management, and started doing some IT project management. I was of the mindset, just like everybody else: only dumb people get hacked. Like, if your username's admin and your password is admin, you deserve it. Things like that, just super arrogant. Like, hey, I'm going to lock everything down; nothing bad is going to happen. And then there was a data breach on my head. Then, three months after that, there's another data breach on my head. I was curious: if only dumb people get hacked, am I stupid? You know, kind of self-reflecting: am I stupid? And so I really started to look at the data on everything we did, because I know the technology we implemented helped. I know it. The firewall, the IDS system, the IPS system, all the technology we threw at this problem: I know it helped, but why aren't we seeing any of the results? And so I started some research, and I looked at the rate of technology and the rate of data breaches, and they look like they mirror each other. The more technology we implemented, the worse the data breach problem was. But still, in my heart, I was like, I know this helps. And so I started looking at the commonality, which is us, which is people. Which sparked a whole different thing, because I started looking at the psychology of people, the psychology of social engineering. And from there, we helped found this field of cyberpsychology. There are a bunch of different branches now, and some people do some fantastic research. Me, particularly, I look at digital social engineering in cybersecurity. In cyberpsychology, there are also people looking at mental health for kids, cyberbullying, and things like that. But me, particularly, I look at digital social engineering, cyberattacks, and things like that. So I went from driving store to store at Walgreens to teaching. I started teaching at a couple of colleges, higher education, and I loved it. Then telling the stories made me miss the reason why those stories existed. And so I was like, "Hey, I've got to get out of the classroom. Let me get back into the trenches with my cyber brothers and sisters." And that leads me to now, doing a boatload of research to continue the cyberpsychology push while still working in the industry.

Raghu Nandakumara  04:45

That's amazing, and you're absolutely right. The assumption is that, oh, careless people, stupid people get hacked, get compromised, right? But the truth is that some of the most well-funded organizations with the most mature cybersecurity programs are still the victims of an attack. And you've said that you were in the middle of two cyberattacks. How did you feel in the middle of those incidents, as you discovered them and were in the midst of them?

Dr. Erik Huffman  05:22

Honestly, the feeling is scared, and it's like, how? What did we do wrong? You've got to calm yourself down. You've got to calm everybody else down. Because immediately it's like, who did what? And it may not be that you did anything wrong. If you're a small to medium-sized business and a nation-state wants to get you, most likely they're going to get you. You may do everything right and still get it wrong. And so, during those incidents, it was just like, "What did we do wrong? Who missed what? Who did what?" That was the feeling, and I had to grow out of that, being a junior-level senior person. Because, you know, I was new; I had received my master's degree and been in the industry for quite a while, but I was new to my position. When you're a junior-level senior person, you've got to understand how to lead, and that it may not be anyone's fault. That knee-jerk reaction of "what happened," followed immediately by "who did what," was toxic. And the lesson learned for me was: focus on the thing. Focus on what happened. Focus on how you could prevent it from happening again. I fell victim to that. It was just, who did what? This shouldn't happen; we were 100% secure. Actually, 100% secure doesn't exist. So don't think "who did what." Just think about what happened, and then after that, in the after-action, or what some people call a post-mortem, figure out who did what. What happened? Did we do anything wrong, or were we just caught by a vulnerability that we didn't even know existed?

Raghu Nandakumara  TIMESTAMP

So you used a very powerful term there: "toxic." In that it was more about attaching blame to an individual or a team versus understanding how the problem manifested in the first place. I don't know how many years ago these incidents were, but do you see that that approach has changed in organizations now, and that there is much more of a focus on what led to the incident versus who caused it?

Dr. Erik Huffman  TIMESTAMP

No. The unfortunate answer is no. A lot of times, even when I consult for organizations, it's who did what, and the focus is on who did something or what negligence existed versus what actually happened. And we as a public, as a society, do that a lot to these organizations too. There's a data breach at some major enterprise, and immediately it's like, well, man, they're obviously doing something wrong because there's a data breach. Even cyber professionals play armchair cybersecurity engineers: "Hey, this wouldn't have happened if you would have done this," knowing full well 100% security doesn't exist as long as things are connected to the cesspool of the internet. As long as that connection exists, you're not 100% secure. So I don't think that has changed yet, especially from a business perspective. A lot of times, if businesses think the public perception is going to be bad for business, or they're going to take a stock hit or something like that, it immediately goes to who did what, and then it comes back on the CISO. And then it's like, well, obviously you're not doing your job right. And who knows what that vulnerability was? It could have been a zero-day. It's like, hey, no one knew, and we're patient zero out here. We did everything right and still got it wrong. That is an unfortunate reality when it comes to this industry. And I'm a very humble man; that's how my mom, my grandma, and my dad raised me to be. And I'll be honest with everyone and say, yes, I fell victim to that. When there were back-to-back data breaches on my head as a security leader, I turned to, who did what? This shouldn't happen. Well, it shouldn't happen, but it did happen. And it's like, what vulnerability existed? We all may learn something; I may put something out to the world, and everybody learns something. At that point, who are you to blame? Sorry, you don't know everything about technology and every bit of technology that could potentially be exploited. It's not a realistic expectation from a leader to their employees, and it should not be a realistic expectation for everybody else, the CISO, or the cybersecurity engineers. Not to say every data breach should be excused. No. But you just can't immediately go there. If there is negligence, figure that out after you fix the problem. Don't point towards negligence first and then fix the problem. Fix the problem, then look for negligence, if it exists.

Raghu Nandakumara  10:30

Absolutely. I think that's such an important point to make, because the first reaction shouldn't turn into a CYA exercise. Cover your backside first? That shouldn't be it. You also mentioned the stress for CISOs, and the constant fear they live in of "what's going to happen if there is a breach on my watch?" Particularly because it's like, well, I'll get asked: you've got all this funding for your security program, and yet this happened. We hear so often about CISO burnout; this must be a key reason for that.

Dr. Erik Huffman  11:13

Yeah, it absolutely is. In the last position I was in, I was the highest-ranking security person in the organization, and it is a path to burnout. You're constantly worried about all kinds of security threats while the business wants to speed up, because, you know, the speed of business can't slow down, which is totally understandable. But as the speed of business continues to roll faster and faster, the higher the risk. So you're trying to balance the risk out without slowing the business down, but at some point you need to tell the business, "Hey, you need to slow down." And you'll never win that argument. It is very, very difficult, because you're still worried about any breach that might happen, because you may be the one blamed for it, unless you have a mature organization that will tell you, "Hey, actually, due to this, it's probably not your fault, but we still need to continue to invest in security." But that's a level of maturity we don't see from a lot of organizations. It's a level of maturity that doesn't exist in a lot of boardrooms yet: to say to the CISO that if there's a data breach, it may not be due to negligence. It may not be the CISO's fault. It may not be the cybersecurity engineer's fault or the security director's fault. It could be something the entire industry is learning at one time, or it could be a zero-day that we just didn't patch quickly enough, because the zero-day was released on day one, and on day three it happens to come to you. Especially in an enterprise setting, things like that exist. I'm not a CISO apologist, but the level of scrutiny is obviously not fair given the sophistication of some of these data breaches. If you're a mid-sized organization and a nation-state goes at you, you're not investing enough money. I will tell you that: you're not investing enough money to protect that CISO. That CISO is probably going to get exploited. And if you blame them, you can hire someone else, and the same thing is going to happen to them. Really, if the resources are strapped, hire whoever you want; it's not going to save that organization from what's going to happen. So you've got to continue to invest in security, but understand 100% secure doesn't exist, and understand that some of these breaches are just next-level good, whether you're prepared for them or not. And if you want to prepare for everything and really minimize risk, that investment had better be pretty high. It needs to be pretty high to give your security engineers a chance. Don't have one, two, three of them when your company is worth $50, $60, $100 million and say, if anything bad happens, it's on y'all. I don't know many people good enough for that, and they're probably not paid enough for that to be real. A 24-hour SOC with three people? I don't know, man.

Raghu Nandakumara  14:27

There's something in what you said. It's not just about the board fully understanding the importance of a security program. I think we've absolutely come to the point where the board, those in charge of making the decisions, clearly understand the benefit of a well-developed and well-executed security program. But what you're saying is that that in itself is good, but the board really needs the maturity to understand that, as much as they invest in security, there is still a significant chance that an attacker will compromise their environment. And they shouldn't then react saying, "Hey, what the hell?" That shouldn't be the reaction. It should be, "Yeah, we get it. We get that the probabilities are still stacked in their favor."

Dr. Erik Huffman  15:19

Yeah, exactly, because every conversation in the boardroom about progress for the business will most likely, I won't say all the time, but most likely, evolve or develop a new risk. And depending on what that risk is, as a board, or as a CEO, COO, something like that, you need to understand that when you have these conversations, those risks emerge, and are you going to mitigate them or accept them? Some of the security conversations that happen, and some of the frustrated CISOs, are just due to accepted risk. The board, the CEO, the COO, and the CFO should understand that, hey, if this happens, it happened because we accepted the risk of it happening. And if you accepted the risk of it happening, and it happens, then it's just damage control. If the risk existed, and we thought the impact, the consequence of that risk happening, wasn't going to be so severe, and it happens, let's make sure it's not so severe, and then let's revisit the risk. Do we continue to accept this risk? Most likely the answer is, "No, we need to pivot. We need to do something different." Because we didn't think the probability of that risk happening was so likely, or it was fairly likely and it just happened to happen so quickly. Do we think it's going to happen again? Is this going to be an annual thing? A quarterly thing? Hopefully it's not a weekly thing, but it might be once every five years. After you do your post-mortem and your research on it, then accept that risk, and hold the CISO responsible for the risk as accepted: "Okay, if this happens, please protect us from it getting worse." Not "Please protect us from this risk we accepted ever, ever happening." Then you just need to mitigate the damn thing. Apologies, you just need to mitigate the thing.

Raghu Nandakumara  17:26

Absolutely. And just on that last point you made, hearing you talk so passionately about this: if you accept a risk, you have basically done the due diligence and said, "I think my odds are in my favor, so I'll take that; I'm not going to put more money in." But you've got to realize that the risk still exists. By accepting it, you haven't made it disappear. So if that risk is exactly the thing that gets exploited, you can't say, "Well, I thought we defended against this." You accepted it, right? And to that point, our Chief Evangelist, John Kindervag, says that to him, risk is danger. Rather than accepting risk, you've got to do the best you can to mitigate as much as possible.

Dr. Erik Huffman  18:20

Exactly. So security-mature boards don't just understand security. Well, they do, but they understand that security is risk management. For every security professional, the job is risk. Whether it's using technology, implementing something, or compliance, it's risk management. And so the board should understand, and your business development teams should understand, that it's all risk management. "Hey, we're going to do this. Do you understand this risk? Yes." And then they'll have, hopefully, a risk register populated out, and they know what they're accepting and what they're trying to mitigate. And then the security team, yes, they should be held accountable. You're held accountable for the risks on the register that you chose to mitigate. Like, "Hey, we said we mitigated this, and we found out y'all are still running Windows XP for some damn reason." Yeah, by all means, hold the team accountable. But if a risk was accepted and then that risk occurs, that just means damage control. Damage control, and make sure it doesn't happen again. I applaud all the men and women leading organizations with that type of relationship, because then the security team and the CISOs are not facing as much burnout. They understand the board understands: hey, if something bad happens, I'm not out on the street in this hard job market.

You know, they understand that, hey, things can happen, and we all just go on damage control. But certain things absolutely cannot happen. If the breach happened and it's absolute negligence, yes, you're held accountable for it, and it's fair. It is a fair argument. The username was admin, the password was admin? Yeah. Bad security team. Absolutely bad security team. Or you had unpatched software that should have been patched a year ago, or two or three quarters ago; the security team probably should own up to that one if your processes are broken. But if it turns out it was due to a business need, we implemented this, or we were not patching as quickly as we wanted to because we don't have the team that we need, and we told you what risk existed, and you said, "Hey, we can be a little bit behind on patches because we need to invest more money in this," and something bad happens, then it's like, well, we need to revisit this. The risk is too high; the consequence is too high, and the probability is too high. And that conversation, that is where all businesses should be. That's the dream for businesses.
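
To make the risk register Dr. Huffman describes a little more concrete, here is a minimal sketch in Python. The field names and the probability-times-impact scoring are illustrative assumptions, not any prescribed format; the point is simply that every risk is explicitly marked as accepted or mitigated, with an owner, so accountability lines up the way the conversation above suggests.

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"      # board signed off; if it occurs, response is damage control
    MITIGATE = "mitigate"  # security team is accountable for the controls

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    probability: int   # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (minor) .. 5 (severe)
    treatment: Treatment
    owner: str

    @property
    def score(self) -> int:
        # Simple probability x impact scoring, common in risk matrices
        return self.probability * self.impact

# Hypothetical entries echoing the examples in the conversation
register = [
    RiskEntry("R-001", "Patching lags a quarter due to limited headcount",
              probability=4, impact=3, treatment=Treatment.ACCEPT, owner="CISO"),
    RiskEntry("R-002", "Legacy OS (e.g., Windows XP) still on the network",
              probability=5, impact=5, treatment=Treatment.MITIGATE,
              owner="Security Engineering"),
]

for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.risk_id} [{r.treatment.value}] score={r.score}: {r.description}")
```

The design choice that matters here is the explicit `treatment` field: when an accepted risk materializes, the register shows the board signed off on it, which is exactly the conversation Dr. Huffman says should replace the hunt for negligence.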

Raghu Nandakumara  23:43

That's an awesome summary of a whole collection of points, really about how the business on one side and the security organization on the other should work together to deliver the best outcomes for both sides. So thank you for that, Dr. Huffman. I want to move on. We've kind of skirted around this, but you're here because of your work as a cyberpsychologist. Give us a really simple definition of what cyberpsychology is.

Dr. Erik Huffman  24:14

Cyberpsychology is a mix of cybersecurity, the psychology and sociology of human behavior, and neuroscience. Plain and simple: you put those three things together, and you end up with cyberpsychology.

Raghu Nandakumara  24:30

Okay, I get it, I understand that. So, why is it so important today?

Dr. Erik Huffman  24:36

Because over 90%, I think it's between 94 and 97%, of data breaches involve human behavior, involve the human element. So even earlier in this conversation, we were talking about probably 2-3% of the problem. Not to say technology doesn't matter; technology matters a lot. But we were talking about 2-3% of data breaches. The other 94-97% of data breaches involve human behavior, human error, or social engineering from some perspective. When I started my research, I polled a group of over 300 hackers, self-identified. They didn't tell me if they were white hat, black hat, gray hat, whatever. I think about 93% of them said they start with humans before they start with technology, because they would rather get you to give them your username and password. It's not even hacking at that point. It makes the job way easier for a threat actor.

Raghu Nandakumara  25:40

Yeah, absolutely. And I think that last point is probably the starting point: attackers don't want to work hard. They want the easy path to success, not the hard path. Well, most of the time, anyway; it's about the easy path to revenue.

Dr. Erik Huffman  25:57

Yeah, oh, definitely! Going toe-to-toe with a patched firewall, that's an insane task. Not many people on the planet can do that. So when you're talking about, all right, you have a fully patched IDS system, a fully patched firewall, a fully patched switch and router, man, the people that can break into that, that's 1% of 1%. It is very, very difficult to do. So if you're fully patched up, that really limits how many people can exploit you. But if I can get you to contribute to your own data breach, oh man, that's love.

Attackers don't want to work hard, because technology's evolving, patches are coming out. Vulnerabilities exist, and then they go away, but you're a target, and the longer it takes, the harder it can become. But if I can get you to talk to me, if I can get you to contribute to your own downfall, it makes my job a lot easier. And I can get a lot more data if I become a privileged user, if I can become you in your system. So most hackers start there. They start with you, they start with me, before they end up saying, "Hey, let me go toe-to-toe with Microsoft or Apple or Cisco or Palo Alto." They're not having it, and they don't want to do that. For the most part, most hackers just want the data as quickly as they can, as easily as they can.

Raghu Nandakumara  27:36

And that's exactly why so much of the security awareness training that we all know and love is so focused on almost addressing that human tendency to be curious. The education is around: how can you validate that a website you're going to is trustworthy? How do you avoid clicking on that link in an email when they're offering you tickets to the Super Bowl for $5 in the VIP suite? Despite all of that, we still click. We still go to that website we shouldn't go to. We still give our details on a phone call when the person is being our best friend. So why, despite all of this education, is it still not working? Why haven't we got to a point where we've stopped ourselves?

Dr. Erik Huffman  28:30

Yeah, my honest opinion is that we've spent so much time on cybersecurity awareness that we haven't even explored cybersecurity preparedness. We've tried so hard to make people aware of the problem that we haven't prepared people for the problems that exist. At this point, I will say most people are aware, so we should not just teach awareness. We need to start moving towards preparedness. Digital social engineering, phishing, is nothing but malicious marketing. And every business knows marketing works; that's why we invest billions of dollars in it. Marketing works. So malicious marketing, why wouldn't that work? Because it's preying on the human. It's preying on the principles of influence for that human. We all have these principles of influence, and it just depends on how the attacker chooses to reach the person. Your principles of influence are different from mine, so the things you would click on are different from the things I would click on. And we haven't even started preparing people for the conversation that they, as a person, are going to be attacked. We've made people aware of technical threats. And on top of that, those technical threats sound very difficult to a person who's not into technology. Not checking email headers, that's easy, but when we start talking about backdoors and things like that, it sounds hard. But if we broke the problem down and said: what they're going to try to do is fool you as a person, and in order for you to help us with the security problem, we need you to not get fooled. What they're going to try to do is play on your principle of influence of liking, of reciprocity, of curiosity. That will then sound to a person, an average employee, a financial person, a marketing person, like, "Oh, I can get this. I can do this. I'm not a technical expert, but I can protect myself from these kinds of attacks." Then we start leaning towards preparedness. We start preparing people for the problem. Awareness, I don't think, is a waste of time. I think we should still continue to do it, but we need to go a step beyond it and start preparing people for the types of threats that exist, how those threats are going to target them specifically, and what vulnerabilities they have as a person, and really lean into that. Right now, it's very difficult to find a job, and people are scared of losing their jobs, especially in tech. Tech layoffs are going crazy. I guarantee you that if you send out a message regarding layoffs in a phishing campaign, people are going to click. It is not because they're stupid. It's that these people are scared for their jobs. They're worried about this. And those are the types of threats that we see are most successful. The days of the Nigerian prince are gone. People still send out things like that, but when it comes to targeting the person intentionally, people are scared, and they don't want to think twice. They just want to help the CEO out, because they're scared of the CEO, because they're scared to lose their jobs. That is a human vulnerability, and everyone has one. It's not a bad-person thing; even a good person is very vulnerable at a point in time like that.
And in order to, quote, unquote, patch that vulnerability, it might need to be a message from the CEO. If there was just a layoff, people need to hear it from the CEO, and they need to hear honesty. If you just laid off 20% of your company, I guarantee you the other 80% of your company is scared of the next layoff. So where does that phishing vulnerability, that digital social engineering vulnerability, tie to? Probably your name, Mr. or Ms. CEO. Because if a message goes out under your name to someone else, I guarantee they're going to listen and not want to question it, because they're scared for their job, which is human nature.
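
Dr. Huffman mentions in passing that checking email headers is the easy part. For anyone who has never done it, here is a minimal sketch using only Python's standard library: compare the From domain against the Return-Path domain and look at the Authentication-Results header. The sample message is entirely made up for illustration, and real verification should lean on your mail gateway's SPF/DKIM/DMARC evaluation rather than ad hoc string checks.

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical raw message imitating the CEO-impersonation scenario above
raw = """\
Return-Path: <bounce@attacker.example>
From: "CEO Jane Doe" <ceo@company.example>
Authentication-Results: mx.company.example; spf=fail
Subject: Urgent: wire transfer before EOD

Please process immediately.
"""

msg = message_from_string(raw)
_, from_addr = parseaddr(msg["From"])
_, return_path = parseaddr(msg["Return-Path"])

# A mismatched envelope sender or a failed SPF check is a classic phishing tell
if from_addr.split("@")[-1] != return_path.split("@")[-1]:
    print("Warning: From domain differs from Return-Path domain")
if "spf=fail" in (msg.get("Authentication-Results") or ""):
    print("Warning: SPF check failed")
```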

Raghu Nandakumara  33:09

Yeah, yep. The one little thing I'll disagree with you on is that I don't think the Nigerian prince has gone away. It's just that the prince is now armed with a large language model that can generate emails so targeted and so personalized that you take them for something else; you think they're genuine and get tricked. But I want to talk about human vulnerability and preparedness. What you said is that we've got really good at awareness, and I agree. You can see that it has gone beyond just the work environment; now kids are educated on cyber awareness pretty much from kindergarten, and it's really important. And you said we need to focus on preparedness. So my question is this: when everything is fair and lovely, we're able to make relatively rational and objective decisions. But as you know, attackers, hackers, criminals have profiled their targets in such a way that they strike when those targets are at their most vulnerable. So how do you build preparedness for a situation when you are at your most vulnerable? How can you still act in a way that ensures you're properly protecting yourself?

Dr. Erik Huffman  34:38

Yeah, that's a fantastic question, because that is the step organizations need to take. A lot of it is self-reflection, but a lot of it is understanding the environment. When it comes to understanding environments: if you were to go to a country you've never been to before, I guarantee you, if you have a backpack, you're going to be a lot more aware of what's going on around that backpack. If you're carrying a bag or a handbag of some sort, I guarantee you're going to clench that handbag a little tighter. However, when you're online, the physical environment around you doesn't change. In your digital environment, you may be in no man's land, some foreign, odd place, but because your physical environment around you doesn't change, your brain, biologically, doesn't see the threats the same way. Especially with a lot of organizations, which I agree with, doing work from home: if you work from home, you're way more comfortable. So your most vulnerable state as a person online is different from your most vulnerable state as a person physically. If you're comfortable, you're more hackable. Just like, if you're comfortable in your physical environment, you're more likely to leave your backpack somewhere and go back like, oh man, I totally forgot it. But if you're in a foreign country, you're probably not forgetting that backpack. Online, if you're comfortable, you're not thinking, "Hey, let me check this. Let me double-check this." Because as a human, if it's written language, the default voice you read in is your own, and because you're reading in your own voice, you're receiving all of these messages, all of these commands, in a friendly, trusted voice, because most people trust themselves; they like themselves. You're not reading it in the voice of the person that sent it, unless you see the name of a loved one or someone that you know; then you begin to read it in their voice. So as an attacker, I can get whomever to say whatever I want them to say, because of what I like to call human factor authentication. You see the name, then you see the message, and then you trust the message. You have to be able to check those things. So, as we go from awareness to preparedness, a lot of it comes down to understanding the environment and understanding the self. If you understand the environment that you're in, you're in a little bit better position. And if you understand yourself, how you receive information, and the vulnerabilities you have as a person, then you're in a better place. A lot of this started with me receiving a message from an imposter posing as my mom. I know my mom's voice: Southern Black woman, man, the most friendly voice in my head. So when I receive that information, I'm receiving it in her voice, and that is hard to say "nah" to. And AI is taking that and flipping it like crazy right now.

Raghu Nandakumara  38:11

Yeah, absolutely. In fact, for those of you who haven't heard Dr. Huffman tell this story, I really encourage you to go and watch the video of his TEDx talk; it's told in the most brilliant way. We'll put the link in the show notes. So when we're thinking about preparedness, being able to be prepared even in a time of stress and vulnerability, and those are difficult things to develop, how do we build a robust cyber preparedness program?

Dr. Erik Huffman  38:49

Yeah, so to do that, you really have to do two things: a threat appraisal and a coping appraisal. I encourage every person to do this on themselves. Not to say every organization needs to do it, but in an organization with high-value targets, you can do this with them. What is a threat appraisal? What vulnerabilities or threats exist out there for this particular person, and what risk exists for this particular person? And the coping appraisal is: what skills do they have, and how would they react if these risks happen to occur? So as you do your threat appraisal, you ask: how do I deal with this particular risk? How do I deal with this particular instance? What threats realistically exist, not just to the job, but to the person themselves? If you do it personally, you can be a lot more realistic, a lot more personal, about some of the threats that exist for you. Then, based on that, you do a coping appraisal: what skills does the person have, or what support system does the person need, in order to better prepare themselves for those risks? Doing those two appraisals will put you light-years ahead of most organizations, because these are psychological exercises people do when they face trauma or they're in therapy; most psychologists will tell you to do a threat appraisal and a coping appraisal to see where you are. As a quick example: if you were to do a threat appraisal, a question you might ask is, what could or should you do to reduce this particular risk? Insert whatever risk applies. What is your experience of this particular threat? Another question you can ask, off the top of my head, is, how do you avoid dealing with this particular risk? From there, you're getting a comprehensive understanding of the person and the risks that exist as that person works within the organization. If you want to go deep, whether you're doing one on yourself or asking a deep question of someone else during a threat appraisal: how do you know you're upset? And when you're upset, do you hide it from yourself? If they're realistic: how do I know I'm upset? Typically, I start clenching my fists, or my palms get sweaty, or something like that; there are some telltale signs for me personally when I get upset. Do you hide it from yourself and others? Yeah, typically I just hold that in. So if you're in a high-stress time, most likely you're not quite showing the stress, and you're hiding it from yourself. Now you understand that risk, that threat, for that person. And a coping appraisal for that same threat could be: when that threat occurs, how could you view that particular situation? It's called tri-layer thinking: it goes from what the threat is, to what you could do about the threat, to what the possibilities are. How else could you view this threat? The threat can't go away, so how else could you view it? Then you're giving the person tools to better assess and cope with the situations they may face at work.
And then the next one might be: how could you change your response to this issue? If you're changing your response to the issue, you're actually giving people tools to change their behavior. So rather than making you aware of the things that occur, like cybersecurity awareness does, I'm preparing you. I'm showing you what's out there for you, and I'm giving you the tools to prepare yourself for when it occurs. If the threat cannot go away, you need to understand how you view it, and how you can change your situation relative to that threat so you are better prepared for it. So if you're getting targeted emails, targeted messages, targeted calls, or AI deepfakes of particular people, you can better prepare yourself for when you see those things. There's a lot you can do. Coping appraisals and threat appraisals would be the absolute game-changer for organizations. And when you're picking threats, don't just pick any random threat; pick threats with value. There are a thousand threats for every person. We all know anything can happen, so pick threats with value. After you pick threats with value, help the person understand their perceived severity of the threat, because they may not see the severity, or they may think the threat is way more severe than it actually is. And if they think the threat is way more severe than it actually is, that's the Goldilocks zone for someone who is going to click on something or fall victim to social engineering, even though the threat is not that serious. Like the CEO: they're probably not going to fire you. You should not be scared for your job, but the perceived severity of that threat is all the way up there. Then there's perceived vulnerability. Cybersecurity professionals, we fall victim to that all the time: "I understand technology, so I am not going to fall victim to digital social engineering." That perceived vulnerability is insane. Yes, you will. Yes, you can. Because it's not a technology issue; it is a human issue. And then there are the associated benefits that come from all of this. The coping part is everything I explained. And one thing to add to the coping: based on your coping appraisal, what you're going to build is more self-efficacy. Because what we have found is that people who hate being micromanaged, people who are self-monitoring, probably you and I, I'm very self-motivated, I don't need to be watched, fall victim to social engineering much more than people who rely on other people. Because I'm out there by myself: just let me do my work, I'm good. And because of that, I'm relying on myself. I'm more isolated. And because I'm more isolated, I'm going to have to make decisions on my own. With social engineering, you're really going one-on-one with the social engineer, the threat actor. So you're more likely to fall victim to a social engineering attempt, just like I would. And with that coping appraisal, what you're going to get is perceived control. Can you control the situation? If you have accurate perceived control, and you can align all these things together and give control of the threat to the person, they're no longer looking at the security team as, "Hey, you're going to save me from every mistake I can make."
I absolutely cannot save you from every mistake, because if you start helping the threat actor and you give them the keys to the kingdom, there is nothing I can do to save you. So now you have accurate perceived control of the situation. Because right now, if your perceived control as a user is "the security team is going to save me, the spam filters are going to save me, we have all these security tools out there, so I don't have to worry about social engineering": yes, you do. That perceived control says you don't have control, and I promise you, you do have control. Sorry, that was a long-winded answer. I'm passionate about it.
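
To make the two appraisals concrete, here is a minimal sketch, in Python, of how the questions Dr. Huffman lists could be captured for one person and one threat. The structure, field names, and sample entry are assumptions for illustration only; the appraisals themselves are guided conversations, not software.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatAppraisal:
    threat: str                   # pick threats with value, not random ones
    perceived_severity: int       # 1..5, as the person sees it
    perceived_vulnerability: int  # 1..5 ("it can't happen to me" = too low)
    questions: list[str] = field(default_factory=lambda: [
        "What could or should you do to reduce this risk?",
        "What is your experience of this threat?",
        "How do you avoid dealing with this risk?",
        "How do you know you're upset, and do you hide it?",
    ])

@dataclass
class CopingAppraisal:
    skills: list[str]             # what the person can already do
    support_needed: list[str]     # what the organization must provide
    perceived_control: int        # 1..5; the goal is accurate, not outsourced

# Hypothetical entry for a finance employee targeted during layoffs
threat = ThreatAppraisal(
    threat="Urgent wire-transfer email impersonating the CEO during layoffs",
    perceived_severity=5,         # fear of job loss inflates this
    perceived_vulnerability=2,    # "I'd never fall for it" is itself a red flag
)
coping = CopingAppraisal(
    skills=["verifies sender addresses", "knows the callback procedure"],
    support_needed=["explicit no-blame reporting channel",
                    "honest CEO message after layoffs to defuse fear"],
    perceived_control=3,
)
```

The pairing is the point: the threat appraisal surfaces where perceived severity and vulnerability are miscalibrated, and the coping appraisal records what would move perceived control back to the person rather than the spam filter.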

Raghu Nandakumara  47:09

No, I love it. I think it's such a great answer, and it's really important. Again, to summarize: it's that shift from awareness to preparedness. Awareness is great, but it's basically useless if you're not prepared to act on that awareness. And I think you've just summarized the paths that get us to better preparedness.

Dr. Erik Huffman  47:35

Yeah, exactly. Being aware is like knowing that if you drive a car, car accidents occur. Being prepared is putting on your seat belt when you drive. I think everybody should be prepared for an accident. Obviously, you want to have your seat belt on, just in case something happens. And preparedness from the automotive industry is, hey, we have airbags and all these things; the car is designed to be safer if something were to happen. And you put on your seat belt; you're prepared in case something happens. We hope it doesn't, but you're prepared for it. Right now, we don't have a cybersecurity seat belt. We just have a whole bunch of videos and a whole bunch of awareness out there of what can happen, and what we have innately done, psychologically, is scare everybody we should have prepared. Everyone is scared of this threat that exists, but we need to prepare them for it, because the speed of business is not going to slow down. We can't say, "Internet, go away; it's too dangerous." It's not happening. It's not realistic. So we need to prepare you for the environment that you're undoubtedly going to have to live in and operate in, just like with cars. You could try other means to get to work, safer means, but most likely, if you want to keep your job, you don't want to start pedaling a bike at midnight to make it to work at a decent time. You're going to have to prepare yourself for a potential car accident. Hope it doesn't happen, but if you happen to face that kind of threat, have your seat belt on so you're going to be okay. And so now, if you end up facing a very well-equipped threat actor, especially now that AI and deepfakes are coming for everybody, and we can get into that, you need to better prepare yourself for the types of threats you're going to face as a person. And that doesn't mean going code-to-code with a threat actor; you'd lose that. But if you can prepare yourself psychologically, you can keep them from getting so far. You can keep them from exploiting you as a person. Because right now, 90% of the problem is not just "humans are the weakest link." That's not true. I believe that, as humans, we just have the most vulnerabilities. We ain't the weakest link. We block a lot of threats ourselves, but we have a tremendous number of unique vulnerabilities technology can't save us from.

Raghu Nandakumara  50:17

That's fascinating. I love the seat belt analogy. And in fact, if you don't mind, I'm going to stretch it a little. As you said, the reason you wear a seat belt, the reason the car has all the various safety features it does, is that in the case of an accident, we want you to be as safe as possible. So with that, and given we've spoken about how an attack is an inevitability, the attacker will find a way to be successful, just as we can never obliterate all traffic accidents: do you think adopting a Zero Trust strategy is like having those safety features on a car? Does it best prepare an organization for that inevitable cyberattack?

Dr. Erik Huffman  51:10

I definitely think so. Because what we've been talking about is trying to build Zero Trust behavior in people as they read things or interact with people or other entities online. And as an organization, if you build up a Zero Trust infrastructure, that is the best-case scenario. It's not even trust but verify; it's verify. We've started saying, hey, trust, but verify: you can trust it, but verify that it is true. No. Don't trust at all. Just verify. Verify everything. Because Zero Trust, for what it is, is not a product. It is a mentality, a concept, that an entire organization has to follow. You can't just buy something and say, "Hey, we're now Zero Trust." You have to implement a whole bunch of different behaviors. And I would argue to some that you're not fully Zero Trust, even if you've done it from a technical perspective, if you haven't looked at the people and given them the power to be Zero Trust. If you're so authoritative in your behavior as an organization that people can't question you, you're telling them to innately trust anything that comes from you. That's not Zero Trust. So I think one has to happen with the other: you have to be Zero Trust as an organization in your behavior, and you have to be Zero Trust as an organization from a technical perspective.
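
As a loose illustration of the "verify, don't trust" posture described here, below is a minimal default-deny policy check in Python. It is a sketch with hypothetical check names, not any product's API: every request must pass explicit identity and device verification on every call, and anything not expressly allowed is denied.

```python
def is_request_allowed(user_verified: bool,
                       device_healthy: bool,
                       resource: str,
                       allowed_resources: set[str]) -> bool:
    """Default-deny: a request is allowed only if every check passes.

    Hypothetical checks for illustration; Zero Trust architectures
    re-verify identity, device posture, and authorization per request.
    """
    if not user_verified:    # never infer identity from network location
        return False
    if not device_healthy:   # posture is re-checked, not remembered
        return False
    return resource in allowed_resources  # explicit allow-list, else deny

# Even an "internal" caller is denied without fresh verification
print(is_request_allowed(user_verified=False, device_healthy=True,
                         resource="payroll-db",
                         allowed_resources={"payroll-db"}))  # -> False
```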

Raghu Nandakumara  52:43

Yeah, absolutely. I think that's spot on. So, moving on from there. We've dipped our toes here and there into AI during the course of this conversation, and I'm going back to the root of this in cyberpsychology because of everything we've spoken about. If we now layer on the fact that AI is a reality, we use it on a day-to-day basis in our jobs, in our lives. But if we think about this from a psychology perspective, it really throws a massive spanner in the works, because now our ability to discern reality from fiction is super hard; the difficulty has gone up big time. So how does that play in your field?

Dr. Erik Huffman  53:37

Oh, it took a lot of things that I worked on for years and shredded them. No, all jokes aside, it adds so much complexity. From the human side, when I say you are reading in your own voice, that's just what happens. If you've ever read a really good book and then watched the movie, you're deeply disappointed, because everyone looks wrong, everyone sounds wrong, and the person who created that movie may be one of the people that wrote the book, and you're telling them they're wrong. Because when you read, you build this character up in your head, you start reading in their voice, and you accept that as the norm. When it comes to AI, I can now take all that mental work you do and throw it out. I can make you see what I want you to see, and I can make you hear what I want you to hear, in the way I want you to hear it. Previously, that's why memes were never sent as threat attacks. They're not sending memes, because you see it, all your biological defenses kick up, and you go, nah, that's not real. But with AI now, I can layer onto the principle of liking. I can show you an attractive man, woman, or whomever, and you're more likely to receive some of that information. I can have a video of your CEO telling you, in their voice, to do what I want you to do, and you're more likely to receive it. I can have a deepfake pose as you and send it to someone else, and they're more likely to receive that. It takes the biological defenses, that stranger danger when you look at someone, and slowly dissolves them away. And it continues to evolve, because the goal of AI is to not be detectable as AI by a human. From a security perspective, I need it to be fake enough that I can tell. Right now, it's getting a little too good, and it's hard to tell. We've had CFOs send money to other CFOs via deepfake because they heard them say it on the phone. That's already happened. And AI is not even that great right now; it's still in its infancy. But when it gets really, really good, those biological defenses that we have are going to shatter, because you can't trust what you read, you can't trust what you see, you can't trust what you hear. We had better be Zero Trust by that point, because it's going to get very scary out there.

Raghu Nandakumara  56:32

And to that point, the whole preparedness thing: suddenly we're back to square one, right? Having to completely reset and start again.

Dr. Erik Huffman  56:44

Definitely, because for everything we have built previously, threat actors have found a brilliant way to use it against us. Encryption was designed to be a security tool, and it is a security tool, because it was designed to keep our information confidential. What did threat actors do? They took that and created ransomware. They said, hey, we're going to use encryption, we're going to encrypt your own data, and we're going to make you buy your own data back. When it comes to AI, we've seen some AI attacks start to happen, but as threat actors begin to understand AI and how to leverage it against human behavior, that is going to be huge for digital social engineering. Like, hey, now I can make this person see what I want them to see and hear what I want them to hear; how can I leverage this for my own personal gain? Organizations are already trying to leverage it for their own gain through the different ads and marketing campaigns out there. The threat actor is going to start leveraging it against us: "I can prey on these principles of influence of people." You can sit on a Zoom call, or a video call of any sort, and talk to a deepfake version of whomever, and you feel that it's real, and you're excited about it, but actually it's not. It could be a fake marketing call, a fake sales call, a fake whatever. People need to be prepared to face those kinds of things. Even deepfakes of voice now, over phone calls: can you trust that you're talking to who you think you're talking to? Before, the answer was, I'd pick up the phone and call that person. Is that still the answer? I'll pick up the phone and call that person: "Hey, Susan, is this you?" "Yeah." "Are you sure?" "Yeah, you called me." "Oh yeah, I did call you." It's really going to change the game in digital social engineering. We're already starting to see it a little bit, and I think it's going to catapult into the stratosphere over the next five years.

Raghu Nandakumara  59:06  

I think it's almost at the point where even those non-physical interactions, say over the phone, will need various ways to validate identity. We're going to have to say, "Hey, Dr. Huffman, can you validate, can you identify yourself," by whatever means, your Face ID, etc., so that I can confirm I'm talking to you before we have the rest of this conversation. So I think the whole way we approach preparedness, and even awareness to an extent, is going to shift so much over the next few years. And you're right on the cutting edge of this; we must still be at the infancy of crafting what that preparedness is going to look like.

Dr. Erik Huffman  59:58

Oh yeah, definitely. I don't think we've taken a hard enough look at it. We've spent so much time on awareness; I don't think most people have even started on preparedness outside of the technical side, once we start talking about psychological threats. I've worked with a few organizations, and they asked the same thing: "Hey, how could someone verify themselves on a call? What should we do?" This was a government entity, and I said, if you have something super secret, super important, you can have a code phrase, like, "when the cow jumps over the moon, my stomach hurts," something along those lines, something deepfakes haven't even come close to; you can't fake that. If it's that confidential, if it's that secret, you may need to go that far. Because if you're worried that someone, or some organization, is really, really targeting you in these ways, and you can't make it go away, you may need to go that far. Vocal deepfakes are now down to less than 20 seconds: 20 seconds of vocal data will train a deepfake to sound okay. We hear it in music all the time. Musicians no longer die: you can get a new Elvis record, you can get a new Michael Jackson, fresh Tupac, and it sounds good. People are releasing music now, and other people say, that sounds like AI, but it's actually just the artist and their new song. Or an artist comes out with a song, and it's like, that's not me, that's AI. As it gets that good, and it starts training itself on other people, on public figures or people you're not even familiar with, it's going to be hard to trust even vocal data, and soon it's going to be hard to trust what you even see online. So that Zero Trust posture is going to have to happen from a psychological perspective for people as well. If not, that 90% is going to stay 90%; it's just going to stay stagnant, if not move up. Threat actors are going to go for the most vulnerable thing: if I can get you to give me your credentials, it doesn't matter what you can do. Doesn't matter what you can do at all.
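
The pre-shared code phrase Dr. Huffman suggests can be hardened slightly so the phrase itself never travels over the channel. Here is a minimal sketch, assuming both parties exchanged a shared secret in person beforehand: the receiver issues a random challenge, and the caller answers with an HMAC over it, so a deepfaked voice that has overheard earlier calls cannot replay a valid response. This is an illustration of the idea, not a vetted authentication protocol.

```python
import hashlib
import hmac
import secrets

# Assumed to have been exchanged in person, never spoken on a call
SHARED_SECRET = b"when-the-cow-jumps-over-the-moon"

def make_challenge() -> str:
    # Fresh random nonce per call, so answers cannot be replayed
    return secrets.token_hex(8)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    # Short HMAC tag the caller can read out over the phone
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, answer: str, secret: bytes = SHARED_SECRET) -> bool:
    # Constant-time comparison to avoid leaking partial matches
    return hmac.compare_digest(respond(challenge, secret), answer)

# Receiver issues a challenge; only the real party can compute the answer
challenge = make_challenge()
answer = respond(challenge)
print(verify(challenge, answer))      # True for the real party
print(verify(challenge, "deadbeef"))  # False for an imposter
```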

Raghu Nandakumara  1:02:24

I mean, talking about artists, I'd say that ever since the days autotune was introduced, we haven't known whether we're listening to the real artist or not. But you said that we can't trust what we see.

Dr. Erik Huffman  TIMESTAMP

As AI and deepfakes begin to hit, and threat actors start to choose to use some of these things against us, it's clear you can't tell. And it got even worse when we added photography filters. You start adding filters, and it's like, hey, is it real or AI? We've filtered everybody, so detection tanked even more, because people just can't tell. And this is in its infancy. There's some bad AI out there; a lot of people look at the fingers, you know, there are like seven or eight fingers on one hand to give it away. But the good ones, the modern ones, are getting better: hands are starting to have just five fingers, rather than extra limbs going out there or something like that. This just leads to the question: how do we as a public, including security professionals, tell the difference? And we can't tell the difference now, and threat actors know this is a vulnerability: even if you look, just glancing, you can't tell. The same with vocal data, and with videos: the latest study on AI videos puts detection at around 30%, 35% right now. We're not near data saturation yet, but that's less than a coin flip, which tells me we can't tell the difference. If we could tell the difference as a public, the number I was looking for is greater than 70%, because that means it's better than a coin flip. But it's less than a coin flip, much less than a coin flip, that we can tell the difference between what's real and what has been manipulated by computers.

Raghu Nandakumara  TIMESTAMP

I mean, Dr. Huffman, I know we've recorded for a long time here and I've taken a lot of your time, but you don't leave me with a lot of hope with that point. Already today, the probability of us being able to look at an image or listen to a sound and accurately determine whether it is fake or real is significantly less than a coin flip. And as we feed these AI models more data, they're only going to get better. So, taking your message: we've got to rethink, potentially, how we present awareness, and we absolutely have to rethink how we make ourselves more prepared in how we respond, whether as individuals or as organizations, to being the victim of cyberattacks. But I think the key point you're making is that we can't leave human emotion out of that. It's so fundamental to how we prepare each other and how we react in those situations, and that's the only way we're going to get better collectively at managing cyber incidents.

Dr. Erik Huffman  TIMESTAMP

Definitely, and I would say not all hope is lost. I think there's a lot of progress to be made. One thing I pride myself on with my research is that we're not selling anything, and there is a fix to this. There are psychological ways that we can address this, where we don't have to be the 90%. If we cut the 90% to 70%, we have taken hundreds of billions of dollars away from cybercriminals. And understanding that the problem is not technology means we don't have to innovate something new. We just have to change how we train. We have to change how we think. We have to understand the environment a little bit more, and how you're psychologically more vulnerable online than face to face. No one's giving up their credit card information face to face; no one's giving up their bank account information. It's just not happening. We do it online, and it's like, "Well, I would never do that." Yeah, we do. It's like traffic: everyone sucks at driving, but no one's a bad driver if you ask them. There's a way to get to everybody. So absolutely, we can fix this, and we can address this, and it's absolutely free to do.

Raghu Nandakumara  TIMESTAMP

Absolutely. I think that's the beauty of it. So, Dr. Huffman, it's been such a pleasure and such an education having you on the podcast. I know from preparing that you're a huge Will Smith fan, and you and I are both big fans of The Fresh Prince of Bel-Air. We had planned on incorporating lessons in cyber from the Fresh Prince, but we'll just have to leave that for when you're next on the podcast. Thank you so much, Dr. Huffman.

Dr. Erik Huffman  TIMESTAMP

No problem at all. No problem at all. Thank you. Much love. Appreciate it. It was fantastic.

Raghu Nandakumara  TIMESTAMP

Thanks for tuning in to this week's episode of The Segment. For even more information and Zero Trust resources, check out our website at illumio.com. You can also connect with us on LinkedIn and Twitter at Illumio, and if you like today's conversation, you can find our other episodes wherever you get your podcasts. I'm your host, Raghu Nandakumara, and we'll be back soon.