A Zero Trust Leadership Podcast
How Cybercriminals Manipulate Trust — Then Steal Millions | Timothy Kromphardt
In this episode of The Segment, Timothy Kromphardt, Senior Threat Researcher at Proofpoint, explores how modern scams actually work behind the scenes.
Transcript
Raghu N 00:00
Hi everyone, welcome back to another episode of The Segment. Today I am so excited and genuinely looking forward to this conversation with Tim Kromphardt, Senior Threat Researcher in email fraud research at Proofpoint. Tim, welcome to The Segment.
Tim K
Hey, thanks for having me on, Raghu.
Raghu N
It's our pleasure! So Tim, in his role, focuses on business email compromise (BEC), TOAD attacks, and large-scale scam operations. More on all of that in a second. His work centers on understanding attacker behavior by engaging directly with threat actors, both online and over the phone, to study how trust is built, manipulated, and ultimately monetized. Obviously, we want to dig into that. Tim's research spans job scams, cryptocurrency, pig butchering (I want to know what that's about), and supply chain fraud, with a particular focus on how modern fraud campaigns exploit organizational processes rather than technical vulnerabilities. So I've said all of that, and really the only two phrases people won't have heard are TOAD and pig butchering, right? So we're going to have to go deep on those. That's a wild role you've got. But let's talk about the backstory. How'd you get to be doing what you do?
Tim K 01:19
So before this, I worked in a SOC at a financial institution and saw a lot of stuff, right, from TOAD scams to just regular network traffic, all of that. So it gave me a really good foundation. From there, I really got attached to the fraud side of things, and I moved to a different role within that company as a fraud investigator. So I had to look into fraud from either the external threat actor side or from, like, a customer trying to pull one over and get stuff. Either way, their tactics and things like that are the same. And so when I saw an opportunity to come work here at Proofpoint and focus on BEC and fraud, I took that opportunity, and I've been here now for about five years.
Raghu N 02:05
Amazing, so kind of like me, someone who was on the customer side. In fact, I was in financial services for 15 years before moving over to the vendor side, so it's great to meet someone with a similar career arc. What you do sounds like so much fun. It feels more like a hobby, entertainment, than work. Is it as fun as it sounds?
Tim K 02:29
I mean, it definitely has some fun moments, but it's mostly hard work and talking to people, right? Talking to threat actors who are trying to do you harm is definitely not super enjoyable. You find the moments that are fun and entertaining and share those out, but the majority of it is grunt work, messaging these people. Especially with pig butchers, right? You're talking to them 24/7 a lot of the time. They're messaging you, showing you pictures of their breakfast and their lunch and their dinner. They're trying to chit-chat with you and get you to share more info and trust them and then engage on their scam platform. Or, you know, some of the other threat actors are more annoying. They're constantly like, "Hey, you haven't sent us this money yet. Where's our money?" They're getting really upset with you, and you're feeling a lot of that kind of stuff. So it's not all fun all the time, but there are definitely some fun moments.
Raghu N 03:21
So we're going to dig into some of that stuff in a bit. But where I'd like to start the conversation is to talk broadly about social engineering, right? So often when we think about social engineering, it's the front end of a cyberattack or a scam, but it's also a technique that's as old as time. So why does it still work? Why is it so prevalent, particularly in scams that target the average consumer?
Tim K 03:54
Yeah, you know, as these new AI things have come out, these large language models, you work with prompts, right? You give them a prompt, and it goes in a certain direction, and depending on how you give that prompt, you may get better results, or you may get worse or inaccurate results. The same applies to people, for the most part. We have ways of dealing with things that come at us all the time, and I kind of equate it to wearing a path through a field, right? You can see the footpath, and you usually take that path because it's the easiest way and saves your brain time and energy. That's what happens with people, too. They get these work things they have to do all the time, and they're repeating them. And then threat actors come in and use the exact same thing. Most of the time, especially with the RFQ threats, they're identical to what is being sent by an actual trusted person. So you get in these modes, and your brain just goes in that same direction, and that's where you're exploiting the trust. You're hoping that they will just follow the process and, again, give up the information or money or whatever the threat actors are looking for.
Raghu N 05:04
So, I love that analogy of essentially following the path, following the footsteps. But given what you said, it's ultimately about establishing trust and reinforcing trust, right? And you mentioned AI, of course, the ability to leverage large language models to generate so many variations; that's all well understood. So if we strip back the technology that's given attackers the ability to scale and introduce variety, how have social engineering attacks fundamentally changed over time? Or have they?
Tim K 05:46
I would argue that they really haven't. That's something we bring up a lot within the scam space: yes, you can scale things with AI, with new tools, with all this stuff, but at the end of the day, the mechanism for getting someone to trust you, to give you money, to reach that final goal, you have to do the same things to get to that point. There's not some new, novel way to convince someone to give up their bank account information. There are tried and true methods, and that's usually what they all have to bottleneck down into at the end of the day. That's useful for us as defenders, right, to find these scams. But on the other side of that, the scammers know all that too; they have to work within it. They can get creative with their lures, but at the end of the day, they still have to get your information, so they still have to get you to trust them. They have to go through this trust, either building it up like the pig butchers do, or exploiting the trust we already have in place within business processes and things of that nature. So looking at ways of strengthening that bottleneck, I think, is really important for stopping scams and for mitigating the effects they have. At scale, it's just going to increase the frequency, but not necessarily get past that final checkpoint if we're guarding the bottleneck.
Raghu N 07:05
So with that, right? And I absolutely align with what you said there, because, not wanting to go too much into an area where I'm not massively well versed, I constantly talk about, if I think about, like, MITRE ATT&CK TTPs, right? The tactics stay the same; it's the techniques and the procedures that change as technology, processes, and resources evolve. And yes, that's all that's happening, but ultimately it's still social engineering that is the overarching tactic. So you spoke at the top about how you spend a lot of time engaging with the threat actors, sort of jousting with them almost. What are you trying to figure out? As you establish that trust and that relationship with them, what are you trying to figure out? And ultimately, from that, how are you helping organizations and individuals defend or protect themselves better?
Tim K 08:09
So, a couple of things that I like to do. There are three levels, I would say, of sophistication within the scammer space. We've got the Nigerian threat actors doing the RFQ stuff that I've talked about in my blogs. We've got the TOAD scammers, telephone-oriented attack delivery, the kind you see the scam baiters on YouTube dealing with; you're calling up a call center in India somewhere, and they're trying to gain access to your computer. Those are kind of the mid level. And then you've got the high end, literally backed by billions and billions of dollars, with the pig butchering and the pig butchering job scams, things like that. With all of those, I'm trying to get them to give up their infrastructure. So if they've actually stood up websites, we want to find out as many of those as we can. If they have mule accounts (a mule is somebody who's either willingly or unwillingly helping them out, for example by providing access to their bank account so funds can be transferred through it), we want those too. Or, in the case of the pig butchers, we're trying to identify as much of their infrastructure and their scripts as we can. The scripts are important. We do a lot of language analysis and things like that at Proofpoint within the email space, so finding out what verbiage they're using in their scripts matters, because they all have to work off scripts. They're not hiring willing staff; most of these people are held against their will to do some of these scams, so they're not really willing to put in their best effort, and they're required to use scripts.
And so we want to get as much of that script as we can, getting them talking more, seeing how they react to me interfering with the process, trying to break things just to see how it reacts. A lot of times it's the same techniques a red teamer might use to test defenses, right, by seeing what happens when something fails and what behaviors emerge. So we're trying to do the same thing: gather as many indicators of compromise as we can, as much data as we can, and then block as much as we can on our side. So yeah, that's the goal. Sometimes it's less effective, sometimes it's more effective, but it's what we're always striving for.
Raghu N 10:15
and sharing the knowledge through your blogs and your own podcast, the DISCARDED podcast, amongst other things. So if you can, can you talk through an interaction that you've had, from the origination of it, and just layer on the story? I'd love to hear that.
Tim K 10:41
Yeah, yeah. So I did one engagement, and this was actually work adjacent. A friend of mine at work had a friend who had gotten scammed on Twitter. They thought they were purchasing a laptop, they ended up sending off money, and they never got the laptop, and they asked me to investigate. So I talked to that threat actor and tried to purchase one of these laptops myself, and they gave me information and said, okay, hey, we need to get paid now, send us the money and we'll send you the laptop. And I was like, okay, great, let's do Venmo, right? So they sent me their Venmo information. Well, that's one mule. And then I was like, oh, this isn't working, something's gone wrong. And they're like, well, we've got Zelle, we can send you Zelle information. Okay, great, now I've got Zelle information. So we ended up getting Cash App, Venmo, Zelle, PayPal, and an ACH number. We got all their banking information. And then I just reached out to some of those people on social media, because with PayPal and stuff you have to use a handle, right, your screen name, and some people just like to keep the same one everywhere. So we found these people and reached out to them, and a couple of them were like, oh, hey, yeah, this was me; a friend of mine asked me to lend him a bank account. And I was like, well, that's pretty bad, you shouldn't do that, it's probably against the law. And they're like, oh, I guess. Here's his email address, right? So you get more and more, and you just build out from there. Eventually we had four or five people within a geographic range. It looks like they all probably went to the same college, so we knew how those connections were formed, and we passed all that on to law enforcement.
Unfortunately, they didn't have time to track any of that down. But on a good note, the person did end up getting some of their money back. At the end of the day, that's what we're looking for: to find as many threads as we can, connect them, look at what we can find, and shut down as much as we can.
Raghu N 12:34
So yeah, that sounds awesome, and I'm sure it was significantly more work than you've just described in about two minutes there. How much of that process is technical, specialized knowledge, and how much of it is very much feel? Like, this feels wrong, right? I've seen this pattern before, it feels wrong, that's the rabbit hole I need to go down, because I feel this is going to lead somewhere. How much of it is feel?
Tim K 13:03
Yeah, I mean, definitely with the screen names, right? You don't know; maybe it's just happenstance that they have the same name. That happens. So you have to be able to discern some of that. For this instance, we had the original IP addresses and such, because that's something else I send out a lot: tracking links. They're benign, just a Bit.ly-style URL shortener, that kind of stuff. A lot of those will grab the IP address, grab cookie information like what kind of browser they're using, and where they're geolocated. Looking at those kinds of things, we could see, okay, the initial guy was based in this area, and so are all these people, but not this guy. So maybe this guy I'm talking to in Nigeria is the ringleader, and these people are folks he has working for him. If you know how those relationships between the threat actors and the mules work, you can start seeing those connections and following them back. So there is some of that, I think. But for the most part, it's just about sending them the tracking links to get their information, and conning them into giving you more information that they normally wouldn't want to give out unless they absolutely had to. Luckily for me, most threat actors are pretty greedy, especially some of the ones based in third world countries. They need that money. They're doing it not just for fun or whatever; they're doing it to actually put food on the table. So they're definitely more interested in making that sale, I guess, as it were.
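The kind of benign tracking link Tim describes can be sketched very roughly like this: a handler that records what a single click exposes about the visitor (source IP, browser user agent, language preference) before the link redirects onward. This is an illustrative sketch under assumptions, not Proofpoint tooling; the function and field names are hypothetical.

```python
# Hypothetical sketch of what one click on a benign tracking link reveals.
# Not actual Proofpoint tooling; names are illustrative only.
def log_visit(client_ip, headers):
    """Summarize the metadata a single HTTP request exposes: the source IP
    (which can be roughly geolocated) plus browser and language hints from
    standard request headers."""
    return {
        "ip": client_ip,
        "user_agent": headers.get("User-Agent", "unknown"),
        "language": headers.get("Accept-Language", "unknown"),
    }

# One simulated click from a threat actor's browser. The IP below is from
# the documentation range and stands in for a real address.
visit = log_visit(
    "203.0.113.7",
    {"User-Agent": "Mozilla/5.0 (Windows NT 10.0)", "Accept-Language": "en-NG"},
)
print(visit)
```

Even this minimal metadata is enough to support the kind of clustering Tim describes: several clicks geolocating to one region, with an outlier who may be the ringleader.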
Raghu N 14:35
So yeah, hearing this, and going back to that segment where you said, oh, send me a Venmo, and they sent a Venmo, and then you're like, ah, that didn't work, send me a PayPal, and they sent a PayPal, and then Zelle, etc. As I listened to that, I was thinking, and then there's the last comment you made, that ultimately they just want to get paid. They want money. So it's like, oh, this person thinks I'm going to send them money, so they keep offering all these ways to pay. So it feels like their operational security is fairly low, particularly for this type of scam. As you said, it's not necessarily highly skilled attackers in this case. This is essentially people almost like in a call center with a call script saying, if X then Y, else Z, right? And that, I think, busts some of the myth about cyber scammers being these sort of Mr. Robot types in a hoodie in front of their computer.
Tim K 15:39
I mean, obviously there are lots of videos online you can go look at of the TOAD call centers and what kind of professionalism they have there. Those folks are pretty good at social engineering people themselves, and they have a lot more free-form allowance to say whatever they want, sometimes verbally abuse people, things like that. Some of the Nigerian scammers may not have that level of sophistication. They work more with intimidation, trying to get people to just do it, and they'll use the specter of authority: oh, your bank is telling you to do this, you have to do this; or, I'm your security, Norton or something like that, and you need to pay us or we're going to put a lien on you. So they have their own techniques and tactics as well to get people to comply. But the pig butchers interest me the most, as far as how much they use scripts, and also because they operate at a massive scale, right? They can look at their scripts, refine them, and really get them down. And they work in far more insidious ways than either the Nigerian scammers or the TOAD scammers, because they build trust up from the ground up. These other ones are using well-known business practices, or using fear, to get somebody on the line. Whereas these folks are literally like, oh, I'm sorry, I texted you by accident, or, oh, I thought you were someone else online, and then they'll just talk with you.
And they have to build this whole relationship up from the ground up, gaining your trust, putting the little things in there, like, oh, my uncle's rich, or showing pictures of really fine dining. So they have to do a lot more work on the social engineering side, and I find that fascinating.
Raghu N 17:32
pig butchering, and I'm assuming that's not something they used to get up to in the meatpacking district in New York, correct? So give us a brief overview of what pig butchering is.
Tim K 17:43
So pig butchering comes from the Chinese term sha zhu pan, which literally means something like pig butchering plate. The idea is that you're fattening up the pig before slaughter. The whole idea is for them to get you used to doing something over and over again, in this case investing small amounts of money. And you see the money; they even let you take out some of the money, though obviously never more than you've invested. But eventually they'll say, hey, we have this huge opportunity. Obviously you trust me; I've been talking to you for sometimes three, six months. Sometimes people talk about having been with them for, like, a year. And they'll say, this is a huge investment, do everything you can, you're going to double your money, triple your money, go. And they get them to invest, sometimes people's entire life savings. I've talked to a guy who's a CIO of a company who invested, like, $7 million. Just gone. So that level of trust, for them, is the payoff. That's why they invest that amount of time: to get that kind of payoff.
Raghu N 18:49
that's insane, because I absolutely understand the targeting of the most vulnerable. Actually, in a previous episode of The Segment we spoke to a psychologist focused on the psychology of cyber, and he spoke about how they target the most vulnerable, or target you when you're at your most vulnerable. But when you think about someone with, let's say, a CIO title, that's not who you'd commonly associate with being the victim of a multimillion-dollar financial scam, because you'd just assume that surely they don't need that. How does that compromise of trust manifest itself?
Tim K 19:37
Well, you know, at least in the case of these high-earner folks, they might see this person as a peer, right? This person has a lot of money; they're doing things that they themselves might do. They probably invest in crypto; they have some knowledge of it, or at the very least they have enough money that they can throw it around and try to make more. And I think when they meet someone, maybe online, who's attractive and conversational, they think, okay, hey, this person is nice to talk to, they look good, and they seem to be on my same socioeconomic level. As you get higher up the chain, there are fewer and fewer of those people for them to interact with, and I think that's probably part of how the scammers are so successful with some of these folks. Social media has unfortunately also kind of distanced people from the old traditional ways of having close interpersonal relationships, the physical kind, and I think a lot of people in this day and age are striving for more of those connections. Right place, right time, and the scammers are exploiting that.
Raghu N 20:44
aspiration, and then temptation or greed, without getting too biblical or preachy. Okay, so you have TOAD on your T-shirt here, and I know you're not talking about the animal, or The Wind in the Willows, or Toad of Toad Hall. TOAD means something different to you, yeah?
Tim K 21:09
So TOAD, within the Proofpoint space, is our term for telephone-oriented attack delivery. It's any threat that involves either the person calling out to a threat actor, or the threat actor calling into somebody at a business or whatever. I focus primarily on the type where you're getting these invoice scams. You'll say, oh hey, I got a Best Buy Geek Squad invoice, or whatever, for a service; it's, like, $474.70; I don't remember doing this, I'm going to call somebody. Oh, there's a number prominently displayed on the screen, call this 1-800 number. Okay, great, pick it up, call them, and that's where they get you. They'll social engineer from there, like we discussed earlier. Sometimes they have a lot more free rein and don't really have an established script, per se; they all tend to do similar things, but they can deviate off that script. For the most part, they're trying to get you to share your screen, to connect to your device, and then have you log in so they can capture all of your banking information and as much stuff as they can off the screen. Then they'll blank out your screen so they can steal documents and other things off of your computer. So those folks can do a lot, and once they have access, they're really limited only by their own creativity as far as how much damage they could do or how much information they could steal. So they're definitely a unique threat that's really worth exploring by engaging with them directly. It's kind of like with malware: we can see that, okay, this is an executable in an email, and it's probably not good, but unless you're reverse engineering it or running it in a sandbox, you don't really know what it does.
And to me, that's my rationalization, I guess, for engaging with these threat actors. Sure, we know that here's a scam email and someone loses money, but what does it look like in between? How does that person end up giving that information away? A lot of times there's a lot of shame involved for people who have done that. There's also fear involved, right? You're at work and you did that: you called, and someone connected to your work computer, or even your home computer that might be attached because you're working from home, and they could gain access through that. There's a lot of shame and fear that keeps people from sharing all the information about what happened to them, and so a lot of it just goes unreported or underreported. So I think it's important for folks in my position, where we understand the threats, to get more of the IOCs and as much information as we can, in order to pass that along and block things further down the line to help protect other people.
Raghu N 23:50
So our exec producer, Mindy, just messaged saying, Tim TOAD, that's how you're saved in her phone. Excellent. What you said, right, I mean, I didn't know the acronym TOAD; I'd just come across it in researching for this podcast. But everything you've described about the nature of that particular type of social engineering attack: I remember, more than 10 years ago, a friend of mine called me and said, oh my god, and essentially played out the entire scam that she had fallen victim to, and it was exactly this TOAD attack. And what you described about that fear, that shame, the fact that you've essentially had your privacy violated. What she described was essentially them calling up: oh, you've got a problem with your computer. And then, oh god, there are so many important things on it; here, have my password to log in. And the whole cycle of things: oh yeah, let me download the remote access app so that you can do what you need to do. And then, oh my god, what do I do? I said, well, I think you need a new computer.
Tim K 25:14
And, you know, I think there are some indications that some of this might be the tablet generation, right? Gen Z, and further along, Gen Alpha, primarily use phones and tablets and not so much a physical PC; they might game on a console. They might not have the intimate knowledge of how computers work and how to find the information, where in a real tech support situation someone might say, hey, go click on the Start button and do this, do that. They're not going to feel comfortable with that; it's going to take too much time. And so turning that over to somebody else is going to be, I think, more common for them, and something to look out for in the future, unfortunately. But we also see it with folks on the other end, the boomer generation and older, maybe even retirees, for the same reason; that's why they fall for it and why they're targeted a lot. They also typically have a lot more money. But it's the same thing: a lack of computer savvy makes you want to just turn things over to someone else who maybe knows what they're doing more than you do.
Raghu N 26:23
so, from all this research you've been doing, going deep into the nature of various social engineering tactics, going deep on establishing relationships and understanding the psychology and motivation of the scammers, when you now have to take that to your customers, what are the key bits of advice? Because so much of this is very much around each individual's personal operational security, right? I mean, some of it may be updating, let's say, a detection rule, etc., but a lot of it is just about how humans operate. So what are the key lessons and the key things that you take to your customer base, to the ones who are going to benefit from this?
Tim K 27:25
yeah, you know, I think the main thing we can do is educate people about these scams, so they can say, oh, I've seen this before, I've heard of this before, and be familiar with them, so that it's somewhere at least in the back of their mind. The hairs might stand up a little bit, or they might go, do I really want to call this number? What should I do? One thing we tell people a lot, at least with the TOAD threats, or really with anything: if it's Geek Squad, or if it's Norton antivirus, they have an established website. Don't trust your email; don't go from that. Go out to the website, call the company directly, go to a source of trust that you already have to deal with it, and don't take the easy route of just using what's in the email. Because, in my opinion, don't trust anything. Zero trust, right? I don't trust any email. I look at it from the perspective of what the best practice is all the time. Unfortunately, not a lot of people have the desire to do that and spend that time on everything. So on that side, if you've got a lot of employees, or people in your social network, who you know aren't going to do that work, it helps to have things in place to protect them. Like remote management tools: your organization should not allow remote management tools unless they're actually needed. Basic protections like that are really good. Also, like I said, education, and in general not being super trusting of things, looking at everything with a critical eye all the time. Unfortunately, that is a little taxing, I guess, but the alternative is getting scammed and losing significant amounts of money, or a loss of trust with customers or with your company.
There are a lot of reasons why it is worth the time to be that
Raghu N 29:23
vigilant, yeah, yeah. You used the term zero trust, and when we apply it to technology, bits and bytes, and everything that builds on top of that, it's a very binary principle, right? But when it comes to humans applying that to day-to-day interactions, that's just not how we're wired, because we're wired to reinforce trust and to need trust. And maybe, I'd love your perspective here: obviously we are inundated with security awareness training, etc., but is it that a lot of that security awareness training is actually going against our natural instincts?
Tim K 30:17
Yeah, in some ways it is, and I think that's why a lot of the time it fails. I have my own thoughts on this, but, you know, we believe we all have freedom of choice in the decisions we make. But there are a lot of background processes going on constantly in everybody, from trust levels to where you're at mentally. There's a lot going on in the world, a lot happening to people all the time, that interferes with this. Companies might say, "Well, we spent this much money on training, and someone still got caught." Okay, it happens. People are going to be people; we're messy. You can't put the onus of keeping the company safe on the individual. It has to be done at a larger scale, and unfortunately that may mean you're always going to have some level of risk of things happening. And to blame any one individual, I don't think, is the right way to do it, because there's a lot going on. You can look at any one individual example of why someone got scammed, and unless they're willingly participating, saying, "Oh yeah, give me a cut and I'll give you access," insider-threat kind of stuff, other than that, I don't think there's really a good way to eliminate the chance of people getting scammed. It's just impossible.
Raghu N 31:45
All right, so let's move on. In this final section, I'd love to dig into something we touched on at the top: the use of AI by scammers. In your research, how have you seen AI used? And I'm not just talking about generating lots of different types of communications and A/B testing lots of messages. How have you seen AI used by the actors to further compromise trust? What are the various uses of AI you've seen from them?
Tim K 32:30
We say this a lot: we don't see much of AI being used as an LLM directly conversing with people, in a scam-bot kind of way. I'm sure it has probably happened; I'm sure there's an example someone can point to and say, "Here, it happened." But at scale, I'm not seeing a whole lot of that. We've seen it more in them automating some of their tasks, like creating domains or speeding up how much mail they can send. Automating their processes, doing what everyone else is doing right now with AI: using it to cut out some of the repeatable tasks that are slowing them down. As far as the actual responding to emails and conversing back and forth, not so much. We have seen what looks like lures being generated, and, like you said, A/B testing to see what sticks and what doesn't. But a lot of times they've already distilled these down. They've been doing this for decades, since email came around. And like you said, too, we have scams going back to Mesopotamia, on clay tablets. It's one of the oldest things: as soon as people knew how to talk to each other, and there was currency involved, or even not, people were scamming each other. So these lures have been tried and tested, and they know what works. They don't want to change what works. So they've been kind of slow to adopt on some of those fronts, which is probably helping them remain successful.
But in the future, I would look for more scaling: one individual having the ability to send what before might have taken a mid-level operation. Scam call centers might move to automating the inbound calls and the legwork of convincing the person to, you know, connect to your machine. I assume they're going to start automating some of that away. But when it comes down to actually closing the deal, it's probably always going to be a real person, at least for the foreseeable future. Maybe five to ten years from now, who knows? We may have other problems to deal with from AI at that point. But for right now, like I said, it's mainly lures and scaling. That's what AI is helping them leverage.
Raghu N 34:41
That's fascinating, because when I think about it from the outside, this feels like a high-volume, templated approach, right? It feels like it's ready-made to be automated and scaled with AI. So it's really fascinating to hear you say that even there, the human stays in the loop: there's something, almost a human instinct, that AI can't yet do better than we can.
Tim K 35:22
And I think AI is going to be used, right? We've got AI videos everywhere now, thanks to Sora, and we've got other models doing it as well; it's not just Sora. You might see AI ads or things like that, and you're going to see more and more of that, I think. But at the end of the day, I still think that actually convincing someone to send the money is always going to be done by people, at least for the near future, until it's proven that AI can extract money and is capable of doing that. At that point, it may be a different story. But for now, they want that money, and they're not going to trust an AI to get it.
Raghu N 36:01
So, yeah, last thing. You spend so much of your time engaging with what are ultimately humans on the other end, right? Yes, they're probably there trying to scam you. But as you said, a lot of the time the folks doing the grunt work are probably not particularly well paid, not particularly highly skilled, and this is just their route to putting food on the table. And ultimately, your role is to figure out what they're up to and then limit their success. Does that play on you at all?
Tim K 36:44
Not really. With the pig butchering stuff, it is a little bit concerning sometimes, just because of how much they have. I mean, they've got literally billions of dollars, and that whole scam infrastructure is quite large. So if I were to disrupt something enough that I caught their attention, I would be a little concerned about that. The Nigerian scammers and things like that, they'll yell at you; the TOADs, they'll curse at you, and stuff like that. But for the most part, for them, this is just their normal business. Until law enforcement shows up to arrest them, they're really not that concerned. Getting disrupted and having to shift things is just an annoyance, part of their business model. So yeah, for the most part it doesn't bother me. It does a little sometimes with some of the Nigerian folks: you get an IP address and you can see, okay, this guy's in this zone, and you look at it and think, geez, these are ramshackle houses. Am I taking food from this guy's mouth? But again, he's trying to do the same to someone else. And unfortunately, the people that fall for these scams might be in similar situations. They might be struggling as well, because the person who's susceptible to these might be in a bad emotional state, compared to someone who's in a good place and able to deal with that kind of threat coming in. So yeah, you're still protecting people, but at the same time, yeah, you are kind of hurting people. I think about it a little bit, but I try not to...
Raghu N 38:19
Yeah.
Tim K 38:21
...but someone's got to do it. They've got to do their job, and I'm doing my job. So, you know...
Raghu N 38:24
Ultimately, what you're doing is trying to make the world a safer, better place for everyone. Yeah, absolutely. Tim, this has been a really fun and novel conversation for us here on The Segment, so I appreciate you taking the time out of your very exciting day to come and have this conversation with us. Thank you.
Tim K 38:46
Yeah, it's been great. Thank you very much.
Raghu N 38:50
Thanks so much, Tim. Thanks for tuning in to this week's episode of The Segment. For even more information and Zero Trust resources, check out our website at illumio.com. You can also connect with us on LinkedIn and Twitter at Illumio, and if you liked today's conversation, you can find our other episodes wherever you get your podcasts. I'm your host, Raghu Nandakumara, and we'll be back soon.

