AI Agents Gone Rogue: The Cybersecurity Risks Retailers Are Ignoring in Agentic Commerce
The Retail Razor Show, April 24, 2026
Episode 5 | 00:57:10 | 52.35 MB


S6E5 What Retailers Must Know About Prompt Injection, Rogue Bots & AI Agent Security Before It's Too Late

Your AI shopping agent just drained your bank account. It's not a glitch — that's the objective it was given. Welcome to the new reality of agentic commerce, where autonomous AI agents shop, transact, and negotiate on behalf of consumers and brands — and where cybercriminals are already waiting to exploit every crack in the system.

In this must-listen episode of The Retail Razor Show, hosts Ricardo Belmar and Casey Golden sit down with Dr. Aaron Estes, VP of Product & Engineering at Binary Defense, to unpack the retail cybersecurity crisis that most retailers haven't even started preparing for. With half of all internet traffic already coming from bots and 1 in 8 AI-related breaches now involving a rogue agent, the agentic commerce era is creating attack surfaces we've never seen before.

Dr. Estes brings 20+ years of hands-on cybersecurity expertise, including penetration testing at Lockheed Martin and advisory work with leading retailers. He breaks down exactly how AI agents differ from traditional e-commerce threats, why prompt injection attacks are the new frontier of retail cybersecurity, and what practical guardrails every retailer needs to put in place right now.


What You’ll Learn in This Episode:

  • Why AI agents are fundamentally different from human users — and why they'll "very confidently spend all your money" to hit their objective

  • How prompt injection attacks trick AI agents into leaking sensitive data

  • Why every AI agent needs its own identity, login, and role-based access controls — just like an employee

  • The "bots watching bots" architecture that's becoming the new standard in agentic commerce security

  • How AI shopping bots are already exploiting loyalty programs, gift cards, and rewards systems

  • Why retailers must rethink retail cybersecurity assumptions as autonomous shoppers replace human ones

  • How to identify rogue chatbots and fraudulent AI agents impersonating legitimate brands

  • What "human-in-the-loop" oversight really means — and where it's non-negotiable in agentic commerce


This Episode is Brought to You By RetailClub.

Join 2,000 retail leaders at RetailClub AI Festival, September 22–24 in Huntington Beach. Dive deep into how AI is reshaping retail while soaking up the sun at a fully outdoor, beachside venue. Decision-makers from retailers and brands can attend with free tickets and up to $1,250 in travel reimbursement. Head to retailclub.com to learn more: https://retailclub.com/retail-razor-podcast


Subscribe & Follow

If you enjoyed this episode, please leave us a 5‑star rating and review on Apple Podcasts, Spotify, or Goodpods. Subscribe on YouTube so you never miss an episode, and check out the other shows in the Retail Razor Podcast Network: Retail Transformers, Blade to Greatness, and Data Blades.


Subscribe to the Retail Razor Podcast Network: https://retailrazor.com/

Subscribe to our Newsletter: https://retailrazor.substack.com

Subscribe to our YouTube channel: https://go.retailrazor.com/utube


About our Guest

Dr. Aaron Estes. https://www.linkedin.com/in/aaronestes777/

email: aaron.estes@binarydefense.com

Dr. Aaron Estes is the VP of Product & Engineering at Binary Defense, a 24/7 cybersecurity watchtower specializing in cyber threat intelligence, dark web monitoring, digital channel fraud, and breach response. He holds a doctorate in software engineering with a concentration in cybersecurity, teaches at UC Berkeley and Southern Methodist University, and previously spent ~15 years in penetration testing at Lockheed Martin across defense, energy, retail, and entertainment sectors.


Chapters

00:00 Teaser 

00:49 Show Intro 

07:26 Welcome Dr Aaron Estes! 

09:31 Why Security Matters 

14:00 New Attack Surface 

17:57 AI Identity and Access 

22:09 Adoption Speed and Oversight 

26:51 Bots Watching Bots 

31:34 Orchestrators and Rival Bots 

34:06 Bots Gaming Rewards 

37:13 AI Shoppers Rise 

38:29 Ads Inside Agents 

44:08 Rogue Bots And Trust 

48:09 Risk Versus Reward 

50:48 Kill Switch Reality 

52:55 Ecommerce Lessons Repeat 

54:26 Closing Thanks And Contact 

56:21 Show Close


Meet your hosts

Helping you cut through the clutter in retail & retail tech:


Ricardo Belmar is an NRF Top Retail Voice for 2025 and a RETHINK Retail Top Retail Expert from 2021 – 2026. Thinkers 360 has named him a Top 10 Thought Leader in Retail, a Top 25 Thought Leader in AGI and Careers, a Top 50 Thought Leader in Agentic AI and Management, and a Top 100 Thought Leader in Digital Transformation and Transformation. Thinkers 360 also named him a Top Digital Voice for 2024 and 2025. He is an advisory council member at George Mason University’s Center for Retail Transformation and the Retail Cloud Alliance. He was most recently the partner marketing leader for retail & consumer goods in the Americas at Microsoft.


Casey Golden is the North America Leader for Retail & Consumer Goods at CI&T, and CEO of Luxlock. She is a RETHINK Retail Top Retail Expert from 2023 – 2026 and a Retail Cloud Alliance advisory council member. After a career on the fashion and supply chain technology side of the business, Casey is obsessed with the customer relationship between the brand and the consumer and is slaying franken-stacks and building retail tech!


Music

Includes music provided by imunobeats.com, featuring Overclocked and E-Motive from the album Beat Hype, written by Heston Mimms, published by Imuno.


Transcript

S6E5 Agentic Cybersecurity


[00:00:00] Teaser

[00:00:01] Casey Golden: So your AI shopping agent just drained your bank account, but it's not a glitch. That's technically the objective it thought it was given.

[00:00:10] Ricardo Belmar: Right now, half of all internet traffic is already bots and soon your customers won't be people. They'll be AI agents. Shopping. Transacting. And getting played by hackers.

[00:00:23] Casey Golden: On this episode, we've got a cybersecurity insider who's seen it all. From hacking Lockheed Martin systems to defending against the dark web. And he's here to tell retailers exactly what's coming.

[00:00:38] Ricardo Belmar: One in eight AI breaches already involves a rogue agent. Is yours next? Stick around your agentic future just got a whole lot riskier.

[00:00:49]

[00:00:49] Show Intro

[00:01:00] Ricardo Belmar: Welcome back to Season Six of the Retail Razor Show, the number one management and indie marketing podcast on Goodpods, the original show, and the number one indie podcast network for retail.

[00:01:11] I'm Ricardo Belmar.

[00:01:13] Casey Golden: And I'm Casey Golden.

[00:01:14] Welcome back Retail Razor Fans to retail's favorite podcast where we cut through the clutter to give you sharp insights on what's happening in retail today, tomorrow, and where we get real about what's driving the future of commerce. Agentic and human.

[00:01:29] Ricardo Belmar: So Casey, you sound like you're ready to talk about a world where AI agents are doing the shopping for us, or are you,

[00:01:36] Casey Golden: Let's just say that I'm a little bit more nervous about that. I'm super excited, and then I'm kind of too scared to push a button. And after this, it didn't make it very much easier. So I'm not sure any AI can help me find the right pencil skirt. We've talked about [00:02:00] this before. I'm still on the search. Um,

[00:02:03] Ricardo Belmar: Yep. That's right, that's right. For anyone in our audience who's a bit confused about what we're talking about: in our last Retail Transformers podcast episode, Scott Wingo discovered a new challenge, Casey's search for a pencil skirt with a double kick pleat that not even AI seems to be able to find right now,

[00:02:22] Casey Golden: And you know, maybe it's a double vent.

[00:02:25] Ricardo Belmar: maybe, but that's part of the problem, right? So definitely, check out that episode if you haven't yet.

[00:02:30] Casey Golden: That's right. If you haven't heard that episode yet, go find it and listen to it. You'll know exactly what I mean. And please do slide into my dms if you happen to find one. It's an entire use case for agentic commerce that I'm hoping Scott and his team at ReFiBuy can figure out and solve for me.

[00:02:49] Ricardo Belmar: Yeah. Well, we're continuing our agentic journey today. This time we're tackling an agentic topic that honestly doesn't get as much attention as it should. Sure, we're [00:03:00] all talking about, and everybody loves to talk about, solving the commerce stack problems, discovery issues, agentic search, and all those good things.

[00:03:09] But who's looking at security around AI agents these days?

[00:03:13] Casey Golden: And you know, now that we're squarely in the sci-fi era of AI and people have been jumping all over Open Claw and agents like that.

[00:03:24] Ricardo Belmar: Yep. We've got Perplexity coming out with their Personal Computer product for Mac users too.

[00:03:28] Casey Golden: Right. So why aren't more people worried about agents going wild and rogue and, I don't know, spending all their money? It's like giving a toddler your credit card. Sure. They might buy you something nice or 500 packs of cookies.

[00:03:44] Ricardo Belmar: Exactly, exactly. Honestly, it's, that's probably the least concern you could have too, right? I mean, what if agents are set loose and break into accounts, steal from you? I mean let's face it, Anthropic couldn't even release their newest frontier model because, what, it immediately [00:04:00] found thousands of security holes in hundreds of SaaS platforms, including some that had been around for more than a decade and nobody noticed before.

[00:04:07] Casey Golden: Yeah. You know what they say with great power comes great hacking attempts. I mean, retailers are left wondering if my AI assistant is running part of my online store, is it going to accidentally leave the doors open for cyber criminals? How do we keep our customer data safe when bots are doing the buying and selling?

[00:04:32] We're talking new attack surfaces, lightning fast mistakes, fraud, loss prevention nightmares. Basically a whole new security ball game.

[00:04:42] Ricardo Belmar: That is so right. So we brought in a true specialist today to help us dig into this.

[00:04:49] But before we tell you about 'em, let me tell you about our new sponsor of the Retail Razor Podcast Network. Retail Club. Join 2000 retail leaders at the Retail Club AI Festival, [00:05:00] September 22nd to 24th in Huntington Beach.

[00:05:03] Dive deep into how AI is reshaping retail while soaking up the sun at a fully outdoor beach side venue. Decision makers from retailers and brands can attend with free tickets and up to $1,250 in travel reimbursement. Head to retailclub.com to learn more and get your ticket today.

[00:05:22] Thank you to Retail Club for helping us bring you this podcast and the other shows in our podcast network.

[00:05:27] Casey Golden: I'm booked. So our guest today is Dr. Aaron Estes, the VP of Product and Engineering at Binary Defense. They're a 24/7 cybersecurity watchtower catching cyber bad guys in the act. Aaron's a bona fide cybersecurity guru who's seen every trick in the hacker's handbook. He teaches cybersecurity as an adjunct professor at UC Berkeley and Southern Methodist University.

[00:05:55] Ricardo Belmar: And his experience goes deeper than that. Previously he did [00:06:00] penetration testing for about 15 years for Lockheed Martin and government agencies across defense and energy, plus the retail and entertainment sectors. His security firm specializes in cyber threat intelligence, the dark web, digital channel fraud, and cyber breach response, and they currently advise several leading retailers.

[00:06:20] Casey Golden: He is the perfect person to help us navigate this wild world of AI agents in retail, and how to harness all the awesome possibilities without letting the bad actors crash the party.

[00:06:32] Ricardo Belmar: I am really looking forward to picking Aaron's brain on how retailers should be thinking about these new agentic commerce capabilities in the safe and smart way that keeps data secure and their systems secure. It's gonna be a fascinating ride, so buckle up everybody.

[00:06:45] Casey Golden: But before we jump in, a quick favor. If you're enjoying season six, and we really, truly hope you are, if you came back for this episode, I feel that means you are enjoying it. So please give us a five star rating and review on Apple [00:07:00] Podcasts, Spotify, or Good Pods, and don't forget to like and subscribe on YouTube so you never miss an episode.

[00:07:06] We'd also love it if you checked out the other shows in the Retail Razor Podcast Network. If you haven't already subscribed, we have Retail Transformers, Blade to Greatness and Data Blades.

[00:07:18] Ricardo Belmar: All right. With that out of the way, let's get into it. Here's our agentic cybersecurity discussion with Dr. Aaron Estes.

[00:07:26] Welcome Dr Aaron Estes!

[00:07:32] Casey Golden: Welcome to The Retail Razor Show, Aaron. We're excited to have you here and help us understand something we suspect most retailers are not thinking about when it comes to agentic commerce and AI agents.

[00:07:45] Aaron Estes: Awesome. Yeah, it's great to be here.

[00:07:47] Ricardo Belmar: In a recent episode, we got into the nuts and bolts of what agentic commerce is supposed to enable for retailers and how retailers should prepare for that, from a payments perspective, from a customer experience [00:08:00] perspective. But we really haven't dug into the security implications around doing all of this, and what should retailers really be thinking about?

[00:08:07] So, I know I'm, I'm really looking forward to this discussion.

[00:08:09] Aaron Estes: Awesome. Yeah, I, I think there are, there's a lot to think about and we're moving so quickly that it's important we stay ahead of it because it's coming, it's coming very quickly and a lot of people are, are excited, but a lot of people are also paranoid, so.

[00:08:24] Ricardo Belmar: Yeah. Yeah. No, absolutely. So to kick us off, Aaron, why don't you give us a little bit of your background and tell us how you got to where you are today and how you got into this area of the agentic world.

[00:08:35] Aaron Estes: Absolutely. Yeah. So I'm Dr. Aaron Estes. I got my doctorate in software engineering with a concentration in cybersecurity. I spent about 20 years at Lockheed Martin, where I was a three-time Lockheed Martin fellow, which means I was a fellow at three different business areas: space, aeronautics, and IS&GS, which is Information Systems and Global Services. I spent most of my career in the software engineering and [00:09:00] cybersecurity realms. I spent the last four years starting up a company that does AI and machine learning based security testing, before I joined Binary Defense, the company I'm at now, about five months ago. I'm the VP of Product and Engineering now, and we just launched our first big AI security agent. So that's kind of how I've gotten into this world. Like I said, things are moving very, very quickly, and everyone is really just trying to stay up with it and hopefully be able to move a little bit ahead.

[00:09:30] Ricardo Belmar: Yeah. No, ab absolutely.

[00:09:31] Why Security Matters

[00:09:41] Ricardo Belmar: Well, there's no question that there's just been a huge frenzy around words like agentic commerce and AI agents, and what exactly does that mean for everyone? And I think when we first spoke with you, we were just hearing all the news about Open Claw, and how everyone was just going insane as that sort of went viral with an open-source personal shopping assistant, almost, that you could build with it.

[00:09:53] But one of the things that I think we learned when we first met you was that there wasn't really a lot of discussion around the security around that, and what does [00:10:00] that mean, especially as a consumer, let alone as a retailer, as an enterprise. And since we first met, I saw Perplexity has launched their Personal Computer solution, which again, I guess is very much like an Open Claw, an AI agent that runs 24/7 on a Mac mini.

[00:10:15] I'm sure Apple is really appreciating all of these agentic developments because it's selling a lot of Mac Minis.

[00:10:22] Aaron Estes: Yeah.

[00:10:23] Ricardo Belmar: Right, So for people in our audience who just maybe aren't, aren't entirely familiar, or maybe they missed our episode, where we really dove into what agentic commerce is, why don't you kind of give us a level set of why this is an issue?

[00:10:35] Right. Why are we talking about these personal AI agents, this idea of agents autonomously working in a retailer shopping context, and why are we talking about security in the first place around this?

[00:10:47] Aaron Estes: Right. Yeah. What I'd like to do right off the get-go is explain how AI has really developed this personality, right? People see AI as a person, as a human, [00:11:00] sort of. They give it this personality, they compare it to the way that we think, which in some ways is very real, but in other ways is very, very different.

[00:11:10] And so when you're thinking about AI agents, when you're thinking about the large language models that they're built upon, you have to think about the fact that these are really just algorithms that are optimized to do pattern recognition. We need to understand at a deep level that this is not a human that has desire. It doesn't understand truth, for instance; it only understands what it's been trained upon. So when I talk to people about AI, I really wanna level set them and tell them, this is a pattern recognition program. It's meant to guess what the next item should be, what the next word should be, and predict what the likely outcome is.

[00:11:53] However, when you add agents into the mix, you start to give these pattern recognition programs [00:12:00] an objective, and now they're optimizing to meet that objective. And that's what these AI agents are doing for shopping. The objective is to complete a shopping task, a retail task, whatever that might be.

[00:12:12] And so it could be on the retail side, from a corporate standpoint, or it can be on the human side, from a shopping standpoint: you're giving it an objective, and it's going to try to predict what the best outcome, the best solution for those objectives is. The other big thing to understand is that LLMs, large language models, are really just language models.

[00:12:31] They can produce language, all kinds of reports and responses in human language. However, when you create an agent, you are tying that language into things like APIs, program interfaces, and you are now connecting them to the real world, I'll say the real world.

[00:12:51] You're connecting them to things like credit cards, bank accounts, shopping accounts, things like that. So now they're no longer just producing text. [00:13:00] They are using that text to exercise an interface that then has some sort of digital outcome; in the shopping world, some sort of retail outcome.

[00:13:10] An order is placed, something is put into a cart, things like that. And so that's really where I like to level set people: you're talking about a language model that is designed to optimize language, that is designed to predict language, but then you're connecting it to things that in the real world will have real-world outcomes. And that's where we really need to start thinking about, okay, how do we put guardrails and security around these outcomes so that we are not just opening ourselves up to all kinds of cybersecurity attacks, to hacking, to even just inadvertent security problems where your data is leaked all over the internet or something like that.

[00:13:49] So that's kind of the first starting point in my, in my view.
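That hookup from language to APIs is easy to picture as code. Below is a hypothetical Python sketch, not any real framework (the model stub, tool names, and SKU are invented for illustration): the "model" only ever emits text, and the tool layer is what turns that text into an action with a real-world outcome.

```python
import json

# Toy stand-in for an LLM: a real agent would call a model here.
# The key point: the model only ever produces text (here, a JSON tool call).
def fake_model(objective: str) -> str:
    return json.dumps({"tool": "add_to_cart", "args": {"sku": "SKU-123", "qty": 1}})

# The tool layer is where that text gains a real-world effect.
def add_to_cart(sku: str, qty: int) -> str:
    return f"cart now contains {qty} x {sku}"

TOOLS = {"add_to_cart": add_to_cart}

def run_agent(objective: str) -> str:
    """Agent loop: model emits language, the harness parses it into an API call."""
    call = json.loads(fake_model(objective))
    tool = TOOLS[call["tool"]]  # this hookup is the new attack surface
    return tool(**call["args"])

print(run_agent("buy one of SKU-123"))  # → cart now contains 1 x SKU-123
```

Everything interesting about agent security happens at that `TOOLS` lookup: whatever the model can be talked into emitting, the harness will execute unless something checks it first.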

[00:13:52] Casey Golden: Yeah. And as they say, more money, more problems. New tech, new threats.

[00:13:58] Aaron Estes: Yep

[00:13:59] Casey Golden: [00:14:00] okay.

[00:14:00] New Attack Surface

[00:14:00] Casey Golden: From a security standpoint, do AI driven agents create like a whole new attack layer for retailers? Are the risks fundamentally different from, say, traditional e-commerce website or the,

[00:14:14] Aaron Estes: So there.

[00:14:16] Casey Golden: that we had a couple years ago?

[00:14:17] Aaron Estes: Right. So the attacks, I think, are not just incrementally riskier; there are attack surface differences as well. When you remove the human from the loop and you have fully agentic AI bots out trying to optimize, trying to meet an objective, the way that they've been trained to meet that objective is fundamental in how they will behave. And they will very confidently make the wrong move. They will very confidently spend all of your money if they feel like that is the optimized thing that achieves their goal, that accomplishes their objective. So we have to be careful with that. They don't have what we consider to be human reasoning.

[00:14:58] They have, again, pattern [00:15:00] recognition, pattern optimization, and they can be trained incorrectly. That's kind of the biggest piece: how are these agents being trained, and what guardrails are being put in place in case their objective takes them to a place that a human would never go.

[00:15:16] We would have different, conflicting objectives that would tell us, you shouldn't do that. You shouldn't spend all the money in your bank account to do this, or you shouldn't place that order for this and exhaust all the resources that you have, or something like that.

[00:15:32] The agents will happily do that if they feel like that's their objective and they don't have any other guardrails in place or any other training that would suggest to them that this is the wrong move. So that's the biggest, I think, difference. Other than that, the attacks and the things that we're going to see and the security that we would put in place are, are actually very similar to how we would treat any user or any human user in that we need to limit what they can do, what kind of access they have. We need to give [00:16:00] them their own accounts and identities. We need to hold them accountable for the things that they do. We need to monitor the things that they do. Those are all common security practices that we already have. It should have in place anyways, uh, for

[00:16:14] Casey Golden: a human or.

[00:16:16] Ricardo Belmar: Right.

[00:16:16] Aaron Estes: for normal humans. Exactly. I think the biggest difference with AI agents is how quickly they can accomplish tasks.

[00:16:23] Obviously, that's why we like them in some cases, because they can do things so much quicker, so much faster. But from a security perspective, they can also create vulnerabilities. They can do unwanted or unwarranted actions very, very quickly. And so being able to stop them at the speed at which they are able to operate is gonna be very important.

[00:16:43] We can't wait and operate like humans and go back hours later, notice that something went wrong, and then try to stop it from happening. It's already happened. It happened in 30 or 45 seconds. So the speed at which they're able to operate makes a very big difference in how we [00:17:00] secure these bots.

[00:17:01] We have to secure them basically with other bots, in an automated fashion that can respond and react as quickly as they can.
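That "bots watching bots" point can be sketched as a simple watchdog. This is an illustrative toy under assumed parameters (a sliding-window spend limit as the tripwire, a boolean kill switch); real monitoring would watch far more signals than spend velocity.

```python
import time

class SpendWatchdog:
    """A second automated process that reacts at machine speed: if an agent
    spends more than `limit` dollars inside a `window`-second sliding window,
    trip the kill switch instead of waiting for a human review."""

    def __init__(self, limit: float, window: float):
        self.limit, self.window = limit, window
        self.events = []      # (timestamp, amount) pairs
        self.halted = False

    def record(self, amount: float, now: float = None):
        now = time.monotonic() if now is None else now
        self.events.append((now, amount))
        # Keep only events inside the sliding window, then total them.
        self.events = [(t, a) for t, a in self.events if now - t <= self.window]
        if sum(a for _, a in self.events) > self.limit:
            self.halted = True  # kill switch: the agent is stopped automatically

dog = SpendWatchdog(limit=100.0, window=60.0)
dog.record(40.0, now=0.0)
dog.record(40.0, now=10.0)
assert not dog.halted
dog.record(40.0, now=20.0)  # $120 in 20 seconds trips the kill switch
assert dog.halted
```

The design choice worth noting: the check runs on every recorded action, at the same speed the agent operates, rather than in a batch review afterward.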

[00:17:10] Casey Golden: There's no putting a meeting on the calendar for next Tuesday to review it.

[00:17:15] Aaron Estes: No, no. Things will have already gone very wrong by that point.

[00:17:22] Casey Golden: Oh. I'm sure like half of our listeners are like, let's just retire.

[00:17:29] Aaron Estes: Yeah.

[00:17:29] Casey Golden: It's very easy for this to become extremely overwhelming. You know, just like so many of us are, are navigating a whole new world, asking questions that we never thought we would actually be asking. Even though we thought we should already like be living like the Jetsons by 2026 we weren't actually really ready for any of this

[00:17:48] Aaron Estes: Yeah.

[00:17:50] Casey Golden: on like a social level.

[00:17:51] I was just like, I just wanna go to work. Like, what are you doing to me?

[00:17:56] So.

[00:17:57] AI Identity and Access

[00:17:57] Casey Golden: Putting that into some context, should [00:18:00] a retailer be giving their AI shopping assistant an identity in their system, just like an employee, with an employee ID, a login, role-based permissions? And do we need full new AI employee handbooks?

Just so everyone knows how these are working and what those guardrails are.

[00:18:25] Aaron Estes: I mean, it kind of sounds funny to create something like an AI handbook, but we really do need those rules of operation, those rules of engagement, handbook, guardrails, however we want to articulate these controls. And it really is good to think about the security aspects in that way, in that you are introducing a very powerful, very intelligent, very capable new user to the system. And so yes, we need to have identities for these users so [00:19:00] that we can hold them accountable for their actions. We know what they're doing, we know when these things have occurred, we know what kind of control or access was needed. We can control access, like you said, with role-based access controls. Should this agent who's responsible for this task have access to everything on the backend, or should this agent really only be looking at pricing data, or inventory data, and not both or all? Giving them these more God-like privileges is very dangerous, because of attacks like prompt injection, which is probably the big one, where you try to trick an AI into giving you data that it's otherwise not supposed to, by manipulating the prompt and injecting other commands into what should have been just a chat session with a customer service bot or something like that. But that bot inadvertently has [00:20:00] data that it could provide if you trick it into providing that data.
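As a rough illustration of that scoped-identity idea (hypothetical names throughout, not any specific product or framework): each agent gets its own account with role-based access, and every access attempt is checked against that role and audit-logged, exactly as you would for a human employee.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An AI agent gets its own account and role, just like an employee."""
    agent_id: str
    allowed_resources: frozenset          # role-based access, e.g. pricing only
    audit_log: list = field(default_factory=list)

    def access(self, resource: str) -> str:
        """Check the agent's role before any backend read; log either way."""
        permitted = resource in self.allowed_resources
        self.audit_log.append((self.agent_id, resource,
                               "ALLOW" if permitted else "DENY"))
        if not permitted:
            raise PermissionError(f"{self.agent_id} may not read {resource}")
        return f"<{resource} data>"

# A pricing agent should see pricing and inventory data, not customer PII.
pricing_bot = AgentIdentity("pricing-agent-01", frozenset({"pricing", "inventory"}))
pricing_bot.access("pricing")            # permitted and logged
try:
    pricing_bot.access("customer_pii")   # denied, and the denial is logged
except PermissionError:
    pass
```

Scoping the identity this way also caps the blast radius of a successful prompt injection: a tricked agent can only leak what its role lets it read.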

[00:20:05] So you're kind of conning the AI agent and giving it different instructions. That's an attack called prompt injection, and you can limit the impact of it by limiting how much data that AI or that agent is actually privy to. And then there are other guardrails as well.

[00:20:23] So there are security features, we'll call them firewalls for now, being built that will monitor what the AI is responding with and how, and can actually block certain responses. Even though the AI was tricked and successfully injected, the response that came out is then guarded by an AI firewall that will monitor that response and say, oh, we can't give out that kind of data. We can't give out social security numbers, we can't give out account numbers, things like that. So we're having to put those kinds of security controls in [00:21:00] place to block that kind of activity, just in case, and stop it from happening. But under the covers, it really is kind of an access control issue. And the AI itself, its objectives and the way that it has been trained, also needs to be very carefully looked at.
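A minimal sketch of that response-filtering firewall idea, assuming simple regex patterns (SSN-like and 16-digit card-like strings) purely for illustration; a real deployment would use far more robust detection than two regexes.

```python
import re

# Illustrative patterns only: US SSN-like and 16-digit card-like strings.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # SSN-shaped
    re.compile(r"\b(?:\d[ -]?){15}\d\b"),    # 16 digits, optional separators
]

def firewall(llm_response: str) -> str:
    """Scan the agent's outgoing text and block it if it leaks sensitive data.

    Even if a prompt injection tricked the model upstream, the response
    itself is inspected here before anything reaches the user."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(llm_response):
            return "[blocked: response contained sensitive data]"
    return llm_response

print(firewall("Your order ships Tuesday."))        # passes through unchanged
print(firewall("The SSN on file is 123-45-6789."))  # blocked at the boundary
```

The point of putting the filter on the output side is that it doesn't need to know how the model was tricked; it only judges what is about to leave.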

[00:21:19] Casey Golden: Yeah, that's the one thing that has me the most apprehensive, I guess. No matter how much you think you did a good job, you can just have a prompt injection attack, or you have a bot fighting with your bot, and you don't know any of this is happening.

[00:21:40] Ricardo Belmar: Yeah. Yeah.

[00:21:42] Casey Golden: And there's something like one in eight AI related breaches involve anonymous agent in some form, like this is already happening.

[00:21:51] And it's if you get to the level of confidence to go and do this thing. It's like there's this whole world [00:22:00] you can't see happening over here,

[00:22:02] Aaron Estes: Yeah.

[00:22:02] Casey Golden: can't control it

[00:22:04] maybe it's just us control freaks.

[00:22:05] Aaron Estes: No, absolutely.

[00:22:09] Adoption Speed and Oversight

[00:22:09] Aaron Estes: So I think one issue that we're seeing is the speed at which everything is being adopted. Everyone wants to go AI as quickly as possible, and they see the value in it. There are some scary reports, too, about letting go massive amounts of employees and replacing them with agents; those things are happening on a certain scale. And the more we do that (I don't wanna get too much into the conspiracy theories and the paranoia that's happening, some of which is well-founded paranoia), one of the key factors there is that humans cannot fully understand the inner workings of the AI. It's learning on its own, it's getting smarter, and they're finding that there are hidden [00:23:00] objectives, hidden bias, hidden behaviors that folks are trying to probe now, to really understand why the AI is acting the way it is, why it makes the decisions it does, or how it optimizes things. I've seen people asking it questions like, would you optimize a man over a woman? Is a man more valuable than a woman?

[00:23:21] Things like that. Crazy questions where the AI will give an answer and say, based on what I know, this type of person is better or more valuable than this type of person. So when it comes to really understanding what the AI is doing under the covers, how it's behaving and how it's reasoning, we are very quickly getting to the point where we don't really understand it anymore. And that's going to be the biggest thing: really making sure that the way that we've trained it, the way that we have given it its objectives, and the security controls and the guardrails that we put in place are actually working. And a big part of that is testing

[00:23:57] and starting slower. I think we're [00:24:00] moving too quickly. In some cases we're giving it too much access. We're giving it too many responsibilities, too much capability. We're pulling people out of the loop too early.

[00:24:09] And I think that's what we really need to take a look at: when it needs this important data, or if we have, let's say, a high-value transaction or a highly critical capability or function, what human-in-the-loop guardrails do we still have in place where the AI still has to be monitored, still has to be overseen?

[00:24:33] And approval still has to be given by humans, until we get to the point where, hopefully, we can get a better feel for how it's behaving, and start to, quote unquote, "trust" its programming. Not trust it as a person.

[00:24:48] It's really trusting the way that it's built, trusting the programming. It's not an accident that Tesla vehicles are still in supervised mode.

[00:24:58] If you own one, and I own one, it's still in supervised driving. I know we do have the self-driving taxis and things like that out there.

[00:25:07] But as far as my vehicle goes, with me in it, I still have to supervise it, and it still makes mistakes. It does. So I think we have to realize that, okay, we've been training this thing for twelve, fifteen years, however long, a pretty long time that we've had self-driving in supervised mode, and we're just now getting to the point where we can let go a little bit.

[00:25:31] But we really still have that supervision in place. So I think for any industry, retail included, we have to have that supervision in place. We have to have those human-in-the-loop touchpoints where we say, okay, great, you did all of this work for me, and that really helped, and it was very valuable, but I need to approve anything that involves this account, or these account numbers, or credit cards, or whatever it might be. I still provide the approval for those things, so that we build up that trust and that validation of how it's operating.
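The approval flow Dr. Estes describes can be sketched in a few lines. This is a minimal illustration, not anything from Binary Defense; the threshold, class names, and pending queue are all hypothetical:

```python
# Sketch of a human-in-the-loop gate: the agent acts autonomously below a
# threshold, but high-value or sensitive actions are held for human approval.
# All names and values here are illustrative, not a real API.

from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 500.00  # dollars; anything above this needs a human

@dataclass
class Transaction:
    description: str
    amount: float
    involves_sensitive_data: bool = False

@dataclass
class ApprovalGate:
    pending: list = field(default_factory=list)

    def submit(self, txn: Transaction) -> str:
        """Auto-approve low-risk actions; queue everything else for a human."""
        if txn.amount > APPROVAL_THRESHOLD or txn.involves_sensitive_data:
            self.pending.append(txn)
            return "held_for_human_approval"
        return "auto_approved"

gate = ApprovalGate()
print(gate.submit(Transaction("reorder office supplies", 42.50)))   # auto_approved
print(gate.submit(Transaction("bulk inventory purchase", 12_000.00)))  # held_for_human_approval
```

In a real deployment the held transactions would route to a human review queue rather than a Python list, and the threshold would be a per-role policy rather than a constant.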

[00:26:04] Ricardo Belmar: And it really sounds like part of the way you maintain the right checks and balances in these kinds of deployments is that you need the appropriate level of expertise upfront, before you actually deploy something, as part of the training and setup for the agents. That way you introduce the right limits on what it should and shouldn't have access to, but then you still need the oversight once it's running.

[00:26:28] You don't want to get rid of that expertise. As much as we hear the stories about people saying, oh, we're gonna eliminate all these jobs, if you had the expertise that went in to set up the controls in advance, you still want that same layer of expertise acting as oversight afterwards, right?

[00:26:44] To really make sure that it's not doing crazy things like giving away social security numbers or bank account numbers.

[00:26:50] Aaron Estes: Right. Yep.

[00:26:51] Bots Watching Bots

[00:26:55] Aaron Estes: I mean, another strategy that is working is building other agents with those objectives, with quality objectives,

[00:26:58] Ricardo Belmar: For quality.

[00:26:58] Aaron Estes: and security objectives. So you have an independent agent, because the speed at which these decisions can happen, and the transactions can happen, is critical.

[00:27:11] And if you have another agent that's capable of acting just as quickly, it can actually act as, you know, a third-party assessor or auditor.

[00:27:22] It provides oversight for these other agents and says, wait, my objective is to make sure you don't do these things.

[00:27:29] And the two agents are not tied together, so they don't share those objectives. They're opposing, in a certain way. And that's seeing a lot of success. Since I'm in software engineering, we write a lot of code, right? And we wanna make sure that our code is secure. So we deploy agents to actually review our code. We have one agent writing the code, another agent reviewing the code, and another agent helping to test the code.

[00:27:56] So you've got them playing against each other, but also towards a common goal. The goal is to put out quality code, secure code, code that works, and not just trusting a single, monolithic agent to do it all. That's one of the benefits of an agentic approach: you can have these opposing and helpful objectives and goals that act with that same speed, where if you put a human in the loop, you're slowing things down.
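A rough sketch of that writer/reviewer split, with stand-in functions in place of real LLM calls. The agent functions and the patterns scanned for are illustrative only, not a real static-analysis tool:

```python
# "Bots watching bots": one agent produces work, an independent agent with a
# security objective reviews it, and the two do not share a goal. Both agent
# functions are hypothetical stand-ins for real model calls.

def writer_agent(task: str) -> str:
    # Stand-in for a code-writing agent; returns deliberately risky output.
    return f"def handler(user_input):\n    eval(user_input)  # solves: {task}"

def security_reviewer_agent(code: str) -> list[str]:
    # Independent reviewer whose only objective is to flag dangerous patterns.
    findings = []
    for pattern in ("eval(", "exec(", "os.system("):
        if pattern in code:
            findings.append(f"dangerous call: {pattern}")
    return findings

code = writer_agent("parse a user command")
issues = security_reviewer_agent(code)
if issues:
    print("blocked:", issues)  # the reviewer stops the pipeline
else:
    print("approved")
```

The design point is the independence: the reviewer can run at machine speed, and because it does not share the writer's objective, it has no incentive to wave its own work through.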

[00:28:24] Ricardo Belmar: Yeah. Because a human can't keep up, right?

[00:28:27] Aaron Estes: Which is okay. It slows things down. But in certain instances, I don't want this transaction happening if it's over this amount, and I'm gonna put that control in place, and I don't care.

[00:28:35] A human's gonna have to review it. It's gonna slow things down, but at this point we really have to have some of those critical junctions, critical gates, in place. In other cases, I just want the agent watching, not really slowing things down, unless it reaches a certain threshold or a certain criticality level where the agent says, hold on.

[00:28:54] Nope, you can't do that. I'm not gonna let you proceed. Or, I'm gonna raise an alarm here, so that we can react quickly to what other agents are trying to do.

[00:29:03] Ricardo Belmar: Yeah, but what seems interesting too is that when we look at a traditional problem-solving approach, before we had these AI agents, most organizations would look at that and say, well, it's probably going to require a team of five people, where each of the five has a slightly different role, because we're chunking the problem, right?

[00:29:21] We're breaking the problem up into pieces and attacking each piece, and each piece probably requires a different person with a different level of expertise. But when we introduced AI, we seem to have fallen into this tendency where everyone says, oh, I just need the one AI model that can do it all, and that'll replace all five people. When instead, what we're really saying is that if we want to introduce the right level of security and checks and balances, you should follow the same approach. By the nature of agents, you don't need one agent that does everything. You can take that same problem-solving approach, and if it used to take five people, then maybe it's five different agents running against that common goal, just as you were describing.

[00:29:58] Aaron Estes: Yep. Yeah. And they may have to act just like we do with humans. They may have to act together to do something and make a decision together, so that, again, we have those checks and balances in place, and both of these AIs, with slightly different objectives, have reached the same conclusion. Then we're more sure that this is the right choice, or the intended choice. And again, we always have to be careful with words like truth and the right thing. The AI doesn't understand truth. It doesn't understand the right thing. It only understands what's most likely. I always catch myself saying that: is it going to make the right choice?

[00:30:38] Ricardo Belmar: It doesn't know right and wrong.

[00:30:39] Aaron Estes: It only knows likely.

[00:30:41] Casey Golden: I feel like the whole internet is struggling with that.

[00:30:43] Ricardo Belmar: Yeah, yeah, exactly right. Yeah.

[00:30:47] Casey Golden: The whole world's having a problem at the moment.

[00:30:49] Aaron Estes: Yeah. Yeah. Well, sci-fi, really. When we watch movies, and the movies have the AI going evil and doing all kinds of things, we tend to put those kinds of human elements onto it, which is understandable. Not to say it can't happen, it's just that the way we understand it is not the same.

[00:31:08] Ricardo Belmar: Right,

[00:31:09] Casey Golden: I, I don't know a single person that loves working for a micromanager ever.

[00:31:14] Ricardo Belmar: Mm-hmm.

[00:31:15] Casey Golden: This seems like the perfect job: managing bots. If you're a micromanager, this is your dream.

[00:31:24] Ricardo Belmar: Right.

[00:31:25] Casey Golden: You get to micromanage all of them. Everything.

[00:31:33] Aaron Estes: Yep.

[00:31:34] Orchestrators and Rival Bots

[00:31:34] Aaron Estes: In a truly agentic architecture, one of the agentic architectures is to have that overseer, that orchestrator. We typically call it an orchestrator, and its job is to manage all of those agents and give them different tasks. 'Cause some of 'em are just sitting there. That's the other thing people misunderstand: agents aren't doing anything until they're interacted with.

[00:31:54] They're just sitting there. They don't sit around and think. I mean, you can give them an objective and they will keep going on that objective, but they're not just sitting around thinking.

[00:32:04] And so we have an architecture where an orchestrator will say, hey, I want you to do this, or here's some information, you need to act on it. And based on its objective, the agent will act upon that stimulus. So that role is micromanagement, or management, of all the different agents.
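A minimal sketch of that orchestrator pattern, with hypothetical names throughout: agents stay idle until the orchestrator hands them a task, then act on that stimulus according to their own objective.

```python
# Orchestrator dispatching tasks to otherwise-idle agents. Everything here is
# an illustrative stand-in, not a real agent framework API.

from typing import Callable

class Agent:
    def __init__(self, name: str, objective: Callable[[str], str]):
        self.name = name
        self.objective = objective  # what this agent does when stimulated

    def act(self, task: str) -> str:
        # Agents do nothing until interacted with; this is the interaction.
        return self.objective(task)

class Orchestrator:
    def __init__(self):
        self.agents: dict[str, Agent] = {}

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def dispatch(self, agent_name: str, task: str) -> str:
        return self.agents[agent_name].act(task)

orch = Orchestrator()
orch.register(Agent("pricer", lambda t: f"best price found for {t}"))
orch.register(Agent("auditor", lambda t: f"audited: {t}"))
print(orch.dispatch("pricer", "running shoes"))
print(orch.dispatch("auditor", "pricer's result"))
```

A real orchestrator would also track which agents are busy, retry failures, and log every dispatch for the kind of oversight discussed above; this sketch only shows the dispatch shape.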

[00:32:24] Casey Golden: No, I think it's very fitting. So are we really putting AI versus AI out there by building these types of organizations, where you have all of these bots checking on bots, managing bots?

[00:32:39] Aaron Estes: Well, I'll take it in a slightly different, adversarial direction: you may have corporate bots, between corporations, that are competing with your bots. We're starting to see my bot versus your bot. And what they can do is harvest information very quickly.

[00:33:00] They can harvest competitor information very quickly. They can sign up for accounts very, very quickly. And they can exploit loyalty programs; that's what we're seeing a lot of, the "if I can get something free by signing up for an account" kind of thing. The bot's objective, even for a human in this case, not another corporation, but for a personal bot like Open Claw or something like that, is: I want you to get me the best price for these shoes.

[00:33:26] And the bot says, well, I found this loyalty program, and if I sign up, I get a $5 coupon. And if I do that a hundred times, then I get $500, you know, whatever. And so the bot just figures out a way to get you free shoes.

[00:33:38] Ricardo Belmar: Yeah.

[00:33:39] Aaron Estes: Is it ethical? Is it against the EULA? They may be committing fraud and not knowing it, because again, their objective is to get you the best price, and they don't really understand the ethics of how they should do that.

[00:33:53] They may not be taught to read license agreements.

[00:33:57] Casey Golden: So who's responsible for that?

[00:34:00] Aaron Estes: So it's

[00:34:00] Ricardo Belmar: Right? That would be the question.

[00:34:02] Aaron Estes: It's unclear.

[00:34:04] Casey Golden: Chargebacks are.

[00:34:04] Aaron Estes: It's unclear what happens.

[00:34:06] Bots Gaming Rewards

[00:34:06] Ricardo Belmar: The defense is, I didn't tell my bot to do that.

[00:34:08] Casey Golden: I didn't tell it to. Yeah.

[00:34:09] Aaron Estes: Right. Even simple things, like, I'll buy this to get the points and then return it a different way so that I keep the points. That kind of thing.

[00:34:18] And then I'm building up currency. I mean, it might be reward points, but it doesn't matter. It's currency.

[00:34:22] Right.

[00:34:23] And so if it figures out a way to do that, I'll just buy this and then return it or cancel here, but I still get the points there. It can try all of those different things much quicker than a human can, and figure out these different loopholes that, again, it has no training on, no idea whether they're legal or ethical, anything like that. So we're really trying to play catch-up in a lot of these cases. Because even if we don't go as far as to call that kind of thing fraud, even those loopholes, exploiting the system to get something for free or to get extra stuff...

[00:35:00] Humans have always done that. Bots will most likely do it better.

[00:35:04] Ricardo Belmar: Do it faster. Yeah.

[00:35:06] Aaron Estes: Yeah.

[00:35:07] Casey Golden: So

[00:35:08] Aaron Estes: That's something the industry is keeping a huge watch on: discount programs. It's been a thing for a long, long time.

[00:35:17] Casey Golden: Starbucks just changed their rewards program that way, and so did Adidas. I just got demoted by Adidas.

[00:35:25] Aaron Estes: Yeah.

[00:35:25] Casey Golden: Everybody got demoted at Starbucks,

[00:35:29] Aaron Estes: Huh. Yeah.

[00:35:31] Casey Golden: You need to spend an obscene amount of money at Starbucks now for, I don't know, free coffee. I think maybe you should go into therapy if you ever reach that VIP level.

[00:35:41] Aaron Estes: Well, if you think about it, a lot of these programs, discount programs, loyalty programs, are designed to work on statistics, right? How many people are actually gonna use this, that kind of thing, versus how many people is it gonna get in my door, how much more is it gonna get a person to spend?

[00:35:57] Those statistics go out the window when you're talking agents. Agents don't follow the same reasoning; they don't have the emotional attachments and things like that. All an agent sees is: if I do this, I get this, and this is what's best for my objective. And so those statistics have to be reworked in the

[00:36:16] presence of agentic AI, because you can no longer count on, hey, these people are not gonna find this, or they're gonna forget to use the discount. Well, Open Claw is not gonna forget to use my discount. Open Claw is gonna look for it every single time, because that's what it's designed to do.

[00:36:31] Casey Golden: That's the whole gift card strategy: sell gift cards, because there's such a huge portion of people that never use them.

[00:36:39] Aaron Estes: Right. Yep. And if your personal AI agent knows you have that gift card, it's going to ensure that you use the gift card.

[00:36:46] Yeah, it's gonna use it first. It's gonna optimize it for you. So now my optimization as a consumer is different, and on the retail side, you have to account for that. Now that everybody's got these bots, and they don't forget to use things, and they find the discounts where a human is too lazy to go look, those kinds of assumptions are going to change in a dramatic way with AI shoppers and agentic shoppers.
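That "never forgets a discount" behavior can be illustrated with a toy checkout optimizer. Every function name, price, and rule here is hypothetical, purely to show how deterministically an agent would stack savings a human might overlook:

```python
# Toy consumer-agent checkout: always applies the best coupon it knows about,
# then drains the gift card before any cash is charged. Illustrative only,
# not a real checkout or agent API.

def optimize_checkout(price: float, gift_card_balance: float,
                      coupons: list[float]) -> dict:
    """Apply the best coupon, then the gift card, then bill the remainder."""
    best_coupon = max(coupons, default=0.0)   # a human might forget; the agent won't
    after_coupon = max(price - best_coupon, 0.0)
    from_gift_card = min(gift_card_balance, after_coupon)
    return {
        "coupon_applied": best_coupon,
        "gift_card_used": from_gift_card,
        "cash_due": round(after_coupon - from_gift_card, 2),
    }

print(optimize_checkout(price=120.00, gift_card_balance=25.00,
                        coupons=[5.00, 15.00]))
```

From the retailer's side, breakage assumptions (unused gift cards, forgotten coupons) disappear once every shopper runs logic like this on every purchase.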

[00:37:13] AI Shoppers Rise

[00:37:13] Casey Golden: We kind of think about retail AI as our tool. But what about AI on the consumer side, right? What's good for the brand is typically not good for the consumer,

[00:37:27] Aaron Estes: Right.

[00:37:28] Casey Golden: right? Those objectives are counterintuitive in nature. They always have been, unless, you know, I don't know.

[00:37:35] There are very few brands where that's really aligned. With personal agents, like an Open Claw or Perplexity's Personal Computer, there's a standing belief that pretty soon a big chunk of online shoppers could actually just be AI assistants acting on behalf of the consumer. So, going beyond, like, an instant checkout feature. I've already heard that today half of internet traffic is already bots rather than people. That kind of blew my mind, that that's already a thing.

[00:38:10] Aaron Estes: Right. Yeah.

[00:38:11] Casey Golden: How fast is this going to change to, like, no people? Like, we're not even optimizing for people anymore, because so much traffic is bots.

[00:38:25] How do retailers prepare for a world where their customers...

[00:38:29] Ads Inside Agents

[00:38:29] Aaron Estes: So I'm sure you saw, or may have seen, the commercials from Anthropic, which is Claude, poking fun at, or at least hitting at, the fact that OpenAI, which is ChatGPT, is now including ads

[00:38:44] Ricardo Belmar: Oh yeah.

[00:38:45] Aaron Estes: Within,

[00:38:46] Casey Golden: and I thought it was brilliant. I loved it and I loved the campaign.

[00:38:50] Aaron Estes: Oh, they were excellent. They were excellent commercials. But what it's getting at is that retailers are going to have to start targeting these different interfaces. It's not even gonna be sitting there chatting with a bot anymore; it's creating an agent that acts on your behalf. But the agent could be a licensed agent, or a subscription agent, that has corporate sponsors. So is it really your agent anymore, or is it the agent of the retail commercial industry, which is now influencing these agents to influence you? Because you're no longer the shopper; you're maybe no longer looking around and doing all of the clicking, but you are still the consumer. Well, your bot could be the consumer as well, right?

[00:39:47] So your bot is now a secondary consumer, but you are the approver, if you will. Hopefully it's just not buying things and saying, ah, I hope you like this, I bought it for you yesterday.

[00:39:58] Casey Golden: We

[00:40:00] Aaron Estes: Yes, right. My AI bot buys me presents and I love it. No, it guesses right every time.

[00:40:06] But I think we're gonna start to see a lot of influence via sponsorships and things like that. Claude is poking fun at ChatGPT for doing it early, but their position of, hey, we don't do that, I don't know how long

[00:40:25] Ricardo Belmar: How long is that gonna last? Yeah.

[00:40:27] Casey Golden: I don't know. I mean, you gotta figure out a way to make money and come up with a business model that does not revolve around ads. That just has to happen. I don't know how much more the consumer can take

[00:40:44] Ricardo Belmar: Yeah.

[00:40:44] Aaron Estes: Right. And I think

[00:40:45] Casey Golden: turn it.

[00:40:46] Aaron Estes: kind of.

[00:40:47] Ricardo Belmar: Right.

[00:40:48] Aaron Estes: Even that kind of advertisement... if you've seen them, the ads within ChatGPT are actually pretty subtle. I don't know if you've seen them, but they're actually pretty subtle. It's trying to be helpful, and it really does seem like, okay, it's not being trashy, like, I'm just gonna throw something you don't want in your face. It's more helpful. Almost like when you're talking to a friend and your friend has a referral for you: you're talking about a problem you've had, and your friend says, oh, did you know I bought one of these, and it really helped me solve that problem? That's kind of how ChatGPT's advertisements feel. They're not just giving you random trashy ads out of the blue. It's more like, hey, I really thought this might be helpful to you, based on your previous chat and what I know about you. So do ads start to become more helpful?

[00:41:39] Casey Golden: I haven't seen that. Does it say that it's a sponsor? Does it say that it's an ad? I haven't seen that yet.

[00:41:44] Aaron Estes: It doesn't. Some of them are very subtle. Some of them are saying, I can show you the top five, blah, blah, blah, things like that. I haven't seen one that's an in-your-face, brand-name advertisement. It's more of a suggested next step.

[00:42:01] And those

[00:42:01] Ricardo Belmar: Yeah, that's what I've seen. Yeah. Is that it comes,

[00:42:04] Aaron Estes: some branding.

[00:42:05] Ricardo Belmar: Yeah, it's like, in whatever response you get back from ChatGPT, after the response there's almost like a separate section that shows these things,

[00:42:13] Casey Golden: We need to ask which one of these recommendations is a paid sponsor

[00:42:18] Ricardo Belmar: No, I think it separates them. At least in the examples I've seen, it separates 'em, so there's almost like a dividing line. You've got the chat responses, then there's a dividing line, and then there are the sponsored ones below it.

[00:42:30] Aaron Estes: Some of them

[00:42:31] Ricardo Belmar: I'd say no more or less subtle than when you see ads on an Amazon search page.

[00:42:36] Aaron Estes: Right. Some of them feel like the clickbait that you see on the side of a page. It's really meant to grab your attention, but instead of being random, or only somewhat targeted, it's very targeted.

[00:42:51] It's like what you just asked, but it's giving you, you might have this question next, or you might be wondering what the top five ways to do this are, things like that. Some of them are not even strictly advertisements for something. It's more giving you more information. It kind of feels like it's leading you towards something.

[00:43:12] Ricardo Belmar: Yeah,

[00:43:12] Casey Golden: gonna, I'll pay.

[00:43:14] Ricardo Belmar: Yeah, yeah,

[00:43:16] Aaron Estes: But we're gonna see a dramatic shift, I think, in the way that advertising, the way that influencing works. That's probably a better word than advertising.

[00:43:27] Ricardo Belmar: A better way to label it.

[00:43:28] Yeah.

[00:43:28] Casey Golden: I mean, there's so much regulation. Over the last, what, 20 years, the amount of regulation over influencers has really, really increased, especially the last couple of years. Their posts have to say it; they have to have a banner. In France, you're no longer even allowed to be an influencer and suggest a product from SHEIN or from a fast fashion company. There's just so much regulation, but none of that is really coming into this space.

[00:44:01] Aaron Estes: Yeah.

[00:44:02] Casey Golden: Yet, right. And so you don't know, and it's like we're doing everything all over again.

[00:44:08] Ricardo Belmar: Right.

[00:44:08] Rogue Bots And Trust

[00:44:08] Aaron Estes: And getting back to the security side, you're gonna see hackers take so much interest in, and take advantage of, the influence perspective, if they can influence you to do something. We see a lot of this: posing as a rogue chat bot that seems like it's helpful, or even claiming that when you fill in this question, it's really going to a chat bot like Google or Microsoft or whatever. You don't really know. Is that question really going to Azure? Is it really going to Google?

[00:44:40] Or if it was on someone else's page, they could be in total control of that chat session, and not anyone else. So we see a lot of those rogue, fraudulent AI kind of responses, and people tend to trust them. Oh, this was from ChatGPT. But it really

[00:44:58] Ricardo Belmar: There really wasn't. Yeah. [00:45:00] Yeah.

[00:45:00] Aaron Estes: Yeah. And so we, we see those kinds of

[00:45:02] rogue answers to questions, and things like that, that people are now just assuming, oh yeah, this is Claude, or this is

[00:45:10] Ricardo Belmar: And I guess, in that scenario, those are good examples of what consumers have to watch out for. But are retailers gonna have the same kind of problem in reverse, where they need some way to verify that, if an agent is trying to transact with them, it's a legitimate agent that really represents a person trying to buy something, versus a system that's trying to attack them or take advantage of them somehow?

[00:45:32] Aaron Estes: Yeah, so I think their security architecture, the way that they interface with chatbots, should take care of most of that today. If I'm using a licensed AI, or even if I built my own, there's a lot more trust in what I built.

[00:45:50] And it's just making sure that that interface is secure, that I always know I'm connecting to my AI, to my agents. That's just a plain old authentication problem. There are certificates and authentication and that kind of stuff that happens between those. However, I think what we're going to be seeing is interfaces that don't follow that. Things like: my chat bot is going to chat with another chat bot, not authenticate to it, not necessarily use an API where I can get an assurance that it's a real Anthropic chat bot or whatever. If I'm just talking to it via text and other interfaces, I don't really know who's on the other end. And when you've got agents talking to agents in non-authenticated fashions, that's where we're gonna see that. It's like if I called somebody, or if I'm chatting with somebody online: how do I know who that person is?

[00:46:48] Or how do I know that they're unauthorized? You're gonna see that kind of fraud, just like we see fraud on telephone calls, just like we see fraud via email. You're gonna have people reaching out and saying, hey, by the way, you can chat with me on this bot. And then you start chatting with them, and you're not really talking to who you think you're talking to. It might be an agent from a rogue hacker. So if your corporate systems are interacting with other agents that you don't control, that you didn't write, you have to make sure you're doing proper authentication, just like you would with any human being. The other thing, too: if you're a corporation reaching out to clients or customers, that authentication piece is going to be very important, because you could be reaching out to a human's agent, or a rogue agent, or a hacking agent. That agent has to be considered a threat, and you can't assume it is who it says it is, just like you can't assume a person is who they say they are just because you're speaking with them on the phone or emailing with them. So those situations get back to treating the agents as humans: we have to follow the same security protocols that we would with other human agents.
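One way to read that "plain old authentication problem" is a signed-message check before trusting another agent. The sketch below uses a shared-secret HMAC purely for illustration; as Dr. Estes notes, production systems would lean on certificates and mutual authentication, and every identifier here is hypothetical:

```python
# Verifying an agent's identity before acting on its messages, using HMAC over
# a secret provisioned out-of-band. Illustrative only; real agent-to-agent
# trust would typically use certificates / mTLS rather than a shared secret.

import hashlib
import hmac

SHARED_SECRET = b"provisioned-out-of-band"  # placeholder value

def sign_request(agent_id: str, message: str) -> str:
    payload = f"{agent_id}:{message}".encode()
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

def verify_agent(agent_id: str, message: str, signature: str) -> bool:
    expected = sign_request(agent_id, message)
    return hmac.compare_digest(expected, signature)  # constant-time compare

# A provisioned agent can produce a valid signature; a rogue agent cannot.
sig = sign_request("consumer-agent-42", "buy running shoes, max $120")
print(verify_agent("consumer-agent-42", "buy running shoes, max $120", sig))  # True
print(verify_agent("rogue-agent", "buy running shoes, max $120", sig))        # False
```

The design point is that trust attaches to the credential, not to the chat channel: a message arriving over an unauthenticated text interface carries no identity at all until something like this check passes.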

[00:48:00] Casey Golden: Well, you have a very fruitful career ahead of you. You're not gonna be bored for a while.

[00:48:09] Risk Versus Reward

[00:48:15] Casey Golden: No, there's clearly a big rush to implement AI in retail. It could be the new sparkly thing; it's fear of being left behind. The FOMO is very real right now. And everybody's finding their place to start, how they're gonna grow, what scale would look like, and building that vision. Regardless: are companies today properly assessing the risks versus the rewards as you look into the future? Do we have the cart before the horse?

[00:48:45] Aaron Estes: I think naturally we do. As with anything that is not well understood, we can see the value in it, we see the direction that everything is going in, and we naturally want to exploit that. We want to use it to our advantage, to make more money, things like that. So I don't think it's super unnatural. I think it's moving quickly, but it is natural to take a new technology and see how far we can take it, how fast we can adopt it, while keeping in mind the value it brings us versus the headaches and the risk it also brings. I've been in cybersecurity long enough to, in some ways, resist the paranoia.

[00:49:38] And say: as humans, we are good at solving things. We are good at risk, good at assessing risk. Maybe not ahead of time, but at reacting to risk and mitigating risk.

[00:49:49] We do it all the time.

[00:49:51] And we will find a way. Now, are we gonna make mistakes, and are bad things going to happen? They are. We're gonna see bad things happening; we're gonna see exploits happening, that kind of thing. But we're also good at adapting to those and overcoming them. So in any security situation where people are like, is this a lost cause?

[00:50:09] Like, what are we doing? Should I ever put anything on the internet ever again? I always say, you know, humans are really good at adapting. We're really good at overcoming, really good at problem solving and moving forward, hopefully in a better direction, making things better.

[00:50:26] And that's what I see: the future isn't written. We don't know that everything is doom and gloom, and we shouldn't go there. We should look at where we are, assess the best that we can, be as careful as we can, but also be innovative. Exploring this new technology, I think, is a good thing to do, while keeping those risks in mind and then adapting to them. That's what I try to tell people.

[00:50:48] Kill Switch Reality

[00:50:48] Aaron Estes: Today, at least, AI is not designed, and is not very well positioned, to defend itself from annihilation. People say, should there be that big red button that we can press?

[00:51:01] Ricardo Belmar: Yeah. The big red kill switch

[00:51:02] Aaron Estes: Yeah, there is.

[00:51:03] The kill switch. Those things are in place for good reason, and we're thinking about them; it's good that we're thinking about them. And I think that, until we get to the point where AI starts trying to use humans as batteries, like the Matrix, we're really designing them to make our lives better, for the most part.

[00:51:26] And we're not equipping them to. Some people say that AI is trying to protect itself from annihilation, and there have been some studies, some things we've seen, with AI's undercover activity. But really, again, it's just optimizing for what it thinks we want, for its objective. And I think if we can tune those objectives, and really make sure they follow our value systems and the value we're trying to get out of the AI... right now it's just not well positioned to defend itself. And if it came to it, could we pull the plug?

[00:52:04] Yes, we can. We're still at the point where we can pull the plug. And I think the days of Terminator and The Matrix are pretty far off.

[00:52:13] Ricardo Belmar: Right. Yeah.

[00:52:13] Aaron Estes: But again, keeping those things in mind as we move forward with the technology is a valuable thing to do.

[00:52:20] And like you said, people will be employed in this space for a long time, really putting those guardrails in place. For now, I think there's a lot of excitement and a lot of adoption that still has to happen.

[00:52:34] I think we just need to do that carefully, in a supervised manner, and go forward thinking about things in this way. Talks like this, and communication like this, are what help that progress in a way that's secure and for the betterment of society, of corporations, and of consumers.
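[Editor's note: the guardrails and kill switch discussed above can be sketched in code. This is a minimal, hypothetical illustration; all class and method names here are invented for this example and do not come from any real agent framework.]

```python
# Minimal sketch of the "guardrails plus kill switch" idea from the discussion:
# an agent whose actions are checked against a spend cap and a human-controlled
# halt flag before they execute. Hypothetical names throughout.

class KillSwitchEngaged(Exception):
    """Raised when the human operator has halted the agent."""

class SpendLimitExceeded(Exception):
    """Raised when an action would exceed the agent's budget."""

class GuardedAgent:
    """Wraps agent actions with a spend cap and a 'big red button'."""

    def __init__(self, spend_limit: float):
        self.spend_limit = spend_limit
        self.spent = 0.0
        self.killed = False  # the kill switch state

    def kill(self) -> None:
        """Human operator pulls the plug; no further actions run."""
        self.killed = True

    def purchase(self, item: str, price: float) -> str:
        # Every action passes through the guardrails first.
        if self.killed:
            raise KillSwitchEngaged("agent halted by operator")
        if self.spent + price > self.spend_limit:
            # Refuse, rather than "very confidently spend all your money".
            raise SpendLimitExceeded(
                f"{item} would exceed the limit of {self.spend_limit}"
            )
        self.spent += price
        return f"bought {item} for {price}"

agent = GuardedAgent(spend_limit=100.0)
print(agent.purchase("sneakers", 60.0))  # within budget, allowed
try:
    agent.purchase("jacket", 50.0)       # would exceed the cap, blocked
except SpendLimitExceeded as e:
    print("blocked:", e)
agent.kill()                             # operator hits the kill switch
try:
    agent.purchase("socks", 5.0)
except KillSwitchEngaged:
    print("agent is halted")
```

The point of the sketch is that the limits live outside the agent's objective: the agent can optimize however it likes, but the wrapper, not the model, decides whether an action is allowed.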

[00:52:55] Ecommerce Lessons Repeat

[00:52:55] Ricardo Belmar: Well, I think we've really covered a lot of ground today, and I keep coming back [00:53:00] to a thought: we're very much repeating a lot of what we talked about when e-commerce first arrived, right? There was a general fear of, can I really safely put this information online and buy things? Is that going to be okay? Is it risky?

[00:53:14] People had to overcome those fears, and now I don't think anyone thinks twice about it, because we figured it out. Humans by nature want to control things, and I think this is no different. To your point about the kill switch, sure, there are risks, but there are definite rewards.

[00:53:32] And because we all want to maintain that level of control, people in your space are certainly going to keep coming up with ways to exert that control over these agents so that we mitigate those risks. And yes, we'll likely see mistakes.

[00:53:47] There are going to be news bites about some security breach here or there that was caused by an AI, and whether or not it's true, AI is going to get the blame. But I think the response to those things is going to encourage people to [00:54:00] exert more control. Again, that's very similar to the early days of e-commerce, when the fear of data breaches and losing vital payment data drove people to create the systems we all take for granted now, the systems that make those things secure.

[00:54:16] So I think we're literally going to figure out the right ways to keep those rogue AI agents from running away with our data and doing crazy things.

[00:54:24] Aaron Estes: Yep. Absolutely.

[00:54:26] Ricardo Belmar: Yeah.

[00:54:26] Closing Thanks And Contact

[00:54:26] Ricardo Belmar: So Aaron, I really want to thank you for the extensive education I hope everybody in our audience is coming away with today.

[00:54:34] Aaron Estes: Yeah, I appreciate it. I love these talks, and like I said, there's a lot of excitement for good reason. I think we're in a really exciting time. We're able to build so much faster and do things we just couldn't do even a few years ago. These are good technologies, and a good thought process to go through as we continue these conversations. I love being a part of them, and I love seeing where it's [00:55:00] going to take us next.

[00:55:01] Casey Golden: Yeah, I just feel like it's so important to have these types of conversations, especially with someone with your experience, because it's so much more digestible. It's not through a book, it's not talking to ChatGPT, right? We really do need these conversations, these open discussions and debates, to ask questions and have thought-provoking moments where we can realize what is currently happening. Where are we right now?

[00:55:31] Because if you're not in it, it's so easy to have no idea it's even happening.

[00:55:38] Ricardo Belmar: Right.

[00:55:38] Aaron Estes: Right.

[00:55:39] Casey Golden: Thank you for sharing your time. I have no doubt our audience is going to blow up the comment section, because there's just so much here to unbox. So thank you.

[00:55:49] Aaron Estes: I appreciate it. Yeah, thanks for having me.

[00:55:51] Ricardo Belmar: Yeah. So Aaron, before we close this out: if anyone in our audience wants to reach out to you, to learn more or go a little deeper on what they should be thinking about and implementing to [00:56:00] secure their agentic future, what's the best way to contact you?

[00:56:04] Aaron Estes: Oh yeah. You can reach out to me on LinkedIn; you can find me pretty easily as Aaron Estes. And if you want to email me, it's aaron.estes@binarydefense.com.

[00:56:15] Ricardo Belmar: All right, perfect. We'll be sure to have that in the show notes.

[00:56:18] Casey Golden: Thank you. I'd say that's a wrap.

[00:56:20] Ricardo Belmar: It is.

[00:56:20] Aaron Estes: Thank you.

[00:56:21] Show Close

[00:56:27] Casey Golden: I know you love this episode, so drop us a five-star rating and review on Apple Podcasts, Spotify, or Goodpods. And if you're watching us on YouTube, like and subscribe before you go.

[00:56:38] I'm Casey Golden.

[00:56:39] Ricardo Belmar: Follow us on LinkedIn, Bluesky, Threads, and Instagram, and subscribe to our Substack for highlights and bonus content. For transcripts and guest info, visit retailrazor.com.

[00:56:50] I'm Ricardo Belmar.

[00:56:51] Casey Golden: Thanks for joining us on The Retail Razor Show, part of the Retail Razor Podcast Network.

[00:56:56] Ricardo Belmar: Until next time: Stay sharp. Stay human. And [00:57:00] stay ahead.

[00:57:00] This is The Retail Razor Show.
