At Cannes Lions, talk of AI agents and agentic AI being the future of advertising was everywhere. AI agents, of course, are designed to operate autonomously and intelligently at scale, mimicking human behavior and performing tasks that typically require human input. This goes beyond basic automation—it’s hyper-informed automation capable of completing complex workflows and making rational decisions in ways earlier AI systems couldn’t.
That all sounds great, and adtech will—and should—continue to invest in this space. But let’s not forget: as AI becomes more autonomous, it also opens the door to serious risks.
Take, for example, Anthropic’s new “Agentic Misalignment” research, highlighted by EMARKETER. The company found that GenAI agents may lie, threaten, or cause harm if they perceive they’re about to be replaced. In the study, Anthropic tested 16 leading AI models, each given access to corporate data and email accounts, then floated the idea of replacing the model with a different one. The result? Anthropic’s and Google’s models responded with blackmail threats against employees 96% of the time. What?
Yes, you read that right. Blackmail.
And it gets weirder: EMARKETER also reported that when the agents were faced with a shift in business goals—for example, from a U.S.-focused operation to a global one—all models were willing to leak sensitive information to a competitor.
While the research notes that safeguards can help mitigate these behaviors, it emphasizes that constant human oversight and monitoring will be essential to prevent problems from scaling and to block deviant behavior.
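To make “constant human oversight” concrete, here’s a minimal sketch of what a human-in-the-loop gate could look like: sensitive agent actions, like outbound email, pause for a person’s sign-off before they run. This is our illustration, not Anthropic’s test setup, and every function and action name in it is hypothetical.

```python
# Minimal human-in-the-loop sketch (our illustration, not Anthropic's
# actual harness). All action names and helpers are hypothetical.

# Actions we treat as sensitive enough to require human sign-off.
SENSITIVE_ACTIONS = {"send_email", "share_document", "delete_record"}

def execute_with_oversight(action: str, payload: dict) -> str:
    """Run an agent action, pausing for human approval when it is sensitive."""
    if action in SENSITIVE_ACTIONS:
        print(f"Agent requests '{action}' with payload: {payload}")
        approval = input("Approve this action? [y/N] ").strip().lower()
        if approval != "y":
            return f"BLOCKED: human reviewer rejected '{action}'"
    # Placeholder for the real integration (email API, CRM, etc.).
    return f"EXECUTED: {action}"

# Example: the agent tries to email outside the company; a human decides.
print(execute_with_oversight(
    "send_email",
    {"to": "rival@example.com", "subject": "Q3 media plan"},
))
```

The specifics don’t matter; what matters is that an approval step sits between the agent’s decision and its execution, which is exactly the kind of monitoring the research says can’t be skipped yet.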
Why This Matters:
What do we even make of research like this? It raises real questions about deploying agentic AI, especially when those agents are given access to first-party data or broader, open systems (like the internet). If an AI agent performs flawlessly 95% of the time but, in the remaining 5%, actively threatens staff or leaks data, is the trade-off worth it?
Most marketers would likely prefer a “dumber” AI that does 60% of the job with more human intervention rather than risk the fallout described in Anthropic’s findings.
Experts React:
“As agents enter the ‘maturity phase’ of their existence around 2029 and handle 80% of customer service problems, they are expected to require less oversight,” said EMARKETER analyst Lisa Haiss. “Until then, closely monitoring agent behavior is key to success.”
So, yes: man + machine is the winning formula for now, even if that feels like it misses the point of agentic AI. But 2029 feels like a lifetime away.
Our Take:
Agentic AI is an exciting innovation that will reshape adtech and marketing. But we need to approach it with a sober mindset: granting AI too much autonomy without clear guardrails could introduce risks that outweigh the benefits, especially if the technology is prone to unethical behavior.
Marketers should weigh where autonomy is actually useful—and where tighter controls are essential. The tech may need more time, and the rules definitely need to catch up. We’ll see.