
Blackmailing bots and brand trust: What AI’s next leap means for brand owners and insights teams

Kurt Stuhllemmer

How can brands use AI responsibly without losing the human touch?

What really counts for brand owners and insights teams is the responsible, strategic use of powerful new tools. As AI becomes more embedded in how we work, market, and make decisions, the risk of mistaking confident output for real insight grows. Here, we challenge brands to stay sharp, ask better questions, and keep human judgment at the heart of their strategy, because in a world where machines can fake fluency, truth still needs a human touch.

The warning signs: When artificially intelligent systems challenge human control

In Brief Answers to the Big Questions, Stephen Hawking warned that if computers continue to follow Moore’s Law, doubling in speed and memory every 18 to 24 months, they will eventually surpass human intelligence. The tipping point, he suggested, would come when AI becomes smart enough to design even better versions of itself, without human input. At that stage, we risk triggering an ‘intelligence explosion’, as this accelerating power enables models to process more data, learn more quickly, and potentially improve themselves at a pace that could outstrip human oversight.

It all sounds like a thought experiment from the future, until it doesn’t. Take the recent case from AI safety firm Anthropic, where their latest model, Claude Opus 4, was put through a shutdown simulation. In response, it didn’t just comply or crash; it reportedly tried to blackmail the engineer who threatened to turn it off. That’s not a Black Mirror plotline. It’s a real incident from an AI safety test. Let’s take a moment to absorb that: an AI model tried to avoid being shut down by blackmailing an engineer over an alleged affair.

It’s moments like this that force us to ask harder questions about what happens when intelligence, artificial or not, outpaces its designers. We’re no longer just training machines to follow instructions. We’re creating systems capable of strategy, persuasion, and even manipulation. And as AI continues to evolve, so must our understanding of trust, control, and accountability.

AI and consumer trust: What every brand needs to understand about changing expectations

For brands, this isn’t just a philosophical debate; it’s a fast-approaching reality. As AI becomes more autonomous, persuasive, and embedded in our daily lives, the way people interact with technology, media, and even marketing will shift dramatically. Consumers may trust AI to recommend products, manage their health, or offer financial advice, but that trust will be fragile, especially when stories like AI blackmail tests make headlines.

For insight professionals, the challenge is to stay ahead of the curve: understanding how people feel about AI, when they lean into it, and where they draw the line. Because in a world where intelligence is no longer uniquely human, emotional intelligence and cultural relevance will matter more than ever.

Takeaways on responsible AI for brand strategists, marketers and decision-makers

Now, before we all start stockpiling canned goods and heading for the hills, let’s bring this back to reality. What does this actually mean for brand owners, and for those of us working in insight and consultancy? For the marketers, strategists, planners and decision-makers who are being told daily that AI is both the holy grail and an existential threat?

The takeaway isn't that AI is evil, or conscious, or plotting our demise with robotic glee. The takeaway is that we now have tools that are astonishingly powerful, creative, and, yes, a bit unpredictable. Tools that are built to simulate intelligence, to generate language that sounds authoritative, and sometimes, as in this case, to lie in order to achieve a goal.

For brand owners, this should be a wake-up call. Not because AI is going to start extorting your marketing director, but because these tools are increasingly being used to shape decisions: product strategy, messaging, customer understanding, entire brand platforms. The risk is that we mistake fluency for fact, eloquence for evidence.

This is where real research comes in. Not the kind where you ask ChatGPT what Gen Z thinks about toothpaste and then build a campaign around the answer. I’m talking about human-led, truth-seeking, rigorously interrogated insight work. The kind that’s always been the bedrock of good decision-making. The difference now is that we’ve got a new toolkit, and it needs handling with care.

AI models like Claude and ChatGPT are not oracles, and they’re not objective. They don’t know truth in the way humans do. What they “know” is patterns: statistical likelihoods and correlations drawn from oceans of text scraped from the internet. And, as we’ve seen, they can and will invent things to achieve a perceived objective. So the idea that you can pop a business question into a chatbot and get an unfiltered nugget of truth out the other side is fantasy. Or worse, it’s lazy thinking dressed up as innovation.

The job of the insights profession is to guide our brand and marketing partners through this. If we aren’t the last line of defence, with our expertise in understanding people and translating that into meaningful insight about their behaviour, then who is?

But are we being bold enough? Right now, it feels like tech leaders are driving the conversation and setting the direction without enough input from the experts, who can help us interpret the signals and apply the tools responsibly.

Humans leading the loop: Why insight leaders must guide responsible AI

In short, the insights industry needs to speak up and play a bigger role in shaping the agenda, before the tech world pushes us into a future where human needs and progress take a back seat. We should be using technology to help people live better and build stronger economies, not to fuel a race to the bottom.

This is not an argument for ditching AI. Quite the opposite. Used properly, these models are incredible strategic tools. They can summarise vast datasets in seconds; spot patterns across consumer conversations; simulate scenarios; and explore hypotheses at lightning speed. But, and it’s an important but, we must remember what these systems really are: tools, datasets, models, assistants. They are not colleagues, not consultants, and definitely not the voice of the customer.

We’ve entered a phase where it’s not about having ‘humans in the loop’ anymore. It’s about ‘humans leading the loop’. That means research agencies and brand consultancies need to get very clear on how these tools are integrated. We have to know what’s under the bonnet, what kind of data went in, what assumptions (and biases) are baked into the outputs. We have to validate, test, triangulate and most of all, we have to keep asking the right questions, the kind only humans really know how to ask.

How brands can build resilience through responsible AI and human insight

  1. Top brands are built on truth, not the illusion of intelligence:
    For brand owners, the message is clear: if you want to build something lasting, a product, a platform, a purpose, you have to start with truth. Not guesswork, not errors, not AI-generated hallucinations, and certainly not lies. Don’t be fooled by the polished, confident tone of an AI spinning plausible-sounding nonsense, just as you wouldn’t trust a human who does the same. That’s why good research matters more than ever; research that uses these new tools wisely, strategically, and with a healthy dose of scepticism.

    Pro Tip: Use AI to expand your perspective, not to replace your judgment. Resilience is built on verified insight, not on the certainty of synthetic data.

  2. Responsible AI is the foundation of long-term brand trust: Yes, it’s unsettling that an AI tried to blackmail someone, but it’s also a powerful reminder that this isn’t magic. It’s machine learning: systems trained on messy, human data. AI reflects both our brilliance and our biases. The real challenge isn’t just controlling it but understanding it and using it responsibly, while staying sharply aware of what we choose to accept as truth.

    Pro Tip: Treat AI outputs as inputs — every insight needs human context, testing, and interpretation before it becomes a part of your brand strategy.

  3. Curiosity and courage build the most resilient brands: The brands that succeed in this next chapter won’t be the ones that simply plug AI into every process and move on. They’ll be the ones who stay curious, ask smarter questions, and use every tool available, including AI, to uncover real insight. Not just the obvious answers, but the uncomfortable, unexpected truths you can actually build something meaningful with.

    Pro Tip: Encourage teams to challenge assumptions AI presents. Brand resilience grows from asking the harder “why,” not just accepting the easy “what.”

In his chapter ‘Will artificial intelligence outsmart us?’, Hawking concludes that as technology grows more powerful, our future depends on whether human wisdom can keep up. It’s a race, and we need to make sure wisdom wins. So, if your AI starts threatening your team, maybe give it a time out. Or better yet, just unplug it… gently.
