False prophets: what AI can and can't do in UX research

Exploring where AI fits, where human judgement is still essential, and how to tell the difference.

Authors

Ben Logan, UK Growth & Strategy Lead

Imagine how powerful your product would be if you could subject it to UX research at scale. AI’s hype merchants would have us believe that this ‘magic button’ is tantalisingly within reach: thousands of synthetic testers ready to give feedback on a par with human in-depth interviews or detailed user surveys.

If only.

Product managers are under pressure to deliver, and so AI’s promise of speed and cost savings at the research stage is alluring. The temptation to dive right in is understandable. Why go through that messy business of gathering humans in a room, asking them questions, and parsing their answers for insights, when synthetic equivalents can tell us everything we need to hear?

The most enthusiastic promises claim these solutions are as good as, or the same as, using humans. I don’t believe that’s true (yet).

I want to be clear: I’m not against AI. But I am against blind adoption.

And that’s what we’re already seeing. The challenge isn’t technology, it’s culture. Procurement decisions are happening above the heads of product, design, and research teams. The pressure to “do AI” is coming from the CEO or senior management, investors, and competitors. The decisions are disconnected from problems on the ground. Research has to fit, or it gets cut.

Senior management will believe they’re getting better outcomes, but the promise of AI is misleading, because it amplifies efficiency without necessarily achieving the desired outcomes.

Let me explain what I mean by this.

Plug in
Under time and resource pressure, product and research teams start plugging in AI tools to stay ahead. They generate synthetic personas without talking to real users. Amid this stampede, of course it’s tempting to treat outputs from AI-generated personas as “just as good” as human insight. Except they’re not. Humans operate in a much richer, more variable context than any model can simulate. They feel time pressure. They adapt emotionally to who’s in the room. Their answers shift depending on their confidence, mood, or what just happened in their day.

Insight comes not from polished responses but from hesitation, contradiction and frustration: people struggling to complete a task on screen whilst at the same time saying it was easy. In other words, the messy stuff. AI doesn’t experience any of that. And it doesn’t really know what to do with it either.

Relying only on AI-generated research strips out this nuance and risks designing for personas of customers who don’t exist.

What’s more, if the models are fed sensitive personal data without checking for GDPR risks, or the results erase marginalised voices, it’s an ethical minefield.

Augmented intelligence
We need to understand how generative AI is changing our industry. A thought-provoking post from the Nielsen Norman Group (NNG) highlights many ways to augment research practices with AI.

NNG’s Raluca Budiu argues that we shouldn’t compare generative AI with traditional research; we should think of it as a better alternative to doing no research at all. Adopting AI just because it’s cheaper could deliver poor results that lead to incorrect design decisions, which create high long-term costs.

Forced to adopt AI at pace, teams lack a clear understanding of what these tools are for, what they can do, what their limitations are, and whose roles and workflows they’re changing. Most teams don’t have a ResearchOps function to guide good practice, so tool use is inconsistent and insight quality suffers.

Stop, look and listen
As Jeff Gothelf’s excellent blog makes clear, it’s worth understanding how systems work before automating them, because that teaches us to pay attention to the details and assess the output critically. He writes: “I’ve seen teams not only outsource the work to their AI tools, but also lose their critical thinking skills to assess the output generated by these same tools. They accept them as ‘done’. And that’s a big risk. We’re letting the machines decide what’s best for humans.”

When our team presents research findings to clients, we tailor the format to the audience and how much time we have with them. Researchers working on projects benefit from living with the data, forming connections, and being able to recall examples and stories when challenged by product managers in these playback meetings.

Whilst a shiny AI-generated analysis and report will look like a near-polished artefact, the danger is that the researcher may not be as connected to the data if it has been generated on their behalf, and we’re potentially setting them up for failure in key presentations.

The situation reminds me of my earlier agency days, when clients would sometimes contact us to run focus groups. This was the classic trap of starting with the method instead of first asking whether they had the right question, and then letting that question dictate the approach to getting an answer.

Supporting, not supplanting
At the risk of repeating myself, I’m not anti-AI. What we’ve seen so far tells us that AI tools are a powerful way to start, inspire, focus, complement and refine work. Synthetic users, instant insights, and large language models can support research work in its early stages. But they can’t replace the actual lived experience of the human who’s asking the questions or the one answering them.

Here’s what AI can do:

  • Help prototype faster: not just digital screens, but storyboards, flows, content mockups
  • Generate variation that’s useful in early design or as stimulus for research
  • Extend pilot studies by using synthetic personas to help surface new questions

But here’s what it can’t do:

  • Replace real people
  • Understand the dynamics of power, identity, or inequality
  • Give meaningful insight into the unintended consequences of a product or service

By all means, let’s start playing with these tools, challenging them and forming a view about which ones are useful for our needs.

Removing risks
Let’s keep a ‘curious, experimental’ mindset, but with guardrails. Remember we’re dealing with market research participants, confidential company data and potentially sensitive information, so stop and think before uploading any of it to platforms in jurisdictions where different data protection rules apply. To prevent researcher burnout, teams also need time allocated to explore these tools alongside their day jobs.

Let’s use AI intentionally, to help us ask better questions. Are there parts of your design or research workflow that aren’t working optimally today, and how do you want them to improve? What are the perceived benefits and trade-offs of changing the process? What does the future state look like?

There are multiple solutions to any problem, and the answer isn’t always AI. I believe research teams can benefit from speeding up some aspects of the workflow, but not at the cost of good outcomes.

The path forward
If you lead a product, research, or design team, here’s what I advise:
  • Push for clarity on your AI tool stack. Fewer tools, better used
  • Make time to observe real users, don’t just rely on automated reports
  • Don’t let AI outputs go unchecked. Insist on interpretation, not just generation
  • If you don’t have ResearchOps, make space for it – even on an informal basis. Someone has to define good practice.

If you’re being sidelined by AI adoption, you’re not alone. But there’s a growing need for teams who can navigate this complexity with judgement, ethics, and clarity.

Think of surgeons with powerful tools at their disposal. The tools don’t decide what’s needed; clinical judgement does. The scalpel, MRI, or robotic system is only useful when wielded by someone who understands the person in front of them.

UX research is no different. The skill lies in knowing what tools to use, when, and why. The goal is not to outsource thinking entirely.

For years, the tech industry mantra was “move fast and break things”. The rapid advance of AI should teach us that it’s better to move carefully and check things.
