Exploring where AI fits, where human judgement is still essential, and how to tell the difference.
Imagine how powerful your product would be if you could subject it to UX research at scale. AI’s hype merchants would have us believe that this ‘magic button’ is tantalisingly within reach: thousands of synthetic testers ready to give feedback on a par with human in-depth interviews or detailed user surveys. If only.
Product managers are under pressure to deliver, and so AI’s promise of speed and cost savings at the research stage is alluring. The temptation to dive right in is understandable. Why go through that messy business of gathering humans in a room, asking them questions, and parsing their answers for insights, when synthetic equivalents can tell us everything we need to hear?
The most enthusiastic promises claim these solutions are as good as or the same as using humans. I don’t believe that’s true (yet).
I want to be clear: I’m not against AI. But I am against blind adoption.
And that’s what we’re already seeing. The challenge isn’t technology; it’s culture. Procurement decisions are happening above the heads of product, design, and research teams. The pressure to “do AI” is coming from the CEO or senior management, investors, and competitors. The decisions are disconnected from problems on the ground. Research has to fit, or it gets cut.
Senior management will believe they’re getting better outcomes, but the promise of AI is misleading: it amplifies efficiency without necessarily improving the outcomes that matter.
Let me explain what I mean by this.
Insight comes not from polished responses but from hesitation, contradiction, frustration: people struggling to complete a task on screen whilst insisting it was easy. In other words, the messy stuff. AI doesn’t experience any of that. And it doesn’t really know what to do with it either.
Relying only on AI-generated research strips out this nuance, and risks designing for personas of customers who don’t exist.
What’s more, if the models are fed sensitive personal data without checking for GDPR risks, or the results erase marginalised voices, it’s an ethical minefield.
NNG’s Raluca Budiu argues that we shouldn’t compare generative AI with traditional research; we should think of it as a better alternative to doing no research at all. Adopting AI just because it’s cheaper could deliver poor results that lead to incorrect design decisions, which create high long-term costs.
Forced to adopt AI at pace, teams lack a clear understanding of what these tools are for, what they can do, where their limitations lie, and whose roles and workflows they’re changing. Most teams don’t have a ResearchOps function to guide good practice, so tool use is inconsistent and insight quality suffers.
When team members present research findings to clients, we tailor the format to the audience and how much time we have with them. Researchers who work on a project benefit from living with the data: forming connections, and being able to recall examples and stories when challenged by product managers in these playback meetings.
Whilst a shiny AI-generated analysis and report will look like a polished artefact, the danger is that the researcher is less connected to data generated on their behalf, and we’re potentially setting them up for failure in key presentations.
The situation reminds me of my earlier agency days, when clients would sometimes contact us to run focus groups. This was the classic trap of starting with the method instead of asking whether they had the right question, and then letting that question dictate the approach to getting an answer.
Here’s what AI can do:
By all means, let’s start playing with these tools, challenging them and forming a view about which ones are useful for our needs.
Let’s use it intentionally, to help us ask better questions. Are there parts of your design or research workflow that aren’t working optimally today, and how do you want them to improve? What are the perceived benefits and trade-offs of changing the process? What does the future state look like?
There are multiple solutions to a problem, and the answer isn’t always AI. I believe research teams can benefit from speeding up some aspects of the workflow, but not at the cost of good outcomes.
If you’re being sidelined by AI adoption, you’re not alone. But there’s a growing need for teams who can navigate this complexity with judgement, ethics, and clarity.
Think of surgeons with powerful tools at their disposal. The tools don’t decide what’s needed; clinical judgement does. The scalpel, MRI, or robotic system is only useful when wielded by someone who understands the person in front of them.
UX research is no different. The skill lies in knowing what tools to use, when, and why. The goal is not to outsource thinking entirely.
For years, the tech industry mantra was “move fast and break things”. The rapid advance of AI should teach us that it’s better to move carefully and check things.