Piers Dillon-Scott

SENIOR UX DESIGNER

Designing for people – forget what they say, it's what they do that counts

Humans are consistently bad at judging their own abilities and skills, so when asking people to self-assess, researchers can't rely on the Likert scale alone.

All of us like to believe that how we think and how we behave are consistent. It’s reassuring to think that our actions are simply a reflection of our thoughts and beliefs. But as reassuring as it is, it’s rarely true.

A question for you. On a scale of 1 (low) to 5 (high), how would you rate your driving skills (maybe a three, perhaps a solid four)?

Now, how confident do you feel about your answer?

Chances are you’ll probably feel quite sure that your self-assessment is accurate. After all, who knows you better than you?

Such Likert self-assessment questions are frequently used in HR departments, and while they are useful, they rarely tell the full story. Take this recent example.

While working with a client in the financial sector, we set about testing the company’s customer-focused app with its target market (18–35-year-olds who have been customers for more than three years and frequently use the company’s app). Our ultimate goal was to redesign the app so that it met the target market’s needs, was easy to use, and was visually appealing. To do this, we wanted to get an accurate measure of this cohort’s smartphone and app literacy.

There are three common ways to conduct such research: you can ask people in the target market to self-assess (for example, using a Likert scale), you can interview them, or you can test them.

We did all three.

When we asked the candidates to score their skills with the service’s app, most rated themselves highly, at 3 or 4 out of 5. Next, we asked the candidates about the systems (Android, iPhone, Windows Phone), devices, and apps they frequently used. The candidates were a near-even mix of iPhone and Android users who typically used social media (a lot of Facebook, some Twitter) and plenty of news apps.

Finally, we asked the candidates to perform some regular tasks with the client's app, ranging from the easy (find your last five transactions) to the more difficult (make an international transfer). In these final tests we discovered a huge disparity in the candidates’ abilities and confidence. The results didn’t correspond with the candidates’ self-assessment of their abilities with the app. Typically, those who rated themselves at the higher end of the scale showed less ability than those who rated themselves in the mid-range of the scale.

So, how do you reconcile this seemingly inconsistent data? Why do people who seem confident underperform, and why do those who rate themselves at the lower end of the scale do better?

Much of our work involves understanding how people – be they a company’s customers, or employees – act. By understanding typical human behaviour we can assess what people are likely to do; from this, and other, research we can collaboratively begin the process of designing new services and products.

In this instance we were seeing what’s known as the Dunning–Kruger Effect: people who are underskilled tend to overestimate their abilities (effectively, they don’t know how much they don’t know), while those who are adequately or highly skilled tend to slightly underestimate theirs (they assume others know as much as, or more than, they do).
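If you want to check for this pattern in your own research data, a minimal sketch of the comparison might look like the snippet below. The numbers and names are invented purely for illustration (they are not the figures from this study): group participants by their Likert self-rating and compare the average share of tasks each group actually completed.

```python
from statistics import mean

# Hypothetical (self_rating, tasks_completed, tasks_attempted) tuples,
# invented for illustration only.
participants = [
    (4, 3, 8), (5, 3, 8), (4, 4, 8),
    (3, 6, 8), (3, 5, 8), (2, 6, 8),
]

# Group observed task success by self-rating.
by_rating = {}
for rating, completed, attempted in participants:
    by_rating.setdefault(rating, []).append(completed / attempted)

# If a Dunning-Kruger-style pattern is present, the highest self-ratings
# will not show the highest average task success.
for rating in sorted(by_rating):
    print(f"self-rating {rating}: average task success {mean(by_rating[rating]):.0%}")
```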

Observing users’ actions will always trump theory, but theories such as the Dunning–Kruger Effect give us a good basis for understanding research data in greater depth.

Every aspect of a company’s interaction with its customers can influence how those customers view that company. A simple choice of font can affect how much people are willing to pay for a product; the design of the queue can dramatically alter a customer’s opinion of a company; while involving customers in a product’s development can increase their sentimental attachment to it.

Here are some more examples:

#Mobile

Microinteractions are a big deal

Dan Saffer argues that small frequent actions (from turning on a switch to inputting your PIN) can make or break customers' opinions of your product or service

#eCommerce

Can cutting edge brain research change how (and what) we buy?

Neuromarketing: When psychology, marketing, and advances in brain research meet

#Data

Do we forage for information like our ancestors foraged for food?

"People like to get maximum benefit for minimum effort"

#User Journeys

Badly designed waiting lines can leave your customers with a bitter taste

The psychology behind waiting lines is both long (like most waiting lines) and fascinating (unlike most waiting lines)

#Experience Design

Seeing is believing

Consumers are willing to pay more for items they can see, smell, or taste

#Beautiful Web

Your choice of font can affect your bottom line

The more difficult a product description is to read, the more expensive people will think the product is (but they also tend to think you're hiding something)

#Internet of Things

Our memories are terrible

Your memory isn't like a video recording, it's more like a Wikipedia page

#Digital Transformation

Of course it's called the "Ikea Effect"

Giving your customers some work to do may be good for them (and your brand)
