As any parent of school-aged children knows, "it's fine" is a common response to the question "how was your day?" These words could cover a range of outcomes, from getting a detention to acing a test. However, after living in Britain for 17 years, it seems that British adults aren't so dissimilar: if someone describes their day as "fine", it could have involved anything from winning the lottery to the death of their cat.
But seriously: how do you get meaningful data from a user when they are using an app for the first time? How do you know what they actually thought of it? Are they just being polite in their responses? Or do they genuinely think that it was, actually, fine? When you field-test an app that requires moving around an area, is it possible to work out what a user thinks by relying on pre- and post-test interviews?
Our last user test involved an app aimed at 7-10 year olds and their families at two sites in Northern Ireland. Instead of gathering results only through analytics and pre- and post-test interviews, we decided to also shadow each user (child and parents) as they walked around the sites. Our conclusion is that shadowing users is far preferable to surveys - and here's why.
It's easier to work around problems
The code and UI in a user test often aren't perfect, so if a user gets stuck in the app, you can observe them for a while to see exactly where. You can watch the steps they take to resolve the problem, and see where the app might need a tooltip or what should be made clearer. Even more importantly, when they get stuck you can offer a temporary solution so that they can move on. Without this, there may be whole areas of the app that they never find or reach because of something they haven't understood or an error in your code.
And yes, we do test our code extensively before user tests, but bugs will still slip through, and users don't always react in the way you expect. The whole point of a user test is to find these areas! This is especially significant if you are trying something completely new, where you might not be able to anticipate all the problems.
Visceral, instantaneous reactions become obvious
By following users you see their instantaneous reactions. Testing a history app for kids gave us an opportunity to watch which facts made them excited. Do kids want to know about ghosts and hangings? How gory is too gory? When you watch them as they read (or as their parents read to them), it's very obvious what they enjoy. Not surprisingly, many eyes lit up at the mention of cannonballs going through houses and ghosts rumoured to be lurking behind corners, but we could also see at exactly which point their eyes glazed over because the text was too long. Contrast this with interviewing them and their parents at the end of the test - they simply can't remember all of these moments after the event.
Issues and problems with interactions become much clearer
If users don't always remember the good moments in a post-test interview, they also don't remember all of the difficulties or questions they had. If you aren't observing them, how do you expect to find out this information? Even when they do describe a difficulty in a post-test interview, they may not be able to tell you all the steps they took to overcome it. So you will see part of the picture, but not all of it.
Group interactions are brought to the fore
Observing family groups lets you see how families interact. It helps you understand how to keep children of different ages engaged at the same time, and to notice small interactions like who gets to hold the phone and who is able to read maps.
Users can fixate on solutions in interviews
Running user tests, you swiftly realise that what users think to tell you isn't always what you need to know, or what you think is most important. For example, in a post-test interview users may get stuck talking about how a map "needs to work just like Google Maps", whereas in observation you may realise that other solutions would serve their needs better while also meeting the application's goals.
Large sites can lose users
If you are testing on a big site - especially one with multiple entrances and exits - you can't always rely on users returning to complete a post-test interview. Users who fill out a pre-test questionnaire don't always come back to talk to you afterwards. A user you are shadowing can still decide to cut out early, but you will at least have your observations recorded even if they skip the post-test interview.
Fewer people at greater depth gives more useful feedback and saves you time
Bigger isn't always better, and this assumption is often what is behind the desire for a large number of users in a user test. A small number of users observed carefully can give you much more information than a large number from whom you only get questionnaire or interview feedback. Of course, this does mean it is important that your test users are of the right demographic.
Running a user test with a large number of people and interviews also requires a massive amount of organisation. Don't underestimate how much work it is to greet people, install the app, do pre-test interviews and give them instructions before they go off to try the app. Then, after they come back, you need to do a post-test interview and make sure you match it up with the documentation from before. In this setup, the post-test interviews are where you get your most useful information - yet with large numbers of people you won't always have the time to spend on them that you would like.