News and events
The $300 Million Button
In the post, Jared Spool writes about a client who had a fairly standard checkout process on his website. The process began with a login form:
The form was simple. The fields were Email Address and Password. The buttons were Login and Register. The link was Forgot Password.
It is the kind of form I have seen on many ecommerce websites. This feature, which had been designed to help repeat customers, created two distinct problems:
- New users resented the idea of having to register. One user said: "I’m not here to enter into a relationship. I just want to buy something."
- Repeat users rarely remembered their username or password. They wasted substantial time guessing before eventually resorting to creating a new account. In fact, after examining the database, Jared discovered that 45% of all customers had multiple registrations. Some did go as far as clicking on the forgot password link, but of those only 25% went on to place an order.
In the end the site was redesigned, allowing the user to continue without registering. Within a year this created a $300 million increase in sales.
Of course, $300 million is a meaningless figure in itself. It is the percentage increase that matters. In this case it was a 45% increase. That is a staggering number and one that really drives home the importance of testing with real users.
The UK government and graded browser support
A couple of weeks ago I wrote about the importance of graded browser support. In my post I explained how we should not limit our support to the browsers we test and how it is unrealistic to push for identical support across all browsers.
According to The Web Standards Project the rules surrounding browser testing on public sector websites have been changed to better reflect best practice in graded browser support.
Changes include an emphasis on functionality over identical layout across browsers (paragraph 39):
You should check that the content, functionality and display all work as intended. There may be minor differences in the way that the website is displayed. The intent is not that it should be pixel perfect across browsers, but that a user of a particular browser does not notice anything appears wrong.
As well as support for progressive enhancement (paragraphs 17-18):
You should follow a progressive enhancement approach to developing websites to ensure that content is accessible to the widest possible number of browsers.
This is excellent news and certainly provides a great reference for UK designers and website owners looking to convince others of the importance of graded browser support.
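To make the idea concrete, here is a minimal sketch of progressive enhancement, using invented element IDs and filenames purely for illustration. The link works in every browser; capable browsers get the richer behaviour layered on top, and feature detection (rather than browser sniffing) decides who gets it:

```html
<!-- Sketch only: the ID and page name are placeholders -->
<a href="search.html" id="search-link">Search</a>
<script>
  var link = document.getElementById('search-link');
  // Only enhance when the browser supports what we need
  if (link && document.addEventListener) {
    link.addEventListener('click', function (e) {
      e.preventDefault();
      // Enhanced behaviour: show a search panel in-page instead of
      // navigating away. Browsers without script support, or without
      // addEventListener, simply follow the link to search.html.
    }, false);
  }
</script>
```

The point is that no browser is locked out of the content; the test is for capability, not identity, which is exactly what graded support asks for.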
50 Illustrator tutorials
From development to design now, and a list of 50 tutorials that help you get your head around Adobe Illustrator.
The list is compiled by UK web designer Chris Spooner. He echoes my own experiences when he writes:
Adobe Illustrator can be a little tricky to get your head around, particularly after getting used to the workflow of applications such as Photoshop. The difference between layer use and creating and editing shapes can be especially strange at first.
I am a Photoshop man and I have found it very difficult to make the transition to a vector based world, so this list was particularly appealing to me.
It's a great list that you will definitely want to check out if, like me, you have never got to grips with Illustrator before.
A new approach to PNG Support
Finally today I would like to draw your attention to a new technique that has been developed by Drew Diller for using PNG transparency in IE6.
Unlike previous techniques this one allows you to use PNGs as background images instead of just as IMG tags. This opens up a world of possibilities and overcomes one of the most annoying limitations of IE6.
This minor miracle is achieved not by using AlphaImageLoader as has been done in the past, but with VML.
Although I have yet to use this approach myself, I have high hopes that this will finally solve the IE6/PNG barrier.
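From what I have seen of the script (commonly distributed as DD_belatedPNG), usage looks something like the sketch below. I have not tested this myself, and the selectors and file path are placeholders, so treat it as an assumption about the API rather than a recipe:

```html
<!-- Serve the fix to IE6 only, via a conditional comment -->
<!--[if IE 6]>
<script src="DD_belatedPNG.js"></script>
<script>
  // Ask the script to repair PNG transparency on elements matching
  // these CSS selectors, whether the PNG is an IMG or a background
  DD_belatedPNG.fix('.logo, .sidebar');
</script>
<![endif]-->
```

Because the conditional comment is ignored by every other browser, modern browsers pay no cost at all for the fix.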
Interview: Liz Danzico on User Research
Paul: So joining me today for our little interview is Liz Danzico. Liz, why don’t you start off by introducing yourself a little bit. Telling us a bit about yourself and your background.
Liz: Sure. Um, I am a user experience consultant, I am here in New York City, I have been developing web sites and user experiences online for about 12 years now. Um, I do a lot of work with Happy Cog Studios here in New York, with Jeffrey Zeldman and Jason Santa Maria. Um, I’m also chair of the new MFA Interaction Design program.
Liz: At the School of Visual Arts in New York.
Paul: Excellent. I mean, so, to say that you’re an expert in user experience would be a slight understatement then, Liz.
Liz: Well I wouldn’t go that far.
Paul: You’d be too modest, obviously, to say that. Okay, so we got Liz on the show, I met Liz when I went to Future of Web Design and we got talking. Um, she’s got some fascinating insights into the whole area of user research, and usability generally, so I thought let’s get her on the show and let’s maybe, you know, try and cover things from, from the very basic level, a kind of introduction to this concept of user research. Um, so, perhaps a good place to start, if you’re okay Liz, um, would be, how would you go about defining the area of user research? What would you include, what would you exclude from that?
Liz: Right. So … user research, even today, we’ve been doing user research on the web since, uh, the very beginning, so it’s a very old concept but it’s still fairly controversial. So the basic concept is it tells you what really happens when real people interact with your product or service. So, there are no real rules about what it includes and what it doesn’t [inaudible]. You can basically speculate about what your users want, or you can find that out, um, you know? And uh, and the, uh, the latter is probably a more useful approach for you to take than speculation. But with either one, thinking about your audience is useful no matter what. And so, so there are no real rules, now um, when you disconnect thinking about your audience from your business objectives, and you start getting, you know, very excited about behaviors that they’re doing that are sort of disconnected from the real mission that you’re trying to sort of accomplish, then it becomes, um, a bit murky, and confusing. But thinking about your audience is, just in general, is an extremely useful approach.
Paul: Okay. I mean one of the things that, that, um, I’ve heard said before by, particularly cynical clients I have to say, but I’ve heard it said before, you know, ultimately user research, and all of this kind of stuff feels in some ways like, um, just another way for web designers to suck a bit of extra money out of us, you know that fundamentally how, I know my audience already, is the kind of attitude that many web site owners have, so why do you see it as an important part of the process?
Liz: Well uh, you know, as we’ve been seeing design flaws often translate to lost business opportunities, you know, usability is becoming more important than ever as the number of web sites and products is, you know, increasing more and more every day. So, we design these products and services, and we are at the same time users of them, but there’s no way that we can really tell what our users, um, might want. And the best way to, you know, usability research doesn’t cost a lot of money, so, the best way that you can help your clients kind of understand that you need to do usability research in some way is to let them know that usability research is important and it doesn’t need to, um, suck up a lot of time or money in the, in the process. So there’s a great fantastic book by Steve Krug, called Don’t Make Me Think, which I’m sure you’re probably well aware of.
Paul: Uh huh.
Liz: And in one of the chapters towards the end, he has a chapter called "Usability Research on a Shoestring", or something like that, which talks of this approach of going out into the hallway and kind of grabbing people, and just sitting them down, and putting them in front of your product or service, and getting some feedback. So getting some feedback from people, no matter who they are, is better than getting none at all. And so, I think starting there with clients, instead of the, you know, $100,000 user research project that’s going to take you across 8 markets, you know, in the United States, the UK, and Asia, then, is going to be a much better approach than kind of intimidating them with the very extensive projects.
Paul: Mmm, I mean, when it, the kind of one scenario that I’ve come across before, um, is where we’ve come across with clients that say "Well we’ve already done user research, we already know our audience ’cause we’ve got somebody in to do this or that." Is there a difference between user research that’s been done primarily with an offline audience, and those with, you know, when you’re interacting with people online? Is there a difference in the kind of results and information that you’re after, and even the techniques, maybe, that you use?
Liz: So, they are probably, when they say that they’ve done user research, they’re probably talking about focus groups. I would venture to guess that when they talk about that they’re probably talking about either focus groups or surveys of some kind and those are not, well, I wouldn’t say that they are, those are bad things to do, but those are not the kinds of user research techniques that are going to give them feedback about their product’s usability. Those kinds of techniques are going to give them good information about, um, certain kinds of things but they are not going to give them information about whether or not people can use the product or service that they’re looking at. So, you want to find out exactly what kinds of user research they’ve conducted. If they say the words "focus group" then you know you want to move them towards something that is a one on one kind of interview. Focus groups tend to be conducted with groups of people, as the name might suggest, um, and when groups of people get together to talk about the question that’s been put forth, they might influence one another in their answers, and they typically aren’t talking about an interface, they’re typically talking about ideas, so you’re not getting good feedback, like in a one on one kind of scenario. So you want to sort of guide them to a more individual, one on one kind of experience. Surveys, on the other hand, are good, but they don’t get that kind of personal experience with a moderator, sitting with an individual, kind of looking at an interface in a kind of task-based scenario.
Paul: Okay, yeah that makes a lot of sense. I mean, let’s then talk about some of the techniques that can be used to better understand individuals, or how those individuals will interact with your product. What different kind of techniques do you use? I mean, there’s the kind of very basic usability session, but do you do, or are there other things above and beyond that, that you do?
Liz: Right. Well, the sort of big secret is that, there are names and there are certainly techniques, but the big secret is there are really no sort of techniques beyond knowing who your users are, kind of documenting what you’re seeing, and then kind of analyzing/prioritizing the results of what you see. So, you can, I’m gonna tell you a number of techniques that we can go through, but if those basic sort of constructs are there, then you’ve done sort of good user research. Now, that being said, the techniques that you can do are usability testing, usability testing traditionally has taken place in a user lab where a moderator is sitting with an individual looking at a screen, or a product, or a sketch of an interface and going through questions in sort of a task-based way, asking people "Show me how you would search for x" or "Show me how you would check out," or, you know, and seeing, measuring the success or failure of that kind of task. The clients are typically sitting behind a one-way, a one-way glass, or mirror, and observing these kinds of things. People have been not so thrilled about this technique recently, saying that it kind of, um, is not, it doesn’t produce natural reactions from users, but that is one kind of technique. There is, uh, kind of creating personas, and using personas, user personas which are an archetype of your site or product’s users, and getting everyone involved in activities around those personas, whether that be using those personas as you’re talking through features around, you know, a brainstorming session, and getting people to sort of role-play those personas. That’s another user research method. There are, there’s sort of the ethnography kind of take, where a lot of people have been doing kind of in-home interviews and observations recently.
Ethnography, cultural anthropologists and people who have been doing traditional ethnography have been watching closely the design research that we’ve been doing recently, and wondering if we’ve been doing it right and so on, but ethnography, in that sort of observing users in their "natural environment", has been I would say a more successful way recently of watching people use products and services, um, so I would say that those three things, usability testing in a lab, sort of using personas and scenarios, and ethnography or kind of going out into the field and watching users, whether they’re in their homes or their offices, are the three kind of key ways to gather user research with users. The fourth way that I’ll mention, and we can talk about this in a little bit, is not with users directly, but it is certainly user research that’s available more and more now, and that is data on sort of analytics, which you can gather from Google Analytics, Shaun Inman’s Mint, these kinds of things. Watching site data and user behavior through site analytics is another form of user research that gives you, you know, some information, and you can watch these traffic patterns on your site. It doesn’t answer the question "Why?" but it does show you some evidence as to how users are behaving on your site.
Paul: It’s quite interesting that you bring up eth, ethnography, whoa I can’t even speak today, because, that’s of interest to me, because that’s an area that we’re beginning to explore a little bit more, and have kind of discovered the same thing, that there’s a real value of going into you know, somebody’s home, seeing the environment that they access the internet on, you know, do they have kids under their feet? You know, where they access their PC, can they sit comfortably at it? All those kinds of things. Um, I guess it’s also an advantage you don’t have to hire an expensive usability lab and all of the rest of it. But I have to confess, I’m a little bit new at it, so talk me through maybe some of the things, you know, how does it differ from a usability test that you would do in a usability lab, other than that you’re in a different environment?
Liz: Well, uh, it depends. It doesn’t have to differ at all — it depends on the goals of the test. I would say that you could construct a test that’s exactly like one that you’d conduct in a lab, it just happens in someone’s home or office, or in a different environment. But as you said, you get the more realistic interruptions, and that kind of thing, and are they going to be able to complete this task given the natural kind of occurrences of their day. And that, depending on what kind of test you are constructing, that’s either going to inform your results or not. If you are doing task-based testing, so I could maybe talk about the different kind of usability testing that you could do.
Paul: Yeah, that’s good.
Liz: Yeah so there are different ways that you could conduct a usability test. Um, traditionally there is task-based testing, where you set up pre-written questions, before you get to the test, that are based on the goals of the testing. So, if we were testing a photo site, we would test whether or not users could upload photos, could they tag photos, you know, those kinds of things. So we would write those kinds of questions up beforehand, and then ask those questions during the test. Um, that’s one kind of test. You could do that in a lab, and you can do that same test in someone’s home. In a lab there would not be the children screaming, and the phone ringing, and that kind of thing, or, if someone say were uploading a photo, you would never be able to tell if sort of, timing out, would be an issue, or if anything with time or space or motion would be an issue. If those kinds of things are a goal of your test, then you might want to think about doing it in real time, in someone’s home environment. Another type of testing is something that, I’ll say it was first coined by Mark Hurst, who is a user experience consultant at Good Experience, I think he coined it, it’s called "Listening Labs". Listening labs are, I’ll call them experimental, but they’ve probably been going on long before I was aware of them, where people are designing usability tests in real time. So in other words, you go into the test with absolutely nothing written down, and you sit down with users, and based on your initial interview with them, you hear who they are, and after understanding a little bit about how they use photos in general, say, then you kind of write the questions on the fly, and then sort of develop a test around who that person is and their behavior, with your product, or product type.
Paul: Which I guess, makes people more engaged with the test, because it’s about what they’re specifically interested in. Is that the idea?
Liz: Exactly. So it’s a more natural way of doing the test. That’s the idea. That kind of thing you could do either way, and probably is even more rewarding if you’re doing it in someone’s natural environment. And then the third type of test is sort of a web, a web wide kind of test, where you have people just surf the internet, as it were, and uh, and just have them think out loud, and that kind of thing is also, I’ve found, more rewarding and fruitful in someone’s home environment, because they have their bookmarks there, and they have their post-it notes. Whereas you put them in a sort of artificial setting and they don’t have those things around them. So, if you, it kind of just depends on the type of testing that you’re doing. If you’re doing just the first kind I talked about, just task analysis and having people go through that kind of task-based testing, doing it in a traditional usability lab is great, you know, I mean you really do get the answers that you’re looking for, and it just depends on your goals.
Paul: I mean, it’s interesting, going back to Steve Krug’s book that you mentioned, I mean he talks about, I guess his agenda in that book is to get people to do testing who perhaps aren’t previously, and so, you know, he really downplays the demographic of who it is that you test, and that it’s more important that you test than that you get the right people, you know and all of that kind of thing. Um, but when you’re going into somebody’s home, and interacting with them, I’m guessing it’s more important to get the right demographic? Is that right?
Liz: Yeah, I mean one of the, um, I think it’s always important to, it’s always important to get the right demographic. Um, but, well I would say that there is a hierarchy of common mistakes around usability testing that kind of has a trickle down effect. You know, the number one mistake is not conducting any research at all, um, and conducting research on the wrong audience is kind of further down the list. So, you know, yeah if you’re doing research on the wrong audience, it’s not going to affect, whether you do it in a lab or you’re doing it at your desk, or at the water cooler, or at home, it’s going to affect your results and your analysis, you know, no matter where it takes place. So, you know, I think that the drawback is you are going to waste more time going out to that person’s home, so it’s going to be a drawback for you, but I don’t think that, it doesn’t matter really where it happens, because if you’re testing on the wrong audience, you’re testing on the wrong audience. Um, you’re probably going to get more information out of that experience if you’re in someone’s home, than if you’re not, so if you’re going to test on the wrong audience, do it in someone’s home, because you’re going to, it’s a richer experience, you’re going to get more information out of it than if you’re just testing in a lab.
Paul: No that makes perfect sense, I kind of see that. No, it’s difficult, isn’t it? Because, uh, obviously finding the right demographic of people, and picking the right people to test on is tricky, you know, it’s a more difficult thing and it can be time consuming. So have you got any advice about that? What really matters here? You know, for example, if you’re designing a web site for an over-60s audience, you know, are you, do you want to concentrate on the age aspect of that? Or the technical literacy aspect of that? You know, is it okay to have somebody younger if they’re not as good with the internet, if your audience is, do you, I’m kind of not wording this very well, but you get the idea — what’s important when you’re trying to match demographics?
Liz: Um, well, it’s very specific to your clients. Developing a, so, whenever you are trying to match demographics, you want to work with your clients to develop what’s called a screener, and a screener is a, I would say, whether you’re trying to develop a pretty rigorous recruiting demographic with a professional recruiter, to say, recruit 300 people for an extensive study, or whether you’re going to go out into the hallway and grab some people, or whether you’re going to recruit from something called Craigslist, which a lot of people are familiar with, um, which a lot of people do, I would say developing a screener which kind of outlines your demographic is a really good idea.
Paul: And what kind of things would that include? Sorry I interrupted you.
Liz: Yeah, what a screener is, it kind of goes through, it’s a questionnaire that outlines a number of questions that you would ask a potential recruit, that says, if this person can answer a particular question we should keep them in or out, so it’s actually a really good exercise to go through that allows you to kind of think through the type of demographic that you would have. So that doesn’t answer your question in any way.
Paul: It’s very interesting, though. Can you give me an example? Sorry, I’m interested in this screener thing, cause I haven’t come across it before. Can you give me an example of the type of questions? I mean obviously they’re going to be specific to the individual client, all the rest of it, but what kind of questions?
Liz: Um, what kind of questions? So, let’s see, would this person, so, let’s see, has this person, I mean typical questions could be around financial demographics, age demographics, you know the sort of typical things. But let me think of some more interesting things. So, is this person a full-time student? Has this person been fired from a job in the last 6 months? Has this person participated in usability research in the last 6 months? Those types of things, so if the person answers yes or no, then they’re not a good candidate. But there are other kinds of things you could put into that screener that would be more specific to the project.
Paul: So could it include something like is this person aware of a certain brand, because you want to associate with that brand?
Liz: Absolutely, so does this person drink Coca-Cola on a regular basis, yes or no? That kind of thing. But I’ve found that the screener, because the clients that you work with are often kind of speaking in those terms about their audience, the screener is a really good way to kind of help them understand how you’re recruiting audiences, and a good tool to kind of work together with them to narrow down who you want to be in the target audience for your testing, or your research in general. So, that said, how do you develop a good kind of set of participants for a research study for, say, a product for people over 60? Um, what’s most important, you know it depends on, and I know I hate to say that it depends, but you’re going to develop a goal for the testing, right? And the goal might be about usability, the goal might be about navigation, it might be about design, it might be about, it’s going to have, you have to first identify the goal, and depending on what that goal is, then you can identify the audience. So, the audience, you know the goal might have nothing to do with age, although the product has to do with age. So you can kind of strip away, you can pull apart the product from the goal of the testing a bit, and sort of just focus on the goal of the test. That’s why developing goals for user research is so critical, um, because often times you can separate those and therefore develop a better set of participants for that user research.
Paul: Mmm, that’s really good. I think what we’ve done here, is, a lot of people that listen to this show probably have a basic understanding of user testing. Maybe they’ve done some basic user testing before, or maybe they’ve even written a persona before, but I think what we’ve done, or what you’ve done, is push people a little bit further to kind of consider it in a little bit more detail what they’re doing in order to kind of refine the results that they’re getting back, and that’s really, really great. I mean, if somebody has just kind of done the very basics, you know, they’ve grabbed some people, they’ve done some user testing, maybe in their own office in front of their own PC, and they’ve got a few people in, um maybe they’ve created a couple of personas, what’s the next step for them? What should they be pushing? Is it through this screener? Is that the number one thing they should be doing? Is the goals more important? Is getting a better demographic more important? What’s the kind of next step for them?
Liz: Mmm, that’s a good question. I think that one of the most, well, doing the research is really key. Analyzing the research and connecting the research to the next iteration of a design is also key. We haven’t talked about that at all.
Paul: No, we haven’t, we ought to.
Liz: It’s often a grey area, um, you know there are lots of reports that are produced, you know, diagrams and things, but there’s a lot of kind of intuition that happens between sort of translating the research and putting that research, feeding that research back into the design. There are hunches, leaps of faith, um, you know kind of between that analysis and design. I mean there are clear cut recommendations that one can make, but then there are a lot of more grey areas. So I would say that, I still think, even though I mentioned we’ve been doing this kind of research for at least, you know, more than a decade online, and you know quite a long time offline, I think we still need to get better at the rigor at which we translate those recommendations and findings. So that’s one place I think we need to focus. Um, in terms of the actual research itself, uh, you know, there’s something, I think there are other sorts of techniques. I’m interested in these kinds of emergent, I would say emergent techniques like the listening labs, um, you know where the kinds of things that we’re looking at today with kind of mobile research, where people are, we need to be looking at how people are using our sites not just in the browser on their desktop but, you know, in the browser on their phone, and how their context is changing constantly and how we need to sort of look at that adaptation. So how do we develop tests that are more emergent and can be a bit more flexible, right? So I think there’s something interesting about that listening lab, where we kind of understand the person, and then develop the questions around a person and how they use a product, rather than having a pre-written set of questions. So, something that’s more emergent, I think that’s an area that’s interesting to kind of look at.
Then, uh, ethnography, really understanding, goes right along with this sort of, emergent, as you said you’ve been getting more excited about ethnography as well, so, thinking more about kind of fine-tuning our approach to people’s own context, whether that be ethnography, going into their homes, their offices, you know, where people are using our products, whether that be on the street, in the hallway, wherever it is, but really understanding how to find people where they’re using our products and test them or do some research around that, I think that’s really exciting and a really interesting opportunity. Um so that, that’s the next step for us, uh, and I think that the way that people are designing tests and doing some usability testing now, is, you know, is good, I don’t think that there’s a big next step that we can all take together, but I think these are three areas that I think as a discipline that we’re going to see people moving forward together in.
Paul: Excellent. Let’s finish off, then, with a kind of where people should go if, you know, they’ve been excited by this interview, they want to learn a little bit more, um, about user research and user testing. You’ve mentioned Steve Krug’s book. What other resources are out there that people should be looking towards?
Liz: Well, let’s see. You know, I was thinking about, I was thinking about that and there are physical places that people can go, but they’re all in San Francisco in the United States, so that’s not going to help anyone. There is, you know, A List Apart has a User Science topic that often publishes user research related methods articles, there’s always BoxesandArrows.com which publishes user research related topics, um, Adaptive Path, which is a user research consultancy, or at least one aspect of what they do, they have published a number of articles but they also do events. A lot of events are in the United States right now, but they may have international events as well. But they do kind of give away a lot of their content. Um, and then last but not least, there’s a new-ish publisher called Rosenfeld Media, and the books that Rosenfeld Media publishes are about methods in user experience. One recent title by Luke Wroblewski, Web Form Design: Filling in the Blanks, is about the usability of web form design.
Paul: Yeah, I saw that. That looked very good, I have to say.
Liz: Yeah, so that’s something to keep an eye on as well.
Paul: Excellent. Thank you so much, Liz, that was absolutely superb. And I will be fascinated to get you back on the show in the future to talk more depth about some of these issues. Thank you very much for your time, Liz.
Liz: My pleasure.
Thanks goes to Jason Rhodes for transcribing this interview.