Magento Expert Forum - Improve your Magento experience

Results 1 to 5 of 5

How do you know when a design fails a usability test?

  1. #1
    Junior Member clapcreative's Avatar
    Join Date
    Jul 2015
    Location
149 Mcafee Court, Thousand Oaks, CA 91360
    Posts
    115
    Thanks
    3
    Thanked 6 Times in 6 Posts

    Default How do you know when a design fails a usability test?

    A handy technique I learned from the wrong job…

    Years ago, I spent an awkward patch of my career as an instructional designer, creating courses for online learning. It was a bad fit and I moved on happily, but one part of that job has made me a better UX designer: learning objectives.

    Learning objectives are simply what you want the student to learn by the end of the training. If there’s a test, the test questions should be based on those objectives — otherwise, what’s the point of the test?

    The same approach comes in handy for figuring out whether a design has passed or failed a usability test. Just remember: it’s the design that’s being tested, not the participants.

    What does the test participant need to do or say for you to feel confident that the design has succeeded? Do they need to track three hours of time for a particular project? Generate an invoice to a client based on that tracked time? Send the invoice? That’s your test criteria.

    Of course usability testing is about observing how users complete tasks, but what will you get them to do, exactly? The beauty of these criteria is that they steer you away from vague testing goals like, “understand how time tracking works.” How will you know they’ve understood it? You get them to describe it. And once they’ve described it accurately, you can say that aspect of the design was successful.

    Verbs are magical

    The book that taught me about learning objectives, George Piskurich’s Rapid Instructional Design, offers a handy list of behaviours to start your success criteria.

    For example, the objectives for comprehension might be “describe” or “demonstrate”. Again, “understand” is no good — you need them to say (that is, describe) or do (that is, demonstrate) something that proves to you that they’ve understood.

    And then, at a higher degree of difficulty, a participant might “explain” or “organize”; at a higher level still, they might “create” or “evaluate”.

    Whatever verb you choose to start your success criteria, the point is that you can observe whether or not a user has actually said or done whatever constitutes task success.
    “By the end of this session…”

    So, when you’re planning your next usability test, and you’re working on tasks, start by asking, “What should a user be able to do with (or say about) this design?”

    Then, you might write something like this:

    By the end of the session, the participant should be able to:

    track three hours of time for a particular project;
    generate an invoice to a client based on that tracked time;
    describe the difference between tracking time and logging time.

    Now you have three success criteria and, based on those, you’ve also got a pretty clear sense of what tasks you’ll need to give the participants.

    One caveat: success criteria aren’t quite the same as tasks. Tasks have more context; they’re written to be read to the participant, and might include some context about the task, particularly if you’re steering them to find something in your prototype. For example:

    Success criterion: Generate an invoice to a client based on that tracked time

    Task: “Now that you’ve tracked three hours on the Atlas project, show me how you would invoice Acme Products for your time.”

    Pretty similar, obviously, but success criteria are for you and your team; the task is for the participant in the context of the usability session.
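As a sketch, that team-facing/participant-facing split can be kept as a simple mapping from each success criterion to the task script you read aloud in the session. The criterion and task wording below follow the example above; the structure itself is just one hypothetical way to organize a test plan, not a prescribed format.

```python
# Hypothetical test-plan mapping: team-facing success criteria (keys)
# paired with the participant-facing task scripts or follow-up
# questions (values) read aloud during the usability session.
criterion_to_task = {
    "Generate an invoice to a client based on tracked time":
        "Now that you've tracked three hours on the Atlas project, "
        "show me how you would invoice Acme Products for your time.",
    # A describe-style criterion maps to a follow-up question, not a task.
    "Describe the difference between tracking time and logging time":
        "In your own words, what is the difference between tracking "
        "time and logging it?",
}

# During the session you work from the values; in the results
# write-up you report against the keys.
for criterion, script in criterion_to_task.items():
    print(f"- {criterion}\n  -> {script}")
```

Keeping both in one place makes it hard for a task to drift away from the criterion it is supposed to verify.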

    And you’ll notice that one of the success criteria above is about describing something rather than completing a task. It might be a follow-up question to a task. These are handy for validating whether your design’s mental model is clear to users. I’ve seen users find their way through a task, but then describe a mental model of the app that is at odds with how it was designed. That’s task success for one participant, but it also reveals an underlying mismatch with that participant’s mental model.

    So, start with your success criteria, then write your tasks and follow-up questions based on your criteria.
    Stakeholders love success criteria

    Stakeholders don’t necessarily care about your process, but they really care about the results. And if your presentation of the results is vague, they will be rightfully irritated.

    “The user managed to track a few hours, but we weren’t sure whether she understood that tracking time isn’t the same as logging it against a client…” Well, why aren’t you sure? Isn’t it your job to figure this out? You’re wasting their time, and not giving them clear direction on how to fix the UX problems — which is also your job, right?

    Success criteria help you twice over: they clarify whether your design is really successful, and they make it easier to share those results.

    We’ve had some success tracking success criteria in a simple colour-coded table of results (green = success, red = failure) on our wiki: participants across the top row, success criteria down the left column. It’s ugly, but quick and useful.

    This is easy to scan, shows pretty clearly where the problems are, and grounds the results in the experiences of actual participants. We also list a bullet-point summary of results and a list of usability problems and recommendations just beneath it. We’ll zero in on those problems and iterate until we believe they’re solved. Your process might be a little different — maybe you’re a consultant handing over a report to a client, for example — but the benefits are the same.
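A minimal sketch of that results matrix, with hypothetical participant labels and the three success criteria from the example above; ✔/✘ stand in for the green/red cells, and the pass/fail values are made-up illustration data.

```python
# Hypothetical usability-test results: criteria down the left,
# participants across the top, one pass/fail cell per pair.
criteria = [
    "Track three hours of time for a project",
    "Generate an invoice from tracked time",
    "Describe tracking vs. logging time",
]
participants = ["P1", "P2", "P3"]  # hypothetical session participants

# results[criterion][participant] -> True (success) or False (failure)
results = {
    criteria[0]: {"P1": True,  "P2": True,  "P3": False},
    criteria[1]: {"P1": True,  "P2": False, "P3": False},
    criteria[2]: {"P1": False, "P2": True,  "P3": True},
}

def render(results, participants):
    """Render the matrix as text, one row per criterion."""
    width = 44  # column width for the criterion text
    lines = ["Criterion".ljust(width) + "  ".join(participants)]
    for criterion, outcomes in results.items():
        cells = "   ".join("✔" if outcomes[p] else "✘" for p in participants)
        lines.append(criterion.ljust(width) + cells)
    return "\n".join(lines)

print(render(results, participants))
```

Even this plain-text version makes the failure clusters visible at a glance, which is the whole point of the wiki table.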

    Author Bio: Clap Creative is your next-door tech firm in Los Angeles, offering full-range web services under one roof. We have a team of expert professionals who work to create custom internet solutions to meet your business goals. We provide comprehensive web-based services such as online marketing, web design, web development, software development and dedicated link building.
    We are a leading WordPress website design and development company, with skilled WordPress developers for all types of WordPress needs: theme design and implementation, plugin development and customization. We can also help you learn the tips, tools and methods of earning money from a WordPress blog. We deliver WordPress websites and blogs with a professionally customized look, superb functionality and thorough optimization.
    Whether you are a small business, a start-up or a large company, we implement result-oriented WordPress solutions for everyone. Enjoy the benefits of our WordPress customization services.



  2. #2
    Junior Member
    Join Date
    Sep 2016
    Posts
    228
    Thanks
    0
    Thanked 3 Times in 3 Posts

    Default

    If you have run more than five A/B tests, you know that most of them fail to produce any real lift in conversions; the lift they do produce is usually imaginary or short-term.



    Last edited by vishnu; 13-03-2019 at 08:18 AM.

  3. #3
    Junior Member
    Join Date
    Jan 2017
    Posts
    127
    Thanks
    1
    Thanked 4 Times in 4 Posts

    Default

    Great post.

  4. #4
    Junior Member
    Join Date
    Sep 2018
    Location
    United Kingdom
    Posts
    156
    Thanks
    0
    Thanked 3 Times in 3 Posts

    Default

    The first thing on this list is utility, which determines whether the web design meets the basic needs of its visitors. To know when a design fails a usability test, you should understand that it is the design being tested, not the users' reaction to it. You should emphasize the criteria that will show whether your design is successful.
