Summary: If you’re not learning from your mistakes and incorporating user feedback into whatever you’re designing, then you’re removing the user from the center of your product. At every stage there are opportunities to get your designs out there and gain some real insights. Good or bad, any feedback is useful.
Gaining feedback from your users at any stage in the design process will provide you with valuable information. This isn’t to say that with every minor tweak we need to wait for testing and approval, but it is a good idea to share and collaborate with your audience when meeting certain milestones within the design. We can set up and track those milestones with the project manager or equivalent project-management software.
The Agile methodology, or Lean UX, incorporates an iterative process in which you are transparent with your users and stakeholders at all times. Doing so creates transparency around the intent of the product and starts a dialogue with your users.
I’ve found that a hybrid approach or Kanban works best, because the strict Agile delivery process doesn’t really favor designers. I still use sprints to define production timelines, but instead of two weeks at a time, I like to use three weeks. This comes in handy if you find yourself designing in parallel while parts of the system or product are being built by development.
Everything is then formatted into a documentation deliverable that explains who, what, when, where, and why, making the data more digestible for product owners and developers. It’s 2022, so there’s no excuse for not having some form of interactive prototype to share with test participants and stakeholders. It doesn’t matter whether they’re lo-fi wireframes or hi-fi prototypes.
Once there is a consensus on the design work, we can begin testing our designs with users. If everything is designed with the target audience in mind and you’ve at least addressed their needs per Maslow’s hierarchy, then gaining the feedback you want should not be much of an issue. It will boil down to what’s available to you and the tools at your disposal. Your ability to effectively communicate design decisions with stakeholders will showcase your authority as a User Experience professional.
As part of the iterative process, remember to test early and test often. If the process keeps repeating for an extended period of time, the process owner should step in to help determine whether we’re dealing with scope creep or simply not meeting our delivery goals.
Usability Testing/Studies
Usability is defined by how well users can learn a system or product to achieve their goals and how satisfied they are with the process. Running tests, or “studies”1, early in our process allows us to identify areas where we can iteratively improve efficiency, ease of learning, memorability, and the frequency and severity of errors. The collected data can serve as a benchmark when it comes to measuring the success of your product, whether you’re testing the conversion rate or the level of effort. Listed below are a few deliverables you can expect from a usability study:
- Heat Maps showing where users looked and clicked within a screen.
- Video Playback of usability sessions to watch the participants’ reactions and body language.
- Quantifiable Analytics Data pulled from the existing product or site.
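To make those quantifiable deliverables concrete, here is a minimal sketch of how the raw numbers from a study might be tabulated into the metrics mentioned above (completion, time on task, errors). The session records and task names are hypothetical; real tools produce this kind of summary for you.

```python
from statistics import mean

# Hypothetical records from a usability study: one dict per participant
# attempt at a task, noting completion, time on task, and error count.
sessions = [
    {"task": "checkout", "completed": True,  "seconds": 74,  "errors": 1},
    {"task": "checkout", "completed": True,  "seconds": 102, "errors": 0},
    {"task": "checkout", "completed": False, "seconds": 180, "errors": 4},
    {"task": "checkout", "completed": True,  "seconds": 88,  "errors": 2},
    {"task": "checkout", "completed": True,  "seconds": 65,  "errors": 0},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
avg_time = mean(s["seconds"] for s in sessions)
avg_errors = mean(s["errors"] for s in sessions)

print(f"Task completion rate: {completion_rate:.0%}")    # effectiveness
print(f"Average time on task: {avg_time:.0f}s")          # level of effort
print(f"Average errors per session: {avg_errors:.1f}")   # frequency of errors
```

Numbers like these are what you report back to product owners between iterations, so each redesign can be compared against the last.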
Qualitative studies might be required for native mobile applications, where the user is presented with a testing device. The ability to conduct a usability study in person is invaluable, but with the advances in remote testing platforms and the current state of the remote workforce, gathering usability data no longer requires that sessions be done in person. Services like Mouseflow, Hotjar, and Google Analytics are free or affordable resources for capturing quantifiable data about your product, with video sessions, heat maps, journey breakdowns, and more. As of this writing, there is also a self-installed, self-hosted Google Analytics alternative called Matomo.
Types of Tests for a Usability Session:
Five Second Test
During a five second test the user is shown your design for five seconds, then asked a series of questions about which portions of the design were memorable and questions meant to gauge how they perceive the nature of the company or product.
Click Test
A click test generates a heat map of clickable areas within the slide presented to the user. In most cases the participant is given a directive and then asked what their next course of action would be.
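The heat map itself is just an aggregation of click coordinates. Here is a minimal sketch of the idea, binning clicks into a coarse grid; the coordinates, slide size, and 100-pixel cell size are all hypothetical, and testing platforms handle this for you.

```python
from collections import Counter

# Hypothetical click coordinates (x, y in pixels) collected during a click
# test on a 1200x800 slide.
clicks = [(640, 120), (655, 130), (630, 115), (300, 400), (648, 118), (310, 395)]

CELL = 100  # bin clicks into 100x100 pixel cells

heatmap = Counter((x // CELL, y // CELL) for x, y in clicks)

# The hottest cells are the areas most participants chose as their next action.
for (cx, cy), count in heatmap.most_common(3):
    print(f"Cell x={cx * CELL}-{(cx + 1) * CELL}, y={cy * CELL}-{(cy + 1) * CELL}: {count} clicks")
```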
User Flow Test
The flow test allows us to give a user a directive and then watch as they try to complete a task. Results are recorded on a pass, fail, or abandon basis. A good example of when to introduce a flow test would be a case where there are multiple screens in an eCommerce checkout task. The difference between the flow, click, and five second tests is that the flow and click tests are task driven.
Blind Public Testing
One platform that I like to use for blind, public, quantitative studies is Usability Hub. Their system is credit based, and you earn those credits through reciprocity by taking part in other people’s tests. Each credit covers one participant response for your test. The number of responses you request for each test should probably average 5 to 15; you really only need a minimum of five participants to conduct a decent study. Keep in mind that the testing environment is open to the public and the users within the community, so not every response you get will be golden. It falls to the UX professional to dig through those responses and the data to judge which were valid and useful.
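The “about five participants” figure is usually justified by the Nielsen/Landauer problem-discovery model, in which the share of usability problems found by n participants is roughly 1 − (1 − L)^n, with L commonly estimated around 0.31. A rough sketch of that rule of thumb:

```python
# Nielsen/Landauer rule of thumb: the proportion of usability problems found
# by n participants is 1 - (1 - L)^n, where L is the probability that a single
# participant uncovers a given problem (commonly estimated around 0.31).
L = 0.31

for n in (1, 3, 5, 10, 15):
    found = 1 - (1 - L) ** n
    print(f"{n:>2} participants -> ~{found:.0%} of problems found")
```

At five participants the model predicts roughly 85% of problems surfaced, which is why small rounds of testing, repeated often, tend to beat one large study.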
When judging those responses I try to be as impartial as possible. Having worked on the design, your perception of the feedback is valid but also skewed, because now you have a vested interest in the product. That is one of the reasons why introducing the design to users who have never seen it provides such valuable feedback.
Targeted Studies
Conducting a targeted study means you were able to schedule in-person sessions for your study. If possible, this is one of the best ways to gather feedback. It allows us to further utilize the personas or empathy maps we’ve created and to set the parameters of each test around the target demographic our designs will be communicating with.
Either way, when given the opportunity I prefer to do in-person testing. The questions to be addressed are agreed upon beforehand with the product owners, and the stage is set for us to test effectively, with signed permission forms from the testers allowing us to record our sessions on video. We can ask questions in person, pick up on subtle cues from the user, and take a moment to openly discuss the work with them. Even the study itself should be roadmapped with milestones in place; it helps set expectations when a usability study week runs on a schedule.
Targeted studies should be conducted with participants, a moderator, and observers. The moderator administers the usability study with the participant, while the observers may be in a separate location watching the study over a feed. The observers provide just as much value to the study as a participant, because while the study is being administered, an observer may notice something that completely flew by the moderator or participant.
Given the opportunity, I prefer to work as the study moderator rather than an observer, because I can gather more useful information when I’m interacting with someone. If something about our interface doesn’t sit well with the study participant, I want to be there to address that issue in the moment and make a note of it. At no point do I want participants to feel obligated to appease me, nor do I want to appear cold in my responses, but we are both there to do a job.
Grasping Analytical Data
Once your product has been released into the wild, it will take some time for users to begin using it and for the analytics to populate. With all of our design and prep work paying off, it may seem that the involvement of the UX professional should be coming to a close. It is my belief that nothing could be further from the truth.
We finally have actual data in our hands from real users, the users we built the solution for. How could we possibly pass up the chance to pore over that information and set up various channels and metrics? We can’t. Understanding how to read the data collected is one thing; being able to put that data into practice is another. I’ve been using Google Analytics for a number of years now, and it has only improved on what was in place through progressive enhancements. This is why I excel at eCommerce UX and why tools like Hotjar and Google Analytics have become invaluable parts of my toolkit.
That is something you need to consider: progressive enhancement. You’ve only now released your minimum viable product into the wild. What can we do to improve its performance, let alone our users’ experience? So consider the use of live analytical data as a second form of real-world usability testing. I do.
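As one example of treating live data that way, a checkout funnel exported from an analytics tool can be read much like a flow test: the step with the steepest drop-off is the next candidate for iteration. The step names and counts below are hypothetical, just to show the arithmetic.

```python
# Hypothetical page-view counts exported from an analytics tool, ordered as a
# checkout funnel. Step-to-step drop-off points to where the next design
# iteration should focus.
funnel = [
    ("product_page", 12000),
    ("cart",          3400),
    ("shipping",      2100),
    ("payment",       1700),
    ("confirmation",  1300),
]

for (prev_step, prev_count), (step, count) in zip(funnel, funnel[1:]):
    print(f"{prev_step} -> {step}: {count / prev_count:.0%} continue")

overall_conversion = funnel[-1][1] / funnel[0][1]
print(f"Overall conversion: {overall_conversion:.1%}")
```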
Conclusion
No matter how you approach gathering user feedback, you always want to ensure that the tasks and questions will carry weight with the product and provide valuable insight into its use. There will always be red tape: getting buy-in from product owners, finding the right tool for the job, and spending money if need be. Make sure to go into a usability study with a solid plan of action. Capture the data you need, share the results, make adjustments, and evolve your product over time.
It’s not overly difficult; there are just a lot of moving parts, and as a UX professional it’s part of your function to understand how to turn user feedback into actionable items.
Footnotes
- I’ve previously referred to usability tests as “studies” because of the initial response of participants when they hear the word test. The word test may provoke low levels of performance anxiety, as study participants will want to “test well” or “ace the test”.
Resources
- Usability.gov ( https://usability.gov )
- Usability Hub ( https://usabilityhub.com )
- Useberry ( https://www.useberry.com )
- User Testing ( https://www.usertesting.com )