PartyCasino had just carried out a major rebrand of its product across all digital touchpoints.
My task was to conduct a usability test to find out how well the new PartyCasino website worked for people who wanted to use its services, identifying what does and doesn’t work for the user.
The first task was to create a test plan and set out the purpose of the usability test.
What was the purpose of the usability test?
The overall aim of this usability test is to find usability problems with the PartyCasino Website so that they can be fixed.
The test has the following specific goals:
I also outlined what this usability test was not designed to achieve. Because the test would be conducted with only a small number of participants (6–8 are typically enough to uncover around 85% of usability problems), it is not suited to answering market research questions, which typically need much larger samples. I wanted the key stakeholders to be aware of this from the start.
The website under test is the PartyCasino website.
The key business goals are as follows:
The website is targeted at professionals and VIPs. These users expect hassle-free products and have limited time.
Consumers who use the Website achieve the following goals:
The environment in which the website is used is as follows:
These were the environments used to simulate the usability tests.
Six participants took part in the usability test. Some sessions were conducted in person and others were moderated remotely. Each participant was paid an incentive for taking part.
The participants all had experience of playing games online. The key characteristics of the participants were:
The usability tests were carried out in the following locations:
The table below shows the schedule for the usability tests.
| Time | June 19th | June 25th |
| --- | --- | --- |
| 9:00 – 10:00 | Pilot testing | Participant 3 |
| 11:00 – 12:00 | Pilot testing | Participant 4 |
| 14:00 – 15:00 | Participant 1 | Participant 5 |
The purpose of the pilot test was to resolve any issues with the recording equipment or with the website that might cause delays to the actual tests. It was well worth doing: I found I had to adapt one of my scenarios due to country restrictions.
The test tasks chosen are below:
Below is the method I used to log any difficulties the participants faced. Each observation is recorded with a code and a short description of the behaviour.
These are the observational codes we used:
| Code | Meaning |
| --- | --- |
| * | Video highlight (an “ah-ha” moment) |
| F | Facial reaction (surprise) |
| A | Assistance from the moderator |
| Q | Gives up or wrongly thinks the task is complete |
| H | Help or documentation accessed |
| M | Miscellaneous (general observation by observer) |
I presented the usability test plan to the key stakeholders. The feedback was positive and it was agreed to move forward with the usability testing.
It was decided however to remove the following test scenario:
We all agreed it was not a realistic scenario, because a player engaged in gameplay may not notice that their credit is running low.
The first task was to send a recruitment screener to current users in our database.
For the in-person usability testing I had to make sure the users were available near where I’m based, in Spain and Gibraltar. For remote testing I just had to find a time that suited everyone’s timezone.
The main things I needed to learn from the screener were each user’s level of internet experience and online gaming experience.
A copy of the recruitment Screener can be found here.
Once participants had been recruited, I sent them an email with the date and location of the test, outlined their rights as participants, and attached a copy of the informed consent letter to read through, which they would need to sign on the day in order to take part.
After each user confirmed, I sent a reminder email a few days before their usability test.
To make sure every usability test was consistent and carried out to a high standard, I ran each session from a discussion guide.
I started by welcoming the participant and explaining who I was and what the aim of the test would be.
I then explained the procedure to the participants and explained that the session would not last longer than 45 minutes.
The user was then asked to sign the statement of informed consent.
Once it was signed, we proceeded with the usability test, starting with a demonstration of how to think aloud.
In usability testing it’s very important to understand the user’s thought process as they use your website, and the best way to gather this is to ask participants to think aloud. I first showed the participants an example of thinking aloud, and then gave them a simple warm-up task, such as adding a new contact on their mobile phone.
I asked the user if they had any questions and when they were happy we started the usability test.
Reminding the user - “The most important thing to remember when you’re using it is that you are testing the website - The website is not testing you. There is absolutely nothing that you can do wrong”.
Here is a clip of a participant carrying out a task
On completion of the tasks I thanked the participants and asked a couple of questions such as:
I then handed the participants two questionnaires to fill out to measure the satisfaction of the website. Both can be viewed via these links:
Finally I thanked the participants for coming along and asked if they had any questions. I also asked if they had any suggestions about how I could run these tests better, either in the terms of scheduling or in the way I ran it.
The final task was to send the user a thank you email for taking part in the usability test.
After all the usability tests were over, it was time to analyse the results.
When measuring usability, there are three areas to analyse:
Effectiveness measures how successful the users were at completing the key tasks of the usability study.
The table below shows the task completion rate of my users.
| Participant | Task 1 | Task 2 | Task 3 |
| --- | --- | --- | --- |
Efficiency measures how much time it took the users to achieve the key tasks of the usability study.
Where the table is blank, the participant failed to complete the task. Although I could have recorded the task failure data and worked out an average failure time per task, it was not needed for this project.
| Participant | Task 1 | Task 2 | Task 3 |
| --- | --- | --- | --- |
Standard deviation is a common statistical measure of variability.
Using the standard deviation, we can estimate that if we tested all of our users, they would typically complete task 1 no quicker than 388 and no slower than 488.
Collecting satisfaction data is much harder, but it is critical to understanding the usability of a product.
Earlier I mentioned that I had asked participants to complete two questionnaires after completion of all the tasks.
System Usability Scale.
This asks users to answer 10 questions about the website. The questionnaire mixes positively and negatively worded statements to make sure users think about their answers.
After calculating the results, the average rating for the website came to 70, which ranks as a grade C on the SUS score table.
Post site word choice
This is the second way I measured usability success from the usability tests.
The idea is that you show participants a list of words and ask them to tick the ten that best represent the system they have just used. It’s a good way to get participants to be critical of the website being tested, as people are more likely to choose negative words than to voice criticism directly.
Below is a word cloud representing the ten most commonly chosen words from the usability tests. The size of each word reflects how many times it was chosen.