The UseMonitor Project:
A web analyser applying Transaction-Oriented Analysis to produce usability measures
by Walter de Abreu Cybis

Have you ever asked yourself why usability appraisals based on log data analysis are so unpopular among web site ergonomic experts?
It is strange, because log data are traces of interactions accomplished by real users performing in their natural environment while completing genuine tasks. Nobody observes or constrains them while they accomplish their tasks in their homes or offices. These are ideal conditions for gathering interaction data for usability studies.

Current web analysers supply us with technical (exception occurrences), traffic and browsing (how much, when and where users are most frequently found in a web site) and ROI (sales amounts) perspectives. This information is useful for webmasters, software engineers and marketing professionals, but what can a usability specialist do with it? Not much beyond targeting his/her actions, i.e., focusing on what is most important or most frequently accessed by users.

In fact, simple usability metrics such as “productive time in task” or “number of errors in task” are not offered by current tools, and without an appropriate tool this kind of log data analysis is practically impossible.

 

The purpose of this communication is to introduce the UseMonitor tool, the first web analyser focusing on usability measures.

Learn more...

Should you be interested in learning more about this tool, let me describe the UseMonitor project, which develops a “task”-oriented approach to log file analysis. It allows a system to automatically produce metrics concerning user efficiency and effectiveness while users accomplish transactional tasks with a web site. I have been working on it at LabIUtil/UFSC since 2000, at times during this period with the support of Dominique Scapin, from Inria, and Jean-Marc Robert, from the Montreal Polytechnic School. The effectiveness and efficiency metrics are computed according to the model proposed by ISO 9241-11 (Guidance on Usability). This standard proposes specifying usability in terms of the effectiveness, efficiency and satisfaction that users experience while accomplishing a given task. We cannot learn about user satisfaction through log data analysis, but this method does allow us to compute user efficiency during task trials. To do so, we need to look at the log data from a quite different perspective: not as “interaction” vestiges, but as “task” traces. This is the only way we will be able to analyse user productivity from log data.
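To make the idea of task-level metrics concrete, here is a minimal sketch, in Python, of how effectiveness and efficiency could be computed from reconstructed task trials. The record fields and the particular ratios below are illustrative assumptions, not UseMonitor's actual implementation.

from dataclasses import dataclass
from typing import List

@dataclass
class TaskTrial:
    """One attempt at a transactional task, reconstructed from log data."""
    user_id: str
    completed: bool     # did the trial reach the task's ending point?
    duration_s: float   # seconds between the starting and ending log entries

def effectiveness(trials: List[TaskTrial]) -> float:
    """Share of trials that reached the task goal (ISO 9241-11 effectiveness)."""
    return sum(t.completed for t in trials) / len(trials)

def efficiency(trials: List[TaskTrial]) -> float:
    """Completed tasks per minute of user time spent on the task
    (one possible efficiency ratio: resources reduced to time on task)."""
    total_minutes = sum(t.duration_s for t in trials) / 60.0
    return sum(t.completed for t in trials) / total_minutes if total_minutes else 0.0

trials = [
    TaskTrial("u1", True, 95.0),
    TaskTrial("u2", False, 240.0),
    TaskTrial("u3", True, 130.0),
]
print(f"effectiveness: {effectiveness(trials):.2f}")
print(f"efficiency:    {efficiency(trials):.2f} completed tasks per minute")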

In order to move from the “interaction” to the “task” perspective we need to know the users’ objectives. This can be done by combining at least two strategies: analysing the path users followed and identifying the outcomes they reached. For example, when reading log data we may see that a user accessed a registration form and that, some minutes later, the system delivered a confirmation message to his/her machine. It is reasonable to infer that this user wanted to register. The same assumption holds for other types of transactions with clearly observable and distinguishable starting and ending points, such as buying a product, paying bills, or querying an information service like an account balance.
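As a rough illustration of this inference, the following sketch scans one session's log entries for a task's starting point (a registration form) and its ending point (a confirmation page). The URL patterns are hypothetical; a real configuration would map each transactional task of the site to its own markers.

from datetime import datetime
from typing import List, Optional, Tuple

# One (timestamp, requested URL) pair per log line of a single session.
LogEntry = Tuple[datetime, str]

# Hypothetical markers; a real configuration would map each transactional
# task of the site to its own starting and ending pages.
TASK_START = "/register/form"
TASK_END = "/register/confirmation"

def find_task_trial(entries: List[LogEntry]) -> Optional[Tuple[datetime, datetime]]:
    """Return (start, end) timestamps if the session contains a complete
    registration trial, i.e. the starting page followed later by the ending page."""
    start_time = None
    for timestamp, url in entries:
        if url.startswith(TASK_START) and start_time is None:
            start_time = timestamp
        elif url.startswith(TASK_END) and start_time is not None:
            return start_time, timestamp
    return None  # the user never started the task, or started but did not finish

session = [
    (datetime(2004, 5, 3, 10, 0, 12), "/home"),
    (datetime(2004, 5, 3, 10, 1, 40), "/register/form"),
    (datetime(2004, 5, 3, 10, 4, 5), "/register/confirmation"),
]
trial = find_task_trial(session)
if trial:
    print("task completed in", (trial[1] - trial[0]).total_seconds(), "seconds")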

Once a user's objective is known, it is possible to identify when he/she began and ended the task, as well as the different paths he/she followed during that time. In fact, this approach enables the identification of several user behaviours allowed by the web site's user interface structure. In general they relate to immediate success, success with deviation, success with error, success with help, quitting, cancelling (quitting after an error) and so on. By computing the incidence and duration of the different successful behaviours it is possible to determine efficiency metrics. The incidence of failed behaviours, on the other hand, could inform us about user effectiveness on tasks, but in these cases we must accept that the measures will not be precise. In fact, from log data analysis alone there is no way of distinguishing quitting from cancelling behaviours: we will always be unable to tell users who quit a task because of interface obstacles from those who were only visiting the site and left just before executing any command. So I must emphasize that the UseMonitor tools can give a more precise picture of user efficiency on successful tasks. Measures of user effectiveness on tasks, however, will not be as precise unless the tools are combined with remote tests or induced-use studies.
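The sketch below illustrates how a single task trial might be labelled with one of these behaviour patterns. The page categories and classification rules are assumed for the example only; they are not the rules UseMonitor actually applies.

from typing import List

# Hypothetical page categories; a real deployment would classify every URL
# of the site as part of the expected task path, a help page, an error page, etc.
HELP_PAGES = {"/help", "/faq"}
ERROR_PAGES = {"/register/error"}
EXPECTED_PATH = ["/register/form", "/register/confirmation"]

def classify_behaviour(urls: List[str], completed: bool) -> str:
    """Label one task trial with a behaviour pattern (assumed rules)."""
    if not completed:
        # Log data alone cannot tell quitting from casual browsing,
        # hence the imprecision of effectiveness measures noted above.
        return "quit or abandoned"
    if any(u in ERROR_PAGES for u in urls):
        return "success with error"
    if any(u in HELP_PAGES for u in urls):
        return "success with help"
    if urls == EXPECTED_PATH:
        return "immediate success"
    return "success with deviation"

print(classify_behaviour(["/register/form", "/register/confirmation"], True))
print(classify_behaviour(["/register/form", "/help", "/register/confirmation"], True))
print(classify_behaviour(["/register/form", "/register/error"], False))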

Regarding the limitations and applicability of this “task and usability oriented” log file analysis, I must point out that the values supplied by the UseMonitor set of tools correspond to average and individual measures. The average values concern all types of users, working in all kinds of environments, with access through high- and low-bandwidth connections, etc. The system also produces usability measures for individual users, but in these cases nothing can be said about the user's context (except the OS and the browser he/she employs). Even though neither kind of measure is segmented, both are valuable to an ergonomic expert or usability engineer, since they can be obtained quickly and at low cost. They can be employed for technical and managerial purposes, in particular to specify general test approval conditions and to monitor the overall evolution of usability during user interface revisions (something usual among web sites). The rationale for this last practice is quite simple: (i) the components of the context of use, which include the user interface itself, determine usability in a task; (ii) with the exception of the interface, the other elements of the context of use (user profiles, hardware and connection) change relatively slowly over time; (iii) on the other hand, user interfaces of web sites change quickly and frequently. These statements allow us to suppose that recent changes in a given user interface are the main suspects behind recent variations (increases or decreases) in user efficiency on the task.
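To illustrate the distinction between average and individual measures, here is a small sketch that groups the durations of completed trials per user; again, the record fields are assumed for the example only.

from collections import defaultdict
from statistics import mean
from typing import Dict, List, Tuple

# (user id, task duration in seconds) for completed trials of one task.
completed_trials: List[Tuple[str, float]] = [
    ("u1", 95.0), ("u1", 80.0), ("u2", 130.0), ("u3", 60.0),
]

# Average measure: all users and all contexts of use mixed together.
print("average time in task:", mean(d for _, d in completed_trials), "seconds")

# Individual measures, one per user; nothing is known about each user's
# context beyond what the log itself records (OS, browser).
per_user: Dict[str, List[float]] = defaultdict(list)
for user, duration in completed_trials:
    per_user[user].append(duration)
for user, durations in per_user.items():
    print(user, mean(durations), "seconds")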

In brief, the UseMonitor set of tools aims to supply ergonomic experts, and even webmasters, quickly and at low cost, with values concerning user efficiency on transactional tasks in web sites supporting B2C and B2B, including ERP, home banking on the Web, WAP, TVi, IVR, etc.

This is a vision of a new kind of usability practice: truly engineering-based, massively founded on quantitative measures obtained quickly and at low cost.
 
 

 

 


 

Contact
Email: walter.cybis@polymtl.ca
