January 29, 1999
School of Information Technology and Engineering
University of Ottawa
An Automated Method for Studying Interactive Systems
Information Retrieval (IR) is usually an interactive process, which makes the evaluation of IR systems difficult. System studies, which measure recall and precision on well-defined tasks in a well-defined document collection, are limited in their application because they do not do justice to user behaviour and interactivity. User studies, in turn, are either costly and time-consuming or small in scale.
In our group, we explore the possibility of adding simulated interactivity to system studies; such simulations may help bridge the gap between user studies and traditional system studies. In the simulations, samples are drawn from the set of all possible user actions, and evaluation proceeds by comparing retrieval performance after different sequences of such interactions. Several machine learning methods are then employed to identify the categories of actions that are most likely to lead to the best end results.
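The simulation loop described above might be sketched as follows. Everything here is invented for illustration: the action categories, the toy scoring model, and the per-action effects are assumptions, not details from the actual study.

```python
import random

# Hypothetical user-action categories with made-up effects on a toy
# retrieval-effectiveness score in [0, 1]. These numbers are illustrative only.
ACTION_EFFECTS = {
    "broaden_query": 0.05,
    "narrow_query": 0.10,
    "relevance_feedback": 0.15,
}

def simulate_session(length, rng):
    """Sample a random sequence of user actions; return (sequence, final score)."""
    score = 0.2  # assumed baseline effectiveness before any interaction
    sequence = []
    for _ in range(length):
        action = rng.choice(sorted(ACTION_EFFECTS))
        sequence.append(action)
        # Each action nudges the score by its category effect plus noise.
        score = min(1.0, score + ACTION_EFFECTS[action] + rng.uniform(-0.02, 0.02))
    return sequence, score

def rank_action_categories(n_sessions=1000, length=5, seed=0):
    """Rank categories by the average final score of sessions containing them."""
    rng = random.Random(seed)
    totals, counts = {}, {}
    for _ in range(n_sessions):
        sequence, score = simulate_session(length, rng)
        for action in set(sequence):
            totals[action] = totals.get(action, 0.0) + score
            counts[action] = counts.get(action, 0) + 1
    return sorted(totals, key=lambda a: totals[a] / counts[a], reverse=True)

if __name__ == "__main__":
    print(rank_action_categories())
```

A real study would replace the toy scoring model with an actual IR system's output, and the simple averaging with the machine learning methods mentioned above; the sketch only shows the shape of the sampling-and-comparison loop.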
In the seminar, I will discuss this methodology and present the results of a first run of simulations.