Blog

Centro de Investigación en Diseño


The right story at the right time

In response to the lack of systematic study of architectural practice, the Building Stories methodology propounds storytelling as a vehicle for studying active cases, i.e., projects that are in the process of being designed and built. The story format provides a dense, compact way to deal with and communicate the complex reality of a real-world project, while respecting the interrelated nature of events, people and circumstances that shape its conception. With an eye to establishing a valuable knowledge resource of and for the profession, the paper explores how stories can be stored, organized and accessed so as to turn the growing story repository into a convenient instrument for students, educators and practitioners.

Read more →

Mind-ing the Task: The role of context in usability research

In this paper we describe our findings on the role of context in usability evaluation, in particular how the nature of the tasks can affect users’ perception of an application’s performance. Our findings show a relationship between the variation in the nature of the tasks used for the usability evaluation on the one hand, and the way in which subjects evaluated the applications afterwards through user-administered questionnaires on the other. These findings contradict the absolute benchmarking goal of some of these tools, raising questions about the feasibility of such benchmarks in software usability evaluation and about how comparative measurements of the benefits of software and technology are carried out under laboratory conditions.

Read more →

How relative absolute can be: SUMI and the impact of the nature of the task in measuring perceived software usability

This paper addresses the possibility of measuring perceived usability in an absolute way. It studies the impact of the nature of the tasks performed in perceived software usability evaluation, using for this purpose the subjective evaluation of an application’s performance via the Software Usability Measurement Inventory (SUMI). The paper reports on a post-hoc analysis of data from a productivity study testing the effect of changes in the graphical user interface (GUI) of a market-leading drafting application. Even though one would expect similar evaluations of an application’s usability for the same releases, the analysis reveals that the outcome of this subjective assessment is context-sensitive and therefore mediated by the research design. Our study uncovered a significant interaction between the nature of the tasks used for the usability evaluation and how users rated the application’s performance. This interaction challenges the notion of absolute benchmarking in subjective usability evaluation, which some software evaluation methods aspire to provide, since subjective measurements of software quality will most likely be affected by the nature of the testing materials used for the evaluation.

Read more →
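
The SUMI abstract above turns on a statistical point: a significant interaction between task type and release means that a single “absolute” usability score is confounded with the tasks chosen for the test. The sketch below is not the paper’s analysis; it is a minimal, hypothetical illustration (made-up SUMI-style global scores, invented column names) of how such a task × release interaction could be probed with a two-way ANOVA in Python using pandas and statsmodels.

```python
# Minimal, hypothetical sketch (not the paper's data or analysis):
# SUMI-style global scores for two releases of an application, each
# evaluated by users who performed one of two task types.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented scores, chosen so release A looks worse only under "editing" tasks.
data = pd.DataFrame({
    "sumi_global": [55, 58, 52, 60, 41, 44, 39, 46,   # release A: drafting, then editing
                    54, 57, 51, 59, 56, 60, 53, 61],  # release B: drafting, then editing
    "release": ["A"] * 8 + ["B"] * 8,
    "task": (["drafting"] * 4 + ["editing"] * 4) * 2,
})

# Two-way ANOVA with an interaction term: score ~ release * task.
model = ols("sumi_global ~ C(release) * C(task)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
# A significant C(release):C(task) row means the usability score depends on
# which tasks were used in the test, not only on the software being rated.
```

In this toy setup the interaction row of the ANOVA table is the quantity of interest: if it is significant, comparing releases on a single pooled score mixes the effect of the software with the effect of the test design, which is exactly the kind of task dependence the abstract reports.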