André de Lima Salgado

''An ontology for usability findings''

INTRODUCTION

In the formative usability field, Usability Evaluation Methods (UEMs) are methods for diagnosing usability findings. Such a diagnosis is necessary to identify ways to enhance the usability of an interface (Lewis, 2014).

New software technologies are constantly being proposed, opening the way for new user interface paradigms applied in different domains. Therefore, the literature on formative usability has to keep UEMs up to date with these new paradigms and domains (Hermawati & Lawson, 2016). For this reason, improving existing UEMs, or proposing new ones, is an important task for moving the field forward. In this context, methods for assessing different UEMs are required to support studies on this topic (Hartson et al., 2001).

Despite the importance of assessing UEMs, the literature on methods for assessing UEMs still faces criticism. One of the main criticisms regarding the assessment of different formative UEMs concerns the process of matching usability finding descriptions across reports produced with different UEMs (Hornbæk, 2010). Moreover, there is still no widely adopted classification for describing usability findings (Hornbæk, 2010; Yusop et al., 2017). A reliable classification is therefore needed to support the matching of usability finding descriptions.

The goal of this paper is to describe a reliable classification of usability findings, which may be used in further research on the assessment of UEMs to improve the process of matching usability finding descriptions and enable richer comparisons. To this end, we adopted techniques from the Semantic Web to create an ontology of usability findings based on international software standards and evidence from the literature. The ISO/IEC 25000 series served as the basis for developing our ontology. In addition, we included attributes identified as common in the state of the art on describing usability defects (Yusop et al., 2017).
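To make this idea concrete, the sketch below shows how an ontology of usability findings could be encoded with the Python library rdflib. It is a minimal sketch only: the namespace, class names, and properties (severity, violatedHeuristic, affectsComponent, description) are illustrative assumptions for this example, not the actual vocabulary of the ontology described later in this paper.

<pre>
# Minimal sketch of a usability-finding ontology encoded with rdflib.
# All names below are illustrative assumptions, not the paper's actual ontology.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

UF = Namespace("http://example.org/usability-finding#")  # hypothetical namespace

g = Graph()
g.bind("uf", UF)

# Hypothetical classes: the finding itself and the interface component it affects.
g.add((UF.UsabilityFinding, RDF.type, RDFS.Class))
g.add((UF.InterfaceComponent, RDF.type, RDFS.Class))

# Hypothetical attributes inspired by common usability-defect report fields.
for prop in (UF.severity, UF.violatedHeuristic, UF.affectsComponent, UF.description):
    g.add((prop, RDF.type, RDF.Property))

# Example instance: one finding reported by an evaluator.
finding = UF.finding_001
g.add((finding, RDF.type, UF.UsabilityFinding))
g.add((finding, UF.severity, Literal("major")))
g.add((finding, UF.violatedHeuristic, Literal("Visibility of system status")))
g.add((finding, UF.description, Literal("No feedback is shown after the form is submitted.")))

print(g.serialize(format="turtle"))
</pre>

Encoding findings as structured triples like this is what allows descriptions from different reports to be compared attribute by attribute rather than as free text.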

We employed our classification in a case study to compare Heuristic Evaluation (HE) reports from three (3) different evaluators. We asked each evaluator to restate his/her report in the format of our classification through an online form. We also asked each evaluator for their opinion on the process of classifying their findings. The results suggest that our classification is plausible for research purposes and also helps evaluators clarify their findings, as they are required to describe their findings at a detailed level.
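For illustration, the following sketch shows how findings classified in such a structured format could be matched across evaluators by comparing attribute values instead of free-text descriptions. The attribute names and the equality-based matching rule are simplifying assumptions for this example, not the exact procedure used in the case study.

<pre>
# Sketch of matching classified findings across evaluators by structured attributes.
# Attribute names and the matching rule are assumptions for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class ClassifiedFinding:
    evaluator: str
    violated_heuristic: str   # e.g., one of Nielsen's heuristics
    affected_component: str   # e.g., "checkout form"
    severity: str             # e.g., "minor", "major"

def same_finding(a: ClassifiedFinding, b: ClassifiedFinding) -> bool:
    """Consider two findings a match if they refer to the same component
    and violate the same heuristic, regardless of who reported them."""
    return (a.violated_heuristic == b.violated_heuristic
            and a.affected_component == b.affected_component)

reports = [
    ClassifiedFinding("evaluator_1", "Visibility of system status", "checkout form", "major"),
    ClassifiedFinding("evaluator_2", "Visibility of system status", "checkout form", "minor"),
    ClassifiedFinding("evaluator_3", "Error prevention", "search field", "major"),
]

# Count pairs of findings reported by different entries that match on the rule above.
matches = [(a, b) for i, a in enumerate(reports) for b in reports[i + 1:] if same_finding(a, b)]
print(f"{len(matches)} matching pair(s) across evaluators")
</pre>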

The remainder of this paper is organized as follows: …….

LITERATURE REVIEW

Usability Findings

Usability Defect Reporting

METHODS

Usability Defect Ontology


CASE STUDY

DISCUSSIONS

CONCLUSIONS

ACKNOWLEDGEMENTS

REFERENCES

Lewis, James R. “Usability: Lessons Learned … and Yet to Be Learned.” International Journal of Human–Computer Interaction 30, no. 9 (2014): 663–84. doi:10.1080/10447318.2014.930311.

Hartson, H Rex, Terence S Andre, and Robert C Williges. “Criteria for Evaluating Usability Evaluation Methods.” International Journal of Human-Computer Interaction 13, no. 4 (2001): 373–410.

Hermawati, Setia, and Glyn Lawson. “Establishing Usability Heuristics for Heuristics Evaluation in a Specific Domain: Is There a Consensus?” Applied Ergonomics 56 (2016): 34–51. doi:10.1016/j.apergo.2015.11.016.

Hornbæk, Kasper. “Dogmas in the Assessment of Usability Evaluation Methods.” Behaviour & Information Technology 29, no. 1 (2010): 97–111.

Yusop, N. S. M., J. Grundy, and R. Vasa. “Reporting Usability Defects: A Systematic Literature Review.” IEEE Transactions on Software Engineering 43, no. 9 (September 2017): 848–67. doi:10.1109/TSE.2016.2638427.

Agenda

20/10/2017 - Validate the taxonomy with a domain expert

28/10/2017 - Experiment with evaluators

10/11/2017 - Analysis of results

17/11/2017 - Partial paper

06/12/2017 - Full paper