In recent years, attention has increasingly focused on enhancing user satisfaction with user interfaces, spanning both mobile applications and websites. One fundamental aspect of human-machine interaction is web usability. To assess web usability, the A/B testing technique enables the comparison of data collected from two designs. Broadening such tests to cover more of the designs under evaluation, together with the involvement of both real and fictional users, presents a challenge for which few online tools offer support. We propose Linguistic Decision-Making for Web Usability Evaluation, a methodology for web usability evaluation grounded in user-centered approaches such as design thinking and linguistic decision-making. The methodology engages participants in role-playing scenarios and applies several usability tests, including the widely recognized System Usability Scale (SUS). We incorporate the methodology into a decision support system based on A/B testing, and we validate it in a case study in which real users assess three Moodle platforms at the University of Guadalajara, Mexico.
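For illustration only, and not part of the paper's tooling: the sketch below shows how the standard 10-item SUS questionnaire is scored and how scores from two designs might then be compared A/B-style. The sample responses and the use of a two-sample t-test are assumptions made for this example; the methodology described above aggregates judgments through linguistic decision-making rather than a plain statistical test.

```python
# Minimal sketch: scoring the standard 10-item System Usability Scale
# and comparing two design variants A/B-style. All response data below
# is hypothetical illustration data, not from the case study.
from scipy.stats import ttest_ind

def sus_score(responses):
    """Convert ten 1-5 Likert responses into a 0-100 SUS score.

    Odd-numbered items (positive statements) contribute (value - 1);
    even-numbered items (negative statements) contribute (5 - value).
    The summed contributions are scaled by 2.5 to reach 0-100.
    """
    assert len(responses) == 10
    total = sum(
        (v - 1) if i % 2 == 0 else (5 - v)  # i is 0-based, so even i = odd item
        for i, v in enumerate(responses)
    )
    return total * 2.5

# Hypothetical questionnaires collected for two competing designs.
design_a = [sus_score(r) for r in [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [5, 1, 4, 2, 4, 1, 4, 2, 5, 2],
    [4, 2, 4, 2, 5, 2, 4, 1, 4, 2],
]]
design_b = [sus_score(r) for r in [
    [3, 3, 3, 2, 3, 3, 4, 3, 3, 2],
    [2, 4, 3, 3, 3, 2, 3, 3, 2, 3],
    [3, 3, 2, 3, 4, 3, 3, 2, 3, 3],
]]

# A two-sample t-test as one simple way to compare the designs' mean scores.
stat, p_value = ttest_ind(design_a, design_b)
print(f"mean A = {sum(design_a)/len(design_a):.1f}, "
      f"mean B = {sum(design_b)/len(design_b):.1f}, p = {p_value:.3f}")
```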