Empirical foundations for automated quality assessment of learning objects inside repositories

  1. CECHINEL, CRISTIAN
Supervised by:
  1. Salvador Sánchez Alonso (Director)
  2. María Elena García Barriocanal (Co-director)

Defence university: Universidad de Alcalá

Defence date: 17 May 2012

Committee:
  1. Nikolaos Manouselis (Chair)
  2. Miguel Ángel Sicilia Urbán (Secretary)
  3. Juan Manuel Dodero Beardo (Committee member)
  4. Ricardo Colomo Palacios (Committee member)
  5. Julià Minguillón (Committee member)

Type: Thesis

Teseo: 328492 (Dialnet)

Abstract

Learning objects can be defined as small units of knowledge that can be used and reused in the process of teaching and learning. They are considered by many as the cornerstone for the widespread development and adoption of e-learning initiatives around the globe. Most current learning object repositories (systems where learning objects are published so that users can easily search for and retrieve them) have adopted quality-assessment strategies for their resources that normally rely on the opinions of the community of experts and users around them. Although such strategies can be considered successful to some extent, they depend entirely on human work and cannot keep pace with the enormous number of resources available nowadays. This situation has motivated the development of methods for automated quality assessment inside repositories. The present dissertation approaches this problem by proposing a methodology for developing models able to automatically classify the learning objects stored in a repository into quality groups. The basic idea of the dissertation is to use the repositories' existing on-line evaluations (evaluative metadata) to divide learning objects into quality groups (e.g., good and not-good), and then search for intrinsic features of the resources that differ significantly between these groups. We call these features (metrics) "highly rated learning object profiles" and consider them potential indicators of quality that classification algorithms can use as input to create models for automated quality assessment. To test this proposal, we analyzed 35 metrics of a sample of learning objects refereed by the Multimedia Educational Resource for Learning and Online Teaching (MERLOT) repository and elaborated profiles for these resources across the different categories of discipline and material types available.
We found that some of the intrinsic metrics differ significantly between highly rated and poorly rated resources, and that those differences depend both on the discipline category to which a resource belongs and on the type of the resource. Moreover, we found that different profiles should be identified according to the type of rating (peer review or user rating) under evaluation. Based on these findings, we restricted the generation and evaluation of models to the three intersected subsets (combining discipline category, material type, and the peer reviewers' perspective of quality) with the highest number of occurrences in the repository. For those subsets we generated and evaluated models using linear discriminant analysis and data-mining classification algorithms, and obtained preliminary results that point to the feasibility of this approach for these specific subsets. The dissertation ends by presenting two possible usage scenarios for the developed models once they are implemented inside a repository. The initial results of this work are promising, and we expect them to serve as the foundation for the further development of an automated tool for contextualized quality assessment of learning objects inside repositories.
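The core classification step described in the abstract could be sketched roughly as follows. This is a minimal illustration only: the metric name (link count), the training values, and the midpoint-of-means rule are all hypothetical stand-ins, not MERLOT data or the dissertation's actual models, which use linear discriminant analysis and data-mining classifiers over 35 metrics per discipline/material-type subset.

```python
# Hedged sketch: classify learning objects as "good" / "not-good"
# from one intrinsic metric, after evaluative metadata (ratings)
# has split the training sample into the two quality groups.
# A midpoint-of-means threshold is the 1-D special case of linear
# discriminant analysis with equal class variances.

def train_threshold(good_values, not_good_values):
    """Return a decision threshold and the direction of the 'good' class."""
    mean_good = sum(good_values) / len(good_values)
    mean_not = sum(not_good_values) / len(not_good_values)
    threshold = (mean_good + mean_not) / 2.0
    good_is_above = mean_good > mean_not
    return threshold, good_is_above

def classify(metric_value, threshold, good_is_above):
    """Assign a resource to a quality group from its metric value."""
    if (metric_value > threshold) == good_is_above:
        return "good"
    return "not-good"

# Hypothetical training data: link counts of highly rated vs poorly
# rated resources within one (discipline, material-type) subset.
highly_rated = [12, 15, 11, 14]
poorly_rated = [3, 5, 4, 6]

t, above = train_threshold(highly_rated, poorly_rated)
print(classify(13, t, above))  # a new resource with 13 links -> "good"
```

In the dissertation's actual setting, each resource is a vector of up to 35 such metrics, and separate models are fitted per discipline category, material type, and rating source (peer review vs. user), since the discriminating profiles were found to differ across these subsets.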