
Evaluating ourselves to death. An alternative way to compare on- and offline learning
Full Text

Doctoral dissertation (Hochschulschrift)

FernUniversität in Hagen, Lehrgebiet Medientheorie und Medienpädagogik, 2008

URL: http://deposit.fernuni-hagen.de/505/1/Wir_evaluieren_uns_zu_Tode.pdf

Media-based learning and e-learning have recently become topical. Current trends focus on cooperative and collaborative learning, reflecting the importance of web 2.0 applications and media literacy. Institutions of higher education seem to have accepted this challenge in recent years: online and blended learning arrangements have become important parts of the curriculum, with great potential. Interaction and activity rate highly, and formal learning becomes more and more informal. Since this is a contradiction in terms, didactical issues and practical benefits seem ambiguous. It is still relevant to evaluate and measure the added value of learning scenarios in order to develop concepts for the use of new media. To measure the success of e-learning, comparison studies are used in most cases: one setting is compared with another. But are such comparisons useful?

We are "evaluating ourselves to death" regarding the surges of evaluation studies that are conducted with various forms, various claims and definitions and various topics. One part of these studies report of no significant difference between the settings, another part says that e-learning is more successful while the third part found out that the best outcome can be reached with traditional learning. Thus, reliable conclusions are not possible - even worse: in the end it is not even clear what learning outcome really is, as the definitions change in the plurality of the studies (cp. Annabell Preussler & Peter Baumgartner 2006).

So, what makes learning successful, and how can this be detected? How can high quality in learning be achieved? In the field of education this is evaluated by measuring the learning outcome. It becomes an indicator for the construct of learning quality; yet learning outcome is itself a construct. Observable characteristics are derived from a particular theoretical construct and must be questioned on the basis of the underlying theoretical assumptions. If, for example, the ability to remember is used as the key indicator of learning, then many other possible indicators (such as apply, evaluate, create, etc.) are excluded or ignored.
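To make this exclusion concrete, here is a minimal sketch in Python; the names and items are illustrative and not taken from the dissertation. It checks which of the cognitive process dimensions of Anderson & Krathwohl's revised taxonomy a set of assessment items actually touches:

    # Cognitive process dimensions of the revised taxonomy
    # (Anderson & Krathwohl 2001), from lowest to highest.
    DIMENSIONS = ["remember", "understand", "apply",
                  "analyze", "evaluate", "create"]

    def coverage(assessment_items):
        # Map each dimension to whether any item tests it.
        tested = {item["dimension"] for item in assessment_items}
        return {dim: dim in tested for dim in DIMENSIONS}

    # A hypothetical assessment built only from recall questions:
    recall_only = [{"dimension": "remember", "prompt": "List the key terms."}]
    print(coverage(recall_only))
    # Every dimension except "remember" comes out False,
    # i.e. it is excluded from the measured "learning outcome".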

Secondly, there is yet another problem: to compare the learning outcome of two (or more) different settings, the same assessment is usually used for both groups. This carries the risk of capturing only one specific dimension of learning, and in most cases it is the dimension that one of the settings serves well anyway; the advantages of the other setting are not taken into account.

In this work a meta-evaluation was conducted in order to find out how the problem is dealt with in practice (Annabell Preussler 2008). Evaluation studies on the relation between e-learning and learning outcome, and on the influence of e-learning respectively, were analysed and compared. One hypothesis was that learning outcome cannot be operationalised unambiguously and that unspecific comparisons of e-learning and traditional learning cannot be conducted meaningfully without restrictions.

To analyse and evaluate the existing studies, the meta-evaluation was carried out as a review (cp. Cook & Gruder, 1978:17), which did not explicitly focus on the representativeness of the single results but documented the differing understandings of learning outcome and their operationalisation and measurement. After the relevant criteria had been specified, 11 primary studies were included in the meta-evaluation. The coding was done using the methods of deductive content analysis (cp. Mayring, 1993; cp. Widmer, 1996:64).
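As an illustration only, the deductive step can be pictured as follows; the category labels are hypothetical stand-ins, not the dissertation's actual coding scheme. Categories are fixed before coding, and each primary study is assigned to the category matching its operationalisation of learning outcome:

    from collections import Counter

    # Categories fixed in advance, as deductive content analysis requires;
    # the labels here are hypothetical stand-ins.
    CODING_SCHEME = {
        "recall_test": "outcome = score on a memory test",
        "course_grade": "outcome = final course grade",
        "self_report": "outcome = self-assessed learning gains",
    }

    def code_study(study):
        # Assign a study to the predefined category it matches.
        assert study["operationalisation"] in CODING_SCHEME
        return study["operationalisation"]

    studies = [{"id": 1, "operationalisation": "recall_test"},
               {"id": 2, "operationalisation": "course_grade"},
               {"id": 3, "operationalisation": "recall_test"}]
    print(Counter(code_study(s) for s in studies))
    # Tallying the codes shows how often each operationalisation occurs.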

The meta-evaluation revealed the following: while sensible and practical procedures account for the good feasibility of the single studies, the studies score far worse on accuracy (validity and reliability) and especially on the adequacy of research decisions. Different conditions were either measured with the same tests, or, where the learning objectives were properly tested at the same level of cognitive processes, the design of the assessment was not related to the intended learning objectives.

For future studies comparing the outcome of e-learning with traditional learning, the following recommendations are given (a sketch follows the list):

  1. The intended learning objectives of both groups should be on the same level of the cognitive process dimensions (Lorin W. Anderson & David R. Krathwohl 2001).

  2. The assessment should be adequate to these objectives.

  3. The design of the assessment should correspond to the learning arrangement in the test groups.
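Read together, the three recommendations amount to an alignment check before any comparison is made. A minimal sketch, with an assumed data model that is not taken from the dissertation:

    from dataclasses import dataclass

    @dataclass
    class Setting:
        name: str
        objective_levels: set    # e.g. {"apply", "evaluate"}
        assessed_levels: set     # levels the assessment actually tests
        arrangement: str         # e.g. "online" or "face-to-face"
        assessment_format: str   # format the assessment is delivered in

    def violated_recommendations(a, b):
        # Return the recommendations a planned comparison would violate.
        problems = []
        if a.objective_levels != b.objective_levels:        # rec. 1
            problems.append("1: objectives on different cognitive levels")
        for s in (a, b):
            if s.assessed_levels != s.objective_levels:     # rec. 2
                problems.append("2: " + s.name + ": assessment not adequate to objectives")
            if s.assessment_format != s.arrangement:        # rec. 3
                problems.append("3: " + s.name + ": assessment does not match arrangement")
        return problems

An empty result would be a necessary, though not a sufficient, condition for a meaningful comparison between the two settings.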
