Evaluation


There are two readings for Week 2. The first is Dix, A. (2004) 'Evaluation Techniques' in Dix, A. (2004) Human-Computer Interaction (3rd edition), Harlow: Prentice Hall, Chap. 9: 318-364, which is available as a chapter in a digital book.

The second is Wilson, B., Parrish, P. and Veletsianos, G. (2008) 'Raising the bar for instructional outcomes: Towards transformative learning experiences', Educational Technology, 48(3), 39-44. This article takes a more classroom-oriented view of the process (we have permission to host this article on the wiki).

The verb 'evaluate' has been defined as 'to judge or calculate the quality, importance, amount or value of something', e.g. 'It's impossible to evaluate these results without knowing more about the research methods employed'.¹ Evaluation can be undertaken in a number of ways, of which four main methods are:
 * Needs analysis - readiness or need for an intervention.
 * Formative evaluation - feedback during development or piloting.
 * Summative evaluation - measures effectiveness and impact.
 * Monitoring or integrative evaluation - how well an initiative has been integrated.

Evaluation has three main goals: to assess the extent and accessibility of the system's functionality, to assess users' experience of the interaction, and to identify any specific problems with the system (Dix, 2004).

There are different ways to conduct evaluation, but a useful starting point is Nielsen, who advocates a method of expert-based evaluation. The expert tests the device or software for its conformance to a set of rules, known as heuristics (Nielsen's 10 Heuristics). Based on those standards, the evaluator can identify any problems from his/her expert point of view.

Heuristic evaluation is therefore not conducted from the perspective of a real user. With this method the expert may not be aware of the conditions under which real users act in everyday use of the product, because the expert already knows the product. I believe that conducting heuristic evaluation on its own, as a method of evaluating software for usability problems, does not offer very accurate information. As the purpose of evaluating a device is to find any problems that may negatively affect user performance, a **co-operative evaluation process** is also needed, using real users performing realistic tasks.

There are many ways to evaluate material, but here we are interested in two aspects of the evaluation process: the materials themselves and the way they work, as well as their educational value.

A very informative article on this topic is 'Comparative Analysis of Heuristics and Usability Evaluation Methods' by Elizabeth J. Simeral and Russell J. Branaghan.

I also found an article on evaluation in ELT, by Marion Williams and Robert Burden, which looks at how formative evaluation provides positive benefits in the design of a programme.


References