Rick Tynan and Robert Bryn Jones

Abstract

Some teacher educators use numerical grades when assessing teaching competencies. In this situation, statistical analysis can be used to monitor consistency and to look for correlations between assessment outcomes across teacher training partnerships and at different stages in training. Another approach is to calculate effect size metrics, which do not claim statistical significance but seek to describe the practical importance of patterns in quantitative data. This study examines numerical grade assessment data from a large secondary initial teacher education programme across schools working in partnership with a higher education provider in the Northwest of England. The proportion of variance between numerical grades for individual Teachers’ Standards and overall teaching was calculated at each formal review point over three consecutive years. Despite the complexity of assessing teaching competencies against performance criteria, and the potential for subjective variation between individual assessors, the data consistently showed underlying patterns. These suggested that quality assurance and assessment management issues could have been a major influence on the assessors.
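The abstract refers to calculating the proportion of variance in grades as an effect size. The sketch below illustrates one common way such a statistic (eta-squared, the between-group share of total variance) can be computed from grouped grade data; it is not the authors' code, and the column names and values are hypothetical.

```python
# Minimal sketch (not the study's code): eta-squared, a "proportion of
# variance" effect size, computed from grades grouped by a category
# such as an individual Teachers' Standard or a review point.
import pandas as pd


def eta_squared(df: pd.DataFrame, group_col: str, value_col: str) -> float:
    """Proportion of total variance in value_col explained by group_col
    (between-group sum of squares divided by total sum of squares)."""
    grand_mean = df[value_col].mean()
    ss_total = ((df[value_col] - grand_mean) ** 2).sum()
    ss_between = (
        df.groupby(group_col)[value_col]
        .agg(lambda g: len(g) * (g.mean() - grand_mean) ** 2)
        .sum()
    )
    return ss_between / ss_total


# Example with made-up data (hypothetical standards and 1-4 grades):
grades = pd.DataFrame({
    "standard": ["TS1", "TS1", "TS2", "TS2", "TS3", "TS3"],
    "grade":    [2, 3, 1, 2, 3, 4],
})
print(eta_squared(grades, "standard", "grade"))
```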
