The Use of the Common European Framework of Reference for Languages to Evaluate Compositions in the English Exam Section of the University Admission Examination

  1. Díez-Bedmar, María Belén
Journal: Revista de educación

ISSN: 0034-8082

Year of publication: 2012

Issue: 357

Pages: 55-80

Type: Article

Other publications in: Revista de educación

Funding information

(1) The author would like to acknowledge the help provided by the Comisión Interuniversitaria Andaluza para las Pruebas de Acceso a la Universidad, and especially its Secretary, Dr. Bengoa Díaz, for allowing her access to the English exams written for the University Entrance Examination in June 2008 in Jaén. Thanks are also due to the Vicerrectorado de Estudiantes e Inserción Laboral at the Universidad de Jaén, and to Drs. Bueno González and Pérez Paredes. Similarly, the author would like to express her gratitude to the project «El sistema de acceso a la Universidad: propuestas en la gestión, decisión e inferencias en el área de lenguas extranjeras» (FFI2011-22442), funded by the Ministerio de Educación, for making the publication of this paper possible.

References

  • ALDERSON, J. C. (1991). Bands and Scores. In J. C. ALDERSON & B. NORTH (Eds.), Language Testing in the 1990s (71-86). London: Macmillan.
  • ALTMAN, D. G. (1991). Practical Statistics for Medical Research. London: Chapman and Hall.
  • ALDERSON, J. C., FIGUERAS, N., KUIJPER, H., NOLD, G., TAKALA, S. & TARDIEU, C. (2004). The Development of Specifications for Item Development and Classification within the Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Reading and Listening. Final Report of the Dutch Construct Project. Unpublished document.
  • AMENGUAL PIZARRO, M. (2003). A Study of Different Composition Elements that Raters Respond to. Estudios Ingleses de la Universidad Complutense, 11, 53-72.
  • AMENGUAL PIZARRO, M. (2003-2004). Rater Discrepancy in the Spanish University Entrance Examination. Journal of English Studies, 4, 23-36.
  • AMENGUAL PIZARRO, M. (2005). Posibles sesgos en los resultados del examen de selectividad. In H. HERRERA SOLER & J. GARCÍA LABORDA (Eds.), Estudios y criterios para una selectividad de calidad en el examen de inglés (121-148). Valencia: Universidad Politécnica de Valencia.
  • AMENGUAL PIZARRO, M. & HERRERA SOLER, H. (2003). What is it that Raters are Judging? In G. LUQUE AGULLÓ, A. BUENO GONZÁLEZ & G. TEJADA MOLINA (Eds.), Las lenguas en un mundo global (11-18). Jaén: Servicio de Publicaciones de la Universidad de Jaén.
  • BACHMAN, L. F. (1990). Fundamental Considerations in Language Testing. Oxford: Oxford University Press.
  • BACHMAN, L. F., & PALMER, A. S. (1996). Language Testing in Practice. Oxford: Oxford University Press.
  • BUENO GONZÁLEZ, A. (1992). Errores en la elección de palabras en inglés por alumnos de Bachillerato y COU. In A. BUENO GONZÁLEZ, J. A. CARINI MARTÍNEZ & A. LINDE LÓPEZ (Eds.), Análisis de errores en inglés: tres casos prácticos (39-105). Granada: Servicio de Publicaciones de la Universidad de Granada.
  • BURSTEIN, J. & CHODOROW, M. (2002). Directions in Automated Essay Analysis. In R. B. KAPLAN (Ed.), The Oxford Handbook of Applied Linguistics (487-497). New York: Oxford University Press.
  • COCHRAN, W. G. (1977). Técnicas de muestreo. México: Trillas.
  • COUNCIL OF EUROPE. (2001). Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge: Cambridge University Press.
  • COUNCIL OF EUROPE. (2003). Relating Language Examinations to the Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Manual: Preliminary Pilot Version. Strasbourg: Council of Europe, Language Policy Division.
  • COUNCIL OF EUROPE. (2009). Relating Language Examinations to the Common European Framework of Reference for Languages: Learning, Teaching, Assessment (CEFR). A Manual. Strasbourg: Council of Europe, Language Policy Division.
  • CRONBACH, L. J., LINN, R., BRENNAN, R. & HAERTEL, E. (1995). Generalizability Analysis for Educational Assessments. Los Angeles: Center for the Study of Evaluation, Standards, and Student Testing, University of California at Los Angeles.
  • CUMMING, A. (1990). Expertise in Evaluating Second Language Compositions. Language Testing, 7, 31-51.
  • DEREMER, M. L. (1998). Writing Assessment: Raters’ Elaboration of the Rating Task. Assessing Writing, 5, 7-29.
  • DÍEZ-BEDMAR, M. B. (in press). Spanish Pre-university Students’ Use of English: CEA Results from the University Entrance Examination. International Journal of English Studies, 11.
  • DÍEZ-BEDMAR, M. B. (in press). The English Exam in the University Entrance Examination: an Overview of Studies. Revista Canaria de Estudios Ingleses, 63.
  • GARCÍA LABORDA, J. (2006). Analizando críticamente la selectividad de Inglés ¿Todos los estudiantes españoles tienen las mismas posibilidades? Tesol Spain, 30, 9-12.
  • GÓMEZ MONTES, I., MARIÑO, J., PIKE, N. & MOSS, H. (2010). Colombia National Bilingual Project. Research Notes, 40, 17-22.
  • GREEN, A. (2008). English Profile: Functional Progression in Materials for ELT. Research Notes, 33, 19-25.
  • HAMP-LYONS, L. (1991a). Scoring Procedures for ESL Contexts. In L. HAMP-LYONS (Ed.), Assessing Second Language Writing in Academic Contexts (241-276). Norwood, NJ: Ablex Publishing Corporation.
  • HAMP-LYONS, L. (1991b). Basic Concepts. In L. HAMP-LYONS (Ed.), Assessing Second Language Writing in Academic Contexts (5-15). Norwood, NJ: Ablex.
  • HAMP-LYONS, L. (2007). Editorial: Worrying about Rating. Assessing Writing, 12, 1-9.
  • HERRERA SOLER, H. (2000-2001). The Effect of Gender and Working Place of Raters on University Entrance Examination Scores. Revista Española de Lingüística Aplicada, 14, 161-180.
  • HUHTA, A., LUOMA, S., OSCARSON, M., SAJAVAARA, K., TAKALA, S. & TEASDALE, A. (2002). A Diagnostic Language Assessment System for Adult Learners. In J. C. ALDERSON (Ed.), Common European Framework of Reference for Languages: Learning, Teaching, Assessment: Case Studies (130-146). Strasbourg: Council of Europe.
  • HUOT, B. A. (1993). The Influence of Holistic Scoring Procedures on Reading and Rating Student Essays. In M. M. WILLIAMSON & B. A. HUOT (Eds.), Validating Holistic Scoring for Writing Assessment: Theoretical and Empirical Foundations (206-236). Cresskill, NJ: Hampton Press.
  • HYLAND, K. & ANAN, E. (2006). Teachers’ Perceptions of Errors: the Effects of First Language and Experience. System, 34, 509-519.
  • JOHNSON, R. L., PENNY, J. & GORDON, B. (2000). The Relation between Score Resolution Methods and Interrater Reliability: an Empirical Study of an Analytic Scoring Rubric. Applied Measurement in Education, 13, 121-138.
  • KAFTANDJIEVA, F. & TAKALA, S. (2002). Council of Europe Scales of Language Proficiency: a Validation Study. In J. C. ALDERSON (Ed.), Common European Framework of Reference for Languages: Learning, Teaching, Assessment: Case Studies (106-129). Strasbourg: Council of Europe Publishing.
  • KHALIFA, H., ROBINSON, M. & HARVEY, S. (2010). Working Together: the Case of the English Diagnostic Test and the Chilean Ministry of Education. Research Notes, 40, 22-26.
  • KONDO-BROWN, K. (2002). A FACETS Analysis of Rater Bias in Measuring Japanese L2 Writing Performance. Language Testing, 19, 3-31.
  • LUMLEY, T. (2002). Assessment Criteria in a Large-scale Writing Test: What Do They Really Mean to the Raters? Language Testing, 19, 246-276.
  • LUMLEY, T. (2005). Assessing Second Language Writing: the Rater’s Perspective. Frankfurt: Peter Lang.
  • MCNAMARA, T. (1996). Measuring Second Language Performance. London: Longman.
  • PENNY, J., JOHNSON, R. L. & GORDON, B. (2000). The Effect of Rating Augmentation on Inter-rater Reliability. An Empirical Study of a Holistic Rubric. Assessing Writing, 7, 143-164.
  • PULA, J. J. & HUOT, B. A. (1993). A Model of Background Influences on Holistic Raters. In M. WILLIAMSON & B. HUOT (Eds.), Validating Holistic Scoring for Writing Assessment: Theoretical and Empirical Foundations (237-265). Cresskill, NJ: Hampton Press.
  • RANDALL, S. (2010). Cambridge ESOL’s Growing Impact on English Language Teaching and Learning in National Education Projects. Research Notes, 40, 2-3.
  • ROBERTS, F. & CIMASKO, T. (2008). Evaluating ESL: Making Sense of University Professors’ Responses to Second Language Writing. Journal of Second Language Writing, 17, 125-143.
  • SALAMOURA, A. (2008). Aligning English Profile Research Data to the CEFR. Research Notes, 33, 5-7.
  • SANTOS, T. (1988). Professors’ Reactions to the Academic Writing of Nonnative Speaking Students. TESOL Quarterly, 22, 69-90.
  • SHAW, S. D. & WEIR, C. J. (2007). Examining Writing. Research and Practice in Assessing Second Language Writing. Cambridge: Cambridge University Press.
  • SONG, B. & CARUSO, I. (1996). Do English and ESL Faculty Differ in Evaluating the Essays of Native English-Speaking and ESL Students? Journal of Second Language Writing, 5, 163-182.
  • SWEEDLER-BROWN, C. O. (1985). The Influence of Training and Experience on Holistic Essay Evaluation. English Journal, 74, 49-55.
  • TURNER, C. E. & UPSHUR, J. A. (2002). Rating Scales Derived from Student Samples: Effects of the Scale Maker and the Student Sample on Scale Content and Student Scores. TESOL Quarterly, 36, 49-70.
  • VANN, R. J., MEYER, D. E. & LORENZ, F. O. (1984). Error Gravity: a Study of Faculty Opinion of ESL Errors. TESOL Quarterly, 18, 427-440.
  • VAUGHAN, C. (1991). Holistic Assessment: What Goes on in the Rater’s Mind? In L. HAMP-LYONS (Ed.), Assessing Second Language Writing in Academic Contexts (111-125). Norwood, NJ: Ablex.
  • WATTS, F. & GARCÍA CARBONELL, A. (2005). Control de calidad en la calificación de la prueba de lengua inglesa de Selectividad. In H. HERRERA SOLER & J. GARCÍA LABORDA (Eds.), Estudios y criterios para una selectividad de calidad en el examen de inglés (99-115). Valencia: Universidad Politécnica de Valencia.
  • WEIGLE, S. C. (1998). Using FACETS to Model Rater Training Effects. Language Testing, 15, 263-287.
  • WEIGLE, S. C. (2002). Assessing Writing. Cambridge: Cambridge University Press.
  • WEIGLE, S. C., BOLDT, H. & VALSECCHI, M. I. (2003). Effects of Task and Rater Background in the Evaluation of ESL Student Writing: a Pilot Study. TESOL Quarterly, 37, 345-354.
  • WEIR, C. J. (2005). Limitations of the Common European Framework for Developing Comparable Examinations and Tests. Language Testing, 22, 281-300.
  • WOLFE, E. W. & RANNEY, M. (1996). Expertise in Essay Scoring. In D. C. ADELSON & E. A. DOMESHEK (Eds.), Proceedings of ICLS 96 (545-550). Charlottesville, VA: Association for the Advancement of Computing in Education.
  • XUELING, C., MEIZI, H. & BATEMAN, H. (2010). The Use of BEC as a Measurement Instrument in Higher Education in China. Research Notes, 40, 13-15.