Interim report: ‘MU-MAP: Mapping University Mathematics Assessment Practices’

Paola Iannone, University of East Anglia, and Adrian Simpson, Durham University, have submitted the following interim report for their project ‘MU-MAP: Mapping University Mathematics Assessment Practices’.

Start date: 9th November 2011 – initial preparation between 01.10.11 and 09.11.11

Initial preparation:

The RA for the MU-MAP project was appointed in August 2011 and started work at UEA on the 9th of November 2011. Some initial preparation for the project was carried out by the PI and Co-Applicant before the project started. This included:

  1. Creation of a one-page website for the project at http://uea.ac.uk/edu/mumap. This website will be populated as the project progresses. A team of web developers familiar with the UEA web design package has been commissioned to develop it. The website will house the materials and outcomes of the MU-MAP project and will be maintained by UEA for the next five years.
  2. A dissemination workshop (two 2-hour sessions) at the forthcoming British Mathematical Colloquium (BMC, 16–19 April 2012, University of Kent) has been organised with the conference organisers. The title of the workshop is “How we assess mathematics students: a survey and case studies. Findings from the MU-MAP Project”.

First MU-MAP Meeting – Loughborough University – 17.11.11

The first MU-MAP meeting with mathematics lecturers was held on 17.11.11 at Loughborough University. During this meeting we presented preliminary findings from the survey of assessment practices and launched the call for mini-projects, part of Phase 4 of MU-MAP.

Phase 1: Comprehensive Review of the Literature

The review of the literature (as detailed in Phase 1 of MU-MAP) is well under way. We have collected papers relevant to:

  1. Existing empirical research in university assessment practices, particularly focussed on STEM.
  2. Existing empirical research in university mathematics assessment practice.
  3. Existing pedagogical scholarship in university mathematics assessment practice.

Currently we are writing summaries of the papers included in the literature review. These summaries will form part of the searchable database on the MU-MAP website. A summary of findings from this literature review will form the first part of the MU-MAP Good Practice Book and will be posted on the MU-MAP website.

Phase 2: Surveying Existing Practice

The web-based survey (as detailed in Phase 2 of MU-MAP) has been completed and the results are collected in our database. The interviews with the Heads of School of mathematics departments are under way; so far we have interviewed 27 Heads of School.

Phase 3: Identification of Good Practice

As part of the interviews with the Heads of School, we asked them to nominate staff in their departments who assess their modules with innovative, interesting assessment practices (as detailed in Phase 3 of MU-MAP). We have interviewed 17 lecturers about their assessment practices. These interviews will form the basis of anonymised short case studies of good assessment practice, which will be posted on the website and included in the MU-MAP Good Practice Book.

Phase 4: Costs and Effects of Change

Following the call for mini-projects at the Loughborough meeting (as detailed in Phase 4 of MU-MAP), we received eight applications and were able to fund seven mini-projects. The list of funded mini-projects, with lead applicants, institutions and brief summaries, is included below. The successful applicants have been notified and work is under way to start the mini-projects. Project leaders will disseminate their findings at the forthcoming BMC, as described above.

Mini-projects funded by MU-MAP

Audience Response devices for formative and summative assessment
Paul Hewson and David Graham
School of Computing and Mathematics, Plymouth University

Audience response devices have received tremendous attention in the learning community. Kay and LeSage (2009), “Examining the benefits and challenges of using audience response systems: A review of the literature”, Computers & Education 53: 819–827, present a review of the classroom use of these devices. There are many advantages, and one notable potential pitfall: that students may not appreciate being constantly assessed. However, our experience is that students do like constant assessment if it is accompanied by rapid feedback. Also, surprisingly, we have had students suggest they would prefer to take in-class tests (that is, summative assessment) using these devices. The aim of this project is to determine the suitability of audience response clickers for this kind of assessment. The literature is not clear on the problems: there is a lot of emphasis on “fun” and “engagement”.
We also note that we use clickers that allow for numeric as well as multiple-choice input, which lets us set a wider range of questions than is possible with many systems. We also have experience in dealing with the equipment (again, the literature suggests this is the largest barrier to effective use of these devices). What we do not know is whether it is possible or desirable to incorporate use of these clickers as part of the formal assessed work in a module. By integrating assessment more thoroughly within learning we may obtain better learning rather than a focus on preparing for tests. On the other hand, flexibility with the questions used in class can be hugely advantageous (see, for example, Chin and Brown (2002), “Student-generated questions: A meaningful aspect of learning in science”, International Journal of Science Education 24: 521–549) but is more difficult to achieve with clicker-based learning.

Assessing proofs in pure mathematics
Timothy Hetherington
School of Science and Technology, Nottingham Trent University

Can the marking burden be reduced through innovative assessment, whilst keeping the task educationally rich and maintaining the same level of student engagement and learning from the assessment? What novel methods might be used to assess student comprehension? How can the ideas behind mathematical proof be assessed? Can an imaginative approach be developed to determine a student’s ability to identify assumptions in a mathematical argument? These are the questions that have motivated this project. So far the applicant has begun developing an innovative assessment: a multiple-choice test on mathematical proof. This project seeks to take the implementation of this novel assessment practice further. It also aims to obtain detailed feedback from students to enable a comprehensive evaluation of this new method of assessing pure mathematics. Being part of the MU-MAP project will facilitate the dissemination of new ideas and the development of resources, and provide a mechanism to share good practice.

Evaluating assessment practices in a Business and Industrial Mathematics module
Edmund Chadwick
School of Computing Science and Engineering, Salford University

The Business and Industrial Mathematics module runs in the second year of the mathematics undergraduate degree at the University of Salford. The module is 100% coursework, is worth 20 credits and spans two semesters. It attempts to prepare students for, and assess them on, work-related skills important for mathematicians in the workplace. A variety of assessment methods is used to quantify these skills, and this proposal aims to compare and evaluate the various assessment practices used.

Towards an efficient approach for examining employability skills
Stephen Garrett
Department of Mathematics, University of Leicester

With the recent shift in undergraduate funding, the future success or failure of a university department has been placed in the hands of student recruitment, and so of league-table performance. The employability of graduates has therefore never been more important. But how best should mathematics departments assess these skills in their students? A student’s approach to an open-ended problem, one with no necessarily right or wrong answer, is crucial for their employability and is often used on graduate assessment days. Indeed, the value of a mathematics graduate lies in his or her problem-solving skills. The scope for such open-ended problems in applied mathematics is great, yet traditionally these skills are not examined until final-year projects. This severely limits the number of opportunities to develop and assess these skills over an undergraduate programme. This project intends to look at how best to examine a student’s approach to open-ended problems. It will compare the summative attainment of two cohorts of students, both enrolled on modules with common lectures. One group of students will be assessed by a two-hour written examination consisting of four compulsory questions, plus an extended project over the semester. The other group will be examined by a three-hour written examination consisting of the same four questions and an additional open-ended question closely related to the extended project. Such an assessment scheme already exists within a related pair of modules at Leicester. I intend to use this as an opportunity to gauge whether open-ended exam questions are a useful addition to the assessment armoury of mathematics departments. If open-ended exam questions can be shown to be as useful as extended projects for assessing particular skills (to be determined), their significant practical advantages mean that more use should be made of them across undergraduate programmes.

Summative peer assessment of undergraduate calculus using Adaptive Comparative Judgement
Ian Jones and Lara Alcock
Mathematics Education Centre, Loughborough University

The project aims to demonstrate that sustained mathematical reasoning can be peer assessed with high reliability using Adaptive Comparative Judgement (ACJ). This will be a live trial and the outcome, checked against expert assessment, will be used as part (5%) of the summative assessment of first-year undergraduates studying calculus at Loughborough University (first-year modules do not contribute to overall degree credit). This innovation has important implications for the assessment of mathematical skills that are traditionally seen as difficult to test, such as making judgements about the relative quality of different mathematical explanations. It therefore needs to be evaluated rigorously, and the project will allow us to undertake this evaluation.
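For readers unfamiliar with the technique, the sketch below illustrates the statistical idea that typically underlies ACJ: judges repeatedly compare pairs of scripts, deciding only which of the two is better, and the accumulated win/loss record is fitted to a Bradley–Terry model whose estimated ‘strengths’ induce a rank order. This is a minimal illustrative sketch in Python, not the project’s software; the ‘adaptive’ element (choosing which pair to present next) is omitted, and all data and names are invented.

# Minimal sketch of the scoring step commonly used in ACJ: fit a
# Bradley-Terry model to pairwise judgements and rank by estimated strength.
from collections import defaultdict

def bradley_terry(judgements, n_iters=100):
    """Estimate Bradley-Terry strengths from (winner, loser) pairs using
    Hunter's (2004) minorisation-maximisation updates."""
    wins = defaultdict(int)   # total wins for each script
    pairs = defaultdict(int)  # number of comparisons per unordered pair
    scripts = set()
    for winner, loser in judgements:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1
        scripts.update((winner, loser))

    strength = {s: 1.0 for s in scripts}
    for _ in range(n_iters):
        new = {}
        for i in scripts:
            # Sum n_ij / (s_i + s_j) over every opponent j of script i.
            denom = sum(pairs[frozenset((i, j))] / (strength[i] + strength[j])
                        for j in scripts
                        if j != i and frozenset((i, j)) in pairs)
            new[i] = wins[i] / denom if denom > 0 else strength[i]
        total = sum(new.values())
        strength = {s: v / total for s, v in new.items()}  # normalise
    return strength

# Invented example: each tuple records (winning script, losing script).
judgements = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "A"), ("B", "A")]
for script, theta in sorted(bradley_terry(judgements).items(),
                            key=lambda kv: -kv[1]):
    print(script, round(theta, 3))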

Mathematics Lecturers’ Practice and Perception of Computer-Aided Assessment
Carol Robinson and Paul Hernandez-Martinez
Mathematics Education Centre, Loughborough University

Mathematics lecturers at Loughborough University are in the position of being able to utilise computer-aided assessment (CAA) without the need to develop their own questions. Two projects, undertaken a few years ago by colleagues at Loughborough University and elsewhere, have resulted in question banks containing thousands of CAA questions ready to use (see, for example, the HELM project: http://helm.lboro.ac.uk). This project aims to evaluate the issues arising for lecturers who utilise existing resources and adopt this method of assessment.

It would appear at first sight that the ready availability of CAA questions is an extremely efficient way of assessing hundreds of first-year students and would be welcomed by all involved. Question banks are available for both practice and coursework tests, and lecturers are freed from marking students’ work. The workload for lecturers is minimal, as dedicated e-learning staff are available to upload tests, and the computer software provides pre-prepared feedback to the students and summary statistics for the lecturers.

However, all is not necessarily as straightforward as it might appear. For most large classes it is not possible to invigilate the coursework tests, due to the lack of availability of computer labs for this purpose. Some lecturers and/or departments are concerned that plagiarism is an issue, and in these cases paper-based versions of tests may need to be prepared and marked, thus reducing the efficiency of the system. Other lecturers are concerned about the questions which are available for use: sometimes they do not fully cover the required syllabus. However, the steep learning curve and associated time involved in developing new questions is prohibitive, and so lecturers may be tempted to ‘make do’. Other concerns involve the procedural nature of many CAA questions. Clearly lecturers wish their students to be able to apply standard techniques to solving problems. But what of the students’ conceptual understanding of the mathematics? Is CAA able to test this? Does it matter to lecturers if it does not?

PAMPER: Performance Assessment in Mathematics – Preliminary Empirical Research
Adrian Simpson
School of Education, Durham University

Closed-book examinations dominate the assessment diet of undergraduate mathematics in the UK (Iannone and Simpson, 2011) and, while this is leavened by some variety, oral examinations as a core component at this level had disappeared by the 20th century (Stray, 2001). However, there is renewed interest in this form of assessment as a more valid way of grading students’ performance of mathematics (Levesley, 2011). This mini-project aims to investigate the difficulties and advantages of introducing an oral performance component to the assessment of a pure mathematics course. Anecdotal evidence amongst staff at high-ranking mathematics departments suggests that the modal assessment method (the closed-book examination) does not provide them with the clearest insight into students’ mathematical understanding. Those with experience of oral exams from their own mathematics education (such as those from most other European countries) note that these can provide such insight relatively quickly. However, there is concern about the fairness and resource implications of such examinations. It is not clear whether examiners might be biased (consciously or unconsciously) towards certain students, whether student nervousness might overwhelm their performance, or how the standard of assessment can be assured. It is also felt that the time taken to assess each student individually, and the administration of such a process, could be disproportionate to the quality of the information gained.
This project will:
a) explore the perceptions of staff and tutors on the course regarding the implementation of the assessment;
b) detail the process of undertaking performance assessment in a core module; and
c) outline student attitudes to the assessment process.

