## Friday, August 24, 2012

### The GCSE and A-Level marking conundrum

Watching BBC Breakfast this morning and the discussion on marking, Charlie Stayt, one of the presenters, asked something along the lines of "Why can't we just set a mark and say if you score over this then you get an A or an A*?" and it never really got answered. So I'm going to give it a go.

On the face of it, it seems obvious: say the A* threshold is 75%; if you score that or above on the paper then you get an A*, so you know that year on year anyone with an A* must have achieved that percentage. That would be great if every year's intake of students were taking exactly the same paper from one year to the next. For rather obvious reasons they aren't.

Every year's paper is different, and the exam boards try (or at least should try) to keep the level of difficulty the same between them; except how do you measure that? Watch a quiz show such as BBC2's Eggheads, where the two teams get set different questions, and see how many times you consider one question to be 'easier' than the other. A question is only easy if you know the answer; if you don't, it's hard. Now try to measure the difficulty of exam papers year-on-year before they are taken. You can't; you can only see how difficult they were after the fact, and that's how the exam graders mark.

One of the guests mentioned the bell curve, which indicates how this process works. Consider a graph produced by the University of South Alabama showing three bell curves (or normal curves, as they're more properly known).

Imagine these showed three years' worth of exam results, plotting the percentage scored on the horizontal axis and the number of students who received that mark on the vertical.

For the first year we can see the majority did well; in the third year they did badly. Now it may simply be that one was an intelligent year and the other an unintelligent one, but given the number of students involved it's more likely that the third year's exam was simply much harder than the first year's.

If we marked by simple percentage then the first year's students would walk away with a lot of A-C grades and the third year's with a lot of Ds and fails. Is that fair? Of course not, so you "grade on the curve". For the first year you only get an A* if you achieve, say, 70% on the paper; for the third year you may only need 60%; the percentage who receive A* grades should remain roughly the same year on year.
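That boundary-setting step can be sketched in code. The sketch below is hypothetical - the grade fractions and scores are invented, and real exam boards use far more than raw rank - but it shows the mechanism: fix the proportion of students at or above each grade, then read each year's boundary mark off that year's own score distribution.

```python
# Hypothetical sketch of "grading on the curve": the grade *fractions* are
# fixed; the mark boundaries are derived from each cohort's own scores.
def grade_boundaries(scores, top_fractions):
    """top_fractions: grade -> cumulative fraction of students at or above it."""
    ranked = sorted(scores, reverse=True)
    n = len(ranked)
    boundaries = {}
    for grade, frac in top_fractions.items():
        # index of the last student still inside the top `frac` of the cohort
        k = max(int(frac * n) - 1, 0)
        boundaries[grade] = ranked[k]
    return boundaries

# Two invented cohorts: an 'easy' paper year and a 'hard' paper year.
easy_year = [92, 88, 85, 81, 78, 74, 70, 65, 60, 52]
hard_year = [78, 72, 68, 64, 60, 55, 50, 45, 40, 30]
fractions = {"A*": 0.1, "A": 0.3, "B": 0.5}

print(grade_boundaries(easy_year, fractions))  # higher boundaries
print(grade_boundaries(hard_year, fractions))  # lower boundaries, same fractions
```

The same fraction of each cohort lands at each grade; only the mark needed to get there moves with the paper's difficulty.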

However, that leaves us with a couple of points. Firstly, any exam board that produced results like those displayed here would have a few questions to answer to Ofqual, the exams regulator, regarding its attempts to keep the exams at the same difficulty level.

The second is that if they're grading on a curve, how is it that we keep hearing about the increase in A-C grades year-on-year? If they're marking to the curve then the percentage of students at each grade should remain constant. Now the raw numbers can go up if there are either more students sitting exams or they're sitting more exams, and we all know the media's delight in conflating numbers and percentages to their own story's gain. But if the percentages are going up then they're not grading to the curve - or they're not just grading to the curve and are using another measure in conjunction with it.

I'll try to explain:

Consider 10 students sitting the same exam, with results plotted like the second year's graph - a default normal curve. We set the pass mark at 50%, so 5 students passed and 5 failed: a 50% pass rate. Do the same with 20 students and exactly the same thing happens: 10 pass, 10 fail, a 50% pass rate.

Split those 20 students up and have 10 of them sit the first year's exam (the first graph's results) and the other 10 the second year's (the middle graph). Again, for the latter we set the pass mark at 50%, and 5 pass and 5 fail. But for the other group the pass mark is set to, say, 70% - how many of the 10 pass and fail?

Five of each - it's still 50%. Even if all 20 took both exams we'd still get 10 students passing the first and 10 passing the second; that's 20 passes from 40 exams, or a 50% pass rate.
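The arithmetic above can be checked with a short script. All the scores are made up; the point is only that when the pass mark is derived from each cohort's own distribution, half pass whichever paper they sat.

```python
# Worked-example check: under curve grading the pass *fraction* is fixed,
# so the boundary mark is simply whatever the median-ish student scored.
def pass_mark(scores, pass_fraction=0.5):
    ranked = sorted(scores, reverse=True)
    k = int(pass_fraction * len(ranked)) - 1
    return ranked[k]

def passes(scores, mark):
    return sum(s >= mark for s in scores)

# Invented scores for the two cohorts of 10.
easier_paper = [95, 90, 85, 80, 75, 70, 65, 60, 55, 50]   # first year's paper
normal_paper = [75, 70, 65, 60, 55, 50, 45, 40, 35, 30]   # second year's paper

m1 = pass_mark(easier_paper)   # 75 with these scores
m2 = pass_mark(normal_paper)   # 55 with these scores
print(passes(easier_paper, m1), passes(normal_paper, m2))  # 5 and 5: 50% either way
```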

If we consistently grade to a curve, the percentage who get a particular grade will remain the same year-on-year. This is surprisingly fair. Consider that you were part of the cohort who took the first year's exam, where the boundaries were set higher. The pass mark is 70%; you get 90% and receive a B grade. The second year's cohort take their exam, for which the pass mark is 50%; someone gets 90% on that paper and is awarded an A*. How is that fair? Because, roughly speaking, if you'd taken the second year's exam you would most likely have got 70%, which would have earned you a... B; the same as before.
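That equivalence argument can be sketched as a percentile mapping: find a mark's rank within its own cohort, then find the mark holding the same rank in the other cohort. The scores below are invented to mirror the 90%-maps-to-70% example above.

```python
# Hypothetical sketch: translate a mark on one paper to the mark at the
# same percentile rank on another paper.
def percentile_rank(scores, mark):
    """Fraction of the cohort scoring strictly below `mark`."""
    return sum(s < mark for s in scores) / len(scores)

def equivalent_mark(scores_from, scores_to, mark):
    rank = percentile_rank(scores_from, mark)
    ranked = sorted(scores_to)
    # clamp in case the mark beats the entire source cohort
    return ranked[min(int(rank * len(ranked)), len(ranked) - 1)]

# Invented cohorts: first year's (easier) paper and second year's paper.
year_one = [95, 90, 85, 80, 75, 70, 65, 60, 55, 50]
year_two = [75, 70, 65, 60, 55, 50, 45, 40, 35, 30]

print(equivalent_mark(year_one, year_two, 90))  # 90% in year one ~ 70% in year two
```

Both marks sit at the same rank in their own cohort, which is why curve grading lets an A from one year stand in for an A from another.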

Now it's possible to just fall on the wrong side of the line but for the most part grading on a curve allows comparison. A 2011 student getting an A grade is the equivalent (or should be) of a 2012 student getting an A grade.

In other words, it wouldn't even matter if the exams were getting easier, or if you'd taken a different exam board's version of the exam, because all that would mean is that you'd need a higher mark to get the same grade. Sure, things would get a little squished at the lower grades, and that's not something we want to happen, but we'd still have consistency year-on-year.

If we're not getting that then it's not fair on the students, and it's not useful to employers, colleges, or universities.