'Point-by-point' marking refers to the practice of giving a mark for a single piece of information. The choice for the examiner is a binary one: is the answer credit-worthy or not? Questions of this type should state in the question itself how much information is required to get the marks:

  • Suggest two reasons why...
  • Select one feature...

If the question has a straightforward correct answer (or answers), this will be listed in the mark scheme and the examiner will mark the question against it. If, however, the question is more open (for example, where a student is asked to select features from a piece of text and suggest why they are used), then the mark scheme will contain suggested answers but also an instruction to credit any legitimate responses. Where a question is crediting analysis or evaluation, the examiner needs to exercise their professional judgement; if the question is subjective then the marking of it will be too. Students should never be penalised for coming up with a legitimate response that isn't 'in the mark scheme'.

'Level of Response Grids' (LofRGs) are the usual way more complex questions, such as essays, are marked. If the question targets more than one Assessment Objective (AO), the grid might be split (one grid per AO) or combined (both assessed in the same grid). Best practice where questions have a high tariff (e.g. a 20- or 30-mark essay) is to separate AOs, which reduces the chance of over- or under-crediting a particular AO. Lower-tariff questions (e.g. a 10-mark commentary) often use combined grids.

A LofRG is used to assess an answer holistically; as such, it won't be applied until the examiner has read the entire response. Once they have, the examiner will work to identify which level is the 'best fit' for the answer. They will read the descriptors in a given level and assess whether they believe them to be true of the response they have just read. If they feel that they are, the examiner might look to the level above and see if the answer meets any of those criteria; if it does, then the response might be best suited to that higher level. Should they feel that the answer does not quite fit the statements in the chosen level, they would check it against the statements in the level below and see if these provide a better fit.

A response does not need to meet all the criteria in a level to be awarded that level; it should simply be the best fit available. How well the answer fits the descriptors determines the mark given within the level: a good fit will be high up, while an answer that just edges in will be lower down.
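The 'best fit, then position within the band' procedure above can be sketched very loosely as code. Everything here is a hypothetical illustration: the level boundaries, descriptor counts and the proportional mark-within-band rule are invented for the example and are not any board's actual method.

```python
# Hypothetical sketch of 'best fit' marking against a Level of Response Grid.
# Each level has a mark band and a count of its descriptors that the
# response was judged to satisfy (all numbers are illustrative only).
levels = {
    1: {"marks": (1, 3), "met": 3, "total": 3},
    2: {"marks": (4, 6), "met": 2, "total": 3},
    3: {"marks": (7, 9), "met": 0, "total": 3},
}

def best_fit(levels):
    # Pick the level whose descriptors the answer fits best
    # (ties broken in favour of the higher level).
    fit = {lvl: d["met"] / d["total"] for lvl, d in levels.items()}
    level = max(fit, key=lambda lvl: (fit[lvl], lvl))
    lo, hi = levels[level]["marks"]
    # A strong fit sits high in the band; a marginal fit sits low down.
    mark = lo + round(fit[level] * (hi - lo))
    return level, mark

print(best_fit(levels))  # (1, 3): a full fit at level 1, top of that band
```

In practice the examiner's judgement of which descriptors are 'true' of a response is exactly the subjective step no formula captures; the sketch only shows how that judgement converts into a level and a mark.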

Yes, they are really well-researched and an excellent tool when designed and used well. They are subjective and complex, but this does not mean they are unreliable.

To be a reliable tool, a LofRG should not have too many available levels (you will rarely see one with more than 5 or 6), and within each level there should not be too many available marks (around 3 is ideal). These design features reduce the number of decisions the marker is making and so increase reliability. A marker should be able to reach clear and secure answers to the questions: what level? High, medium or low within that level? What mark?

One of the best things about LofRGs is that they can be used to credit a range of responses following the distribution of candidates. A point-by-point question does not allow candidates to spread out across the available marks (they either get each mark or they don't), so to spread candidates out and differentiate between them reliably you would need a very large number of point-by-point questions. A LofRG allows the marking of a single question to reflect the sheer range of candidates an examiner will see. It is also less disheartening for weaker students: they can get some marks, instead of getting a question wrong and getting none.

Every panel of examiners will have Lead Markers or Principal Examiners (these are called different things by different boards). These individuals get the first look at the answers when they come in from the schools. Lead Markers are extremely experienced and will use their professional judgement to set the 'standard', i.e. assess what counts as 'good' or 'reasonable'. When they have done this, they will select a range of scripts which demonstrate the standard at different levels. These will be marked by them and then sent out to the wider marking team for 'standardisation'. During standardisation, all examiners will mark the papers selected by the Lead Marker(s). If an examiner's marks differ greatly from the senior examiner's, they will have to complete more standardisation tasks before they are allowed to mark live papers. No examiner who has not passed standardisation will be allowed to mark.

In addition to standardisation, exam boards run ongoing checks on examiners. Some boards ask for a sample of an examiner's work at a particular point in the marking process; others, such as OCR, do all their marking online and so can monitor their examiners in almost real time. This might involve seeding pre-marked scripts into an examiner's workload to check that they are still in line with the senior examiners, checking the statistics to make sure no odd pattern is emerging, and random spot checks on individual scripts. If an issue is identified, an examiner will be stopped and their marking checked and possibly re-done.

The answer to this should always be 'no' unless a board has released this model or guidance ahead of the exam. There shouldn't be 'secret tricks' that examiners are told to look out for (for example, 'a good range means 4 points') without this information being generally available, ideally on the question paper so that students can see it during the exam.

Marking against a secret set of criteria is bad practice and disadvantages students, something Ofqual and JCQ take a very dim view of. Boards should not be allowing this to happen. If you are aware of such behaviour you should contact a representative of the board and make them aware.

Where a question might elicit a huge range of responses, it is impractical to list all of these in the mark scheme. Instead, it will list some or most of those the setters can think of and include a statement such as 'all legitimate answers should be credited'. In this situation the examiner should use their professional judgement to work out whether the answer is creditable. If it is a sensible, legitimate response to the question, even if highly unexpected, then it should always be credited.

Probably not. The mark scheme is a working document right up until it is released to the general team of examiners. The initial indicative content will be written by the setter, and will indicate what they think students will say. Therefore this initial content will probably be very straightforward and obvious. At standardisation, the Lead Markers will look at the real answers students give and add to the indicative content. If they encounter an odd or unexpected answer which they deem to be credit-worthy, they may include it in the mark scheme so that other examiners know it has been judged legitimate.

This means that mark schemes will sometimes contain very odd or niche information that most teachers probably haven't covered. The mark scheme isn't a list of elements of the 'perfect answer'; it is a tool to enable examiners to judge whether an answer is legitimate. If there's something strange in the indicative content, don't panic!

This is actually quite a rare problem. Most teachers teach roughly the same things in roughly the same way, and so students write very similar answers to each other. An answer which is truly off-the-wall or has gone in a totally unexpected and unique direction is quite rare, and so the system can handle it pretty well.

The first thing to remember is that examiners aren't the enemy; they want to credit students appropriately. So if your student has written a very confusing essay, or one that uses material with which the examiner isn't familiar, the examiner will take the time to work through the response and try to identify unknown references. If they feel that they can't mark the essay, or aren't sure about it, they can refer it to a senior examiner, who can in turn refer it up again if necessary. There is an entire team of examiners and senior examiners, so it is highly unlikely that none of them will be able to understand the student's point or identify the argument if there is one.

If a student answers more questions than they should, or commits another type of 'rubric error' (where they don't do the paper as it is designed to be done), the examiner will simply work around this as best they can. If they have answered too many questions, then all of them will be marked and the best marks taken. The student has already penalised themselves by doing more work than expected; the examiner won't penalise them again by refusing to mark it.

Obviously the best advice to students is to be clear, easy to follow, neat (!) and organised in their writing, but if they aren't then the examiners will do their best!