As we’ve just finished our end-of-year reporting process, I’ve been reflecting on the practice of giving students zero when they hand a task in late. Whether it’s a straight zero for being a day late or a loss of 20% per day, it doesn’t make sense. What a policy like this says is that a student who hands in a project a day late (or five days late) has learned nothing, when in reality the completed project could communicate the most extensive understanding of the topic they were learning about. Is this fair? Does the lateness say more about their disorganisation than about the development of their understanding? In NSW we are required to use Course Performance Descriptors to allocate a grade at the end of the school year for Year 10 (and now Year 11) students. Even if a student handed in all their assessment tasks late and received zero, we should still be able to give them an “A” if their projects showed an extensive understanding. I wonder how many students will have received a lower grade than they deserve because they handed a task in late.
Consider this scenario. A student has become totally immersed in a project and wants to keep adding to it because they are continuing to learn. As the due date looms, the student asks for an extension, not because they are sick, but because they don’t want to hand in something that falls short of what they feel they can do. It’s worth noting that I’ve never actually encountered this scenario myself, but I wonder what I would do if I wasn’t bound by school policy and required to do what it dictates.
A diminishing-grade system, whether it’s a zero policy or 20% per day, is little more than a big stick to make students complete their work in an orderly fashion; it certainly doesn’t seem to be there as a tool for accurately demonstrating what students have learned.
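To make the arithmetic concrete, here’s a minimal sketch of how quickly a 20%-per-day penalty erases the evidence of learning. The project mark (90) and the flat percentage-point deduction are my own illustrative assumptions, not any school’s actual policy:

```python
def penalised(mark, days_late, penalty_per_day=20):
    """Deduct a flat percentage-point penalty per day late, floored at zero."""
    return max(0, mark - penalty_per_day * days_late)

# A project demonstrating extensive understanding (90%) handed in late:
for days in range(6):
    print(f"{days} days late -> recorded mark: {penalised(90, days)}")
```

By day five the recorded mark is zero, exactly as if the student had learned nothing at all.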
Some other things to consider:
- What effect does the zero have on the student’s motivation to learn? Joe Bower has given some thought to this both here and here
- Why give a mark or grade anyway? Does it give an accurate reflection of what they know?
- The Board of Studies in NSW doesn’t require us to give a grade at all until the end of Year 10 if we don’t believe it’s suitable (although we do need to report on a 5-point scale twice per year), which means a zero policy could be redundant anyway. Yet so many of us give grades regardless. Is that just historical?
I’m planning that in 2013 I will be giving the grades away and working with students to enjoy learning without the fear of judgement. I’ll be focused on giving quality feedback and allowing students the opportunity to improve what they’ve done, rather than seeing a grade as an endpoint to their learning.
Today we were having a discussion about exams and whether to keep an exam week for Grades 9 and 10. As you would expect, there were very strong opinions for and against from those in the discussion. I sat on the side of getting rid of the exam week because I feel that this form of assessment puts an endpoint on learning. Maybe that’s what is needed at the end of a stage of learning, but is that something educators should encourage? Should we be encouraging students to think that learning has an endpoint? My observation is that too often exams are handed back covered in red ink, with a grade at the top that shows what the student knew at that point in time. Rarely are students given a chance to re-sit to show that they have corrected their errors, or that they actually knew something they weren’t able to communicate the first time. This is probably true of many of our types of assessment, but the justifications for keeping testing have raised a number of questions for me:
Question 1: If the reason for continuing with an exam period is to ‘train’ students for long exams, does a 2-hour exam (or a series of them in a week) two years before the actual high-stakes exams you are training for actually enhance students’ ability to sit a long exam, or does it just show them what they’ve got to look forward to (or dread) in the future? If it does actually help, shouldn’t we be providing more opportunities to sit for 2-3 hours in the lead-up to the high-stakes external testing (in this case the HSC)?
Question 2: Do skills need to be tested under high pressure for us to gather information on whether our students know how to perform them? That may be the case if the normal performance of the skills is in a high pressure situation. Otherwise, shouldn’t we be giving the students an environment that gives them the best opportunity to show us what they know?
Let’s apply this to a sporting context. Suppose I was teaching a student how to putt a golf ball into the hole. They get to practise as much as they like before their test, but they only get one chance to show me that they can make the putt. Under pressure they aren’t able to do it, so they get marked wrong. I can give them some marks for their working (or in this case their technique), but as they didn’t get the right result, they can’t get full marks. Is that a fair assessment if they were able to get it right the majority of the time in practice? Isn’t this exactly what a traditional test or exam does?
Question 3: If we set a testing regime, are we more likely to teach to the exam? Will this mean we just communicate content in order to give students something to study rather than educating them on how to learn, how to gather information and create something with the information?
Question 4: Is using tests just the easiest way of gathering a mark or grade to put on our reports?
As I’ve been writing, I’ve started to think: does the way I conduct other forms of assessment do any better? What do I need to change to give students better information about what they know and what they can improve?
Does anyone have any definitive answers or research that gives answers to these questions? I’m open to suggestion and/or correction if my thinking is wrong.
I’ve had a handful of heart rate (HR) monitors sitting in my PE storeroom cupboard for a number of years now, but they’ve only come out a few times. These are the most basic of Polar HR monitors: they measure HR, let you set target HR zones, and not much else. Until now, I’ve only really used them once or twice a year to measure the effects of exercise intensity on HR.
However, today I pulled them out with my elective Physical Activity and Sport Studies class. We’d been talking about the National Physical Activity Guidelines (NPAGs), and I thought the monitors would be a good way of assessing whether different sports achieve moderate intensity. We put figures on what a moderate HR would be, the students checked their HRs regularly, and at the end of a 10-minute period we discussed whether the physical activity would meet the NPAGs.
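The “figures for a moderate HR” step can be sketched with a common rule of thumb: estimate maximum HR as 220 minus age, and take roughly 64-76% of that as the moderate-intensity band (the range often quoted in public-health guidance). These are approximations for illustration, not the exact figures my class used:

```python
def moderate_zone(age):
    """Estimate a moderate-intensity HR band (bpm) from age.

    Uses the rule-of-thumb max HR estimate (220 - age) and the
    roughly 64-76% of max HR band often cited for moderate intensity.
    Both numbers are population-level approximations.
    """
    max_hr = 220 - age
    return (round(0.64 * max_hr), round(0.76 * max_hr))

lo, hi = moderate_zone(15)  # a typical Year 9/10 student
print(f"Approximate moderate zone for a 15-year-old: {lo}-{hi} bpm")
```

For a 15-year-old this gives a band in the low-130s to mid-150s, which is the sort of target the students could check themselves against mid-activity.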
What I realised straight away was that the HR monitors engaged students in a different way. They were keen to see how hard they were working and what running harder would do to their HR. Clearly, what I’ve learned is that using simple technology like this can have a huge impact on getting students interested in what’s happening to their body while they exercise. Unfortunately, these HR monitors don’t allow me to download the data or give an average HR for the duration, which I think would be really valuable for student learning and understanding. It would also be great to download data to combine with GPS feedback, but more on that later.
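Since these basic monitors can’t report an average, here’s a sketch (with made-up readings and an assumed moderate band) of the kind of summary a downloadable monitor would make possible: the average HR over a 10-minute activity and the share of readings that fell inside the target zone.

```python
# Hypothetical HR readings taken once a minute over a 10-minute activity.
readings = [88, 112, 134, 141, 150, 147, 139, 128, 144, 152]

zone_low, zone_high = 131, 156  # an assumed moderate-intensity band (bpm)

avg = sum(readings) / len(readings)
in_zone = sum(zone_low <= hr <= zone_high for hr in readings) / len(readings)

print(f"Average HR: {avg:.0f} bpm")
print(f"Share of readings in the moderate zone: {in_zone:.0%}")
```

A summary like this would give students something concrete to discuss: not just “my HR was high at one point”, but whether the activity as a whole sat in the moderate band.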
For now it’s off to find a few more HR monitors that offer more feedback for the students …