Friday, September 28, 2012
Why a No Zeroes policy is good for learning
Ken O'Connor, an internationally recognized expert on evaluation and grading, wrote a good defence of No Zero policies in the Edmonton Journal in June. I want to explain and expand on some of the issues he raised.
The first thing that Dorval's supporters do not realize is that teachers following best practices do not use averages to calculate final grades. When you don't use averages, the question of whether to give a zero becomes irrelevant. A student who has done work demonstrating the learning demanded by the course curriculum deserves to be recognized for that learning even if he or she has not done all the work. A student whose missed work means that he or she has not met all course expectations should not be granted the credit, even if an average of his/her marks would give a grade above 50%.
Secondly, giving zeros lets both teachers and students avoid responsibility. A student can choose to not do work and "take a zero", avoiding his/her responsibility to do school work. A teacher can give a zero to that student and avoid the responsibility that I feel a teacher should have to follow up when a student has a problem so significant that work does not get completed. I ask parents out there, if your child did not do a school assignment, would you want the teacher to give a zero and forget about it, or instead follow up with the student (and maybe you too) about what the problem was and how it can be fixed?
The third point that Dorval's proponents seem to be missing is the idea that grades and marks should try to accurately reflect student learning. If a student does not produce work, assigning any mark to it actually makes no sense. It would be a little bit like a meteorologist saying "On Thursday it rained most places in the city but because of a technical problem I did not get a rainfall reading so I will treat Thursday's rainfall as zero". Assuming that a measurement should be zero because you were unable to take the measurement is going to skew your data.
Now that you have read this post (and Ken O'Connor's piece as well, I hope) you should have a better understanding that No Zero policies are not about watering down education or being permissive with students. No Zero policies are about trying to provide the best educational and learning opportunities possible to our young people based on what we know right now. If you are looking for more information on current best practices in assessment and evaluation, checking out Ken O'Connor's books is a great place to start.
Why I don't average student marks
Imagine that two students have written the same series of tests in a course. Each test reflects the cumulative knowledge of the student for the whole course at that point. Student A scores 80% on the first test, 70% on the second test, then 60%, 50%, 40%. Student B scores 40% on the first test, then 50%, 60%, 70%, 80%. If we determine final grades based on averages, then both students get the same grade, 60%. But clearly student B is improving his/her learning while it looks like student A is understanding less and less. There are more statistically sophisticated methods than simple averages that can be used to try to capture the trend being shown, but they are tricky and have potential flaws. The reality is that any given statistical method for calculating final grades will have weaknesses, so Ken O'Connor, a recognized expert in student evaluation, offers this as Guideline #6 in his book How to Grade for Learning, K-12: "Crunch numbers carefully, if at all."
So instead of crunching student results into an average, best practices call for teachers to use their judgement based on the most consistent student results, with the emphasis on the most recent. In the example above, the lack of consistency would mean that a teacher should focus on the most recent results which are 50% and 40% for student A and 70% and 80% for student B. These most recent results suggest that student A may not have learned enough to pass the course while student B's learning can fairly be evaluated as being in the low 70s. That is a big difference from giving both 60%.
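To make the contrast concrete, here is a minimal sketch in Python. It is my illustration, not O'Connor's method: it simply compares a plain average with a grade based on the two most recent results, standing in for the teacher's emphasis on the most recent consistent evidence.

```python
# Sketch only: compare a simple average with a grade that
# emphasizes the most recent results (here, the last two tests).

def simple_average(scores):
    return sum(scores) / len(scores)

def recent_emphasis(scores, window=2):
    # Focus on the most recent `window` results.
    recent = scores[-window:]
    return sum(recent) / len(recent)

student_a = [80, 70, 60, 50, 40]  # declining results
student_b = [40, 50, 60, 70, 80]  # improving results

print(simple_average(student_a), simple_average(student_b))    # 60.0 60.0
print(recent_emphasis(student_a), recent_emphasis(student_b))  # 45.0 75.0
```

The averages are identical, but the recent-results view separates the two students exactly as described above. A real teacher's judgement would of course weigh more than two data points.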
Monday, February 22, 2010
Teaching Risk Assessment
I must say that I completely agree with this article. I would love to see more math in the classroom that deals with the reality of statistics and probability. The more literate our citizens are about statistics, the better will be the decisions that get made in every arena.
Monday, February 8, 2010
Teacher Merit Pay and Teacher Merit.
The Globe article highlighted many problems with merit pay, and I won't repeat them here, since I have already blogged on that topic.
One point that I felt the Globe article missed was that teacher merit pay is, in the end, about teacher merit. Which teachers are truly helping the majority of students learn and succeed? Those teachers need to be encouraged and rewarded (although not necessarily with money). Which teachers are failing to help and teach the majority of their students? Those teachers need to be helped so that they can improve, or, if they cannot improve enough, they should be gently removed from the profession.
The problem with the idea of teacher merit is that I don't think that a good definition of teacher merit in terms of measurable quantities currently exists. Gladwell and the Globe article discuss the idea of evaluating teacher performance by looking at standardized test results over several years, but there are no specifics. Between the lack of specifics, human inertia, and teacher union resistance, I suspect that a good definition of teacher merit through measurable values is several decades away.
So, until we have that definition, I fear that we are stuck with the status quo. Too bad. I, for one, would be happy to know whether I am actually doing a good job of teaching or whether I just think that I am doing a good job.
Monday, September 21, 2009
Mind the gap
News stories often give statistics on gaps, differences in achievement or performance between different groups. The comparisons are often based on gender or ethnicity but can look at anything. Lately, articles on education in the media have often looked at various gaps between boys and girls in school.
One thing to watch out for when looking at gaps is that there are different ways to look at a discrepancy between two groups. For example, if we have a group of people who make $20,000 a year and another group who make $100,000 a year, we clearly have a gap. One way of defining the gap is to say that the difference is $80,000 a year. Another way of defining the gap is to say that the second group makes five times what the first group does.
Why does it matter how the gap is defined? To answer, I will continue with my example. Suppose after a series of government programs intended to close the income gap between these two groups that the first group's income has been increased by 50% to $30,000. During the same time the second group's income increased by 20% to $120,000 a year. Let's look at the gap again.
If we look at the gap as an absolute we now see that the difference in income is $90,000 a year. The gap is getting bigger! Does that mean that the government programs were a waste of time and money? Maybe not. If we look at the ratio of incomes, the second group no longer makes five times the income of the first group, the ratio has been reduced to four to one. So one way of looking says the gap is getting worse, another way says that the gap is being reduced. Which one is correct? Well, like most things in life, the answer is "that depends".
And what it depends on is the context of the information. If we are looking at which group is going to be purchasing more luxury vacations, then maybe the straight difference of $90,000 is the more important figure. If we are looking at more basic purchases such as housing or food, the ratio might tell the story more accurately. And, to make things more complicated, there are lots of other factors that probably should be considered before we can accurately talk about the gap. What about taxes? The high-income group does not get to keep all of its $20,000-a-year gain, while taxes will take a smaller bite out of the other group's gain. That will affect the difference and the ratio too.
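The arithmetic above is simple enough to check in a few lines of Python (using my made-up income figures from the example):

```python
# Gap as an absolute difference vs. gap as a ratio.

def gap_stats(low, high):
    return high - low, high / low

before = gap_stats(20_000, 100_000)
after = gap_stats(30_000, 120_000)

print(before)  # (80000, 5.0)
print(after)   # (90000, 4.0)
# The absolute gap grew by $10,000, while the ratio fell from 5:1 to 4:1.
```

Same data, two honest summaries, two opposite headlines.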
The critical thing is to recognize that there are multiple ways of looking at any statistical information and to try to find the one that makes the most sense. If you want to improve your knowledge of statistics, I listed some possible websites in the comments section of my blog entry "We want everyone to be above average?"
Friday, September 18, 2009
Standardized tests: analysing the analysis
This surprised me since yesterday I was blogging about a story in the Citizen talking about the same results. That story put a positive spin on the results, even though I felt the analysis was simplistic.
For those who want to look at the actual data instead of the simplistic summaries in the papers, go here and select a grade and year.
I wrote a letter to the editors of the Citizen, pointing out the flaws in the editorial, and I am also going to publish that letter here:
I would like to make a few comments about the editorial “Rising to the Test” in the Citizen on Friday, September 18. For the record, full disclosure: I am a public school board High School teacher in Math and Science (currently supply teaching).
My first comment is about interpretation of the test scores. All the results talk about the percentage of students who meet or exceed the provincial standards. For example, the Grade 3 reading test had 63% of students meet or exceed provincial standards. This makes it sound like 37% of Grade 3 students are failing in reading. However, when you look at the full results, you find that 27% of students achieved a Level 2 result, which is below provincial standards but is about the equivalent of a C. So, 90% of students are either within striking distance of provincial standards, have met the standards, or have exceeded them. Furthermore, in Grade 3 reading 8% of students scored at Level 1, which is approximately a D. In total, 98% of students got a result which would be a pass at school. Looked at one way, only 2% of Grade 3 students are failing in reading. Yet the result that is put out by the Educational Quality and Accountability Office (EQAO) is 63%. Could that be because if EQAO published a 98% pass rate then people might suggest that we don't need the EQAO?
Another comment I have deals with media interpretation of the results. The editorial states that “most Ottawa schools are underachieving.” Yet an online story in the Citizen yesterday had the headline “Area schools outperform provincial average in reading, writing, tests reveal”. So how did Ottawa area schools go from beating the provincial average to underachieving? The editorial mentions that for the public board “only 73 per cent of Grade 6 students met the provincial standard in reading.” But 26% scored a Level 2, meaning that 99% of public board Grade 6 students are at least close to the standard. That does not sound like underachieving to me. If you want to truly claim that our schools are not up to snuff, you need to offer a better explanation than “Many students in the nation's capital come from homes where the parents have high levels of education”, a statement that offers nothing to show how much better Ottawa should be doing based on this factor.
A third comment is about the statement “Strangely, some critics respond by questioning the value of standardized tests.” There is plenty of reason to be wary of standardized tests and their results, especially when that seems to be almost the only facet of education on which the media report. As an example, look at the statistic for primary math results published in the editorial. Ten years ago the primary math results were 56% (44% below standards) and now the result is 70% (30% below standard). So, in ten years we have gone from 44% to 30% below standards, almost a 1/3 decrease in poor results! Is this because math is taught so much better now than ten years ago? Not a chance.
Sure, some of the improvement is the result of improved teaching practices, but I am willing to bet a large sum of money that most of the improvement is because teachers have learned how to prepare their students for the test. I believe this is true because as a High School Mathematics teacher I see and hear about the Grade 9 EQAO Math test. A common refrain from teachers is that their students understand the questions quite well but have trouble answering them in the form that the test demands. The result is that teachers are forced to spend time teaching their students how to deal with the test format instead of teaching course content. So, results improve not because of more learning of content, but simply because of teaching to the test. If that does not raise at least some questions about the value of standardized testing, I do not know what will.
I agree that the results of standardized testing offer information that can be valuable when looking at how well our schools and boards are educating students. But the results need to be treated with care because the tests do not cover the whole curriculum, there are issues of teaching to the test, and there are issues with interpretation, especially as the results are often presented in a simplistic manner by the EQAO.
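As a postscript, the "almost a 1/3 decrease" arithmetic from the letter is easy to verify:

```python
# Below-standard primary math results went from 44% to 30%.
before, after = 44, 30
relative_drop = (before - after) / before
print(round(relative_drop, 3))  # 0.318, i.e. roughly one third
```

A 14-point drop sounds modest, but relative to the starting 44% it is nearly a third of the poor results gone, which is why the framing of a statistic matters.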
Thursday, September 17, 2009
We want everyone to be above average?
As I read through the article, just about every statistic was compared with the provincial average. Essentially the article is saying "above average good, below average bad." This is not completely unreasonable, but it is simplistic and makes education look like a competition between school boards when it should be about simply doing what is best for the students.
The big problem with using average as the yardstick is that, no matter how you slice it, approximately half the school boards are going to be below average. That is how average works. Currently the province has a goal to have 75% of students score at the provincial standard or above on the EQAO (Education Quality and Accountability Office) tests. As a reference, the provincial average for Grade 3 Math results is 70%. However, even if schools across the province made massive gains and all scored better than the 75% goal, still there would be about half the schools below average. Would that mean that those below average schools are bad? Nope, but that is probably what the media would report.
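A toy illustration makes the point. The scores below are hypothetical, but notice that every board beats the 75% goal, and half of them are still "below average":

```python
# Hypothetical board scores: all above the 75% provincial goal.
scores = [76, 78, 80, 81, 83, 85, 88, 90]

avg = sum(scores) / len(scores)
below = [s for s in scores if s < avg]

print(avg)                          # 82.625
print(len(below), "of", len(scores))  # 4 of 8 boards are "below average"
```

No amount of improvement can fix this; it is built into what an average is.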
So, the moral of the story is that if you want to truly understand stories in the media, it is important to know about statistics and how the various statistical measures work. Otherwise you may end up trying to figure out a way to achieve the impossible of making everyone above average.