I teach Math in Ontario. We have an “Achievement Chart” (see pages 28-29 of the curriculum document), which lists four categories of knowledge and skills. When we assess and evaluate student work, we separate student performance into the “TACK” categories: Thinking, Application, Communication, and Knowledge. The Chart includes criteria for each category and descriptors for different Levels of performance.
The curriculum itself is divided into Strands for each course, and these strands describe Overall Expectations and Specific Expectations (essentially the details of the Overalls).
So when evaluating student work, we evaluate Overall Expectations in the context of the four Categories of Knowledge and Skills, and we should have a “balance” between the categories (not equality, necessarily).
The truth is that I’m having some trouble with it. I posted a little while ago that I was struggling with the Thinking category, and that’s still true. But there is another issue that’s more pervasive and possibly more problematic.
Isolating skills
When trying to separate out the different components of student performance, we would often write questions that “highlight” a particular area, questions designed to isolate a student’s understanding of that one skill.
That’s a fairly mathematical, scientific-sounding thing to do, after all: control for the other variables, and the effect you see is the result of the variable you’re hoping to measure.
For example, we wouldn’t ask a student to solve a bunch of systems of equations which only had “nasty” numbers like fractions in them (or other unfairly-maligned number types), because we fear that a student who is terrible with fractions will stumble over them and be unable to demonstrate their ability to solve the system of equations. So we remove the “barrier” of numerical nastiness so that the skill we’re interested in, solving the system, is the only one that can trip them up.
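To make the contrast concrete, here’s the kind of pair I have in mind (both systems are invented for illustration). They demand exactly the same solving procedure; the second just buries it under fraction arithmetic.

```latex
% A "clean" system: the elimination procedure is the only skill in play.
% Adding the equations gives 3x = 9, so x = 3 and y = 1.
\begin{align*}
  2x + y &= 7 \\
  x - y  &= 2
\end{align*}

% The same procedure with "nasty" numbers: fraction arithmetic now
% competes with the skill we actually want to measure. (Here x = 1, y = 1.)
\begin{align*}
  \tfrac{2}{3}x + \tfrac{1}{2}y &= \tfrac{7}{6} \\
  \tfrac{1}{4}x - \tfrac{3}{5}y &= -\tfrac{7}{20}
\end{align*}
```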
This isn’t a great idea
But we do that over and over again, isolating skill after skill in an effort to pinpoint student learning in each area, make a plan for improvement, and report the results. And in the end, students seem to be learning tiny little skills, procedures, and algorithms, which will help them to be successful on our tests without developing the connections between concepts or long-term understanding.
We want to have “authentic, real-world problems” in our teaching so that students can make connections to the real world and (fingers crossed) want to be engaged in the learning. But authentic problems are complex problems, and by julienning our concepts into matchstick-size steps we are sacrificing meaningful learning opportunities.
What if we didn’t have to evaluate?
We’re slicing these concepts so finely because we’re aiming for that granularity. We want to be fair to our students and not penalize their system-solving because of their fraction-failings.
But if there were no marks to attach, would we do the same thing? Would we work so hard at isolating skills, or would we take a broader approach?
My MDM4U class
I’m teaching Data Management right now, and the strand dealing with statistical analysis lists a lot of procedural skills followed by a bunch of analysis skills. If I evaluate students’ ability to summarize data with a scatter plot and line of best fit, do I then ask them to analyze and interpret the data based on their own plot and line? What if they mess up the plot? Don’t I then have to accept an analysis built on their initial errors? Oh wait: I could have them summarize the data, then give them a summary for a different data set and ask them to draw conclusions from that summary! Then they’d all have the same starting point for analysis, and they couldn’t accidentally make the question too easy or too hard!
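To see what that “same starting point” setup amounts to, here’s a minimal sketch in Python (the data set, variable names, and numbers are all made up for illustration): the fitted summary everyone analyzes is computed once, up front.

```python
# Sketch of the "common starting point" idea: the analysis questions are
# based on one canned summary, not on each student's own plot.
# Hypothetical data: hours studied vs. exam score for eight students.
import numpy as np

hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])
score = np.array([52, 55, 61, 58, 70, 73, 75, 82])

# The "procedure" skill: compute the line of best fit (least squares).
slope, intercept = np.polyfit(hours, score, 1)
r = np.corrcoef(hours, score)[0, 1]

# The "analysis" skill would then start from this summary.
print(f"line of best fit: score = {slope:.2f} * hours + {intercept:.2f}")
print(f"correlation coefficient: r = {r:.2f}")
```

Everyone starts their analysis from the same fitted line and the same correlation, no matter how their own plots turned out.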
But I’ve just messed up one of my goals: I’ve removed the authenticity and kept ownership of the task for myself. I haven’t empowered my students if I do it this way, and I’ve possibly sacrificed meaningful complexity. Worse, I’m only doing this because I need to evaluate them. I’d much rather have them gather, summarize, and analyze data that interest them and then discuss it with them, helping them learn and grow in that richer context.
As always…
…I don’t have answers. Sorry. I’m trying hard to make the work meaningful and the learning deep while still exposing as much detail about student thinking as I can. I’m sure in the end it’ll be a trade-off.