Assessment and Evaluation: sacrificing complexity for granularity

I teach Math in Ontario. We have an “Achievement Chart” (see pages 28-29) which lists four categories of knowledge and skills. When we assess and evaluate student work, we separate student performance into the “TACK” categories: Thinking, Application, Communication, and Knowledge. The Chart includes criteria for each category and descriptors for different Levels of performance.

The curriculum itself is divided into Strands for each course, and these strands describe Overall Expectations and Specific Expectations (essentially the details of the Overalls).

So when evaluating student work, we evaluate Overall Expectations in the context of the four Categories of Knowledge and Skills, and we should have a “balance” between the categories (not equality, necessarily).

The truth is that I’m having some trouble with it. I posted a little while ago that I was struggling with the Thinking category, and that’s still true. But there is another issue that’s more pervasive and possibly more problematic.

Isolating skills

When we try to separate out the different components of student performance, we often ask questions that “highlight” a particular area. Essentially, we write questions that isolate a student’s understanding of that area.

That’s a fairly mathematical, scientific-sounding thing to do, after all. Control for the other variables, and the effect you see is a result of the variable you’re hoping to measure.

For example, we wouldn’t ask a student to solve a bunch of systems of equations which only had “nasty” numbers like fractions in them (or other unfairly-maligned number types), because we fear that a student who is terrible with fractions will stumble over them and be unable to demonstrate their ability to solve the system of equations. So we remove the “barrier” of numerical nastiness so that the skill we’re interested in, solving the system, is the only potentially problematic one.
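To make the contrast concrete, here is a small invented pair of systems (not taken from any actual assessment): the first keeps the coefficients clean so that only the solving procedure is being tested, while the second layers fraction arithmetic on top of it.

$$
\begin{aligned} x + y &= 10 \\ 2x - y &= 5 \end{aligned}
\qquad \text{versus} \qquad
\begin{aligned} \tfrac{2}{3}x + \tfrac{1}{4}y &= \tfrac{5}{6} \\ \tfrac{3}{5}x - \tfrac{7}{2}y &= \tfrac{1}{10} \end{aligned}
$$

Both systems assess the same solving procedure; the second also assesses fraction work, which is exactly the confound we try to design out.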

This isn’t a great idea

But we do that over and over again, isolating skill after skill in an effort to pinpoint student learning in each area, make a plan for improvement, and report the results. And in the end, students seem to be learning tiny little skills, procedures, and algorithms, which will help them to be successful on our tests without developing the connections between concepts or long-term understanding.

We want to have “authentic, real-world problems” in our teaching so that students can make connections to the real world and (fingers crossed) want to be engaged in the learning. But authentic problems are complex problems, and by julienning our concepts into matchstick-size steps we are sacrificing meaningful learning opportunities.

What if we didn’t have to evaluate?

We’re slicing these concepts so finely because we’re aiming for that granularity. We want to be fair to our students and not penalize their system-solving because of their fraction-failings.

But if there were no marks to attach, would we do the same thing? Would we work so hard at isolating skills, or would we take a broader approach?

My MDM4U class

I’m teaching Data Management right now, and the strand dealing with statistical analysis has a lot of procedural skills listed, followed by a bunch of analysis skills. If I evaluate the students’ abilities in summarizing data with a scatter plot and line of best fit, do I then ask them to analyze and interpret the data based on their own plot and line? What if they mess up the plot? Don’t I then have to accept an analysis based on their initial errors? Oh wait: I could have them summarize the data, then give them a summary for a different data set and ask them to draw conclusions from that summary! Then they’ll have the same starting point for analysis, and they can’t accidentally make the question too easy or hard!

But I’ve just messed up one of my goals, then: I’ve removed the authenticity and kept ownership of the task for myself. I haven’t empowered my students if I do it this way, and I’ve possibly sacrificed meaningful complexity. Worse, I’m only doing this because I need to evaluate them. I’d much rather require them to gather, summarize, and analyze data that interest them and then discuss it with them, helping them to learn and grow in that richer context.
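For concreteness, here is a minimal sketch of the “summarize” step I keep referring to, written in Python with numpy and matplotlib. The data set, the variable names, and the hours-studied scenario are all invented for illustration; my students would not necessarily use code at all, and nothing here is meant to be the prescribed MDM4U approach.

```python
# Minimal sketch: summarize a small (invented) data set with a scatter plot
# and a least-squares line of best fit, then report r to support analysis.
import numpy as np
import matplotlib.pyplot as plt

# Invented example data: hours studied vs. test score
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
score = np.array([52, 55, 61, 58, 70, 74, 73, 82], dtype=float)

# Line of best fit (least squares): slope and intercept
slope, intercept = np.polyfit(hours, score, deg=1)

# Correlation coefficient, to feed the "analyze and interpret" step
r = np.corrcoef(hours, score)[0, 1]

# Scatter plot with the fitted line overlaid
plt.scatter(hours, score, label="observed data")
x_line = np.linspace(hours.min(), hours.max(), 100)
plt.plot(x_line, slope * x_line + intercept,
         label=f"best fit: y = {slope:.1f}x + {intercept:.1f} (r = {r:.2f})")
plt.xlabel("Hours studied")
plt.ylabel("Test score")
plt.legend()
plt.show()
```

The only point of the sketch is to show how cleanly the mechanical summary (plot, fitted line, correlation coefficient) separates from the interpretation that follows it; the evaluation dilemma above lives entirely in that second, messier half.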

As always…

…I don’t have answers. Sorry. I’m trying hard to make the work meaningful and the learning deep while still exposing as much detail about student thinking as I can. I’m sure in the end it’ll be a trade-off.


6 thoughts on “Assessment and Evaluation: sacrificing complexity for granularity”

  1. I’ve been struggling with a lot of these same issues, myself. While we as teachers hope to guide students through making connections and deeper learning as you mention, at the U level we also have to keep in mind that we are preparing students for university, which, for the time being, includes tests and exams on isolated skills. I don’t think we can discard them completely. Can we continue to test, but make the tests worth less? That might allow teachers to pinpoint where skill issues are arising without placing all the focus on the nitty-gritty.

  2. A thoughtful post, as always. For tonight, due to a 6-hour day followed by a 5-hour drive primarily in the dark, I am tired and not as insightful as I would like to be. I listen to podcasts. One of my favourite professors is John Merriman, from Yale. He gives a midterm because, in his words, “he has to”, but here is how he evaluates it: “If you do well, we count it for more, if you do not so well, we count it less”. I am of course paraphrasing. He is one of the most respected history professors in the western world. He believes that it is a student’s thinking that is important. He also does not assign paper topics. He wants students to research and write on something they find interesting, within the scope of his course. If it is modern history, one cannot write on the dark ages, but you get the picture. I realize in maths and sciences, the content has to be more prescriptive, but I so agree with his theories.

    I have often found that I feel we teach backwards. We start little and build up, but the problem is, following this progression (very industrial, isn’t it?), if we miss some parts, kids have gaps that grow and turn them into math students like I was. We should be starting big, and working towards small. Propose a problem. A big problem. In current edu-talk, a Rich Assessment Task. Let us go back to the Common Curriculum (one of two things I did not mind from Bob, Pink Floyd and company)….. (someone please tell me they get that…) and start big. Provide some big problems. History people like me make it a history problem, something as simple as “Why did the four original provinces need to become a new country in 1867?”, or some complex environmental question (why do we need to deal with the current above-ground storage of spent nuclear fuel?), or a complex math question, and no, you are not getting a sample. Call my daughter, she is good at math, or so her multiple-choice exam in University told her….and then as a learning group, identify what we need to know, in order to solve that problem. Then set about learning what we need, in order to be able to tackle the problem.

    Another boring personal example. I renovated a kitchen last fall/winter. I did not want to, but that is another story. One of the many things I had to figure out how to do was not only install cabinets (fairly straightforward, really…) but install Crown Moulding. I did not just start cutting a bunch of angles on a board for fun….I had a Rich Task, let me tell you…oh, let me tell you. I knew what I needed at the end, but had to figure out what I needed to learn to get there. I also knew I had to learn a lot, because unlike a wrong answer on a test, which I could erase, making an error here cost me money if I had to purchase more, not to mention time, frustration, anxiety, etc. So, I used some cheap pine lumber I had, and I made a mock-up of the crown moulding, and I practiced cutting that, then even used it sort of as a jig, not really, but sort of. However, it was the final product I started with. I did not ask my friend, who is much more skilled than I, to come show me how to cut a beveled 22.5-degree angle…..without having a reason…he would have thought I had gone mad….I knew why I was practicing that cut.
    Maybe that is all so simplistic, I am not sure. And, one more thing. I cannot resist this connection. The crown moulding pretty much turned out alright. A really seasoned, professional eye would see some minor errors. Two minor errors are a result of walls that are not square….and I to this day am unsure of how I could have fixed it, but it does not jump out at anyone, unless you look. Not one piece of very expensive crown moulding was cut incorrectly…..and that is because, and yes, I will shout….I MADE ALL MY MISTAKES ON THE MOCKUPS I MADE FIRST….plenty of mistakes……but how am I evaluated on my crown moulding, and ultimately, the entire kitchen? On the final product….not all the mistakes I made on the way there……they went into the sauna at the camp and played a very important role elsewhere…in heating a sauna. So, why do we still have teachers who evaluate and count stuff our kids are doing as they are learning? I know, it is not supposed to be like that, but it still is in pockets…not like it was, but it is still there…..Phew, that was not supposed to be long……

  3. I have to agree w/ Heather above that until there is a post-secondary change, you have to assign grades. YOU have to assign grades as a grade 12 teacher. YOU do. But do I?

    I can only offer my perspective here from an elementary standpoint. We assess based on the same guidelines as outlined in Growing Success, but the reality is we stream our students into applied, academic, or sometimes essential. The reality is marks in grade 7 & 8 don’t mean a thing. A student who receives an 81 will end up in the same academic class as the student who got a 91 in grade 8. The goofy part of this comparison is that both students likely received level 4s throughout the class, so how does a teacher determine the numerical value? And why are the ranges so large?

    I should also mention that there is no such thing as ‘weighting’ grades in elementary. We take the most recent and consistent levels a student achieves and are then expected to assign a percentage mark. There is no magical formula and there are no peg marks. It is impossible to truly justify giving one student a 77 and another a 76 if they both achieved level 3s throughout a class.

    And since we stream our students into those categories for grade 9, most grade 9 teachers won’t ever look at a grade 8 report card.

    With respect to the “student choice” outlined by djaremy – when we allow students voice and choice and assist them to the best of our ability – wouldn’t most students get an “A”? If we truly assess one’s ability on a task that interests them and we scaffold it as necessary, I think most students will meet or exceed your expectations, hence trivializing the importance of marks.

    Unless a student is driven by marks in elementary school, their report card has no meaning. And for those who are driven by marks, their report has no meaning to anyone other than themselves or maybe mom and dad.

  4. Brandon,

    These are many of the same issues that I have been wrestling with a lot this year (and I am only in elementary).

    A lot of this change came from completing my Master’s courses. It was interesting to hear a professor tell me that I should be getting at least 85%; if I didn’t, then the prof didn’t do a good enough job with feedback and revisions, or I didn’t take him up on the feedback and revisions. This for me was a culture shock! When I was in undergrad I was told that the person to my right and left probably wouldn’t be in my program.

    In elementary we spend a lot of time on tests because we think that we are preparing them for real-world experiences. However, are we? In most instances besides school, when do we have written tests? We don’t. We tend to have performance tasks, or interviews where we can demonstrate our knowledge. Very rarely do we sit down and regurgitate information. At the same time, what does a test accomplish? Most say it gives me a mark. Which is true, but is that mark the best way to describe your students’ learning? How is it different than a performance task mark? Or a project? Or a presentation? Why can’t we interview?

    I know Heather and I talked about this on Twitter and we had a discussion about preparing students to be under pressure or ready for tests. I was suggesting that interviews, projects (with deadlines) and such do prepare our students and give me the marks I need for a report. For me, real-world context is everything, but I think that there are better ways to prepare our students than tests.

    In my classroom, I don’t assign marks to my students. I give them feedback and we set co-constructed rubrics together. Students know, based on their feedback, where they are as far as grades, but they don’t really care because they know learning is more important. Also, my parents are coming to terms with this and are liking the feedback approach. You see, all of my kids have made drastic improvements in their learning, a lot faster than they have in previous years. I think that this is due to the feedback approach.

    Now I understand that high school and elementary are two different worlds, but do they have to be? I am not too sure if I helped your situation, but I think that there is a big opportunity to change the face of education. Thanks for a great blog post.

  5. What is university preparation? I agree with much of what Jonathan So said in his comment. I wanted to throw in my two cents on some of what has been discussed so far. These issues run deep and I won’t be able to comment on everything I would like, but I will scratch the surface of a few of these. It’s a long Friday afternoon, so I apologize if I am not as coherent or as friendly-sounding as I’d like… (I am super friendly)

    @Brandon:
    First, about the TACK business and the idea of separating skills. On p. 25, I see the following sentence: “The four categories should be considered as interrelated, reflecting the wholeness and interconnectedness of learning.” I take this to be quite similar to what you are suggesting in the rest of your post. Nothing says that we need to separate these “categories” and consider them separately. Instead, it is the opposite. We need to be considering them together – as a cohesive whole that helps us understand different aspects of student learning. At least I have found this interpretation helpful.

    Second, with respect to the strands, I would argue similar things. Nothing specifies that we would have to evaluate each specific expectation separately! They only help inform what we should look at with respect to the overall expectations – and even those are only components for considering a strand – and ultimately even the strands are helpful (but not solely decisive) information for better helping student learning.

    Did I make sense just now? I find that much of what you want to do can be supported by the curriculum document. I teach high school mathematics as well, and so I have in the past shared some of your concerns. What I mentioned were basically the ways that I have chosen to deal with it.

    @Brian:
    It’s difficult to think of traditional % ranges in conjunction with levels. I don’t think the issue is with using levels to indicate student achievement – the issue, I think, is in attempting to convert from one to the other, then from the other back again…continuing a cycle where our meanings are potentially lost in the process. You gave an example of 77% and 76%, and the difficulty of differentiating between the two when embedded in a level. But what exactly is the difference between 77% and 76%? What does it say about student learning, and how does that help us push the student’s learning further? A single number, I find, means very little for pushing learning forward. “But students want the number, it means so much to them,” you might say. But I think that is a relic of what we have created over the years of that child’s education. If, instead, we fill our “grade” with descriptions of how they are doing, and what they need to push forward, then that is significantly more meaningful. I am by no means praising the idea of using levels 1, 2, 3, 4 as a way of achieving this – but I think it is at least a direction that is helpful – but then again this depends on how it is interpreted and how it is actually used. Which is why this is such a complicated issue.

    I am also uncertain that “weighting” helps anything. The idea of professional judgement, I think, has far more use here than a “weighting” idea. I am assuming “weighting,” here, represents some sort of numerical distribution where certain aspects of learning are given a higher percentage…etc.

    “Unless a student is driven by marks in elementary school, their report card has no meaning.”

    If this is the case, then I would say that the report card has no meaning if we are considering their learning. Sure, it has lots of other meanings with respect to stakes – social & school & university acceptance…etc. – but it isn’t helpful for what we are interested in most, which is student learning.

    Okay, I typed a lot. I am unsure if much of it makes sense. Now to copy and paste in case the comment thing doesn’t work…

