Math RPG – revised and starting tomorrow

I’m going to start the Math RPG with my Grade 9 class tomorrow. It’s a way to help track and encourage homework completion, performance on evaluations, and “academic behaviour” (like getting extra help).

Here’s the sheet each student will use:

—image here: Math RPG Character Sheet and Rules—

The character tracking sheet.

Here’s a PDF: Math RPG Character Sheet and Rules

I have two units left in the course to play with it (Measurement and Geometry), so that’s why the Levelling Up part is so short on the student version.

I’m wondering who is going to ask for a +1 Magic Sword… :)

See my previous post for longer-form rules and examples: Math RPG?

Math RPG?

I started working on a Math RPG based heavily on the Bullet Journal RPG (BuJoRPG) at Emerald Specter. I’ve been trying out a modified BuJoRPG for myself, and I wondered if something similar might motivate some students. At the least, I’m hoping it’ll make academic behaviour tracking easier and more visible.

Here’s my first draft. Students will track their own progress (I’ll check their homework completion, probably). Let me know what you think, and if you have suggestions!

Math RPG v0.1a

Improving the evaluation of learning in a project-based class

I’ve been struggling for a few years with providing rich, authentic tasks for my computer science students and then having to evaluate their work.

My students learn a lot of skills quickly when solving problems they’re interested in solving. That’s wonderful.

I can’t conceive of a problem they will all be interested in solving. That’s frustrating.

In the past, I have assigned a specific task to my entire CS class. I tried to design a problem that I felt would be compelling, and that my students would readily engage with and overcome. The point has always been to develop broadly-applicable skills, good code hygiene, and deep conceptual understanding of software design. The point is not to write the next great 2D platformer or the most complete scatterplot-generating utility.

Unfortunately, I could never quite get it right. It’s not because my tasks were inherently weak; rather, it’s that my students were inherently different from one another. They don’t all like the same things.

I believe that students sometimes need to do things that are good for them but that they don’t like to do. They sometimes need the Brussels sprouts of learning until they acquire a taste for them. But if they can get the same value from the kohlrabi of learning and enjoy it, why wouldn’t we allow for that instead?

So I’ve tried giving a pretty broad guideline and asking students to decide what they want to write. They choose their own projects, and they do a lot of great learning along the way. Their code for some methods is surprisingly intricate, which is wonderful to see. They encounter problems while pursuing a goal that captures them, and they overcome those problems by learning.

Sounds good, eh?

Of course, they don’t perform independently: they learn from each other, from experts on the Internet, and from me. They get all kinds of help to accomplish their goals, as you would expect of anyone learning a new skill. And then I evaluate their learning on a 101-point scale based on a product that is an amalgam of resources, support, and learning.

Seems a bit unfair and inaccurate.

I asked for suggestions from some other teachers about how to make this work better:

  • ask students to help design the evaluation protocols
  • use learning goals and success criteria to develop levels instead of percentage grades
  • determine the goals for the task and then have students explain how they have demonstrated each expectation
  • determine the goals for the task and then have students design the task based on the expectations
  • find out each student’s personal goals for learning and then determine the criteria for the task individually based on each student’s goals

I’m not sure what to do moving forward, and I’d like some more feedback from the community.

Thanks, everyone!

Too honest for EQAO

I administered the Grade 9 EQAO Assessment of Mathematics this semester. It’s a provincial, standardized test that students write for two hours across two days, an hour per day. Part of the test is multiple choice, and part is open response (longer, written solutions).

In the weeks before the test I practised with my kids, gave advice, and tried to make them comfortable while encouraging them to do their best. I told them to try every question, saying things like “You can’t get marks for work you don’t show!”, “You never know what you might get marks for!”, and “If you don’t know a multiple choice answer you should guess.”

One of my students left three multiple choice questions blank. 

The EQAO Administration Guide expressly forbids drawing a student’s attention to an unanswered question. So I collected her work. 

Afterward I asked her about it. “Why didn’t you answer those questions? You could have guessed; you might have gotten some right.”

She looked steadily at me. “I didn’t know the answers.”

I felt (and feel) terrible about it. 

Not that I didn’t prepare her well for the assessment. I feel terrible because I realized that I asked my students to lie.

I asked them to guess “if necessary”, to hide their lack of knowledge, to pretend that they knew things they did not. Because I want them to get good marks, and I want our school to do well. 

That is a terrible thing to ask, and for a meaningless reason. 

My student didn’t just guess. She didn’t play this ridiculous game. She showed integrity. 

And I’m really proud of her for that. 

Learn-practise-perform cycle limits learning in CS

Like many courses, the beginning of my current computer science e-Learning class looked like this:

  • Teach small skill
  • Teach small skill
  • Give feedback on practice work
  • Teach small skill
  • Teach small skill
  • Give feedback on practice work
  • Evaluate performance task

This separation of learning from graded performance is intended to give students time to practise before we assign a numerical grade. This sounds like a good move on the surface. It’s certainly well-intentioned.

But this process is broken. It limits learning significantly.

If the performance task is complex enough to be meaningful, it requires a synthesis of skills and understandings that the students haven’t had time to practise. In this case I’m evaluating each student’s ability to accomplish something truly useful when they’ve only had the opportunity to practise small skills.

If instead the performance task has many small components which aren’t interdependent, students never develop the deeper understanding or the relationships between concepts. In this case I’m evaluating each student’s small skills without evaluating their ability to accomplish something truly useful, which isn’t acceptable either.

And there isn’t time to do both. I can’t offer them the time to complete a large, meaningful practice task and then evaluate another large, meaningful performance task.

The barrier here is the evaluation of performance. It requires a high level of independence on the part of the student so that I can accurately assign a numerical grade.

So I’m trying something different.

Instead of assigning tiny, “real-world” examples (that I make up) to develop tiny, discrete skills, I started teaching through large, student-driven projects. I got rid of the little lessons building up to the performance task, and I stopped worrying about whether they had practised everything in advance.

The process looks more like this:

  • Develop project ideas with students and provide focus
  • Support students as they design
  • Provide feedback through periodic check-ins
  • Teach mini-lessons as needed for incidental learning (design, skills, etc.)
  • Summarize learning with students to consolidate

I couldn’t design a sequence of learning tasks that would be as effective as my students’ current projects are. They’re working hard to accomplish goals they chose, and they’re solving hundreds of small and large problems along the way.

They couldn’t appreciate the small, discrete lessons I was teaching with the small, artificial stories. They didn’t have the context to fit the ideas into. It was only when the project was large and meaningful that my students truly began to grasp the big concepts which the small skills support.

And now I don’t have a practise/perform cycle. It’s all practice, and it’s all performance. It’s more like real life, less like school, and it’s dramatically more effective. It’s much richer, much faster learning than the old “complete activity 2.4” approach.

Evaluation is very difficult, though.

Inconsistency in Evaluation Practices

I’ve been having some great conversations with teachers in my school about final evaluations in high school courses (i.e. exams and final culminating tasks). I see a desperate need for the discussion, so I’m hoping this might be a place for some of it. To that end, I’m sharing some of the points people have been making.

First, some context

When two or more teachers in a school have sections of the same course, they’re encouraged to collaborate throughout the courses and are required to have consistency in the way their final 30% is evaluated. For example, if one teacher has a large culminating task for the entire 30%, another teacher of the same course shouldn’t have a 30% formal “test” exam.

This is true in lots of schools all over Ontario. It’s not a provincial policy, but it’s a very common board/school/departmental policy.

Thoughts I’ve had and heard

These are some of the points I’ve heard about this approach in no particular order. I’ll use the term “exam” to refer to any tool that is used for the final 30% component of a student’s grade, whether it’s a test, assignment, presentation, research paper, performance, etc.

  • If you have a formal exam and I have a task, students won’t get consistent marks, which matters for post-secondary entrance/scholarships.
  • How is it different from one teacher being a “hard marker” and the other teacher being an “easy marker”? Isn’t that a bigger problem?
  • If two siblings are evaluated differently, parents and siblings will all be upset that it’s not equal.
  • Two teachers in different schools/boards don’t have to align their exams; why is it required within a school?
  • You’re more likely to have a consistent mark distribution if you use the same exams.
  • Teachers should have autonomy and be permitted professional judgement as long as they’re following curriculum, Growing Success, and other policies.
  • Students need to write formal exams to prepare for university, so there shouldn’t be other forms of exams in grade 12, especially for U courses.
  • The exam is only worth 30%. The 70% term work is more valuable, but the policy doesn’t apply to it.
  • If you say my exam is easier than another teacher’s exam, you’re implying that one of us is inaccurately evaluating student understanding and performance.
  • There is no standard for the “amount of work” a student has to do for an exam.
  • We should have provincially standardized exams for senior courses for consistency and equity.
  • An open-book exam is easier than a closed-book exam.
  • An open-book exam is harder than a closed-book exam.
  • Some students need accommodations because of learning disabilities. Is it okay to give a different form of the exam for those students? Can’t other students access the same accommodations, since they aren’t modifications?
  • If school administration would approve of both exams on their own, then two teachers should be able to have different exams at the same time.
  • Not all forms of evidence of student learning are equally valid or accurate.
  • If I come to a school for semester 2, why am I restricted by what a semester 1 teacher chose to do in their class?

What do you think?

Post some comments. Let’s work on this together.

We have to stop pretending… #MakeSchoolDifferent

I’m responding to Sue Dunlop’s challenge (which is the result of a series of challenges stretching back to Scott McLeod). I’ve only read a few of the other posts that this challenge has generated, so I apologize to anyone who already expressed these same thoughts.

  1. We have to stop pretending that it’s okay to complain about someone else instead of offering them support.
  2. We have to stop pretending that telling people to learn how to cope is an effective strategy for dealing with mental health challenges.
  3. We have to stop pretending that evaluation can be both objective and accurate when implemented by a single human.
  4. We have to stop pretending it’s acceptable and reasonable for reporting periods to dictate the pace of learning in our classrooms.
  5. We have to stop pretending that there is a single, correct solution to any one of these complex problems.
  6. We have to stop pretending that we can do this on our own.

Oops, that’s 6. Ah well.

The tagged? David Jaremy, Peter Anello, Tim Robinson, Eva Thompson, and Doug Peterson. Additional apologies if you’ve already been tagged.

Summative Task for Quadratics – #MCF3M

My Grade 11 e-Learning math class is completing a unit on quadratic equations. I have a few things happening for their summative assessment, but the part I find most interesting is the following “experiment”. It’s heavily based on the Leaky Tower task from TIPS4RM at EduGAINS.ca. I’m going to test it out tonight with my kids before I finalize the evaluation criteria and post the task. If you have feedback, I’d love to hear it. I’ll be adding photos to help explain the setup.

Leaking Bottle – Summative Task – Part 1

You’ll be completing a short experiment and writing a report to go with it. You can get help from a classmate, family member, etc. while running the experiment, but just as an extra set of hands. No one should be helping you with the math part.

Preparation

Gather the supplies you’ll need:

  • a clear, disposable, empty, plastic bottle
  • a ruler
  • a watch, phone, or other time-keeping device OR a video-recording device.

—photo here—

Carefully poke a hole in the bottle about 3cm from the bottom. Seriously, be careful here. You might try using something sharp, like a pin or a nail, to start the hole, then widen it with a pencil. You want the final hole to have a diameter of 3-7mm. Don’t worry about being super-precise.

—photo here—

Hold a ruler next to your bottle, or tape a ruler to your bottle if you need both of your hands free. You want to be able to measure the water level, so put the “zero” end of the ruler at the bottom.

—photo here—

Cover the hole and fill the bottle with water. If your bottle has a tapered top (like the one pictured here), only fill it up in the cylindrical section (i.e. before it starts to narrow). You can cover the hole with your finger, or you might try a piece of tape (if you use tape, fold the end on itself so it’s easier to remove).

—photo here—

Data Collection

If you’re recording video (easier, I think), start recording. If you’re just using a watch or other timing device, wait for a “good” time, like a whole minute, for a starting point.

Uncover the hole, letting the water in the bottle flow out into a sink or another container. Don’t make a mess; nobody wants a mess.

—photo here—

If you’re using a watch, use the ruler to record the water level every 5 or 10 seconds or so. Pick an easy time to keep track of. Record measurements until the flow of water stops.

If you’re recording a video, let the water finish flowing out, then stop the video. Play the video back, noting the height of the water every 5 or 10 seconds or so.

Analysis

You now have a table of values: time (independent variable) and height measurements (dependent variable). If you didn’t get good data (you lost track of time, the video didn’t work, etc.), perform the experiment again. It doesn’t take long.

  1. Using Desmos, create a scatter plot for your measurements.
  2. Find an equation to fit the data as best you can.
  3. Identify the key points on the graph.
  4. How should the equation you found be restricted? i.e. what should the domain and range be?
  5. Write the equation you found in Standard Form and Vertex Form.
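
A side note before Part 2, in case anyone wonders why a parabola should fit at all. There’s a tidy physical reason, though students don’t need it: Torricelli’s law (my own aside here, not part of the task) says the water drains at a rate proportional to the square root of its height, and that integrates to a perfect-square quadratic in time:

    \frac{dh}{dt} = -k\sqrt{h} \quad\Longrightarrow\quad h(t) = \left( \sqrt{h_0} - \frac{k}{2}\,t \right)^{2}

so the vertex of the fitted parabola should land right around the moment the flow stops. For question 5, moving between the two forms is just expansion (or completing the square): with vertex form h(t) = a(t - p)^2 + q,

    a(t - p)^2 + q = at^2 - 2apt + (ap^2 + q)

gives Standard Form directly. And if students use Desmos regressions rather than fitting by eye, entering the table columns x_1 and y_1 and typing something like y_1 ~ a(x_1 - p)^2 + q should produce the Vertex Form parameters automatically (the tilde is Desmos’s regression syntax).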

Leaking Bottle – Summative Task – Part 2

One small change

Repeat the above experiment, but this time put another hole about 7-10cm above the first one. Uncover them at the same time, so water will flow out of both holes.

—photo here—

Your analysis will be a little more complex, because you won’t have a single, nice equation that can accurately model the data.

  1. Using Desmos, create a scatter plot for your measurements.
  2. Find an equation (or equations!) to fit the data as best you can.
  3. Identify the key points on the graph.
  4. How should the equation(s) you found be restricted? i.e. what should the domain(s) and range(s) be?
  5. Write the equation(s) you found in Standard Form and Vertex Form.
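
My guess at where this analysis lands (an assumption about a reasonable model, not a required answer): while the water is above the upper hole, both holes are draining; once the level falls below the upper hole, only the lower one is. So the data should want two quadratic pieces rather than one:

    h(t) =
    \begin{cases}
      a_1 (t - p_1)^2 + q_1, & 0 \le t \le t^{*} \quad \text{(both holes draining)} \\
      a_2 (t - p_2)^2 + q_2, & t > t^{*} \quad \text{(lower hole only)}
    \end{cases}

where t^{*} is the moment the water level reaches the upper hole. In Desmos, a curly-brace domain restriction like a_1(x - p_1)^2 + q_1 \{0 \le x \le t^{*}\} plots each piece over its own interval, which practically answers question 4 on its own.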

Assessment and Evaluation: sacrificing complexity for granularity

I teach Math in Ontario. We have an “Achievement Chart” (see pages 28-29) which lists four categories of knowledge and skills. When we assess and evaluate student work, we separate student performance into the “TACK” categories: Thinking, Application, Communication, and Knowledge. The Chart includes criteria for each category and descriptors for different Levels of performance.

The curriculum itself is divided into Strands for each course, and these strands describe Overall Expectations and Specific Expectations (essentially the details of the Overalls).

So when evaluating student work, we evaluate Overall Expectations in the context of the four Categories of Knowledge and Skills, and we should have a “balance” between the categories (not equality, necessarily).

The truth is that I’m having some trouble with it. I posted a little while ago that I was struggling with the Thinking category, and that’s still true. But there is another issue that’s more pervasive and possibly more problematic.

Isolating skills

When trying to separate out the different components of student performance, we would often ask questions that “highlight” a particular area. Essentially we would write questions that would isolate a student’s understanding of that area.

That’s a fairly mathematical, scientific-sounding thing to do, after all. Control for the other variables, and the effect you see is a result of the variable you’re hoping to measure.

For example, we wouldn’t ask a student to solve a bunch of systems of equations which only had “nasty” numbers like fractions in them (or other unfairly-maligned number types), because we fear that a student who is terrible with fractions will stumble over them and be unable to demonstrate their ability to solve the system of equations. So we remove the “barrier” of numerical nastiness in order to let the skill we’re interested in (solving the system) be the only thing that can trip students up.

This isn’t a great idea

But we do that over and over again, isolating skill after skill in an effort to pinpoint student learning in each area, make a plan for improvement, and report the results. And in the end, students seem to be learning tiny little skills, procedures, and algorithms, which will help them to be successful on our tests without developing the connections between concepts or long-term understanding.

We want to have “authentic, real-world problems” in our teaching so that students can make connections to the real world and (fingers crossed) want to be engaged in the learning. But authentic problems are complex problems, and by julienning our concepts into matchstick-size steps we are sacrificing meaningful learning opportunities.

What if we didn’t have to evaluate?

We’re slicing these concepts so finely because we’re aiming for that granularity. We want to be fair to our students and not penalize their system-solving because of their fraction-failings.

But if there were no marks to attach, would we do the same thing? Would we work so hard at isolating skills, or would we take a broader approach?

My MDM4U class

I’m teaching Data Management right now, and the strand dealing with statistical analysis has a lot of procedural skills listed, followed by a bunch of analysis skills. If I evaluate the students’ abilities in summarizing data with a scatter plot and line-of-best-fit, do I then ask them to analyze and interpret the data based on their own plot and line? What if they mess up the plot? Don’t I then have to accept their analysis based on their initial errors? Oh wait, I could make them summarize the data, then I can give them a summary for a different data set and ask them to draw conclusions from that summary! Then they’ll have the same starting point for analysis, and they can’t accidentally make the question too easy or hard!

But I’ve just messed up one of my goals, then: I’ve removed the authenticity and kept ownership of the task for myself. I haven’t empowered my students if I do it this way, and I’ve possibly sacrificed meaningful complexity. Worse, I’m only doing this because I need to evaluate them. I’d much rather require them to gather, summarize, and analyze data that interest them and then discuss it with them, helping them to learn and grow in that richer context.

As always…

…I don’t have answers. Sorry. I’m trying hard to make the work meaningful and the learning deep while still exposing as much detail about student thinking as I can. I’m sure in the end it’ll be a trade-off.

Improving report card comments with a checklist

It’s report card season in Ontario, and I don’t know too many people who are happy about it.

I don’t love evaluating student performance in general, and the persistent and poisonous focus on MARKS by most stakeholders in student learning is infuriating. Marks are a huge loss of information about student performance, in my rarely-humble opinion. Along with those percentage marks we have a much-less-valued-but-more-valuable evaluation of Learning Skills. My students mostly ignored those, I think.

In truth, the hero of the report card is The Mighty Comment. It has the superpowers of Explanation and Recommendation. It’s here that I can talk about what’s going on, why, and how to improve.

After all, assessment is for improving learning. Reporting a mark of 68% doesn’t do that.

So The Mighty Comment is our hope for the future, the only power that can save our students and their parents from receiving an all-but-useless document.

Let’s do it right.

I’m teaching in a high school, and we have both a provided comment bank and the latitude to write our own comments. The only rules are that we need to follow the guidelines in Growing Success and keep each comment under 458 characters.

I read an interesting article at rs.io called The Unreasonable Effectiveness of Checklists.

Fireworks blazed across my brain. I need a checklist to make sure I’m doing what I want to do with every comment.

So I made one

The Report Card Comment Checklist (catchy name, eh?) is now live. I also included The Verbose Report Card Comment Checklist immediately after it to help explain what I mean. Please leave comments here on the blog if you can help me to improve it.

I sat with each of my students this term to review their marks, learning skills, and comments before I submitted them to my school admin team. I wanted them to know that I tried to write what I thought and that I cared about their improvement. I articulated their strengths and what I need them to do next. I asked them each to reflect on their comment (most of them needed to be prompted) and to tell me whether they thought it was fair, accurate, etc. One student found a typo (yay!) and two asked me to clarify what I meant. About five students said their comments sounded exactly like them, which makes me proud.

I have to admit that I made the checklist this evening; I may have to edit my comments a bit next week before they’re published.

You should just click the link for the complete version, but here it is anyway:

The Report Card Comment Checklist

Check each student’s report card comment and ask yourself these questions:

Strengths

  • does it include at least one strength?
  • are the strengths related to the course?
  • are the strengths worded positively?
  • do the strengths stand alone?

Next Steps

  • does it include at least one next step?
  • are the next steps related to improvement in the course?
  • if a student reads the next steps, will they know what to do to improve?
  • are the next steps worded positively?
  • do the next steps stand alone?

Language and Tone

  • did I check for spelling, grammar, etc.?
  • did I read it out loud?
  • did I listen for sarcasm and negative feeling in my voice?

The Point

  • will the student feel that I care about their success?
  • will the student “see themselves” in the comment?
  • will the student want to continue to improve?
  • will the parent understand how to help their child improve?
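
One last postscript: nearly everything on this checklist needs a human reader, but the 458-character limit is the one rule a script can check for you. Here’s a minimal sketch (a toy helper of my own, with made-up data; nothing official):

    # Toy checker for the one mechanical rule: the 458-character limit.
    # Every other item on the checklist still needs a human.
    MAX_CHARS = 458

    def over_limit(comments: dict[str, str]) -> list[tuple[str, int]]:
        """Return (student, length) pairs for comments over the limit."""
        return [(name, len(text))
                for name, text in comments.items()
                if len(text) > MAX_CHARS]

    comments = {
        "Student A": "Alex clearly explains his reasoning when solving equations...",
    }

    for name, length in over_limit(comments):
        print(f"{name}: {length} characters (limit is {MAX_CHARS})")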