What do you choose to learn about when you’re not at school?

I had an interesting talk with a student today about a variety of topics related to schooling and education. I asked her one question that has stayed with me all evening.

“What do you learn about when you’re not at school? What do you learn about because you want to, not because you have to? What are you curious about?”

I think each person’s answers can give some insight into what their passions are. Curiosity is an incredibly valuable commodity, and nurturing it is some of the most important work we do. Let’s help foster the inquiring mindset while being careful not to steal the passion by imposing our structures. 

“Jigsaw” activities don’t work

Maybe there is a way to make them work, but I haven’t seen it yet. 

A jigsaw activity as I have experienced it involves a group of people all needing to learn the same thing. The new learning is divided by the facilitators into some number of discrete pieces.

Suppose there are four different components to a concept or skill that participants want to learn about. Each of those components becomes a station in the room. The learners are then divided into groups of at least four, and each person within the group is assigned one of the four stations to become an “expert” at that component. 

The participants scatter to their stations, and they engage in dialogue to become experts at their concepts or skills. They then return to their home group to share their learning with their peers.
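The mechanics described above amount to a simple partition-and-transpose: split people into home groups, then send the i-th member of each group to station i. A minimal sketch (names and function are hypothetical, purely for illustration):

```python
def jigsaw(participants, n_stations=4):
    """Split participants into home groups of n_stations members,
    then send the i-th member of each home group to expert station i.
    Assumes len(participants) is a multiple of n_stations."""
    home_groups = [participants[i:i + n_stations]
                   for i in range(0, len(participants), n_stations)]
    # Transpose: station i collects the i-th member of every home group.
    stations = [[group[i] for group in home_groups]
                for i in range(n_stations)]
    return home_groups, stations

# 8 participants -> 2 home groups of 4, and 4 expert stations of 2
homes, experts = jigsaw(["Ana", "Ben", "Cal", "Dee",
                         "Eli", "Fay", "Gus", "Hal"])
```

Here `homes` would be two groups of four, and `experts` four station pairs, one member drawn from each home group.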

The trouble comes here. The experts have had a lot of time to think and reflect upon a concept or skill, while the remaining members of the home group have to simply accept and absorb each expert's final learning.

Deep learning comes from working through a concept, not from simply observing or hearing it. Instead of a jigsaw, the participants might as well simply read an article with the "answer." It would be more efficient if acceptance were the goal.

But the learning is in the work, not in the receipt of knowledge, so each person needs to be part of each expert group. 

If this is true, jigsawing is counterproductive. 

Am I missing something? Are we just doing it wrong?

Improving the evaluation of learning in a project-based class

I’ve been struggling for a few years with providing rich, authentic tasks for my computer science students and then having to evaluate their work.

My students learn a lot of skills quickly when solving problems they’re interested in solving. That’s wonderful.

I can’t conceive of a problem they will all be interested in solving. That’s frustrating.

In the past, I have assigned a specific task to my entire CS class. I tried to design a problem that I felt would be compelling, and that my students would readily engage with and overcome. The point has always been to develop broadly applicable skills, good code hygiene, and deep conceptual understanding of software design. The point is not to write the next great 2D platformer or the most complete scatterplot-generating utility.

Unfortunately, I could never quite get it right. It’s not because my tasks were inherently weak; rather it’s that my students were inherently different from one another. They don’t all like the same things.

I believe that students sometimes need to do things that are good for them but that they don’t like to do. They sometimes need the Brussels sprouts of learning until they acquire the taste for it. But if they can get the same value from the kohlrabi of learning and enjoy it, why wouldn’t we allow for that instead?

So I’ve tried giving a pretty broad guideline and asking students to decide what they want to write. They choose, and they learn a great deal along the way. Their code for some methods is surprisingly intricate, which is wonderful to see. They encounter problems while pursuing a goal that captures them, and they overcome those problems by learning.

Sounds good, eh?

Of course, they don’t perform independently: they learn from each other, from experts on the Internet, and from me. They get all kinds of help to accomplish their goals, as you would expect of anyone learning a new skill. And then I evaluate their learning on a 101-point scale based on a product that is an amalgam of resources, support, and learning.

Seems a bit unfair and inaccurate.

I asked for suggestions from some other teachers about how to make this work better:

  • ask students to help design the evaluation protocols
  • use learning goals and success criteria to develop levels instead of percentage grades
  • determine the goals for the task and then have students explain how they have demonstrated each expectation
  • determine the goals for the task and then have students design the task based on the expectations
  • find out each student’s personal goals for learning and then determine the criteria for the task individually based on each student’s goals

I’m not sure what to do moving forward, and I’d like some more feedback from the community.

Thanks, everyone!

Some advice I give my students

I teach high school. I say these things to almost every class. 

“Don’t trick yourself into thinking you understand something you don’t.”

“Write it down.”

“Be gentle with each other.”

“You won’t look back in ten years and wish you had been meaner in high school. No matter how nice you think you are now, when you’re older you’ll see it differently. So be kinder than you think you should be now.”

“Let people like what they like. If they’re not hurting anyone it’s fine. I don’t need you to like the music I listen to, but I do need you to let me like it.”

“Everything is fascinating if you’re curious.”

Too honest for EQAO

I administered the Grade 9 EQAO Assessment of Mathematics this semester. It’s a provincial, standardized test that students write for two hours across two days, an hour per day. Part of the test is multiple choice, and part is open response (longer, written solutions).

In the weeks before the test I practised with my kids, gave advice, and tried to make them comfortable while encouraging them to do their best. I told them to try every question, saying things like “You can’t get marks for work you don’t show!”, “You never know what you might get marks for!”, and “If you don’t know a multiple choice answer you should guess.”

One of my students left three multiple choice questions blank. 

The EQAO Administration Guide expressly forbids drawing a student’s attention to an unanswered question. So I collected her work. 

Afterward I asked her about it. “Why didn’t you answer those questions? You could have guessed; you might have gotten some right.”

She looked steadily at me. “I didn’t know the answers.”

I felt (and feel) terrible about it. 

Not that I didn’t prepare her well for the assessment. I feel terrible because I realized that I asked my students to lie.

I asked them to guess “if necessary”, to hide their lack of knowledge, to pretend that they knew things they did not. Because I want them to get good marks, and I want our school to do well. 

That is a terrible thing to ask, and for a meaningless reason. 

My student didn’t just guess. She didn’t play this ridiculous game. She showed integrity. 

And I’m really proud of her for that. 

Learn-practise-perform cycle limits learning in CS

Like many courses, the beginning of my current computer science e-Learning class looked like this:

  • Teach small skill
  • Teach small skill
  • Give feedback on practice work
  • Teach small skill
  • Teach small skill
  • Give feedback on practice work
  • Evaluate performance task

This separation of learning from graded performance is intended to give students time to practise before we assign a numerical grade. This sounds like a good move on the surface. It’s certainly well-intentioned.

But this process is broken. It limits learning significantly.

If the performance task is complex enough to be meaningful, it requires a synthesis of skills and understandings that the students haven’t had time to practise. In this case I’m evaluating each student’s ability to accomplish something truly useful when they’ve only had the opportunity to practise small skills.

If instead the performance task has many small components which aren’t interdependent, students never develop the deeper understanding or the relationships between concepts. In this case I’m evaluating each student’s small skills without evaluating their ability to accomplish something truly useful, which isn’t acceptable either.

And there isn’t time to do both. I can’t offer them the time to complete a large, meaningful practice task and then evaluate another large, meaningful performance task.

The barrier here is the evaluation of performance. It requires a high level of independence on the part of the student so that I can accurately assign a numerical grade.

So I’m trying something different.

Instead of using tiny, “real-world” examples (that I make up) to develop tiny, discrete skills, I started teaching through large, student-driven projects. I got rid of the little lessons building up to the performance task, and I stopped worrying about whether they had practised everything in advance.

The process looks more like this:

  • Develop project ideas with students and provide focus
  • Support students as they design
  • Provide feedback through periodic check-ins
  • Teach mini-lessons as needed for incidental learning (design, skills, etc.)
  • Summarize learning with students to consolidate

I couldn’t design a sequence of learning tasks that would be as effective as my students’ current projects are. They’re working hard to accomplish goals they chose, and they’re solving hundreds of small and large problems along the way.

They couldn’t appreciate the small, discrete lessons I was teaching with the small, artificial stories. They didn’t have the context to fit the ideas into. It was only when the project was large and meaningful that my students truly began to grasp the big concepts which the small skills support.

And now I don’t have a practice/perform cycle. It’s all practice, and it’s all performance. It’s more like real life, less like school, and it’s dramatically more effective. It’s much richer, much faster learning than the old “complete activity 2.4” approach.

Evaluation is very difficult, though.

My students told me what’s going on in my class

I talked to my data management kids today about the not-so-great class we had yesterday. We pushed all the desks aside and put our chairs into a (sort of) circle for this conversation. I explained how frustrated I was with the lack of feedback I was getting during class, and that I was concerned that my goals did not align with their goals for the course.

I asked them why they were taking the course, and what they were hoping to get out of it. My speculation last night was partly on target: their primary goals are to get a high school diploma, with a good mark in this course, so that they can get into “the next thing” (university programs for most of them). Some mentioned that they thought statistics would be helpful for their planned program. Overwhelmingly the course is seen as a means to an end. It’s not 110+ hours of learning; it’s more like a long tunnel they must pass through to get on with life.

This is what I was afraid of, and yet sitting there with my students I can’t blame them. Our school system (through post-secondary as well) trains them to focus on achievement, which is measured by task completion and marks. Our system doesn’t (can’t?) train them to value learning over these other goals, because the system itself doesn’t value learning over task completion and marks.

We had an honest conversation about what really matters in a math class. We talked about how they all learn exactly the same things even though they don’t all have exactly the same plans for the future. We talked about how we have a “just-in-case” curriculum: you must learn these skills just in case you need them someday.

And the most frustrating part for me was that they all know that a lot of what we do in class doesn’t really matter in the sense that it doesn’t really change them. They haven’t been improved by learning how to use the hypergeometric probability distribution. They will forget it when the exam is over because it doesn’t matter much to them. It’s not something that they’ll use, likely. And if they need it, it’ll be because they’re steeped in all the math that goes along with it.
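For what it's worth, the hypergeometric distribution mentioned above fits in a few lines of Python: the probability of exactly k successes in n draws, without replacement, from a population of N items containing K successes. The function name and the card example are mine, just for illustration:

```python
from math import comb

def hypergeom_pmf(N, K, n, k):
    """P(exactly k successes in n draws, without replacement,
    from a population of N items of which K are successes)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Worked example: probability of drawing exactly 2 hearts
# in a 5-card hand from a standard 52-card deck.
p = hypergeom_pmf(N=52, K=13, n=5, k=2)
print(round(p, 4))  # about 0.2743
```

Which is exactly the kind of tidy, exam-friendly fact that evaporates the week after the exam if it's never attached to anything the student cares about.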

But not everything we do is like that in my class. Some things do matter. And I’m feeling a bit guilty tonight because I think I should have focused the course a bit differently, spending more time on the parts that will change my students. We’re only a few weeks from the end of the course and we don’t have the luxury of a slow, thoughtful pace that the remaining topics deserve. I can’t fix that now, but I can work on it for next year.

I grabbed the Chromebook cart and sent my kids to a Google Form with three paragraph-response prompts:

  • Start
  • Stop
  • Continue

They each wrote anonymously about what they think we should start to do in our class (perhaps an approach they like from another class), stop doing (approaches I’m taking that aren’t working for them), and continue doing (class components they don’t want to lose if I change things). Their responses were fascinating, and I’m going to read them over a few more times to make sure I get it all. It was pretty clear they don’t want any more audio clips, though :)

Our conversation also revealed that I misinterpreted their silence as a lack of interest or understanding. What I learned from them today was that there were portions of yesterday’s class that they did enjoy, but I couldn’t see it. They didn’t provide the feedback I was expecting, and I didn’t adjust my teaching to suit their needs. It was a difficult conversation for me (and probably them), and it took some time, but it was worth it. I understand my students better now, and I think I can be a better teacher.

It’s not all fixed, but I don’t feel quite like I did yesterday. I’m going to go to class tomorrow with a plan to improve my teaching and their learning at the same time.