Group quiz question follow-up

Here is where edu-blogging + Twitter really shines, folks. To bring you all up to speed, here is a brief summary of events:

1. I read a post by Joss Ives about 2-stage quizzes (stage 1= solo, stage 2=group)

2. I said, “cool idea, how can I make that work with standards-based grading?” and made a blog post about my quandary

3. I sent it to a few Twitter users who I know are #SBG veterans

4. I received a great comment from Matt Townsley that helped me see the problem more clearly

So, to tackle Matt’s questions one by one, here goes:

Matt: What instructional or classroom management concern are you trying to address by introducing this idea into your class?

I see the idea of immediately following an individual quiz with a group quiz as a chance for students to (1) get immediate feedback from their peers about the quiz and where their knowledge level is, and (2) improve their understanding of the content/concept at a time when they should be most receptive to correcting misconceptions and filling knowledge gaps.

The main problem I think the group quiz may address is that students generally suck at diagnosing their knowledge gaps and taking intentional steps to repair them. I’m hoping the group quiz will help those who bombed the individual quiz be more successful upon re-assessment.

Matt: Another idea – could you add a third stage? After students receive feedback (no letter grade…or a fictitious grade based on the 75% + 25% formula) from the second stage, could you add a third stage where students completed it only individually?

The decision: What I ended up doing falls somewhere in between. We did a small-group whiteboard session yesterday where I circulated to ask questions and provide feedback. This served as a formative assessment for me and as a culminating learning experience for them. Today, they took the quiz individually for a grade. After all of the individual quizzes were complete, I had them complete the same quiz in small groups NOT for a grade.

In my next post, I’ll share the results and my reflection on the process!

Working group assessments in with #SBG

Yesterday, I read a few posts from physics professor Joss Ives at his blog, Science Learnification. One of the posts that really got me thinking was about weekly two-stage quizzes in his physics classes.

A two-stage group exam is a form of assessment where students learn as part of the assessment. The idea is that the students write an exam individually, hand in their individual exams, and then re-write the same or similar exam in groups, where learning, volume and even fun are all had.

I really like the idea of having students take a quiz individually, then take it again immediately afterward in a group. I’m going to give this a try next time I give a quiz. If nothing else, instant feedback mixed with collaborative problem solving is a powerful combination.

What I’m trying to wrap my brain around right now is how to work this in with standards-based grading.

Since I don’t give points, I can’t do the 75% individual score + 25% group score = quiz grade split that Joss uses. If I could sit with all groups at once, I could observe and listen for individual involvement in the discussion & problem solving.
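For anyone curious what that split works out to in practice, here is a minimal sketch in Python. The function name is mine, and the "group score can only help" fallback in the last line is my assumption (a common variant of two-stage schemes), not something Joss specified:

```python
def two_stage_quiz_grade(individual: float, group: float) -> float:
    """Blend a solo score and a group score (both on a 0-100 scale)
    using the 75% individual + 25% group weighting Joss describes."""
    blended = 0.75 * individual + 0.25 * group
    # Assumed variant: never let a weaker group stage lower a student's
    # grade below their individual score.
    return max(blended, individual)
```

So a student who scores 80 solo and whose group scores 100 would end up with an 85, while a student who scores 90 solo keeps the 90 even if the group stumbles.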

It may be that we could just do the group quiz portion as a learning experience and leave it at that. Since my students are always allowed to re-assess, there is value in learning after the assessment.

What I think would be lacking for me is the level of engagement that Joss reports in the group problem solving portion of the quiz. His kids are engaged in no small part because everyone’s grade is on the line. I’m not sure where the immediate motivation would be for many of my students.

Any ideas?

Standards-based grading 1st Trimester post-game

I have just completed my first trimester of using standards-based grading (#SBG) after taking a 2 year break from it. Now it’s time to step to the podium for the post-game press conference.

Opening Statement:

This time around, it has gone much better. No major student complaints, no parent “sit-downs” where they are mentally fitting me for concrete galoshes, and no suggestions from administrators or school board members that all teachers adopt #SBG or none do it. Now I will take your questions.

“Coach! Can you give us three things you liked about this trimester?”

3 Likes:

#1 – Keep it simple, stupid

One of my biggest frustrations with #SBG my first time trying it was the complexity of the grade book. Score entry and task/assessment tracking were awkward. The Power Law made grades mysterious. Averaging assessments to reach a standard score was even worse (and counter to the ethos of #SBG). District-mandated grading software compounded all of these problems. This time around, I have better software and I went with my adaptation of Frank Noschese’s K.I.S. SBG. This has worked much better.

#2 – The “eye test”

Students’ final grades were much in line with my informal assessment of their skills, knowledge and effort. I’ve always felt that a good teacher could give his students a very accurate grade without any scoring, points, standards, etc. We do all of that for the consumption of students, parents, administrators, etc. so that there is perceived fairness and objectivity to the grades. Of course, grades are still subjective no matter how you arrive at them.

#3 – Winning hearts and minds

Quite a few students have caught on to the system and have begun using the language of “meeting standard” and “reassessment.” I wish I could say that they all get it and they all love it but that would be a lie. They’re coming around, though, just not nearly as fast as I’d like (isn’t that always the case?). The real success has been the number of colleagues that have expressed interest in coming over to the #SBG Rebel Alliance (I can’t picture #SBG as the Dark Side). One has even decided to take the #SBG plunge for 2nd trimester!

“Thanks coach. Now can you give us 3 dislikes about your 1st trimester’s grading efforts?”

3 Dislikes:

#1 – How many points is this worth?

Yes, I still get this question and, yes, I still hate it. There are still too many students who really don’t get #SBG or how their grade is calculated. I need to get better at communicating the system more clearly, quickly and effectively. Most likely, I need to simplify what I tell them and dole it out in smaller bites on a need-to-know basis. Luckily, I get another chance at this next trimester!

#2 – “Mister, we take too many quizzes!”

The kids who have said this to me are right. I have been quiz-happy this trimester. For someone who used quizzes sparingly in my first 7 years of teaching, I’ve become too dependent on quizzes as my primary type of formal assessment. One of my main goals for 2nd trimester is to do more informal assessments (observations, conversations, discussions, whiteboarding, writing prompts, etc.) and to gather records of said assessments to use for grading purposes. I have decided that quizzes do certainly have a time and place in my classroom, though.

#3 – Grain size

This dislike is in reference to the standards I used for grading purposes 1st trimester. I struggled through much of the trimester to effectively triangulate the ideal “grain size” for my graded standards. In other words, some standards (e.g., Plate Tectonics) were too broad and actually included several different key parts (Causes of Plate Tectonics, Effects of Plate Tectonics, Plate Boundaries, Layers of the Earth). Other standards became too narrow (Microscope Skills) and could only be assessed very directly.

“Coach – How would you assess yourself for the 1st trimester?”

Overall, I’m giving myself a solid 2.5 (out of 4) for 1st trimester’s #SBG efforts. I have demonstrated basic understanding of #SBG and have applied the skill with partial effectiveness.

“What are your goals for next trimester?”

I hope to leap to a 3.5 or 4 next trimester by improving my communication to students, diversifying my assessments and honing my standard “size.”

“Okay, coach, that sounds like it would earn you a solid 3 for ‘meeting standard.’ Just how do you plan to exceed the standard?”

I hope to successfully mentor at least one colleague into the #SBG team. Beyond that, I plan to make more of an effort to spread the word to my larger base of colleagues outside of the science department. I work on a staff of over 100, so there are many opportunities to find willing converts!


Standards-based grading welcomes me back with open arms!

A few years ago, I dove into the world of standards-based grading (SBG). While it had its merits, I decided to dump SBG for what I called UNgrading. I happily rolled with UNgrading for two years and mostly loved it. My chief struggle was finding time to conference with all students about their grades.

This year, I’m teaching at a new school with much less flexibility. My new school is much more locked in to curriculum and pacing guides, common assessments, etc. I have larger classes and a larger student load overall.

After a few weeks of existential vertigo, I needed to break the status quo. Full-fledged project-based learning with UNgrading wasn’t an option for me or for my new colleagues, so I decided SBG would champion my subversion campaign.

I have mostly avoided my previous gradebook frustrations with a version of the Keep It Simple Standards-Based Grading recommended by Frank Noschese. I have also read everything on the blogs of Shawn Cornally and Jason Buell and they have been crazy helpful. Yay blogosphere!

The cool thing is that several of my colleagues have expressed interest in jumping on board the SBG Express! My new administrators have been incredibly supportive of SBG as well.

I’m not happy that my primary forms of assessment so far have been lab reports and quizzes. I definitely need help in this area.

I still have a lot of room for improving how well I communicate my grading method to my students (and parents). The kids are only just now starting to get it, 12 weeks into the school year.

In spite of these struggles, I feel like I’m on the right track!

Lesson Trial – Argument Writing

NOTE: This lesson trial is an assignment for the Teaching 2.0 Master of Science in Education program at University of Wisconsin – Oshkosh. Specifically, this is an assignment for ED715: Current Trends in Curriculum and Instruction – Inquiry & Problem Solving taught by Eric Brunsell. This is the first in a series of 3 mandatory lesson trials for this course, in which we must apply learnings from our coursework to our classroom instruction and reflect on the results.




Inquiry can be broken down into three key areas: questioning, investigation, and argumentation. Without any one of these three legs, inquiry loses its power.

Good questions are at the heart of inquiry – I hope that goes without saying. There is no inquiry without genuine questions.

Questioning has to be followed up with investigation – seeking answers to questions. This could take the form of scientific experiments, deep research, interviews, etc. No matter the type of investigation, if this step is neglected, the questions are meaningless and the answers are pure fluff.

Finally, we come to the leg that makes inquiry social and human – the argument. Great investigations based on interesting questions hold only so much power without this critical step.

Exposing one’s work to criticism – to share, to get feedback, to educate others – is the step that makes inquiry soar. With students, this may be the hardest step – and the most important.

In Teaching Argument Writing (Hillocks, 2011), the author lays out a powerful case for the importance of crafting strong arguments. He follows his argument with a clear method for teaching students to do so. My lesson trial was based upon this method.

Lesson Trial

My students had just completed a 3 day lab investigation with the common nematode, C. elegans. The investigation was a comparison of wild type C. elegans and a genetic mutant variety. The mutant C. elegans had the ability to maintain a normal level of activity when exposed to a salty environment, whereas the wild type had to essentially freeze in place for 24 hours in order to adapt to the salt.

To make a long story short, my students had ample data and needed to make a conclusion.

To start the argument writing process, I prompted my students with the question, “Which type of C. elegans was better, the mutant or the wild type, and why?” Because of the ambiguity of the answer (one could argue the merits of each side), students were forced to pick a side and use data to back up their argument.

Normally, this is the point where I have to pester students over and over again to use data in their conclusions and to explain how their data supports their conclusion.

My students were already in groups of 3 or 4 for their lab work, so I asked them to work with that group. To begin, I had them get a whiteboard (I have several poster sized whiteboards made from shower board) and draw this graphic organizer:


Once they had done that, I asked them to pick a side as a group and write their claim (their answer to my question) in the top section.

Next, after a brief discussion about evidence, I asked them to gather evidence to support their claim. They could use their lab notes, our class data (posted in a spreadsheet projected for all to see), or some data tables that I had provided them earlier in the lab. These data tables contained data from experiments previously done on C. elegans in labs.

The next step was probably the hardest, and required a bit more discussion and explanation.

For each piece of evidence they listed, I asked them to come up with reasoning to connect the evidence to their claim. To do this, they had to come up with common sense or scientific ways to explain how each piece of evidence supported their claim.

Finally, I asked each individual to write a conclusion in paragraph form. To do this, they used the whiteboard their group had generated and turned their claim, evidence and reasoning into a paragraph or two.


While the reasoning was weak at times, these were very solid conclusions.

Usually a good portion of my students make claims entirely based on vague qualitative statements about lab data (e.g., “because the temperature went up”, or “because the pH changed a lot”, etc.).

However, over 90% of the conclusions I collected in this lesson trial contained specific data to support their claim.

Not only do too few students use data consistently in conclusions, but those who do often just throw it in there and expect the data to speak for itself (e.g., “the temperature was 90 deg. C”, or “the pH rose from 7 to 11”, etc.). There is often little or no explanation of HOW the chosen data supports the claim.

In this lesson trial, approximately 70% of my students had reasoning that clearly connected their data to their claim. As mentioned before, some of the reasoning was very weak or vague. That being said, I rarely get ANY reasoning from students in a first draft of a conclusion.


Overall, this process was very effective. I have tried many things in the past to teach students to write good conclusions. I have provided models of various levels of quality, detailed rubrics, feedback and revision protocols, and more. However, none of those processes has been as efficient as this one at getting students into the ballpark of a quality conclusion.

This process could easily be modified to culminate in paired discussions, a whole class discussion, or full-fledged lab report writing. It could also be a great lead in to deeper inquiry – they could find the weak points in their claim and “go back to the drawing board” to gather more data.


Hillocks, G. (2011). Teaching argument writing, grades 6-12: supporting claims with relevant evidence and clear reasoning. Portsmouth, NH: Heinemann.

Special thanks to Dr. Maureen Munn and Dr. Jeff Shaver from the University of Washington Genome Sciences Educational Outreach Program for providing the C. elegans lab and all associated materials. It was awesome!



Meaningful Grading

Note: This post is part of the Teaching 2.0 Masters in Curriculum and Instruction Program at the University of Wisconsin-Oshkosh. My current classes are about Project Based Learning and Assessment.


For grades to be valuable, they must be meaningful to all stakeholders. Teachers, parents, administrators, and students all have a vested interest in the value of grades. However, the most important link in this chain – the student – is the one that is often forgotten. Grades affect the lives of students throughout their formal schooling experience. Grades color, and often taint, the way students perceive their school experience and, ultimately, themselves. Yet, by the time students reach me in high school, they have been conditioned to look at grades as reward and punishment for following orders, meeting deadlines, and guessing what is in the teacher’s head. As Susan Brookhart states in Grading and Learning: Practices that Support Student Achievement (2011), “As students progress through school, their dissatisfaction with and cynicism about grades increase and their belief in the fairness of grades declines (Evans & Engelberg, 1988).” Not only is this an unfortunate situation for student motivation and enjoyment of school, it can be detrimental to learning; “Grading policies that are intended to elicit student compliance are not conducive to the active pursuit of learning.” (Brookhart, 2011) Isn’t student learning the primary goal of education?

The Challenge
My challenge at the high school level is to break through these years of conditioning with a different approach to grading. I have found this to be a monumental challenge. A lone teacher diving into this battle must not only fight the inertia of years of conditioning, he must also swim against the current created by colleagues who teach the other classes each student is currently experiencing.

What I’ve Already Implemented
My answer to this challenge has been to strive to make grades as meaningful and as connected to student learning as possible. However, I have also endeavored to devalue grades in my classroom. I don’t give “points” for any task, activity, or assignment. Period. My gradebook is completely focused upon each project that my students engage in and the critical science standards aligned with that project. At the culmination of each project, I ask my students to self-assess the quality of their project work and the level of standard attainment. This self-assessment process includes a reflection on the project, a rubric, and a conference between the student and me. At this conference, the student tells me what grade they feel they have earned and why. This process is repeated at the end of each semester with a semester portfolio and reflection, followed by a conference. This reflects the principle that, “grades for individual assignments should reflect the achievement demonstrated in the work. Grades for report cards should reflect the achievement demonstrated in the body of work for that report period.” (Brookhart, 2011)

Next Steps
My next step – and this may be the most challenging step – is to involve students more effectively in the identification of learning goals for each project and in determining methods for reaching those goals. I have done this at times with mixed results. That being said, I don’t know that I’ve ever done it effectively and intentionally enough. My plan is to hold discussions with my classes early in our upcoming projects about the ultimate end goal of the project and the required learning for said project. I will share the state standards with them and ask them to help me identify the ones that they most want to tackle within the project. We will discuss ways to meet those standards, both along the way and in the final product. I will spread this process out over several days in small doses in order to prevent student burnout with reading state standards. The next step will then be to discuss specific activities, lessons, etc. that they would like me to deliver in order to help them meet these goals that we have agreed upon. Finally, we will co-create a rubric for the final project that will help them to see the criteria upon which they will assess the quality of their learning and the final product of our project.

I believe that this process will also allow students to more effectively track their own learning throughout a project. This is an area that my students really struggle with. And yet, “long-term projects lend themselves to monitoring and feedback along the way before the final project is finished and graded.” (Brookhart, 2011). While I have made feedback and revision an important part of my classes, I have not yet settled on an effective process for students to track their own progress. Regular blog reflections and revisiting the project rubric have helped. That being said, I often feel that students don’t understand the goals upon which they are reflecting and self-assessing. Furthermore, I feel that many don’t see the value of or reasoning behind these goals. I am convinced that this will cease to be a problem if I bring them into the goal identification process from the beginning.

At the culmination of the project, students will go through the same portfolio, reflection, self-assessment and conference process that they have been experiencing thus far. However, I believe that this process will be much more meaningful for them when they have a deeper understanding of the learning goals within the project. I plan to pilot this process in my physics classes.

Effective Feedback

Note: This post is part of the Teaching 2.0 Masters in Curriculum and Instruction Program at the University of Wisconsin-Oshkosh. My current classes are about Project Based Learning and Assessment. This synthesis essay is intended to focus on effective feedback.


The deeper I have moved into project based learning, the more I have seen the power that true feedback wields. When my students are engaged in a meaningful (to them) task that is challenging and engaging, nothing can stop their learning. When I see a community of learners pulling together towards a common goal, I see amazing growth from everyone in the group.

This is where feedback becomes the core of what I do as a teacher. Whether I’m facilitating a whole class critique session or simply giving informal verbal feedback to a student in passing, this mentoring is absolutely essential. I constantly push myself to set up projects for my students that make feedback and revision an authentic and central aspect of our process.

Critique is one of the best ways to provide feedback to my students. In a whole class critique session, my students see aspects of quality work modeled by their classmates while also discussing how to make that work better. In a peer critique or gallery critique session, all students get feedback from their peers to help improve their work. In these models, they also see examples of how their peers are attacking the task; this can help them get past sticking points. Teaching students to give effective self and peer feedback is the best way that I can effectively and efficiently ensure all students get feedback.

I recently read (okay skimmed) a study titled “Tell Me What I Did Wrong: Experts Seek and Respond to Negative Feedback.” This study found that as one’s expertise grows in a given area, he or she seeks increasingly critical feedback to improve performance. From this, one can infer that novices (i.e. students) desire and need more positive feedback in order to see what they are doing right. As their knowledge grows in a given content area or skill, they need increasingly critical feedback in order to move forward.

I am taking my lessons from critique and from this study (and other similar articles I’ve encountered) and using them to continue to hone my feedback skills. When I give students feedback, I tend to use the “Notice, Wonder, Suggest” format. In other words, I point out things that I notice about the work, ask questions and make suggestions for improvement. When kids give each other feedback, I usually coach them to use “Praise, Question, Suggestion.” Both of these formats exist to make sure that kids get some indication of what they are doing well along with some ideas for improvement. In the future, I plan to intentionally point out the things that they are doing well when I give early feedback (or to kids who are struggling).

The great thing about this post is that I am giving myself feedback on my feedback technique as I’m writing it!