What are we doing to help kids achieve?
A few months ago I reported that I was involved in a pilot standards-based grading (SBG) program. SBG is a departure from my past practice. In the past, students received grades on a 100-point scale, and each assignment carried a similar weight, with the exception of the final. SBG not only has a different format; the idea is that we are trying to build a different culture. The goal is to stress mastering the standards. It is NOT the goal to just “get as many points as I can to get an A”.
My experiment in SBG started well but required some “tweaks” along the way. The grading program we were using for the pilot did a great job of showing how students were doing within the standards, but it was not able to calculate a “decaying average” correctly, so I had to make adjustments for second quarter. Every assignment was still aligned to standards, and each was either a “formative” assessment or a “summative” assessment. All of the formative assessments for a standard count toward 50% of the grade for that standard; the summative assessment counts for the other 50%. In practice, students would typically complete three to five or more formative assessments before they would ever take the one summative assessment for that particular standard. With this method, the program could calculate the proper grade for the standard, and it mimicked the idea of a decaying average: the formative assessments were more like practice and counted less, with the expectation that students would get better with that practice, while the summative assessment counted for more because it was a better snapshot of their performance. Overall, this seemed to play out well in the class.
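For readers curious about the arithmetic, here is a minimal sketch of the two schemes described above, on the 0-to-4 standards scale. The 65/35 decay weight is an illustrative assumption (a common convention in SBG gradebooks), not a value stated in this post.

```python
# Hypothetical sketch of the two grading schemes discussed above.
# Scores use the 0-4 standards scale; the 0.65 decay weight is an
# illustrative assumption, not the author's stated setting.

def decaying_average(scores, weight=0.65):
    """Decaying average: each new score counts for `weight` of the
    running average, so recent performance dominates."""
    avg = scores[0]
    for s in scores[1:]:
        avg = (1 - weight) * avg + weight * s
    return avg

def fifty_fifty(formative_scores, summative_score):
    """The workaround described above: the formative average is half
    of the standard's grade, the summative assessment the other half."""
    formative_avg = sum(formative_scores) / len(formative_scores)
    return 0.5 * formative_avg + 0.5 * summative_score

# A student who improves with practice (2, 3, 4) and then earns a 4 on
# the summative ends up above a flat average of all four scores, since
# both schemes weight later work more heavily.
print(fifty_fifty([2, 3, 4], 4))       # 3.5
print(decaying_average([2, 3, 4, 4]))  # recent-weighted average
```

Both functions reward growth: early stumbles count less than the final demonstration of mastery, which is the culture shift the post describes.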
Another change was the “redos”. If a student scored a 0, 1, or 2 on an assessment, they could “redo” the assessment for a different grade, and that grade could go up or down. There was only one instance where I felt a student did not try on the initial attempt because they thought the next attempt would be easier. I continued to enforce the idea that students had to show me evidence of learning before they could take the redo. I also changed the highest possible grade on a redo: at the beginning, a redo could earn the full 4 on the 0-to-4 scale, but I lowered the cap to a 3. The vast majority of the students did not try to “play” the system; this change addressed the few who did.
I also learned many new things about SBG. First, I was concerned about the grading. About 95% of all assignments were free-response questions in which students had to support an answer with multiple types of evidence, since it is extremely difficult to gauge levels of understanding with multiple-choice or true/false questions. I was afraid that this would take forever to grade, but it was much easier with a well-defined rubric that students clearly understood. Every question was aligned to a standard and scored from 0 to 4. When students came up to question a grade, the conversation was about a particular standard and the difference between “emerging” and “mastery”. Also, I just read Melissa Hemiling’s blog on “Sustainable Grading Practices”. Her methods would fit perfectly into my SBG system, and I am excited to try to incorporate some of her ideas. I am hoping they make feedback more effective and continue to positively change the culture of the classroom. I never would have considered her methods with the traditional way that I had been doing things. I was thrilled to read about a shared experience from another teacher; it helps to validate what we are both trying to achieve, and it makes me think I am a little less crazy for trying this….
The second aspect that caught me off guard was that I really had to “up” my game when it came to assessments. Again, almost every assessment was a free-response question, and on most assessments students could use their notes. That meant I could ask some really hard, complex questions, and I was pleasantly surprised to find that most of the time students rose to the occasion. I had to change my expectations for what I thought we could achieve as a class. I feel as if we can master more complex ideas under the SBG system when students are allowed to use their notes on an assessment.
The culmination of two quarters of SBG came at the semester exam. The exam had seven questions, each a free-response question addressing multiple standards. Students could take four of the questions home and had a week to complete them, turning them in at the beginning of the scheduled exam period. They then had two hours during exam time to complete the final three questions. Half of the class worked on a question in which they had to design an experiment and use lab equipment to collect data (the picture shows some items they had to analyze using mass and volume), then draw a conclusion from that data. The other half of the class worked on the other two open-ended questions. After an hour, the groups switched and then turned in their work. The most important aspects of SBG were evident in the conversations with students after the exam.
Literally every conversation was about the standard or the question. Not a single student asked, “What is my grade?” or “What is the answer?” One student said, “I kind of like this. I really had to think about these problems. Most of my exams, I cram the night before and then forget the stuff.” The most important reason to use SBG came down to what happened with one student. This particular student did well first quarter; second quarter, the student only made it to school about three days a week. Their home life was extremely challenging, and the student’s grade dropped drastically. Under the SBG system, the student received a solid B on the exam and passed the semester. I have a theory that during second quarter, the student had to be a master problem solver outside of school just to survive. The exam was all about doing what this student already knew how to do...solve problems….just in a slightly different context with a different subject. If the exam had been a series of multiple-choice questions with one or two free-response questions at the end, I do not know that she would have even finished it.
SBG tends to stress performance-based assessments. I have a theory that the strongest students will do well no matter what you throw at them. However, past research and conversations with others who use SBG indicate that SBG can catch students who would not have done well with traditional pen-and-paper “check the box” assessments. Their success in the SBG system is authentic; they truly are demonstrating mastery at a high, performance-based level. What the research does not report is that when I told that student she had earned a solid B on the exam, she had tears of joy. This, for me, makes a sometimes difficult journey worthwhile….
As a side note, I really want to give a shout-out and “Thank you” to Lauren Stewart. Without her advice and help, this journey would have been much more difficult. I also need to thank my colleague, Trisha Underwood, one of our technology innovators, who helped me learn a whole new grading system.
Comments
Thanks for the shout out! I love the idea of a lab-based question as part of the semester exam! What a great way to test problem-solving skills and give students who might struggle on a straight paper-pencil test a place to shine! I really like your approach of letting students use their notes to gain the opportunity to ask more complex questions. I have an internal struggle about exams every year asking myself "what is a summative assessment" and "what level of 'rigor' is appropriate". This definitely is challenging me to question what I consider "rigor" to mean! Thanks for the food for thought!