Monday, May 14, 2018

Module 6: Success Story Two, Landmark Elementary

The second success story I read took place at Landmark Elementary. The story covered the first three years of the school's implementation of data teams, so I was able to see how the teams evolved throughout the school over time.

Over those three years, the school saw a huge shift in student achievement. The data teams focused on a narrow goal, but it had a broad impact as students were able to apply the understanding they built in that narrow focus to other aspects of the curriculum. The data teams also began to see better differentiation through the process of determining which students had which kinds of needs, so all students became more successful than they had previously been.

Landmark Elementary saw two main obstacles in their implementation of data teams. The first was the amount of time they had to dedicate to the process. It took a total of eight hours over the course of a month to complete the first full process! As the teams learned how to work together more productively, their meetings became more efficient, and they were eventually able to shift their focus from collecting data to analyzing it.

The second obstacle was the inability to focus continuously on the new initiative of data teams. They would start making positive progress, but then other things got in the way. Budget cuts led to stress, which pulled the focus away from data teams. It was easy to fall back into old habits. The building kept hitting a "cotton candy wall" and bouncing back to its old ways.

I have seen this happen to myself, my collaborative PLC, and my building as a whole. I have struggled with keeping the new techniques taught to me by Learning Sciences International in the forefront of my mind. I know the strategies and can implement them in most lessons, but sometimes I fall into the old routines I've developed, even though I've only had a few years to develop them. I can only imagine how difficult it is for more veteran teachers to change the way they've taught for many years. My PLC also struggles with "staying the course." Similar to what Landmark Elementary teachers did at the beginning of their implementation of data teams, we set goals that end up being unrealistic. When students don't meet those goals, we abandon them in favor of new ones rather than continuing to work toward them. We eventually forget completely about the goals we set early in the semester.

If time had been spent using Howard Gardner's Factors to Change People's Thinking (DuFour, Chapter 8), the teachers might have come to see the data team process as critical to student achievement. When stress levels are high, teachers focus on the essential things they need to help their students succeed and forget about the extra "frills." When data teaming is treated as an essential, it remains part of the focus no matter how much stress is present.

One thing that worked well for Landmark Elementary is that they were able to take the time to develop the strategies that worked best for their students. At my school, we are told which strategies to use in order to be compliant with Schools for Rigor. My PLC is usually a high-functioning team, so I believe we would be able to come up with different strategies, test them, and tweak them until we were seeing outstanding growth from our students. I do feel that we have put a lot of time into learning the Schools for Rigor strategies and implementing them, but as we get better at them, they take less of our time. Landmark Elementary saw similar outcomes in their implementation of data teams.

While my PLC is usually high-functioning, we have most certainly had days where we were not operating to our full potential. This typically happened when our priorities for our meeting were not aligned with each other. I feel that the structure of the data team meeting could help us focus and align our priorities. Having more meetings where we are high-functioning would be great because I leave those meetings feeling inspired and motivated rather than frustrated and burnt out.

The benefit that arose from data teams at Landmark Elementary that I'm most affected by is that teachers and administrators became more accountable to each other and to the students. We want our students to hold themselves and each other accountable for their learning, but we often forget that we need to hold ourselves and our colleagues accountable for helping students succeed at a high level.

Sunday, May 13, 2018

Module 5: Success Story One, Co-op Arts and Humanities Magnet High School

The first success story I read was from the Co-op Arts and Humanities Magnet High School in New Haven. The focus was on creating a culture in the building and among the staff in order to make the Data Teams process successful. Many things about their implementation were positive.

Because they "set the stage" though professional development meetings and beginning the process at the level of administration and teacher leaders, they were able to see improvement and model the process for other teachers. Embedding this process from the top down helped promote a culture of utilizing data analysis within the school. Having a common planning time for teachers in the same department and having one teacher from each of those teams participate in the Teacher Lead Team helped the implementation of the process go smoothly.

There were also a few issues that arose during Co-op's implementation. The arts teachers, especially the music teachers, struggled to embrace data in their discipline. They thought of data as something that had to be tied to numbers, and it is difficult to assign numerical scores to something as subjective as a musical performance. They were able to get past this obstacle and ultimately embrace data when they focused on the specific criteria students must meet in order to be considered proficient. Focusing on criteria also helped students know what was expected of them to move beyond their current level. The school could also have focused on training around forms of assessment. The arts teachers had probably been conducting formative assessments all along that they could have drawn data from, as long as they knew the goals of those assessments.

Another problem that Co-op encountered was that teachers were prone to come up with activities for students to do rather than strategies for teaching that would address the needs they defined. This is something that my PLC struggles with as well. Similar to the teams at Co-op, my PLC has a common planning time at least twice a week. However, too much of our time is focused on activities rather than addressing the needs of students. If we were more deliberate in structuring our meetings with focused questions, we would be able to see an even higher rate of student achievement.

Luckily, the culture within my PLC is amenable to change and trying new things. The school as a whole struggles to maintain that culture, though. We remind ourselves constantly that we are implementing changes because our focus is on "what's best for students." However, there are still plenty of teachers who are hesitant (especially with Schools for Rigor) and, like some of the arts teachers at Co-op, don't believe that the new initiatives will work in their discipline until they are shown how it can specifically work for them. Co-op had a staff that largely embraced the new initiative, but there are enough "resistors" at my school that about half the staff would need to be re-hired in order to get everyone fully on board.

One thing Co-op does to stay successful that I would like to see in my school is offering professional development that is differentiated and continues throughout the school year. A lot of the PD offered at my school is focused only on our most recent initiatives and offered in a "one size fits all" style. This has started to get better as building leaders see the need for differentiation around our old and new initiatives, but we still have a long way to go before we are truly benefiting all staff.

The success story includes "consistent modeling and monitoring" as one of the elements of successful implementation. I think that's the element most important to me because it is so easy to fall back into old routines if we are not held accountable and shown how we are expected to implement new initiatives.

While state testing is by no means the only or best way of showing a school's success, Co-op had large gains in their state testing scores. This shows that data teaming was successful in at least one measurable way. Perhaps this type of evidence would be enough to help get a few more of the resistant teachers agreeing to implement the process of data teams in my school and others that are on the brink of implementation.

Wednesday, May 9, 2018

Module 3: Moving Forward with Data Analysis

Assessment is a critical aspect of teaching, but only when it is done correctly. A test given at the end of a unit only serves to assign grades, not to inform teaching. Assessments that guide instruction and help students know how to progress are the critical ones.

Tomlinson suggests that the best ways to make assessments useful are to assess before teaching to know what students know going into a topic, offer appropriate choices so that students have a variety of ways to show their knowledge, provide feedback early and often so students know how to move forward, and encourage self-assessment and reflection so students know how they're doing.

Personally, I don't place as much value on assessing before teaching as the author does. In my courses, the topics share common background knowledge and typically build off one another, so I usually know students' levels of understanding without a separate pre-assessment. At most, I would give a pre-assessment toward the beginning of each semester. I have also found that assessments out of context from what I've been covering in class stress students out and cause test anxiety, even when I emphasize that they will not be graded, so they don't give me an accurate representation of student knowledge anyway.

I do agree with Tomlinson about the importance of feedback after assessments. When students are given feedback, I have seen them feel empowered to fix mistakes and to deepen their understanding because they know where their strengths and deficiencies are. Providing more feedback also helps spur students to self-assess.

Observation is the assessment format I find easiest to give feedback for, which is one of the reasons it is my most frequently used type of assessment. I also like that I get information back quickly, which allows me to alter my lesson on the spot if needed, especially because an informal interview makes it much easier to determine what students know versus what misconceptions they may have. Observation requires minimal planning, which is good since it happens multiple times in a single class. For my math classes, the planning that is required is mostly creating problems (process tasks) that students will need to solve. The primary issue with observation is that it is harder to track progress in a way that could be shown to parents, administration, and other teachers. Being part of a School for Rigor has reduced this issue greatly. We use a tracker that is specifically set up to record the results of observation, and it automatically charts those results in pie graphs.
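The tracker does the charting for us, but the underlying idea is simple enough to sketch. Here is a minimal illustration in Python with made-up stoplight-cup data (this is my own sketch of the concept, not how the actual tracker works): tally how many students landed at each level during an observation, then chart the tallies as a pie graph.

    from collections import Counter
    import matplotlib.pyplot as plt

    # Made-up observation results from one round of whiteboard problems;
    # each entry is the stoplight level recorded for one student.
    observations = [
        "At Target", "At Target", "Progressing Toward",
        "Not Meeting", "Progressing Toward", "At Target",
    ]

    # Tally how many students landed at each level.
    counts = Counter(observations)

    # Chart the tallies as a pie graph, like the tracker does automatically.
    plt.pie(list(counts.values()), labels=list(counts.keys()), autopct="%1.0f%%")
    plt.title("Observation Results (hypothetical class)")
    plt.show()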

I would love to explore having students use the feedback I provide in a more formal way. If I were more purposeful about students self-assessing, they would be able to internalize the success criteria so that they can analyze errors and check their own work as they go. I like the idea of assessment conferences because they could teach students to advocate for themselves rather than merely accepting the score I give them without thinking about the reasons behind it.

Using established criteria helps students know exactly what is expected of them and would help them self-assess. It is also a crucial aspect of assessing diverse students. If a teacher compares a student's assessment results to those of other students, it creates a mindset that someone has to "lose" rather than one in which all students are capable. Additionally, diverse students benefit from a wider range of tasks with multiple approaches, so that each student has the chance to show their understanding of the content in the way that works best for them.

Some students will benefit from modifications to assessments. Students with disabilities and English Language Learners both benefit from extended time. The extended time gives ELL students extra thinking time to process what they are being asked to do, and students with disabilities could have breaks built into their extended time.

Diverse materials can help diverse students as well. It may be useful for students with disabilities to have multiple ways to respond to assessments. This could be achieved by offering multiple approaches to the assessment: one approach could be paper and pencil while another uses a computer. ELL students could take advantage of dictionaries to help translate, along with a variety of visual materials, so that they aren't hindered by vocabulary that isn't essential to what's being assessed.

These accommodations help assessments achieve their goal: to tell us what the students know, give the teacher and student feedback, and ultimately guide instruction.

Wednesday, May 2, 2018

Module 2: Evolution of Data Teams

As a more "left-brained" thinker, data excites me. The numbers automatically create connections in my mind, and I can pick out meaningful patterns. However, not everybody is as enthusiastic about using data as I am.

One obstacle teachers face when it comes to data is that they don't find it useful or think that it doesn't matter. In my experience, this is largely because the data set seems too abstract. Teachers could benefit from being trained more explicitly in how to interpret data and then make use of it. When time allows, I could talk to my administration about scheduling a PD session that I could lead on how to interpret data and represent it in a way that makes it more meaningful. In chapter 7 of Learning by Doing, DuFour points out that "providing evidence of results is one of the most effective ways to win the support of resistors." If we can help teachers collect the right kinds of data and then interpret it, it is likely that they will be more willing to "jump on board" with new initiatives, including the creation of Data Teams.
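To make "interpret and represent" concrete, here is the kind of small example such a session could start with. The section names, scores, and proficiency cutoff below are all made up for illustration; the point is that a few lines of Python can turn a raw list of results into numbers a team can actually discuss.

    import statistics

    # Made-up common-assessment scores for two sections of the same course.
    scores = {
        "Section A": [72, 85, 90, 66, 88, 79],
        "Section B": [81, 77, 95, 70, 84, 92],
    }

    CUTOFF = 80  # hypothetical proficiency cutoff for this illustration

    for section, results in scores.items():
        mean = statistics.mean(results)
        proficient = sum(1 for s in results if s >= CUTOFF)
        # A raw list becomes two numbers that are easy to act on.
        print(f"{section}: mean {mean:.1f}, "
              f"{proficient / len(results):.0%} at or above {CUTOFF}")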

Another excuse I've heard from teachers about why they don't take the time to analyze data is that there are too many factors in place that are outside of the teacher's control. I agree that there is a lot contributing to the success (or failure) of students, but data can actually help us see which subsets of students are thriving in different environments. It's our job as teachers to help ALL students succeed, and using data to find the things that work for different students is a way of accomplishing that.

Perhaps the most common reason I hear for teachers not wanting to use comparative data is that "comparative" is far too often confused with "evaluative." In a society full of competition, it's hard not to see the teacher with the best data as "winning" and the teacher with the worst data as "losing." I have seen veteran teachers get defensive when they see that a younger teacher has gotten better results. I've heard "I'm just tougher on students, so I score more harshly" on several occasions, even though we practice collaborative grading. These teachers are fearful that if they aren't getting better results, they'll be pushed out, and they have anecdotal evidence that it could happen.

The question is: how will those teachers be convinced that it's okay to not be the best? DuFour addresses this concern by saying "there will always be a teacher with the lowest results on any given common assessment, just as there will always be 50% of the students in the bottom half of the graduating class." I don't think that's enough to convince the teachers who are fearful of losing their jobs. It may help if care were taken to keep data anonymous at higher levels. After all, the people who benefit from having names attached to data are the teachers themselves, not the administration or the district. I would also like to see people with more power put the focus on individual growth and a willingness to put in the work to achieve that growth. If less effective teachers must be pushed out, let it be the teachers who refuse to do what's best for the school as a whole rather than those who just need more coaching.

The administration in my building communicates frequently that the data they collect during walk-throughs is not evaluative, but not everyone believes them. The culture of the school plays a big part in this. Many teachers feel wary about decisions made by administration because there has rarely been an effective process for building consensus around new initiatives.

Personally, I feel that many staff members at my school would appreciate it if the administration brought new initiatives to us using the process DuFour outlines in chapter 7 of Learning by Doing. Specifically, I think that if there were a focus on building shared knowledge, the more stubborn staff members would be more willing to listen to what was being presented. They might listen more intently in order to find ways to support their rejection, but it would be a foot in the door. Some administrators seem to want to avoid conflict, so they simply tell us how it's going to be. However, everyone involved needs to "recognize that conflicts are more productive when members have found common ground on major issues" (DuFour, chapter 8) so that we can re-focus on what matters: the students. A shared purpose can help build trust and a positive climate, which can lead to more people being willing to share their views. When more views are shared, more creative solutions can arise.

Although the building as a whole falls in the developing stage, my PLC is easily within the sustaining stage of responding to conflict. Visitors to our PLC are taken aback by how much arguing happens, but because it's a safe space with high levels of trust, we are able to use those arguments to arrive at solutions that are even better than any of us had thought of before. On several occasions, we have disagreed about which strategy would be best for teaching specific content. Two of us will end up teaching it one way while the other two teach it another way. Afterward, we come together to compare the results and make note of what worked better so that we can use the more successful strategy in the future. My PLC has been told by multiple observers that we are one of the most positively productive teams in the building, and we get asked how we do it. I strongly agree with DuFour that "the real strength of a PLC is determined by the response to disagreements ... that inevitably occur."

I have become very close with my PLC, and we work well together, so when I heard that we are moving toward data teams instead of PLCs, I was wary. When I learned that a data team was basically a PLC with more structure and a stronger focus on results, my wariness quickly turned to excitement. We have already cemented our skills in collaboration and inquiry, but becoming a data team means that the meetings we already have regularly will be more focused and driven by the actions we as teachers will take and the results those actions get.

In order to move from progressing to sustaining in data teams, my team will need to focus on how to "make midcourse corrections and celebrate short-term wins" (McNulty). Currently, we only discuss specific data after a unit is over, so our adjustments don't take place until the following year.

I am excited to be able to focus more intentionally on data with my team as we move forward, and I hope our building as a whole will be able to move to this stage in a purposeful way as well.

Thursday, April 26, 2018

Module 1: Exploring Assessment

One of the first things I learned in my teacher prep program about actually running a classroom was the difference between formative and summative assessments. It was drilled into my mind that formative assessments happened during instruction and that summative assessments happened after instruction. At the time, I didn't realize why it mattered that those were two different things. Now I understand that their differences lie more in the reason an assessment is given than in when it is given.

As a teacher, it feels like the mandated summative assessments are more of a hindrance than a help, especially during the spring semester. My instruction and the time spent working on the actual curriculum get interrupted by the ACT, Iowa Assessments, and MAP testing. The state, nation, district, and colleges might all look at the data from these and use them to compare my students to others, but I don't use them in a way that improves my classroom instruction or the students' understanding of Algebra 2 material. My own summative assessments, the end-of-unit tests created at the district level, do inform me of changes that need to be made in the future, students who need additional instruction, and comparisons to my colleagues, but they still feel too frequent and time-consuming.

Examining the balance between summative and formative assessments has helped put these into perspective. The ACT and Iowa Assessments are each given once a year, MAP testing happens three times a year, and end-of-unit tests happen three times per semester. That adds up to 11 summative assessments in a year (1 + 1 + 3 + 6), not including re-take opportunities. However, I could easily give that many formative assessments in a single week.

Examples of a formative assessment question and a summative assessment question are shown below.
[Image: In-Class Practice Problem - Formative]

[Image: Test Question - Summative]
The difference is not in the problem. They are both testing knowledge of the same standard (and are actually the same problem with one minor change). The difference is in the use of the problem.

When students complete problems on their whiteboards, they get immediate feedback from me as well as from their peers. Techniques I've learned through Schools for Rigor have made this more useful to students. I give students success criteria, and we work through them to make sure everyone agrees on what it means to be successful. Then I ask students to monitor their own work against the criteria, and once they feel more comfortable using them, they monitor a partner's work. Meanwhile, I am able to monitor through observation and give feedback using "stoplight cups" that correspond to whether a student is at "Not Meeting," "Progressing Toward," or "At Target." Thanks to the opportunity for practice and immediate feedback, students are able to detect and fix errors. By checking the work of their peers, they get better at looking for mistakes in their own work.

The test question, on the other hand, is meant primarily to check a student's understanding and assign a grade. Students will get feedback in the form of an SRG grade eventually, but not in a way that allows for revision of understanding.

Garrison and Ehringhaus describe five strategies that can aid formative assessment. Four of them (criteria and goal setting, observations, questioning, and self and peer assessment) have been covered extensively in my Schools for Rigor training. The remaining strategy is one that I am very interested in pursuing: student record keeping.

The article suggests having students track their learning. In the past, I have shown students a collection of all the new pieces of information they have learned, and they were astonished to see such a vast compilation of knowledge. If they had been tracking it themselves, they would have seen the "small wins" they were making along the way, even when they were struggling.

I plan to ask students to track their progress through the last unit of the year. Every exit ticket, whiteboard problem, and partner practice problem gives students a sense of exactly where they are and what they still need to work on.
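Students would most likely keep this record on paper, but to show the shape of the log I have in mind, here is a sketch in Python with made-up entries (the learning target, dates, and levels are all hypothetical): a running record per learning target that makes the trail of "small wins" visible.

    # Made-up record-keeping entries: (assessment, stoplight level),
    # grouped under the learning target a student is tracking.
    log = {
        "Graphing logarithmic functions": [
            ("Exit ticket, day 1", "Not Meeting"),
            ("Whiteboard problems, day 3", "Progressing Toward"),
            ("Partner practice, day 4", "At Target"),
        ],
    }

    # Reading the log back shows the progression, even after struggles.
    for target, entries in log.items():
        print(target)
        for assessment, level in entries:
            print(f"  {assessment}: {level}")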
