Inspired by a colleague’s presentation during the KIST ‘Teach together; learn together’ professional development event, I took a more formal approach to the impact cycle than I have in the past.
First, I copied the raw data from my students’ diagnostic assessments into an Excel spreadsheet. I added a row at the bottom to show the average result for each test item as a percentage, then used conditional formatting to give a visual overview of the data.
This allowed me to identify a general area of need: Reading. Then I simply copied and pasted the test items with average results below 50%, along with the corresponding learning outcomes indicated in the test documentation.
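For illustration, here is a minimal sketch of that spreadsheet step in Python. The item names and scores are invented for the example, not the actual assessment data.

```python
import statistics

# Each entry maps a (hypothetical) test item to item-level scores,
# one per student: 1 = correct, 0 = incorrect.
scores = {
    "Q1_main_idea":       [1, 0, 1, 1, 0, 1],
    "Q2_authors_purpose": [0, 0, 1, 0, 0, 1],
    "Q3_vocabulary":      [1, 1, 1, 0, 1, 1],
}

# Mirror the spreadsheet's bottom row: per-item average as a percentage,
# with a flag standing in for the conditional-formatting highlight.
for item, results in scores.items():
    average = statistics.mean(results) * 100
    flag = "  <-- area of need" if average < 50 else ""
    print(f"{item}: {average:.0f}%{flag}")
```

With these made-up numbers, only the author’s-purpose item falls below the 50% threshold, which is the kind of pattern that surfaced in the real data.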
The common weak thread, in my analysis, can be expressed by the verbs ‘describe’ and ‘explain’. Surprisingly, in Bloom’s Taxonomy, these terms are associated with Knowledge and Comprehension, or ‘lower’ order thinking.
One issue involves the outcomes related to author’s purpose. Put bluntly, there is no such learning outcome in the standards for Grade 4 at KIST. The students are being assessed in a high-stakes manner on learning outcomes that the school doesn’t explicitly teach. I, of course, can add standards about author’s purpose to my working documents. Indeed, that is the purpose of this impact intervention. However, it’s clear that teachers’ voices are needed in the development of the school’s assessment and planning documents to ensure that they are relevant and in alignment with one another.
My plan for having a measurable impact on student learning is to ensure that students are exposed to the idea of author’s purpose and explore it in a variety of ways in our guided reading sessions. This can be done through direct mini-lessons and reinforced by revisiting the concept whenever we encounter a novel or remarkable example in the texts we explore.
Another approach would be through précis writing and close reading of a master text. For this, the grade four team selected an abridged version of Swiss Family Robinson to be integrated with our unit of inquiry in April and May. This may be our students’ first opportunity to read a novel together. The text uses rich vocabulary and imagery, so I believe there will be many opportunities to analyze and summarize selections and to hypothesize about Mr Wyss’s purpose for various literary choices.
To avoid over-assessing my students, I plan to use the end-of-year English Diagnostic Assessment, of the same type as the one at the start of the year, to measure impact. Throughout the school year, I have assessed and gathered data on a wide variety of learning outcomes informally during guided reading sessions, but this will be the only formal assessment of the learning outcome of author’s purpose.
At KIST, students complete two important diagnostic assessments at the beginning of the school year. One is an academic assessment from the United Kingdom Standards and Testing Agency. The other is a Student Survey, which allows the learning community to evaluate our classroom environment.
On the academic tests, only 12% of my class scored ‘just below expectations’, and only 8% scored that low in both reading and math. That result indicated to me that academics were an area of strength and that interventions would be needed only on a limited, individual basis. With differentiation strategies in place, a classroom culture that cultivates peer support and collaboration would help increase the depth and quality of learning.
Turning attention toward the student survey, I identified two major areas of concern that could potentially derail academic progress and achievement.
This post will focus on an action plan to improve classroom climate and morale with the goal of increasing academic achievement through increased enthusiasm and positive engagement.
As detailed in the post Elementary mindfulness, daily meditation is one strategy that could contribute to a more reflective classroom climate. However, the negative survey results showed a need for a targeted intervention with the goal of helping students become more Reflective.
Another important opportunity for reflection is our weekly Community Circle. To help my class understand the importance of reflecting together, we elevated Community Circle to a top priority. On top of never cancelling or shortening our sessions, I devised an evaluation system by which active participation results in a ‘meeting expectations’ grade in Listening and Speaking. Knowing that their contributions as members of a community were being monitored, students practiced more intent listening and more thoughtful speaking.
I set a goal to award at least one IB Learner Profile Award or PYP Attitude Certificate to each student as quickly as their actions and choices would allow. The result was over 100 awards given, and every student received at least one. To prompt parent encouragement, every award was accompanied by an email to the student’s parents with a photo of them receiving it and a description of how it was earned.
The importance of being reflective
The most precise tool in this plan was an opportunity for students to reflect on the way they listen and speak to each other. After collaborating with my grade-level team on the questions, the result was the G4B Daily kindness and respect reflection form. Completing the form was assigned as home learning every school day for three months. My assumption was that over time, regular reflection would increase students’ mindfulness and help them improve their communication and interpersonal interactions.
The form was submitted over 800 times, and the results showed a satisfying upward trend. A short-term intervention might produce more dramatic results, but it would not necessarily produce a lasting outcome. These data demonstrate collective and gradual improvement. They also show that students were generally more critical of themselves than of the class as a whole, and that they each improved in relation to their peers.
The most encouraging results were in the domain of listening. The class showed the greatest improvement in listening actively and intently, two skills with a clear connection to academic achievement.
High risk cases
Using the academic diagnostic assessment results to identify ‘high risk’ students, I made a point of checking their reflections occasionally and conferencing with them to increase awareness of their own behavior.
The first case is a student well known for attention challenges and socially awkward patterns of behavior, as well as ‘just below expectations’ results on at least one diagnostic assessment.
Interestingly, the results clearly converge, indicating that this student believes that their behavior has improved to more closely match their perception of the class. I have observed this to be true anecdotally, as well, as students in the class have taken responsibility for helping this student to interact more productively and follow directions more consistently.
Another ‘at risk’ student took a very different journey. This may be the only example of a student rating the class lower than themselves at the beginning of the survey.
There are students who could reasonably evaluate their own behavior as being better than the class as a whole. Unfortunately, this student is not one of them. We discussed their reflections in detail, and there were many instances when I pointed out that choices, ranging from playing with a pencil case to shouting over group members during discussions, were examples of poor listening. The result seems to be increased awareness of their own actions, resulting in a dramatic drop in scores, followed by improvements illustrated by increases in some areas.
Another student who is not achieving academically has also had several issues outside of class related to inappropriate use of language. This is another case in which these reflections may have served as a ‘reality check’.
What interests me most about this case is which areas this student felt they were doing well in, compared with their evaluation of the class. At first, two speaking categories were rated higher than the class; by the end, those scores converged while the remaining areas dropped.
Are results like these desirable? If the goal is increased awareness, and there is a clear problem, then reflections that become gradually more negative could show increased awareness or acceptance of the problem.
Some students were not ‘at risk’ based on their diagnostic assessments, but warrant special attention for other reasons. The next student is well known, if not notorious, for being at the center of most episodes of misbehavior and interpersonal drama in our class.
Interestingly, they seem to accurately assess that their behavior is less kind and respectful than the class as a group. Yet, I am struck by the ambiguity of the self reflections. There doesn’t appear to be any strong trend and the averages of the scores simply converge at 3.5 at the end. This is a case that raises more questions than answers, the most important being whether the student is very aware of their choices, but simply failed to make or observe any progress. It’s also possible that these results could indicate a deep lack of mindfulness about the student’s own actions and interactions with others.
It is possible that a differently designed reflection tool could reveal more insights into this case.
The following graphs are included simply because they look fascinating. The first shows a strange consistency, yet also a clear trend of improvement.
Next, here’s another example of consistency based on category and gradual progress.
At the end of the three months, I asked the students to answer the original questions of concern: ‘Students are respectful to each other in my class.’ and ‘Students behave appropriately in my class.’ This survey was random, like the initial one.
The results improved, and much more dramatically than I expected.
There has been a fundamental shift in behavior and the perception of behavior in my class since the beginning of the school year. While it is impossible to attribute the change to any one variable, it is safe to say that all efforts to increase kindness and respect had a cumulative effect.
Data has been an undercurrent in my teaching since my first classroom in 2007. Of course, in that year, I struggled to gather data and there was virtually no chance of utilizing much of it to inform and enrich instructional planning. For good or ill, data is not essential to the survival of a first year teacher.
Each year after, I slowly improved, including a variety of experiments like the one shared in the post Student Empowerment | COETAIL final project. I tried different forms, organizers, notebooks, and so on, until finally unveiling an integrated digital system last year. I shared it as a presenter at the GAFE Summit 2016 in Kobe, Japan, and used it for the school year to publish students’ ongoing assessment data and other key information, such as website usernames and passwords, directly to them as web pages. Yet after celebrating and discussing the system, I felt that it was terribly unsatisfying.
Inspiration came in the form of media such as Jack Norris’ keynote presentation from Strata + Hadoop World in San Francisco, Let’s Get Real: Acting on Data in Real Time, embedded below.
The concept of ‘data agility’ through converged data and processing appealed to me because I sought a tool that would organize all assessment data in a way that could be searched, shared, and analyzed. Over the years I had been introduced to many ‘tracking systems’, only to discover that they were utterly unmanageable at scale. Ticking boxes on scope and sequence documents or highlighting learning objectives almost arbitrarily seemed like a show at best. In fact, a colleague who shared such a system with me admitted that at the end of a term, due to a lack of hard data, he would simply choose outcomes to highlight on every student’s document regardless of their actual progress or learning. To quote Mr Norris, I wanted my data to ‘get real’.
‘Small data observes the details or small clues that uncover large trends. The idea is that by honing in on the elements that make up relationships and narratives in schools, education can be enriched.’ The Edvocate
What I wanted to do was bring transparency to the relationships between myself, students, parents, and administrators. Further reading within the big data and data science trends, like Data Quality Should Be Everyone’s Job by Thomas C Redman, directed my attention toward the purpose for the data. Before data is collected, it should already have a purpose, and that purpose dictates the design of the collection, publishing, and analysis tools.
The next piece of the design puzzle was my school’s Assessment Handbook. In it were the categories, criteria, and descriptors on top of which my system would function.
Utilizing a system of Google Sheets, data is entered and student progress can be viewed in near real time, depending on the efficiency of my data entry. As we began using the system, I shared a video, Tour of your data book, embedded below, which illustrates the details of the user experience much better than I can describe in words.
This system has been remarkably effective and unlike last year, I only plan to make minor tweaks, especially to the user interface. Feedback from students and parents revealed, as I expected, that there are too many graphs and that it’s difficult to know which are more or less important.
Another feature I plan to add is a Google Form that mirrors the data entry document, which would allow teaching assistants, specialists, and even parents or students themselves to contribute data to the system.
One admirable feature of professional development at KIST is the annual Impact on Learning study. Teachers design a data driven experiment based on a pedagogical approach or strategy and then analyze the data to reflect on the efficacy of that aspect of their teaching. To start, I formulated a question and answer dialogue:
On which group of students do I want to have the greatest impact?
All of them. Inclusive practices and thoughtfully designed learning experiences which emphasize student choice and voice should provide opportunities for all students to excel.
Which group of students is most difficult to reach with inclusive practices and learning experiences that emphasize student choice and voice?
Students who are reluctant to share their ideas in class or participate actively in learning engagements are the most difficult to reach.
Why don’t those students participate?
The reasons they don’t participate are as diverse as the people themselves. However, if they don’t participate now, they likely didn’t before either, which means their opportunities for practice have been limited, possibly severely.
Often, students (and people in general) with little experience speaking in a group feel ashamed of their lack of fluency. Lack of confidence leads them to withdraw further, causing them to practice even less. I have been tempted in the past to ‘call out’ reluctant students, but Alfie Kohn’s article, ‘Your Hand’s Not Raised? Too Bad: I’m Calling on You Anyway’, provides needed perspective on this issue. When done improperly or insensitively, calling on these students might do more harm than good. Being fairly introverted myself, I sympathize with many people’s preference to remain in the shadows of a crowd, but nine-year-old introverts, preferences aside, need to practice public speaking in a safe environment.
Being introverted doesn’t make you averse to collaboration. Everyone should practice social skills. https://t.co/P87RzlBL68
What I needed was a strategy to encourage the students to grow as courageous Communicators by sharing their ideas with the whole class.
My methodology for gathering data is simple. At various times in class, I propose an open ended question. For example, I might ask for interpretations of an idiom, impressions of an image, or opinions about a famous quote. That there are no correct or incorrect answers is made clear to students, as is the fact that ‘I don’t know’ is an acceptable response. Students may also ‘pass’. Sometimes the provocations are directly connected to our unit of inquiry, sometimes not. Using a deck of laminated cards with the students’ names written on them, I ensure that every student has an equal opportunity to speak. My response to every contribution is ‘thank you’, and I very rarely paraphrase or ask clarifying questions in this context. To students who ‘pass’, I simply respond with ‘OK’.
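The card-deck routine amounts to drawing names without replacement. Here is a quick sketch of the same idea in Python, with invented names standing in for my students:

```python
import random

# Digital stand-in for the deck of laminated name cards (names invented):
# shuffle once per round, then draw without replacement so every student
# gets exactly one opportunity to respond.
students = ["Aki", "Ben", "Mei", "Rio", "Sora"]

def draw_round(names):
    deck = list(names)
    random.shuffle(deck)      # mix the deck
    while deck:
        yield deck.pop()      # each name comes up exactly once

turns = list(draw_round(students))
assert sorted(turns) == sorted(students)  # everyone had a turn, no repeats
```

Drawing without replacement is what makes the opportunity equal: unlike calling on raised hands, no one can be skipped and no one can be picked twice in a round.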
The cards are separated into two categories, and the data on who contributed an answer and who did not are then entered into a spreadsheet.
The raw data is relatively easy to process to produce interesting graphs.
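As a sketch of that processing step (session labels, names, and responses are all invented), the per-session participation rate is simply the share of students who contributed:

```python
# Each session maps student names to whether they contributed an answer
# (True) or passed (False). All values here are illustrative.
sessions = {
    "Mon": {"Aki": True, "Ben": False, "Mei": True, "Rio": True},
    "Fri": {"Aki": True, "Ben": True, "Mei": False, "Rio": True},
}

# Participation rate per session: contributors divided by students asked.
for session, responses in sessions.items():
    rate = 100 * sum(responses.values()) / len(responses)
    print(f"{session}: {rate:.0f}% participation")
```

The same table, filtered by subgroup (diagnostic results, guided reading group, and so on), produces the graphs discussed below.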
Some students always talk and some never talk by default. Filtering out those students makes the graph more readable, but still not very revealing.
Referring to diagnostic assessment data in Reading from the beginning of the school year, I included only students who scored ‘just below expectations’ or ‘below expectations’.
Next, I filtered for students who scored ‘just below expectations’ or ‘below expectations’ on the August diagnostic assessment in Writing. Again, this graph doesn’t instantly reveal anything other than a general upward trend in participation. Next, I wanted to explore a possible correlation between this exercise and improvement. The next graph shows students with more than 10% improvement on ongoing informal and formal assessments in Reading from Q1 to Q3, followed by students with more than 10% improvement on ongoing assessments in Writing from Q1 to Q3.
This is the first graph which indicates a clear correlation. With only two exceptions, students whose writing has improved are also increasing their participation. Since the objective is to improve language skills, I tried including only students who consistently achieve below 80% on ongoing assessment in English language.
Compare that to the consistently strong achievers in language. It is interesting that consistently higher performers seem to participate at random, while consistently lower performers are participating progressively more. If only for the purpose of having more data visualization, I also graphed the members of the most advanced guided reading group, and then the least advanced guided reading group. It’s exciting to see that this activity is impacting the exact group of students it was designed to benefit. When the class completes its end-of-year diagnostic assessment in Language, I expect to see similar improvements among students who have gained confidence as communicators through this simple activity. Finally, here is the whole-class average.
Rather than analyzing individual students, this graph reveals something I hadn’t expected. If I compare the number of opportunities to speak with the average rate of participation, there is a stark correlation.

Number of data points (% participation):
October: 4 in one week (61%)
November: 16 (61%)
December: 3 (41%)
January: 7 (60%)
February: 7 (74%)
March: 7 (71%)

It would seem that the more we do this activity, the more participation there is. Thinking of a student trying to build confidence, it makes perfect sense. If one hesitates, one loses an opportunity. However, missing a chance might be just the motivation one needs to seize the next one. If that next opportunity comes sooner rather than later, one is more likely to take it.

And so, the data comes full circle, from thinking of individual students back to individual students. Peter Gow’s post, The Data Challenge for Schools – What Problem Are You Trying to Solve?, reminds me that the importance of data is not about averages; it’s about outliers. The greatest impact can often be made where there are cracks or gaps in the data. What is important is being intentional when gathering data so that when it is organized and interpreted, it answers the initial question. It’s also important to remember that while ‘data’ and ‘gut’ are not the same, as Doug Johnson notes in his post, Data or gut?, through investment in time and training, it is possible to align the gut more precisely through data.