Finals week this past semester marked the third year my students took an identical final exam. Capturing each cohort's performance, and comparing performance across years, helps inform my instruction; using the same exam facilitates comparisons between teaching methods.

As an example, each year I have taught this course, I approached content, methods, and student learning differently. My first year, I employed a more traditional direct instruction model, often making use of the textbook and a student workbook. I had neither a wealth of group-worthy activities to use nor any time to create them, as I had three different preps and BTSA. Year 2 started off with a lengthier reteaching of arithmetic properties and rules along with pre-algebra content: two months of reteaching, in fact. During this time, I emphasized old-school word problems as well, hoping to acclimate students so we could engage with them more throughout the year; that did not materialize. After the two-month reteaching phase, I mixed in elements of discovery-based group activities, such as discovering linear relationships through pile patterns.

This year, with the formal onset of Common Core in our district, I jumped headfirst into the new mathematical practices and our district-defined curriculum. In my opinion, it was a complete disaster. More on that later.

## Data Deluge

The following series of tables and graphs illustrates aspects of each cohort as compared to one another, as well as, in one case, to all algebra 1 students in California.

In some ways, student outcomes do not seem to vary much from year to year, especially when normalized by incoming CST scores. At the same time, a cursory analysis of certain ratios of cohort results suggests that movement away from a direct instruction method may have lessened outcomes. Further analysis is required, however; I may address this in a later post.

## Key Readiness Test and Final Exam Statistics

The 2013-2014 cohort ended up with results slightly stronger than the 2012-2013 cohort on most measures. To date, the 2011-2012 cohort is the strongest overall.

## Final Exam versus Readiness Test Scatterplots

The scatterplots are quite busy. Yet, they give a glimpse into the entirety of each cohort from the weakest to the strongest outcome as well as possible learning gains in the semester. The latter point is entirely speculation as the relationship between the readiness test and the final exam has not been characterized. At the same time, there is a smidgen of reasonableness to the claim, especially when considering the extent to which content coverage during the year overlapped between the assessments.

## Final Exam Score Histograms

Like the scatterplots, the histograms above provide a richer context for comparing the cohorts. While the average score for the 2013-2014 cohort exceeded that of the prior year's cohort, their positions are reversed for the median scores.

## Incoming Students CA Algebra 1 CST Scores

Average CST scores for the 2013-2014 cohort are nearly identical to that of the 2012-2013 cohort at 275 out of 600. The 2011-2012 cohort scored higher at 290 out of 600. Regardless, each cohort has a Below Basic rating on the CST, on average.

Comparing the cohorts to all students in California who took the algebra 1 CST, as in the figure above, shows that my students' CST scores are more concentrated in the Below Basic range for all three years, with the 2011-2012 cohort possessing the largest share of Basic scores and the 2013-2014 cohort the smallest.

The following figure compares the distributions of my students' ratings on the final exam, using the scale shown in the figure, to that of all California students on the CST, using the algebra 1 CST scale. My intent with this graphic was to see to what extent the distribution of my students' scores on the final exam mirrored the distribution of performance ratings on the algebra 1 CST. The level of precision in this comparison is at the ballpark level; that is, my students' scores are in the ballpark of what all students attained on a similarly rigorous exam.

As I opined last year, the specific conclusions one might make from this set of data are limited and tenuous. Nonetheless, I found it worthwhile to invest the time in the data creation, analysis, and discussion.

Additionally, there turned out to be an extremely beneficial result for my current students this year, too. During this analysis, I decided I needed to apply a transformation to students' scores in the grade book, as I no longer use the same grade scale in the course. This raised scores at the lowest end by 15 percentage points, with the boost decreasing by 5 percentage points per grade level until the scales aligned.
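The grade-scale adjustment described above can be sketched as a simple piecewise bump. The band cut points used here (60/70/80) are hypothetical, chosen only for illustration, since the post does not state the actual boundaries of either grade scale:

```python
def adjust_score(score):
    """Transform a final-exam percentage from the old grade scale
    toward the current one: the lowest band rises by 15 percentage
    points, with the boost shrinking by 5 points per grade band
    until the scales align. Cut points below are assumptions."""
    if score < 60:        # assumed F band: +15
        return score + 15
    elif score < 70:      # assumed D band: +10
        return score + 10
    elif score < 80:      # assumed C band: +5
        return score + 5
    else:                 # scales assumed already aligned at B and above
        return score
```

For example, under these assumed cut points a raw 50 becomes 65, while an 85 is unchanged.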


I’m curious as to what you have learned from all the data analysis that you can apply in the future that you feel is honestly likely to improve outcomes.


In some ways, as the data seem to show, outcomes on the same assessment may not change significantly for similar student characteristics, even if methods change. At the same time, it is possible that outcomes on certain topics improved, which is the focus of a deeper analysis yet to be performed. Specifically, I spent considerably more time on graphs of linear equations and solving systems of equations this year than in the prior two years. I am curious whether performance on those questions improved this year.

This data and analysis served primarily as a quality check and record for future, deeper analysis.


