Doug Lemov recommended another book called *Driven by Data*, which he seemed to regard as the counterpart to his own volume of 49 techniques. In his view, it made no sense to use his 49 techniques if you were not also teaching rigorous material. So I decided to read this other book, which is by Paul Bambrick-Santoyo. I'll put in a link when I have Internet service (I'm writing at my favorite café, but their Internet seems to be down).

The essence of Bambrick-Santoyo's argument is simple. Businesses use data to find and keep customers. Schools have customers too: students in the market for colleges, universities and jobs. Without the right grades on the right tests, their classroom grades don't matter much. School is the place where one learns the standardized test material, but that material isn't necessarily taught with sufficient rigor. If you become the principal of a school that's been failing these tests for years, your first job is usually to figure out why your school is failing, and how to make it NOT fail.

What Bambrick-Santoyo found, and did, was to put data at the core of his school's teaching process. He and his teaching staff first looked at the standardized tests their students were supposed to pass. They evaluated the mathematics problems based on the skills a student had to master to get each question correct. And then they took a long hard look at their scope and sequence.

I'm guessing at this point, because the book glosses over it. But they must have found that the math teachers were not asking hard enough questions, and the math teachers saw it right away. One sample SAT problem in the book shows a rectangle, with the area given and the sides expressed as proportions of x; the question was to find the value of x. The math problem the teachers taught, on the other hand, was the quadratic equation that falls out of the SAT question partway through. The school never taught the geometry skills, at the appropriate levels, that would enable their students to make the cognitive leap from geometry to algebra and back again (the quadratic would generate two solutions, -1 and 6, but you'd have to know that a rectangle in real space can't have a side -1 meters long).
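To make that concrete, here is a hypothetical problem of the same shape (the numbers are my own, chosen to reproduce the two solutions mentioned above, not taken from the book): a rectangle has area 6 square meters and sides of x and x − 5 meters.

```latex
x(x - 5) = 6
\;\Longrightarrow\;
x^2 - 5x - 6 = 0
\;\Longrightarrow\;
(x - 6)(x + 1) = 0
\;\Longrightarrow\;
x = 6 \text{ or } x = -1
```

Algebra alone yields both roots; only the geometric fact that a side length must be positive rules out x = −1.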

Bambrick-Santoyo and his math colleagues built interim assessments (not tests!) based on the lists they had generated of skills necessary to score well on the standardized tests. Because the assessments belonged to the school, they had no difficulty drilling down through the data to the level of individual questions and individual students. Giving the assessments every six to eight weeks, the school was able to identify which students had which mathematical reasoning skills, and which ones didn't. They were able to teach, reteach, review and progress students to the point where 100% of them were passing the math sections at or above grade level.
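The book doesn't show an implementation, but the drill-down logic can be sketched in a few lines; everything here (the students, the question IDs, the skill tags) is my own invention, not data from the book:

```python
# Hypothetical interim-assessment results: for each student, which
# questions they answered correctly. Each question is tagged with the
# skill it was written to measure.
from collections import defaultdict

question_skill = {
    "q1": "fractions", "q2": "fractions",
    "q3": "quadratics", "q4": "quadratics",
}

results = {
    "Keshawn": {"q1": False, "q2": False, "q3": True, "q4": True},
    "Maria":   {"q1": True,  "q2": True,  "q3": False, "q4": True},
}

def skill_mastery(results, question_skill):
    """Per-student fraction of questions answered correctly, by skill."""
    mastery = {}
    for student, answers in results.items():
        totals = defaultdict(lambda: [0, 0])  # skill -> [correct, asked]
        for q, correct in answers.items():
            skill = question_skill[q]
            totals[skill][0] += int(correct)
            totals[skill][1] += 1
        mastery[student] = {s: c / n for s, (c, n) in totals.items()}
    return mastery

mastery = skill_mastery(results, question_skill)
# The report shows that Keshawn's problem is fractions, not "math":
print(mastery["Keshawn"])  # {'fractions': 0.0, 'quadratics': 1.0}
```

The point of the per-skill breakdown is exactly the drill-down the book describes: a score of 50% on the whole assessment hides the fact that one skill is fully mastered and another fully missing.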

And then it only took another year for the humanities teachers to buy in. Within three years a poverty-stricken, failing school turned itself into one of the finest schools in its state.

Obviously this matters less if you don't think the tests are rigorous enough, or if you think passing tests is pointless because it doesn't teach real-world problem-solving skills, or if you think there's too much testing going on anyway.

But the book’s point is that without rigorous interim assessment, you don’t know what to gloss over in class and what requires your hyperfocused attention. You don’t know what needs review or how to review it.

Bambrick-Santoyo was able to move kids from classroom to classroom and teacher to teacher, because he had the data showing that Keshawn was struggling with fractions and that Mrs. Evanston was the best fractions teacher in the building, and that Keshawn could then go back to Mr. Wasserstein, the best algebra teacher they had. The data-driven school actually had more flexibility, not less, because public data and accountability to the state or national exam raised performance.

Moreover, the public data changed the school’s relationship with its community. Because Keshawn wasn’t failing math. He was failing fractions. And we know how to help him pass fractions now. It’s exactly Hans Rosling’s point in all those TED videos — if you have good data, you can make better decisions. You can push your kids toward a greater degree of success than they might have achieved without it.

And here's what I suspect. Once these schools have six or seven years of quality education backed up by quality data and high scores on standardized tests, the communities they serve are going to be home to an arts-and-performance renaissance. Because the culture of the schools will be centered on academic success as the only option, and the colleges and universities will say, "OK, your kids get great grades… but what about their art portfolios? What about their musical proficiency?" and the schools will go after those programs with the same dedication and focus.

The news coming out of data-driven schools isn't always positive, particularly when it comes to how they treat teachers. It's very much a "get results that are measurable, or else" kind of model. The people who run these schools are math nerds who don't always get along with people like me.

But I find myself considering the ways in which I always despair over performance on final exams — how this kid or that one flubbed this part or that part, or missed this question or that one, or completely failed to answer this whole section. Bambrick-Santoyo’s point is that it doesn’t have to be like that; if I designed my final exam first, and then figured out what skills a kid needed to pass that examination… well. I could test for those skills, because I’d have names for them, and questions that matched those skills, and a way to review them or reteach them.

That’s powerful.

The practices described here are built on the beliefs that the purpose of K-12 schooling is to produce college-ready citizens and that the standardized test is the target measure for education. While there are many who feel that way, I have strong concerns about how that plays out in the real world.

While I strongly believe in the appropriate use of classroom data to inform instruction, I feel it's a limiting approach to propose teaching to the test (or training for the test) when those tests are often not the best measures of learning, and when they only sample the curriculum in a given year.

Much of what is described here is good use of formative and interim assessment data to move students forward. My major gripe is how the accountability industry of public education conflates test outcomes with education itself.

Now, if a school has no viable and coherent curriculum, and there is no vertically aligned scope and sequence, a test on state standards will certainly uncover that, to the benefit of students.

If this becomes the reality, then I will have been proven wrong in my thinking about where the current accountability approach is moving schools. My experience in the first ten years of accountability has been the narrowing of the curriculum and an increase in remediation, to the exclusion of anything that is not tested. As I see it, building something as rich and vital as a complete education on measures from a single assessment of limited validity, one that samples only part of the curriculum, will not lead to a world-class education.

I agree with some of your concerns about the testing industry, and about the nature of testing as the only accountable measure of a school's success. And you may well be right about how the first ten years of data-based instruction have worked out. It hasn't really affected this school much at all. Yet.

It’s a rough problem. Without genuine data, we teachers are just guessing. With data, we risk turning ourselves into yet another system exploiting children. It’s all a rather careful balancing act.