* Important Note: Mehlhorn deleted the Medium posts quoted below after this critique was published. Spielberg took screenshots that are included in his PDF. They were OCR’d so that they could be included as text in this card. The posts that were deleted are https://medium.com/@DmitriMehlhorn/money-is-not-the-problem-facing-american-schools-3c321c9db042#.gk6i4gpqu and https://medium.com/@DmitriMehlhorn/that-time-i-trusted-bruce-baker-of-rutgers-university-e597ddd9bc47#.zbck9olz3.
Spielberg 16 — Ben Spielberg, Research Assistant at the Full Employment Project at the Center on Budget and Policy Priorities, holds a B.S. in Mathematical and Computational Sciences from Stanford University, 2016 (“Errors in school funding pieces that demand transparent correction,” 34justice—a scholarly blog, May 1st, Available Online at https://34justice.files.wordpress.com/2015/10/five-corrections-mehlhorn2.pdf, Accessed 07-04-2017)
5) The Bruce Baker/Eric Hanushek debate on school funding research:
This is where Mehlhorn really goes off the rails; his attack on Baker, especially given all of the errors and misleading writing in his own work, is inexcusable. First, he writes the following:
Twitter, of all things, provided me with news to the contrary in August of 2014. Mark Weber, a sincere reform skeptic and public school teacher in New Jersey, goes by the Twitter handle “Jersey Jazzman,” and is a part-time doctoral student at the Rutgers University Graduate School of Education. Weber pointed me to publications written by one of his professors, a fellow named Bruce Baker. Although I had never previously heard of Baker, Weber was not the only person who recommended him. Ben Spielberg, who graduated from my undergraduate alma mater, told me flatly that Baker’s “research is legitimate.” Spielberg, Weber, and other reform skeptics cited Baker often, and indeed Baker was described by AEI’s Rick Hess as the 40th-most cited education scholar in America. Even better, it seemed that Baker was willing to engage folks I knew to be smart and careful, such as Ulrich Boser at the Center for American Progress and Rebecca Sibilia at EdBuild.
Sounds like he didn’t know much about Baker before I told him Baker’s research was legitimate, right? Wrong. For reference, the Twitter conversation with me that Mehlhorn linked happened on July 31, 2015.
Mehlhorn tells a story in which he is initially wowed by Baker’s research. He writes:
Baker’s papers blew me away. They totally reversed the narrative. For instance, Baker pointed me to a 2012 piece he wrote called “Does Money Matter in Education,” which concluded that school spending is important and impactful for students.
This conclusion was the opposite of the consensus in academia when I had been a student in the 1990s. How had the prior research been so wrong? What had happened in the previous 15 years? Well, Baker cited Northwestern University’s Larry Hedges, who re-reviewed Hanushek’s 1986 survey of evidence using “quality control measures” to exclude some studies and change some interpretations. According to Baker, this settled the matter: “by the early 2000s, the cloud of uncertainty conjured by Hanushek in 1986 had largely lifted in the aftermath of the various, more rigorous studies that followed.”
I was surprised, but frankly relieved. As I wrote in response to Baker at the time, “Thank heavens. Someone who actually talks evidence. Talk soon.” Shortly thereafter, I read another piece from Baker regarding implementation of high-stakes testing, and frankly his analysis was solid. I assumed that this level of analysis was typical of Baker’s work, and was further relieved that a high-profile reform skeptic was taking the time to do careful research. As I wrote to him, “Bruce, your facts & analysis R best I’ve seen on UR side. Wish AFT/NEA pushed you, not smears.” I circulated Baker’s work to elevate that approach.
The quotes above came on August 9, 2014 and August 11, 2014. But then, according to Mehlhorn, he digs deeper and realizes Baker is not to be trusted:
But wait, something smells fishy. The first clues that something was fishy came as I dove deeper into Baker’s body of work. The highly respected Ulrich Boser had written a report on waste and inefficiency in school spending, and Baker had written a rebuttal. Baker’s rebuttal was, as I wrote to him, “More strident, less compelling than UR usual.” I was being delicate; Baker’s rebuttal was full of personal insults and exclamation points. Disappointing for an alleged scholar.
Then, I read a Baker critique of Mathematica policy research regarding the effectiveness of KIPP charter schools. Baker’s critique was terrible, a long list of hand-waving attacks that seemed to call into question the very possibility of actual empirical research in education. As I wrote to him, the methodology of his approach seemed like that of climate denialists, whose attacks often are a “kritik” of the very idea of research.
Things got worse still when I started to read Baker’s work about teachers’ unions, a subject about which I had substantial personal exposure from visiting state legislators in places where unions were active. As I wrote to Baker in response to a blog of his on the subject, his thumb appeared to be on the scale of the internal workings of his models. His methodology on unions was so sloppy it seemed deliberate.
His “less compelling than UR usual” comment comes on August 12, 2014, three days after he apparently first encountered Baker (Mehlhorn seemingly reads very quickly, which may explain his penchant for missing important details). The comment on the Mathematica critique came on August 17, 2014. In between that comment and the comment Mehlhorn made about teachers unions on October 9, 2014, he and I had what I believe to be our first ever interaction – on September 8, 2014. Baker was not mentioned during that conversation, as far as I can tell.
Finally, the Jackson et al. study came out and Mehlhorn went on a fact-finding mission. He had an epiphany:
In 2015, Rucker Johnson and others published an NBER analysis of the impacts of school spending. The NBER report was broadly sympathetic to Baker’s 2012 claims that money can matter, so I read the report with interest.
Wait a minute... the 2015 NBER report, entirely focused on the question of “does money matter in education,” did not once mention the Bruce Baker publication from 3 years earlier with the title “Does Money Matter in Education?”
That seemed odd. Even more odd, the NBER paper referred to studies from 1995 and 1996 that showed school spending doesn’t lead to better results. Wait, what? Wasn’t that the period of time that Baker reviewed, when he wrote that the “cloud of uncertainty” created by Hanushek in 1986 had lifted based on subsequent work? Why didn’t Baker mention those 1995 and 1996 studies by other scholars?
With my antennae finally up, I dug into Baker’s 2012 claims more fully. As it turns out, Baker omitted so much context from his report that his conclusion borders on outright mendacity. For instance, Baker chooses not to mention that Hanushek wrote several peer-reviewed rebuttals to Hedges’ work, including that they engaged in “statistical manipulations ... to overturn prevailing conclusions,” and that they “misinterpret the implications of their analysis [and,] through a series of analytical choices, systematically bias their results toward the conclusions they are seeking.” Baker wrote a conclusion that “uncertainty” created by Hanushek “lifted” after 1986, without even deigning to mention that Hanushek didn’t agree? Baker’s presentation of this conclusion was so skewed that later scholars on the exact same subject didn’t even mention Baker’s paper?
Judging from a comment on it, the article Mehlhorn mentioned came out on or before July 8, 2015, more than three weeks ahead of the conversation he and I had in which I asserted that Baker's research was legitimate. By his own admission and given his extremely fast reviews of other research, Mehlhorn clearly didn't trust Baker at this point - it is absurd of him to imply, as he does in the piece, that he was an unsuspecting victim of my encouragement to read Baker's work.
What about Mehlhorn's critiques of Baker, which also appeared in his first "rebuttal" to my piece (see below)?
The conclusions of Professor Bruce Baker: Even more than Jackson and his team, Ben relies heavily on articles published by Bruce Baker of Rutgers University’s Graduate School of Education. This reliance is common among reform skeptics, as Baker reaches the most anti-reform conclusions to be found within mainstream academia. Particularly cited by Ben is Baker’s 2012 editorial published by the Albert Shanker Institute in which he writes that “by the early 2000s, the cloud of uncertainty conjured by Hanushek in 1986 had largely lifted in the aftermath of the various, more rigorous studies that followed.” Baker justifies this claim largely by citing Northwestern University’s Larry Hedges, who re-reviewed Hanushek’s studies using “quality control measures.” Reading Baker’s paper by itself, it is understandable why Ben finds a clear academic consensus that money matters.
The problem is that Baker omits so much that his conclusion borders on outright mendacity. For instance, Baker chooses not to mention that Hanushek wrote several peer-reviewed rebuttals to Hedges’ work. One of Hanushek’s responses could have been written with Ben in mind: “Hedges, Laine, and Greenwald commit the larger error of asking the wrong question. This problem tends to get lost in their statistical manipulations and their zeal to overturn prevailing conclusions about the effectiveness of pure resource policies in promoting student achievement.”
A later paper from Hanushek goes into great detail about how Hedges and company “misinterpret the implications of their analysis [and,] through a series of analytical choices, systematically bias their results toward the conclusions they are seeking.” While Hanushek’s rebuttal is devastating, the more important point is that Baker simply pretends it does not exist - he paints a story of academic consensus that is entirely false.
In assessing Baker, it is worth noting that serious education researchers tend to not even mention Baker. Jackson and his team, for instance, write an entire paper that “money matters”, and don’t once mention Baker’s 2012 editorial. Rather, they refer to studies from 1995 and 1996 (which Baker ignores) that showed school spending doesn’t lead to better results.
The reason Baker gets so little play in serious education academia is because he writes editorials, not studies. His analyses are designed to achieve his intended results, and he does this by making subjective and one-sided decisions about what to include and what to ignore. [This is a point Dropout Nation Editor RiShawn Biddle hit upon four years ago.] This is expected for expert witnesses at trials, but it is disturbing for someone who pretends to be an academic, and is not transparent that he gets paid for reports by parties with a direct financial stake in his outcomes.
This problem was underscored in a 2011 tape-recorded conversation in which Baker said he would play with data, manipulate the questions he asked, and “pull things in and out” of his models “to tell the most compelling story” in exchange for a substantial research grant. This telephone conversation, including Baker’s own partially exculpatory comments, appears in full at about the 3-minute mark of this video clip. [Baker offers a rather lengthy explanation and defense of what happened.]
None of this automatically invalidates Baker’s conclusions, but most of his research suffers the same kinds of glaring deficiencies I just mentioned regarding his 2012 Shanker Institute paper. Some day, someone may decide to write a point-by-point review of Baker’s editorials, but for now the main point is to take his sweeping anti-reform conclusions with a heaping of salt.
Well, for starters, after Mehlhorn wrote that first rebuttal, I noted the following:
1) Mehlhorn devotes a lot of space to attacking Bruce Baker for editorializing. Baker certainly does have strong opinions, but I actually think it’s nice that he’s transparent about his perspective - all researchers have biases, and it’s in many ways preferable to know about them upfront. Baker’s work is strong and consistent with other recent research. The research Mehlhorn relies on - from Eric Hanushek, a member of the Right-wing Hoover Institution (note that Mehlhorn does not once mention Hanushek’s affiliation and biases) - is typically much older and a clear outlier (as I explained above).
In his anti-Baker follow-up, however, Mehlhorn still mentioned that Hanushek was at Stanford but once again conveniently forgot to mention his affiliation with the Hoover Institution (since Mehlhorn is, like me, a Stanford alum, he should know that there is a huge difference between "Stanford" and "Hoover"). Mehlhorn also fails to mention that Hanushek's conclusions have been criticized by many other well-respected researchers over the years; here is one example from 2001 and another from 2002 (though as the second link notes, there isn't nearly as much of a chasm between Hanushek's academic research and other research on the subject as Hanushek's policy advocacy - against increased investments in schools - makes it seem).
Note also that Hanushek accepts large payments to testify against increases in school funding even when he hasn’t analyzed relevant state-level data.
Mehlhorn must not have read Baker's paper as closely as he said he did, either; his assertion that Baker ignores Hanushek's rebuttals to Hedges (who wrote his paper with Rob Greenwald and Richard Laine when they were all at the University of Chicago) is false. Baker mentions them in a footnote, as shown below:
16 Greenwald and colleagues explain: “studies in the universe Hanushek (1989) constructed were assessed for quality. Of the 38 studies, 9 were discarded due to weaknesses identified in the decision rules for inclusion described below. While the remaining 29 studies were retained, many equations and coefficients failed to satisfy the decision rules we employed. Thus, while more than three quarters of the studies were retained, the number of coefficients from Hanushek’s universe was reduced by two thirds.” (p. 363)
Greenwald and colleagues further explain that: “Hanushek’s synthesis method, vote counting, consists of categorizing, by significance and direction, the relationships between school resource inputs and student outcomes (including but not limited to achievement). Unfortunately, vote-counting is known to be a rather insensitive procedure for summarizing results. It is now rarely used in areas of empirical research where sophisticated synthesis of research is expected.” (p. 362)
Hanushek (1997) provides his rebuttal to some of these arguments, and Hanushek returns to his “uncertainty” position: “The close to 400 studies of student achievement demonstrate that there is not a strong or consistent relationship between student performance and school resources, at least after variations in family inputs are taken into account.” (p. 141)
Hanushek, E.A. (1997) Assessing the Effects of School Resources on Student Performance: An update. Educational Evaluation and Policy Analysis 19 (2) 141-164
Hanushek, Eric A. “Money Might Matter Somewhere: A Response to Hedges, Laine and Greenwald.” Educational Researcher, May 1994, 23, pp. 5-8.
Here are the other studies Baker cites before stating that "the cloud of uncertainty created by Hanushek in 1986 had largely lifted":
18 Wenglinsky, H. (1997) How Money Matters: The effect of school district spending on academic achievement.
Sociology of Education 70 (3) 221-237
19 Taylor, C. (1998) Does Money Matter? An Empirical Study Introducing Resource Costs and Student Needs into
Educational Production Function Analysis. In U.S. Department of Education, National Center for Education Statistics, Developments in School Finance, 1997.
20 Baker, B.D. (2001) Can flexible non-linear modeling tell us anything new about educational productivity?
Economics of Education Review 20 (1) 81-92.
Figlio, D. N. (1999). Functional form and the estimated effects of school resources. Economics of Education Review, 18 (2), 242-252.
Dewey, J., Husted, T., Kenny, L. (2000) The ineffectiveness of school inputs: a product of misspecification. Economics of Education Review 19 (1) 27-45
Oh, and by the way, those studies from 1995 and 1996 that Baker ostensibly ignored? He didn't. Here's the section, on pages 3 and 4, where he references the 1996 study that Jackson et al. cited:
In short, while family background certainly matters most, schools matter as well. Furthermore, there exist substantive differences in school quality that explain a substantial portion of the variation in student outcomes.
Subsequent studies using alternative data sources explored the relationship between schooling quality and various outcomes, including the economic rate of return to schooling - e.g., future earnings. For example, David Card and Alan Krueger (1992) studied the relationship between school quality measures, including pupil to teacher ratios and relative teacher pay, on the rate of return to education for men born between 1920 and 1949. Card and Krueger found that men educated in states with higher-quality schools have a higher return to additional years of schooling. Rates of return were also higher for individuals from states with better-educated teachers.12
Similarly, Julian Betts (1996) provided an extensive review of the literature that attempts to link measures of schooling quality and adult earnings, including Card and Krueger’s study. Betts explains that, while the overall results of such studies were mixed, they were generally positive. More specifically, he pointed to more positive results for studies evaluating the association between district-level spending and earnings, as opposed to those attempting to identify a link between school-level resources and earnings, for which results are murkier.
This summary highlights a different aspect of Betts' paper than Jackson et al. chose to, which may be why Mehlhorn missed it, but it is accurate. Back in 1996, Betts found both that "[t]he studies that measure spending by state averages almost always find a positive association between educational expenditures and average earnings" and that "when researchers have attempted to identify the specific components of total educational spending that most influence earnings, most studies found either no link or a positive link that is not robust to changes in specification or subsample." Baker highlighted the former - money matters - while Jackson et al. highlighted the latter - at the time, the literature wasn't clear about whether money mattered and how it could be used productively.
Mehlhorn also asserts that Jackson et al. didn't cite Baker's work in their paper "because he writes editorials, not studies. His analyses are designed to achieve his intended results, and he does this by making subjective and one-sided decisions about what to include and what to ignore."
The more likely explanation seemed to me to be that Baker's review was a review, not original academic research, and that Jackson et al. were citing only research that their findings called into question. I spoke with Jackson and he confirmed this intuition; he respects Baker's work and has no issues with Baker's review (which Baker recently updated).
Finally, Mehlhorn tries to impugn Baker's integrity by citing the work of James O'Keefe, a notoriously dishonest "conservative activist" known for "deliberately misrepresenting" information about the individuals he targets and releasing "selectively edited videotape" (O'Keefe also enjoys breaking the law while pursuing his entrapment schemes). Feel free to read the emails yourself: Baker clearly did not agree to anything unethical.
I believe Mehlhorn should both issue a correction and apologize to Baker for inaccurately maligning his character.
FYI: here is Baker’s response to the O’Keefe “sting.”
Baker 11 — Bruce D. Baker, Professor in the Department of Educational Theory, Policy, and Administration in the Graduate School of Education at Rutgers, The State University of New Jersey, former Associate Professor of Teaching and Leadership at the University of Kansas, holds an Ed.D. in Organization and Leadership from the Teachers College of Columbia University, 2011 (“Dealing with the Devil? Policy Research in a Partisan World,” School Finance 101—a scholarly education blog, November 22nd, Available Online at https://schoolfinance101.wordpress.com/2011/11/22/dealing-with-the-devil-policy-research-in-a-partisan-world/, Accessed 07-04-2017)
This note is in response to James O’Keefe’s attempt to discredit me on his Project Veritas web site (though I think his point was intended to be larger than this). I was lucky (?) enough to be part of one of his investigative set ups earlier this fall. I wrote and held on to this post and all related e-mails.
His scheme was uncovered in this Huffington Post piece to which he refers in his most recent report:
Back in September, I was contacted by this fictional Peter Harmon who characterized himself as working for the Ohio Education Association, but never made it absolutely clear that he was working for the state teachers’ union of Ohio. In my case, unlike the EPI case, Harmon didn’t (I don’t recall) indicate being a hedge fund guy or being backed by one, but rather that he had “funders.” He dropped me a phone message and an email which were pretty innocuous, so I agreed to talk by phone. That’s where I pick up in this string of e-mails:
EMAIL #2 – PHONE CALL SET UP
Sent: Monday, September 19, 2011 10:14 PM
Thank you for getting back to me. We are eager to talk with you about this project. Would 3pm tomorrow work alright for you?
Then there was the strange phone call (which I’m quite sure in retrospect was recorded) where first, “Peter Harmon” wanted me to do a study showing that the collective bargaining legislation in Ohio would hurt children, to which I suggested that a) evaluating collective bargaining legislation is outside the realm of my expertise and b) that even if I agreed that it might, I’d have no clear, defensible way to analyze and argue that point.
From there I suggested things that I can and often do analyze and argue, in each case pointing out that the ability to make such an argument is contingent upon data to support that argument. For example, evaluating the competitiveness of teacher wages over time, or evaluating the distribution of state aid cuts. These are two issues on which I have already actually evaluated Ohio data. I pointed out that there are 3 basic types of products we might be talking about – a) critiques of policy reports or arguments by others (for a few thousand dollars), b) policy briefs/research brief reports (typically about ten thousand dollars) or c) full scale research report (thirty to fifty thousand dollars, with clarification that projects of this magnitude would have to go through RU and/or be done over the Summer). I attempted repeatedly to shift his focus to answerable questions and topics within my expertise, and to topics or issues where I felt I could be helpful to him, on the assumption that he was advocating for the state teachers’ union.
It got strange when Peter Harmon laid down his requirement that if they were going to fund a study, they didn’t want it coming out finding the opposite of what they wanted. I did explain that if he had a topic he was interested in, that I would be willing to explore the data to see if the data actually support his position on the issue and that I would do so before agreeing to write a report for him. The phone call ended with no clear agreement on anything, including no agreement on even what the topic of interest was. In fact, my main point was repeatedly that he needed to figure out what the heck he even wanted to study, though I tried to keep it friendly and supportive. No reason to argue on a first phone call.
It was a strange and disturbing conversation, but I played along until I could get off the phone with the guy. Note that the playing along in a conversation like this also involves trying to figure out what the heck is up with the caller – whether he/she has a particular axe to grind – or other issues that would make any working relationship, well, not work out.
Sadly, as twisted as this phone call was, I’ve had similarly twisted conversations with real representatives of legitimate organizations. However, with most legitimate organizations, you can later identify the less sleazy contact person. My approach has generally been to humor them while on the phone… perhaps probe as to see how twisted they really are… and when the phone conversation ends….let it pass. Move on.
To: email@example.com; firstname.lastname@example.org
Subject: Next Meeting
I have good news, my colleagues are very interested in moving forward.
We are confident we can cover the expense of this potential study.
We have a few ideas we would like to run by you for this project.
When would be a good time to call you next?
So now, Harmon is basically suggesting that he can generate the $30 to $50k figure which I had given him for a bigger study, a figure I had basically given him to encourage him to think about doing something else – like contracting a few short policy briefs or critiques. But, he still has no idea what he supposedly wants me to write about. Quite honestly that’s really strange. So my response is simple – it’s essentially a get your act together and don’t bother me again until you do. In other words, here are a few examples of the work I do and am proud of. Figure out your damn question and let me know when you do.
EMAIL #4 – BAKER REPLY
From: Bruce Baker [email@example.com]
Sent: Friday, September 23, 2011 10:06 AM
Subject: RE: Next Meeting
Rather busy for next week or so. Would prefer if you could at least send an outline of potential topics & research questions of interest, so I can mull them over.
For examples of reviews/critiques of policy reports, see:
Here’s Harmon’s attempt at figuring out his question:
EMAIL #5 – HARMON REPLY
Thanks for getting back to us.
One of the topics we want to pursue is research regarding spending.
Specifically, an increase in spending having a good effect on children. If you need to limit the scope of your research to a specific county, district or other local geographic area, that’s OK.
I will take a closer look at the examples you sent on your last email to get a better idea of what you would like from our end. But, I hope this more specific goal better illustrates what we are looking for.
Let me know when would be good time to call, so I can clarify whatever questions you have about this.
So, Peter Harmon wants me to explain, or more strangely to show that increasing spending is good for children. Okay. Anyone even modestly informed would know that’s an odd way to frame the question or issue. But clearly, given my body of work, I have argued on many occasions in writing and in court that having more funding available to schools can improve school quality, which is something I would certainly argue is good for children. Would I somehow use data on a specific district or county to do this? No…. uh… not sure? I’d probably start with an extensive review of what we already know from existing research on money and school quality.
At this point, I’m ready to drop the whole discussion, but receive an e-mail notice of a new Economic Policy Institute paper on public employee wages in Ohio. So, to save Mr. Harmon money paying for a new study on this topic, I a) send him a link to that study, and b) explain that I’m already working on a paper related to his issues of concern.
Working on some related projects myself, which may be of use to you in near future. Will be back in touch as schedule frees up.
And so it ended. And as I suspected by this point, it appears that this whole thing was a sham… and an attempt at a sting. Interestingly, this appears to be when Harmon moved on to go after EPI.
Quite honestly, O’Keefe’s concept for the investigation isn’t entirely unreasonable except that he and his colleagues didn’t seem to fully understand the fundamental difference between research projects per se, and policy analyses – between writing summaries and opinions based on data that already exist and research that’s already been done – versus exploring uncharted territory – where the data do not yet exist and where the answers cannot yet be known.
At this point, I think a few clarifications are in order about doing policy research, or more specifically writing policy briefs in a highly political context.
First, why would I ever vet the data on an issue before signing on to do work for someone? Well, this is actually common, or should be in certain cases. For example, let’s say the funder wants me to show that “teachers in Ohio are underpaid.” I don’t know that to be true. I’m not going to take his money to study an issue where he has a foregone conclusion and a political interest in that conclusion but where the data simply don’t support that conclusion. It is relatively straightforward for me to check to see if the data support the conclusion before I agree to write anything about it. This is an easy one to check. There are a standard set of databases to use, including statewide personnel data, census data and Bureau of Labor Statistics data and there are standard credible methods for comparing teacher wages. If the argument holds up applying the most conservative (most deferential analysis to the “other side” of an argument) analysis, then it’s worth discussing how to present it or whether to move forward.
A different type of example which I’ve learned by experience is that it’s always worth taking a look at the data before engaging as an expert witness on a school funding related case. I often get asked to serve as an expert witness to testify about inequities or inadequacies of funding under state school finance systems. Sometimes, attorneys have already decided what their argument is based only on the complaints of their clients. It would be utterly foolish of me to sign on to represent those clients and accept payment from them without first checking the data to see if they actually have a case.
Then there’s the issue of doing work for partisan clients to begin with. That’s a different question than doing work for sleazy clients. But sometimes, if it’s a legitimate organization, there may be a sleazy contact person, but further checking reveals that the organization as a whole is credible – and not sleazy. But back to the point…
Quite honestly, the toughest kind of policy analysis to do is for partisan clients – clients with an axe to grind or a strong interest in viewing an issue in one particular way. That is usually the case in litigation and increasingly the case when it comes to writing policy briefs on contentious topics. What this means is that the analyses have to be “bullet-proof.” There are a few key elements to making an analysis “bullet proof.”
First, the analysis must be conservative in its estimates and one must avoid at all cost overstating any claims favored by the client. In fact, the analysis needs to be deferential, perhaps even excessively, to the opposing view.
Second, the analysis must use standard, credible methods that are well known, well understood and well documented by others. Examples in my field would include comparable wage analysis, or wage models which typically include a clearly defined set of variables.
Third, the analysis must rely on publicly accessible data, with preference for “official” data sources, such as state and federal government agencies. This is because the analyses should be easy for any reader to replicate by reading through my methods and downloading or requesting the data.
So here are my final thoughts on this issue…
If this kind of stuff causes anyone to place greater scrutiny on my work or that of any others writing policy briefs on contentious topics, that’s fine. It’s not only fine, but desirable. I am fully confident that my work stands on its own. Unlike some, I don’t simply take a large commission to offer my opinion without ever having looked at any data. For example, Eric Hanushek of Stanford University took $50,000 from the State of Colorado to testify that more money wouldn’t help kids and that Colorado’s school funding system is just fine, without ever having looked at any data on Colorado’s school funding system. See:
By contrast, I did indeed accept a payment of $15,000 for writing a nearly 100-page report filled with data and detailed analyses of Colorado’s school funding system raising serious questions about the equity and adequacy of that system (available on request). In fact, I had already come to the conclusions about the problems with Colorado’s school funding system long before I was engaged by the attorneys for the plaintiff districts (as one will find in many of my blog posts referring to Colorado).
My rule #1 is always to check the data first and to base my opinions on the data. So I welcome the scrutiny on my work and I especially welcome it directly. If you have a criticism of my work, write to me. The more scrutiny on my work the better.
Note #1: for an example of the types of policy briefs and/or analyses to which I am referring here, see: NY Aid Policy Brief_Fall2011_DRAFT6
In my view, this is a solid, rigorous and very defensible analysis. It is a policy brief. It uses numerous sources of publicly available data. And, it was written on behalf of an organization which has self-interested concerns with the NY school finance formula.
Note #2: Indeed there were some poor word choices on my part in the phone conversation. “Play with data” is how I tend to refer to digging in and vetting the data to see what’s there. This blog is dedicated to what I would refer to as playing with data. Looking stuff up. Downloading large data files (IPUMS, NCES). Running statistical models. My friends and colleagues, as well as my students know full well that I take great joy in working with data and that I consider it play. But I’ll admit that it sure doesn’t sound too good when taken out of that context.
Note #3: A few people have asked about the portion of the conversation where I suggest that if I find results that do not support the funders’ views, I will not charge them for the work. Some have suggested that this is an example of burying an undesirable result, which would in my view be unethical. So, what’s the point of not charging them? Actually, it’s so that the result won’t get buried. If I do a bunch of preliminary data analyses only to find that the data do not support a funder’s claims/preferences, I’d rather not write up the report for the funder and charge him/her, because they then own the report and its findings, and have the control to bury it if they so choose. Now, I typically don’t permit gag-order type clauses in my consulting contracts anyway, but, it’s much easier just to avoid the eventual pissing match over the findings and any pressure to recast them, which I will not do. If I keep the results of my preliminary work for myself, then I have complete latitude to do with them as I see fit, regardless of the funder’s preferences. It’s my out clause. My freedom to convey the findings of any/all work I do.
I’ve come to this approach having had my results buried in the past on at least two occasions, one in particular where the funder clearly did not want the results published under their name due in part to pending litigation in which they were a defendant. Much to my dismay, the project coordinators (agency that subcontracted me) capitulated to the funder. I was, and remain to this day, deeply offended by the project coordinator’s choice under pressure by the funder, to edit the report and exclude vital content. Yeah… I got paid for the work. But the work got buried, even though the work was highly relevant. I’m unwilling to go down that road again.
They Say: “Schools Not Key – General”
The best evidence proves that better schools can significantly reduce intergenerational inequality.
Johnson 16 — Rucker C. Johnson, Associate Professor at the Goldman School of Public Policy at the University of California-Berkeley, Faculty Research Fellow at the National Bureau of Economic Research, Faculty Research Fellow at the W.E.B. Du Bois Institute at Harvard University, Research Affiliate at the National Poverty Center at the University of Michigan, Research Affiliate at the Institute for Poverty Research at the University of Wisconsin, holds a Ph.D. in Economics from the University of Michigan, 2016 (“Can Schools Level the Intergenerational Playing Field? Lessons from Equal Educational Opportunity Policies,” Economic Mobility: Research & Ideas on Strengthening Families, Communities & the Economy, Edited and Published by the Federal Reserve Bank of St. Louis and the Board of Governors of the Federal Reserve System, Available Online at https://www.stlouisfed.org/~/media/Files/PDFs/Community-Development/EconMobilityPapers/EconMobility_Book_508.pdf?la=en , Accessed 06-19-2017, p. 291-294)
Recent research has shown that intergenerational mobility is much lower in the United States than previously assumed (Chetty et al. 2014; Mazumder 2005; Solon 1992), is significantly lower than in many other advanced developed countries (Jäntti et al. 2006), and that black children experience significantly lower rates of upward mobility conditional on their parents’ positions in the family income distribution (Bhattacharya and Mazumder 2011; Hertz 2005). Moreover, there is a high degree of persistence in economic status across generations in the United States, particularly in the lower and upper tails of the income distribution. What are the main transmission mechanisms of intergenerational mobility, and where does one look for the early developmental origins of inequality in life outcomes? Various dimensions of inequality in adulthood are rooted in childhood conditions, wherein schools play a pivotal role in either reinforcing or mitigating the intergenerational reproduction of socioeconomic advantage (Card and Krueger 1992). Residential segregation by race and class that leads to unequal access to quality schools is often cited as a culprit in perpetuating inequality in attainment outcomes. However, the role of school quality factors in contributing to the intergenerational persistence of economic status, and in being a source of racial differences in rates of intergenerational mobility, has received little attention in the literature. [end page 291]
The nature and amount of public investment in children has changed substantially during the post-World War II era. The major thrust of policies aimed at equality of opportunity over this period has been intended to ensure educational access to quality resources K–12 and beyond, and more recently greater investments in pre-school years. Over the past five decades, three major government interventions have had substantial impacts on the provision of school resources and have narrowed black-white differences in access to dimensions of school quality:
1. court-mandated school desegregation
2. state legislation and legal action aimed to change the distribution and level of school funding
3. the expansion of targeted early childhood pre-school programs for disadvantaged children through Head Start
This paper draws on recent research on the long-run impacts of school desegregation (Johnson 2015), effects of school finance reform-induced increases in school spending (Jackson, Johnson, and Persico 2015), and evidence on the long-run effects of Head Start (Johnson and Jackson 2015), and combines them with a focus on these three major school reforms’ impacts on intergenerational mobility. It focuses on how school quality factors contribute to the intergenerational persistence of economic status and are a source of racial differences in rates of intergenerational mobility. The collective evidence from the roll-out of desegregation implementation, school finance reforms, and expansions of early childhood education programs is strong in providing a testbed for the study of the efficacy of the first-generation suite of equal education policy reforms. This paper explores the mechanisms that tie childhood school-level factors to aggregate mobility rates.
Court-ordered school desegregation has been described as the most controversial and ambitious social experiment of the past 60 years. Despite the magnitude of these changes, no large-scale data collection effort was undertaken to investigate school desegregation program effects, particularly on longer-run outcomes. Before the study by Johnson (2015), there were no quasi-experimental studies of the impacts of desegregation that had followed students over a long horizon beyond their early 20s. While many prior studies have examined effects of school resources on test scores and more proximate student achievement outcomes, less evidence is available on how school spending influences intergenerational mobility (Jackson, Johnson, and Persico 2015, a notable exception). Similarly, controversy about whether Head Start produces lasting benefits in practice has surrounded the program since its inception. [end page 292]
In parallel literature, there is an impressive body of evidence on the measurement of intergenerational mobility and the extent of mobility for different countries and over time (Bjorklund and Jäntti 1997; Solon 1992). However, little is known about the precise mechanisms underlying the persistence of economic status across generations; identifying what factors inhibit or facilitate upward mobility for those born into humble beginnings has remained elusive. Identifying the major factors and pathways that lead to economic (im)mobility is important for the optimal design of education policies and implementation of effective childhood interventions to promote greater equality of opportunity. There is currently a paucity of direct evidence from the United States on the effects of school quality on intergenerational income mobility.
This paper extends two branches of literature on economic mobility:
1. the relationship between school resources/quality and socioeconomic success
2. racial inequality in adult socioeconomic attainment outcomes that are rooted in childhood conditions
At the nexus of these two literatures, this paper examines the role of school quality as the key propeller of upward mobility. An important contribution of this work is that it uncovers sources and identifies mechanisms underlying generational mobility, integrating the analysis of the linkages between educational investment opportunities across the continuum of developmental stages of childhood—including pre-school program participation and K–12 school resources—to investigate their long-run consequences on the extent of intergenerational mobility.
The persistent residential segregation of poor and minority populations, coupled with the heavy reliance on local property taxes to fund K–12 schools, often leads to disparities in school resources. In light of this, this paper investigates the extent to which patterns of segregation influence whether schools weaken or reinforce the role of family background in determining children’s outcomes and compares the intergenerational mobility rates across communities and time periods with differing access to educational opportunities and school quality, separately by race. In this way, this analysis considers a narrower slice of the broader question of how where you live influences life chances and economic success.
This investigation requires not only a convincing research design to address concerns about endogeneity bias but also requires high quality income data spanning multiple years of adulthood for two generations of the same set of families. This study combines high-quality intergenerational income data with compelling research designs to identify the causal effects of school desegregation, school spending, and Head Start, respectively. [end page 293]
The study analyzes the economic status trajectories of children born between 1945 and 1979 followed through 2013 using data from the Panel Study of Income Dynamics (PSID) and its supplements on early childhood education, where the data have been geocoded to the census block level. This intergenerational microdata set is linked with administrative data on school district per-pupil spending, Head Start per capita spending, and comprehensive case inventories on the timing and type of court-ordered school desegregation and school finance reforms spanning the period 1965–2010. Thus, this analysis uses the longest-running U.S. nationally representative longitudinal data spanning four decades linked with multiple data sources containing detailed neighborhood attributes and school quality resources that prevailed at the time these children were growing up.
A sharp increase in generational income mobility among African Americans across successive birth cohorts born between 1955 and 1979 shows its relatedness to dimensions of access to school quality. The study explains black-white differences in upward mobility and its subsequent convergence among successive cohorts born between 1955 and 1979 with a focus on the role of school quality. The study analyzes the effects of the court-ordered desegregation plans of public schools, implemented in the 1960s, ’70s, and ’80s, and subsequent court-ordered school finance reforms that accelerated during the 1980s and ’90s on the extent of intergenerational mobility. The wide variation in the timing of implementation of desegregation plans and school funding formula changes is exploited to identify their effects. Policy-induced changes in school spending (school resource inputs) across cohorts within the same district and across different districts from the same cohort are used to estimate the impact of school spending on socioeconomic status attainments.
Consistent evidence demonstrates that low-income and minority students experienced both larger reform-induced increases in school spending (access to school resource inputs) and larger resultant impacts of a given change in spending on long-term outcomes. African Americans who grew up following school desegregation implementation, and poor children following court-ordered school finance reforms, were more likely to occupy a higher position in the income distribution than their parents, and distances moved across the distribution were greater, relative to those experienced by prior birth cohorts who were 18 or older at the time of their schools’ desegregation implementation or imposition of school finance reforms. The results highlight the role of childhood school quality in contributing to (and subsequently narrowing) racial differences in intergenerational mobility.
“Other policies also important” doesn’t answer our advantage.
Spielberg 15 — Ben Spielberg, Research Assistant at the Full Employment Project at the Center on Budget and Policy Priorities, holds a B.S. in Mathematical and Computational Sciences from Stanford University, 2015 (“The Truth About School Funding,” 34justice—a scholarly blog, October 20th, Available Online at https://34justice.com/2015/10/20/the-truth-about-school-funding/, Accessed 07-04-2017)
Pitting education funding against social insurance and safety net spending, as former Tennessee education commissioner Kevin Huffman did in a recent article, is also absurd. While it’s true that adequate income support and health care matter most for low-income students and that school-based reforms cannot, contrary to Huffman’s assertion, “be the lynchpin of social mobility in America,” schools are still very important. Those truly committed to an equal opportunity agenda should stop taking potshots at its components and start getting to work on raising the revenues necessary to implement it.
As David Kirp wrote recently about pre-K programs: “Money doesn’t guarantee good outcomes, but it helps…In education, as in much of life, you get what you pay for.”
In America right now, we unfortunately don’t pay for the education system our students deserve. Until we do, we won’t get it.
They Say: “Schools Not Key – Finland”
Finland proves our argument.
Saunders 17 — Doug Saunders, International Affairs Columnist at The Globe and Mail—a Canadian newspaper, 2017 (“Finland’s social climbers: How they’re fighting inequality with education, and winning,” The Globe and Mail, January 5th, Available Online at https://www.theglobeandmail.com/news/national/education/how-finland-is-fighting-inequality-with-education-andwinning/article29716845/, Accessed 07-04-2017)
The Finnish obsession is not with education per se, but with making sure that kids like Lara all get the maximum possible school experience. That obsession has produced results: The odds of someone like her, born below the poverty line, becoming a middle-class adult are better in Finland than in almost any other country. More important, those odds are measurably better than they were 20 years ago. And it’s almost all because of the way the Finns changed their schools.
Now, a school system originally re-engineered to fill the gap between the rural poor and better-off urbanites is also addressing the wealth gap between established Finns and a growing population of poor immigrants, refugees and asylum seekers – making Finland’s challenges increasingly similar to those of Canada.
This is a rare example of a country where national policy has been used to build a better pathway out of poverty and into a productive life. In recent years, that policy, while still successful, has begun to feel the pressures of a more diverse population and a fast-changing economy.
So, in both their successes and their controversies, it’s worth looking inside Finland’s schools to see what lessons the world’s most successful education reform has for the rest of us.
They Say: “Status Quo Solves – ESSA”
ESSA fails — won’t narrow opportunity and achievement gaps.
Ogletree and Robinson 16 — Charles J. Ogletree, Jr., Jesse Climenko Professor of Law and Director of the Charles Hamilton Houston Institute for Race and Justice at Harvard Law School, holds a J.D. from Harvard Law School and an M.A. in Political Science from Stanford University, and Kimberly Jenkins Robinson, Professor of Law and Austin Owen Research Scholar at the University of Richmond School of Law, Researcher at the Charles Hamilton Houston Institute for Race and Justice at Harvard Law School, former Associate Professor at the Emory School of Law, former General Attorney in the Office of the General Counsel at the United States Department of Education, holds a J.D. from Harvard Law School, 2016 (“The K-12 Funding Crisis,” Education Week, May 17th, Available Online at http://www.edweek.org/ew/articles/2016/05/18/the-k-12-funding-crisis.html?print=1, Accessed 06-07-2017)
Notwithstanding generally mediocre state records on narrowing disparities in educational opportunity and achievement, the Every Student Succeeds Act relies on states as the engines for educational improvement. Under the new federal law, states will only be required to intervene in the bottom 5 percent of low-performing schools, in schools where student subgroups are struggling, and in high schools where the graduation rate is 67 percent or less.
If the problems with our education system were limited to these schools, the requirements might be promising.
But the authors Eric A. Hanushek, Paul E. Peterson, and Ludger Woessmann, in their 2013 book Endangering Prosperity: A Global View of the American School, reveal that the shortcomings of U.S. education are far more widespread than the lowest-performing schools and students. Children from all income levels show lackluster academic performance on international assessments when compared with their international peers.
Given the refusal by most states to devote funding to disadvantaged students for the resources needed to compete with their more advantaged peers, history suggests that the ESSA approach is unlikely to decrease gaps in educational opportunity and achievement significantly.
ESSA fails — too narrow.
Robinson 16 — Kimberly Jenkins Robinson, Professor of Law and Austin Owen Research Scholar at the University of Richmond School of Law, Researcher at the Charles Hamilton Houston Institute for Race and Justice at Harvard Law School, former Associate Professor at the Emory School of Law, former General Attorney in the Office of the General Counsel at the United States Department of Education, holds a J.D. from Harvard Law School, 2016 (“No Quick Fix for Equity and Excellence: The Virtues of Incremental Shifts in Education Federalism,” Stanford Law & Policy Review (27 Stan. L. & Pol'y Rev 201), Available Online to Subscribing Institutions via Lexis-Nexis)
Some may wonder if the newly-enacted Every Student Succeeds Act (ESSA) n15 - the latest reauthorization of the Elementary and Secondary Education Act of 1965 - will provide the push states seem to need to reform funding systems. However, the ESSA does not appear to be a promising avenue to incentivize such reforms. n16 The ESSA repeals the federal accountability system in the No Child Left Behind Act and instead allows states to design their own accountability systems to identify and improve struggling schools. n17 ESSA requires states to increase learning in the five percent of schools that perform the worst on state assessments, schools with high dropout rates, and schools in which a subgroup consistently performs poorly. n18 While states may adopt some targeted interventions, the ESSA's focus on such a small subset of schools is unlikely to drive states to overhaul entire funding systems that either [*204] consistently favor low-need, high-wealth districts or that do not adequately adjust funding levels to address the greater needs of some students. n19 Thus, the ESSA's narrow focus on low-performing schools provides inadequate incentives to encourage states to reform their funding approaches and to boost overall student achievement.