Results for “evidence-based”

Shruti Rajagopalan and Janhavi Nilekani podcast

In this episode, Shruti speaks with [the excellent] Janhavi Nilekani about India’s high rate of C-sections compared with vaginal births, problems with maternal healthcare, the present and future of Indian midwifery and much more. Nilekani is the founder and chair of the Aastrika Foundation, which seeks to promote a future in which every woman is treated with respect and dignity during childbirth, and the right treatment is provided at the right time. She is a development economist by training and now works in the field of maternal health. She obtained her Ph.D. in public policy from Harvard and holds a 2010 B.A., cum laude, in economics and international studies from Yale.

Here is the link.

Regulatory quality is declining

One example of evidence-free regulation in recent years comes from the Department of Health and Human Services (HHS). In 2021, HHS repealed a rule enacted by the Trump administration that would have required the agency to periodically review its regulations for their impact on small businesses. The measure was known as the SUNSET rule because it would attach sunset provisions, or expiration dates, to department rules. If the agency failed to conduct a review, the regulation expired.

Ironically, in proposing to rescind the SUNSET rule, HHS argued that it would be too time consuming and burdensome for the agency to review all of its regulations. Citing almost no academic work in support of its proposed repeal — a reflection of the anti-consequentialism that animates so much contemporary regulatory policy — the agency effectively asserted that assessing the real-world consequences of its existing rules was far less pressing an issue than addressing the perceived problems of the day (by, of course, issuing more regulations).

Through its actions, HHS has rejected the very notion of having to review its own rules and assess whether they work. In fact, the suggestion that agencies review their regulations is an almost inexplicably divisive issue in Washington today. “Retrospective review” has become a dirty term, while cost-benefit analysis has morphed into a tool to judge intentions rather than predict real-world consequences. The shift highlights how far the modern administrative state has drifted from the rational, evidence-based system envisioned by the law-and-economics movement just a few decades ago.

Here is more from James Broughel at Mercatus.

Direct Instruction Produces Large Gains in Learning, Kenya Edition

In an important new paper, Can Education be Standardized? Evidence from Kenya, Guthrie Gray-Lobe, Anthony Keats, Michael Kremer, Isaac Mbiti and Owen Ozier evaluate Bridge International schools using a large randomized experiment. Twenty-five thousand Kenyan students applied for 10,000 scholarships to Bridge International, and the scholarships were given out by lottery.

Kenyan pupils who won a lottery for two-year scholarships to attend schools employing a highly-structured and standardized approach to pedagogy and school management learned more than students who applied for, but did not win, scholarships.

After being enrolled at these schools for two years, primary-school pupils gained approximately the equivalent of 0.89 extra years of schooling (0.81 standard deviations), while in pre-primary grades, pupils gained the equivalent of 1.48 additional years of schooling (1.35 standard deviations).
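The two figures just quoted imply a consistent conversion rate between test-score standard deviations and years of ordinary schooling. A quick sketch of that arithmetic (the roughly 0.91 SD-per-year rate is inferred from the quoted numbers, not stated directly in the paper):

```python
# Back-of-the-envelope check of the quoted effect-size conversions.
# Both reported figures imply roughly 0.91 standard deviations of
# test-score gain per year of ordinary schooling (my inference).

def sd_to_years(gain_sd, sd_per_year=0.91):
    """Convert a test-score gain in standard deviations to
    equivalent years of schooling, given a benchmark rate."""
    return gain_sd / sd_per_year

primary = sd_to_years(0.81)      # ~0.89 extra years
pre_primary = sd_to_years(1.35)  # ~1.48 extra years

print(f"primary: {primary:.2f} years, pre-primary: {pre_primary:.2f} years")
```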

These are very large gains. Put simply, children in the Bridge programs learned approximately three years' worth of material in just two years! Now, I know what you are thinking. We have all seen examples of high-quality, expensive educational interventions that don’t scale–that was the point of my post Heroes are Not Replicable and see also my recent discussion of the Perry Preschool project–but it’s important to understand the backstory of the Bridge study. Bridge Academy uses Direct Instruction and Direct Instruction scales! We know this from hundreds of studies. In 2018 I wrote (no indent):

What if I told you that there is a method of education which significantly raises achievement, has been shown to work for students of a wide range of abilities, races, and socio-economic levels and has been shown to be superior to other methods of instruction in hundreds of tests?….I am reminded of this by the just-published, The Effectiveness of Direct Instruction Curricula: A Meta-Analysis of a Half Century of Research which, based on an analysis of 328 studies using 413 study designs examining outcomes in reading, math, language, other academic subjects, and affective measures (such as self-esteem), concludes:

…Our results support earlier reviews of the DI effectiveness literature. The estimated effects were consistently positive. Most estimates would be considered medium to large using the criteria generally used in the psychological literature and substantially larger than the criterion of .25 typically used in education research (Tallmadge, 1977). Using the criteria recently suggested by Lipsey et al. (2012), 6 of the 10 baseline estimates and 8 of the 10 adjusted estimates in the reduced models would be considered huge. All but one of the remaining six estimates would be considered large. Only 1 of the 20 estimates, although positive, might be seen as educationally insignificant.

…The strong positive results were similar across the 50 years of data; in articles, dissertations, and gray literature; across different types of research designs, assessments, outcome measures, and methods of calculating effects; across different types of samples and locales, student poverty status, race-ethnicity, at-risk status, and grade; across subjects and programs; after the intervention ceased; with researchers or teachers delivering the intervention; with experimental or usual comparison programs; and when other analytic methods, a broader sample, or other control variables were used.

Indeed, in 2015 I pointed to Bridge International as an important, large, and growing set of schools that use Direct Instruction to create low-cost, high-quality private schools in the developing world. The Bridge schools, which have been backed by Mark Zuckerberg and Bill Gates, have been controversial, which is one reason the Kenyan results are important.

One source of controversy is that Bridge teachers have less formal education and training than public school teachers. But Bridge teachers need less formal education because they are following a script and are closely monitored. DI isn’t designed for heroes; it’s designed for ordinary mortals motivated by ordinary incentives.

School heads are trained to observe teachers twice daily, recording information on adherence to the detailed teaching plans and interaction with pupils. School heads are given their own detailed scripts for teacher observation, including guidance for preparing for the observation, what teacher behaviors to watch for while observing, and how to provide feedback. School heads are instructed to additionally conduct a 15-minute follow-up on the same day to check whether teachers incorporated the feedback and enter their scores through a digital system. The presence of the scripts thus transforms and simplifies the task of classroom observation and provision of feedback to teachers. Bridge also standardizes a range of other processes from school construction to financial management.
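The observation loop described in that excerpt is essentially a small structured-data pipeline: two scripted observations per teacher per day, scored and followed up the same day. A hypothetical sketch of the kind of record such a digital scoring system might capture (the field names are my invention, not Bridge's actual schema):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record type for the scripted observation loop described
# above. Field names are illustrative, not Bridge's actual schema.

@dataclass
class Observation:
    teacher: str
    day: date
    session: int               # 1 or 2: teachers are observed twice daily
    adherence_score: int       # adherence to the detailed teaching plan
    interaction_score: int     # quality of interaction with pupils
    feedback_given: str = ""
    followed_up: bool = False  # same-day 15-minute follow-up completed?

def needs_follow_up(obs: Observation) -> bool:
    # School heads check, the same day, whether the teacher
    # incorporated the feedback; until then the record is open.
    return not obs.followed_up

obs = Observation("J. Mwangi", date(2022, 3, 1), session=1,
                  adherence_score=4, interaction_score=5,
                  feedback_given="Pace the choral response drills")
print(needs_follow_up(obs))  # True until the follow-up is recorded
```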

Teachers are observed twice daily! The model is thus education as a factory with extensive quality control–which is why teachers don’t like DI–but standardization, scale, and factory production make civilization possible. How many bespoke products do you buy? The idea that education should be bespoke gets things entirely backward because that means that you can’t apply what you learn about what works at scale–Heroes are Not Replicable–and thus you don’t get the benefits of refinement, evolution, and continuous improvement that the factory model provides. I quoted Ian Ayres in 2007:

“The education establishment is wedded to its pet theories regardless of what the evidence says.”  As a result they have fought it tooth and nail so that “Direct Instruction, the oldest and most validated program, has captured only a little more than 1 percent of the grade-school market.”

Direct Instruction is evidence-based instruction that is formalized, codified, and implemented at scale. There is a big opportunity in the developing world to apply the lessons of Direct Instruction and accelerate achievement. Many schools in the developed world would also be improved by DI methods.

Addendum 1: The research brief to the paper, from which I have quoted, is a short but very good introduction to the results of the paper and also to Direct Instruction more generally.

Addendum 2: A surprising number of people over the years have thanked me for recommending DI co-founder Siegfried Engelmann’s Teach Your Child to Read in 100 Easy Lessons.

Emergent Ventures winners, 15th cohort

Emily Oster, Brown University, in support of her COVID-19 School Response Dashboard and the related “Data Hub” proposal, to ease and improve school reopenings, project here.

Kathleen Harward, to write and market a series of children’s books based on classical liberal values.

William Zhang, a high school junior on Long Island, NY, for general career development and to popularize machine learning and computation.

Kyle Schiller, to study possibilities for nuclear fusion.

Aaryan Harshith, a 15-year-old in Ontario, for general career development and “LightIR is the world’s first device that can instantly detect cancer cells during cancer surgery, preventing the disease from coming back and keeping patients healthier for longer.”

Anna Harvey, New York University and Social Science Research Council, to bring evidence-based law and economics research to practitioners in police departments and legal systems.

EconomistsWritingEveryDay blog, here is one recent good Michael Makowsky post.

Richard Hanania, Center for the Study of Partisanship and Ideology, to pursue their new mission.

Jeremy Horpedahl, for his work on social media to combat misinformation, including (but not only) Covid misinformation.

Congratulations!  Here are previous Emergent Ventures winners.

What I’ve been reading

1. Matthew Hongoltz-Hetling, A Libertarian Walks Into a Bear: The Utopian Plot to Liberate an American Town (And Some Bears).  A fun look at the Free Town project as applied to Grafton, New Hampshire: “During a television interview, a Grafton resident accused the Free Towners of ‘trying to cram freedom down our throats.’”

2. Cass R. Sunstein and Adrian Vermeule, Law & Leviathan: Redeeming the Administrative State.  Self-recommending from the pairing alone, there is a great deal of interesting content in the 145 pp. of text.  It is furthermore an interesting feature of this book that it was written at all on the chosen topic.  Perhaps the administrative state is under more fire than I realize.  And might you consider this book a centrist version of…maybe call it “state capacity not quite libertarianism”?

3. Michael D. Gordin, The Pseudoscience Wars: Immanuel Velikovsky and the Birth of the Modern Fringe.  A somewhat forgotten but still fascinating episode in the history of science, extra-interesting for those interested in Venus.  I had not known that Velikovsky pushed a weird version of a eugenicist theory stating that Israel was too hot for its own long-term good, and that its inhabitants needed to find ways of cooling it down.

4. History, Metaphor, Fables: A Hans Blumenberg Reader, edited by Bajohr, Fuchs, and Kroll.  I love Blumenberg, but the selection here didn’t quite sell me.  Better to start with his The Legitimacy of the Modern Age, noting that book is a tough climb for just about anyone and it requires your full attention for some number of weeks.  Might Blumenberg be the best 20th-century thinker who isn’t discussed much in the Anglo-American world?  And yes it is Progress Studies too.

5. Laura Tunbridge, Beethoven: A Life in Nine Pieces.  Smart books on Beethoven are like potato chips, plus you can listen to his music while reading (heard Op.33 Bagatelles lately?).  In addition to some of the classics, this book covers some lesser known pieces such as the Septet, An die Ferne Geliebte, and the Choral Fantasy, and how they fit into Beethoven’s broader life and career.  Intelligent throughout.

6. Sean Scully, The Shape of Ideas, edited and written by Timothy Rub and Amanda Sroka.  Is Scully Ireland’s greatest living artist?  He has been remarkably consistent over more than five decades of creation.  This is likely the best Scully picture book available, and the text is useful too.  Since it is abstract color and texture painting, he is harder than most to cancel — will we see the visual arts shift in that direction?

Jonathan E. Hillman, The Emperor’s New Road: China and the Project of the Century, is a good introduction to its chosen topic.

Robert Litan, Resolved: Debate Can Revolutionize Education and Help Save Our Democracy: “…incorporate debate or evidence-based argumentation in school as early as the late elementary grades, clearly in high school, and even in college.”

I am closer to the economics than the politics of Casey B. Mulligan, You’re Hired! Untold Successes and Failures of a Populist President, but nonetheless it is an interesting and contrarian book, again here is the excellent John Cochrane review.

There is also Harriet Pattison, Our Days are Like Full Years: A Memoir with Letters from Louis Kahn, a lovely romance with nice photos, sketches, and images as well, very nice integration of text and visuals.

Wednesday assorted links

1. Russian billionaire wants to buy cancelled Confederate statues.

2. “Nursing homes have new COVID-19 tests that are fast and cheap. So why won’t N.J. allow them to be used?”

3. Where are the missing right-wing firms?  And Arnold.

4. The vaccine protocols.

5. The world forager elite.

6. An evidence-based return to work plan.

7. The nasal spray, which will be entering clinical trials.

8. On the Abraham Accords.

Sunday assorted links

1. John Cleese on PC and wokeness.  I think the first comment is satire rather than serious, but one can’t be entirely sure these days.  The best-known Monty Python episodes these days are entirely acceptable, but some of the now lesser-known works are pretty…out there.

2. “Meanwhile, for-profit companies charge schools thousands of dollars for the training, making the active shooter drill industry worth an estimated $2.7 billion — “all in pursuit of a practice that, to date, is not evidence-based,” according to the researchers.”  Link here.

3. Ross Douthat on how many lives a more competent president would have saved (NYT).

4. Why don’t coaches/managers adjust more?  A parable from the NBA, but with much broader applicability.  Note that sometimes the star player is the problem too.

5. Herd immunity thread.

6. New Chetty et al. paper on macro and the pandemic.

Friday assorted links

1. “The evening’s entertainment harshly criticizes capitalism, and at $2,000 a seat…” (NYT)

2. “Pledges to Notre Dame by rich stir resentment…” (NYT)  More information here.

3. Has TikTok learned how to censor the internet?

4. Me on The Gist, Slate podcast with Mike Pesca.  And Ryan Bourne reviews *Big Business* in the Daily Telegraph.

5. Have we finally figured out how general anesthesia works?

6. Emily Oster on evidence-based parenting and breast-feeding (NYT).

Is Dentistry Safe and Effective?

The FDA may be too conservative, but it does subject new pharmaceuticals to real scientific tests for efficacy. In contrast, many medical and surgical procedures have not been tested in randomized controlled trials. Moreover, dental care is far behind medical care in demanding scientific evidence of efficacy. A long-read in The Atlantic spends far too much time on a single case of egregious dental fraud, but its larger point is correct:

Common dental procedures are not always as safe, effective, or durable as we are meant to believe. As a profession, dentistry has not yet applied the same level of self-scrutiny as medicine, or embraced as sweeping an emphasis on scientific evidence.

…Consider the maxim that everyone should visit the dentist twice a year for cleanings. We hear it so often, and from such a young age, that we’ve internalized it as truth. But this supposed commandment of oral health has no scientific grounding. Scholars have traced its origins to a few potential sources, including a toothpaste advertisement from the 1930s and an illustrated pamphlet from 1849 that follows the travails of a man with a severe toothache. Today, an increasing number of dentists acknowledge that adults with good oral hygiene need to see a dentist only once every 12 to 16 months.

The joke, of course, is that there’s no evidence for the 12-to-16-month rule either. Still, give credit to Ferris Jabr for mentioning that the case for fluoridation is also weak by modern standards–questioning fluoridation has been a taboo in American society since anti-fluoridation activists were branded as far-right conspiracy theorists in the 1950s.

The Cochrane organization, a highly respected arbiter of evidence-based medicine, has conducted systematic reviews of oral-health studies since 1999….most of the Cochrane reviews reach one of two disheartening conclusions: Either the available evidence fails to confirm the purported benefits of a given dental intervention, or there is simply not enough research to say anything substantive one way or another.

Fluoridation of drinking water seems to help reduce tooth decay in children, but there is insufficient evidence that it does the same for adults. Some data suggest that regular flossing, in addition to brushing, mitigates gum disease, but there is only “weak, very unreliable” evidence that it combats plaque. As for common but invasive dental procedures, an increasing number of dentists question the tradition of prophylactic wisdom-teeth removal; often, the safer choice is to monitor unproblematic teeth for any worrying developments. Little medical evidence justifies the substitution of tooth-colored resins for typical metal amalgams to fill cavities. And what limited data we have don’t clearly indicate whether it’s better to repair a root-canaled tooth with a crown or a filling. When Cochrane researchers tried to determine whether faulty metal fillings should be repaired or replaced, they could not find a single study that met their standards.

Is NIH funding seeing diminishing returns?

Scientific output is not a linear function of amounts of federal grant support to individual investigators. As funding per investigator increases beyond a certain point, productivity decreases. This study reports that such diminishing marginal returns also apply for National Institutes of Health (NIH) research project grant funding to institutions. Analyses of data (2006-2015) for a representative cross-section of institutions, whose amounts of funding ranged from $3 million to $440 million per year, revealed robust inverse correlations between funding (per institution, per award, per investigator) and scientific output (publication productivity and citation impact productivity). Interestingly, prestigious institutions had on average 65% higher grant application success rates and 50% larger award sizes, whereas less-prestigious institutions produced 65% more publications and had a 35% higher citation impact per dollar of funding. These findings suggest that implicit biases and social prestige mechanisms (e.g., the Matthew effect) have a powerful impact on where NIH grant dollars go and the net return on taxpayers’ investments. They support evidence-based changes in funding policy geared towards a more equitable, more diverse and more productive distribution of federal support for scientific research. Success rate/productivity metrics developed for this study provide an impartial, empirically based mechanism to do so.

That is by Wayne P. Wahls, via Michelle Dawson.
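The diminishing-returns claim in the abstract can be illustrated with a toy concave production function: if output grows less than proportionally with funding, then publications per dollar must fall as funding per institution rises. The functional form and numbers below are my own illustration, not the paper's model:

```python
# Illustrative sketch (hypothetical numbers) of diminishing marginal
# returns: with a concave power-function relationship between funding
# and output, publications per dollar fall as funding rises.

def publications(funding_millions, scale=10.0, elasticity=0.6):
    # Elasticity < 1 means each additional dollar buys less
    # output than the last.
    return scale * funding_millions ** elasticity

for f in (3, 30, 300):  # $3M to $300M/yr, roughly the study's range
    per_dollar = publications(f) / f
    print(f"${f}M/yr -> {publications(f):.0f} papers, "
          f"{per_dollar:.2f} papers per $M")
```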

DARE to Look at the Evidence!

We must have Drug Abuse Resistance Education…I am proud of your work. It has played a key role in saving thousands of lives and futures.

Speaking at the 30th DARE Training Conference, Attorney General Jeff Sessions was enthusiastic and strongly supportive of DARE, the program started in Los Angeles in 1983 that uses police officers to give young children messages about staying drug free and resisting peer pressure.

And what do our excellent colleagues at GMU’s Center for Evidence-Based Crime Policy say about DARE?

D.A.R.E. is listed under “What doesn’t work?” on our Review of the Research Evidence. 

Rosenbaum summarized the research evidence on D.A.R.E. by titling his 2007 Criminology and Public Policy article “Just say no to D.A.R.E.” As Rosenbaum describes, the program receives over $200 million in annual funding, despite little or no research evidence that D.A.R.E. has been successful in reducing adolescent drug or alcohol use. As Rosenbaum (2007: 815) concludes “In light of consistent evidence of ineffectiveness from multiple studies with high validity, public funding of the core D.A.R.E. program should be eliminated or greatly reduced. These monies should be used to fund drug prevention programs that, based on rigorous evaluations, are shown to be effective in preventing drug use.”

A systematic review by West and O’Neal (2004) examined 11 published studies of D.A.R.E. and reached similar conclusions. D.A.R.E. has little or no impact on drug use, alcohol use, or tobacco use. They concluded that “Given the tremendous expenditures in time and money involved with D.A.R.E., it would appear that continued efforts should focus on other techniques and programs that might produce more substantial effects” (West & O’Neal, 2004: 1028).

Recent reformulations of the D.A.R.E. program have not shown successful results either. For example, the Take Charge of Your Life program, delivered by D.A.R.E. officers, was associated with significant increases in alcohol and cigarette use by program participants compared to a control group (Sloboda et al., 2009).

Algorithm Aversion

People don’t like deferring to what I earlier called an opaque intelligence. In a paper titled Algorithm Aversion the authors write:

Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In five studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

People who defer to the algorithm will outperform those who don’t, at least in the short run. In the long run, however, will reason atrophy when we defer, just as our map-reading skills have atrophied with GPS? Or will more of our limited resource of reason come to be better allocated according to comparative advantage?
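The cost of algorithm aversion can be made concrete with a toy simulation (the error distributions and the "averse" mixing rate below are my own assumptions, not the paper's design): even though both forecasters sometimes miss, a decision-maker who mostly abandons the better algorithmic forecaster after seeing it err ends up with larger average errors.

```python
import random

# Toy simulation of algorithm aversion. The algorithm's forecast errors
# are smaller on average than the human's, yet both sometimes miss.
random.seed(0)

def mean_abs_error(sigma, n=10_000):
    # Average size of a forecaster's error, drawn from N(0, sigma).
    return sum(abs(random.gauss(0, sigma)) for _ in range(n)) / n

algo_err = mean_abs_error(sigma=1.0)   # the better forecaster
human_err = mean_abs_error(sigma=2.0)  # the worse forecaster

# Averse strategy: after losing confidence, use the algorithm only
# 20% of the time (an illustrative number).
averse_err = 0.2 * algo_err + 0.8 * human_err

print(f"always-algorithm error: {algo_err:.2f}")
print(f"algorithm-averse error: {averse_err:.2f}")
```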

Tuesday assorted links

1. “Ekki staðalbúnaður í smalamennsku!” (“Not standard equipment for rounding up sheep!”)  With video, of course, and implying the advantages of water transport.

2. The new “I, Pencil”?

3. Steven Landsburg makes some good points, but Summers may be able to invoke threshold effects.

4. Harvard faculty actually seem to hate the best parts of Obamacare.  Bravo to this article.  And quick summaries of evidence-based medicine.

5. “I’m the poster child of evil [art] speculation…”  An excellent piece, also NYT.

6. How big is the sexism problem in economics?  Kimball and anon.

7. Sorkin covers the Lucian Bebchuk fracas.

Sarah Constantin replies on MetaMed

Not long ago I linked to this Robin Hanson blog post on MetaMed.  I was sent this reply, which I will put under the fold:

I noticed you linked Robin Hanson’s article on MetaMed on Marginal Revolution.  I’m the VP of research at MetaMed, and I just wanted to tell you a little bit more about us, because if all you know about us is the Overcoming Bias article you might get some misleading impressions.

Medical practice is basically a mass-produced product. Professional and regulatory bodies (like the AMA) put out guidelines for treatment.  At their best, these guidelines follow the standards of evidence-based medicine, which means that on average they will produce the best health outcomes in the general population.  (Of course, in practice they often fall short of that standard.  For example, checklists are overwhelmingly beneficial by an evidence-based medicine standard, and yet are not universally used.)

But even at their best, the guidelines that are best from a population-health standpoint need not be optimal for an individual patient.  If you have the interest and the willingness to pay, investigating your condition in depth, in the context of your entire medical history, genetic data, and personal priorities, may well turn up opportunities to do better than the standardized medical guidelines which at best maximize average health outcomes.

That’s basically MetaMed’s raison d’être.  And it’s a pretty conservative hypothesis, in fact.  We may harbor a few grander ambitions (for example, I come from a mathematical background and I’m working on some longer-term projects related to algorithmically automating parts of the diagnostic process, and using machine learning principles on biochemical networks in novel ways) but fundamentally the thing we claim to be able to do is give you finer-grained information than your doctor will.  We’re, of course, as yet unproven in the sense that we haven’t had enough clients to provide empirical evidence of how we improve health outcomes, but we’re not making extraordinary claims.

Robin Hanson seems to be implying that MetaMed is claiming to be useful only because we’re members of the “rationalist community.”  This isn’t true.  We think we’re useful because we give our clients personalized attention, because we’re more statistically literate than most doctors, because we don’t have some of the misaligned incentives that the medical profession does (e.g. we don’t have an incentive to talk up the benefits of procedures/drugs that are reimbursable by insurance), because we have a variety of experts and specialists on our team, etc.

The “rationalist” sensibility is important, to some degree, because, for instance, we’re willing to tell clients that incomplete evidence is evidence in the Bayesian sense, whereas the evidence-based medicine paradigm says that anything that hasn’t yet been tested in clinical trials and found significant at the 5% level is completely unknown. For instance, we’re willing to count reasoning from chemical mechanisms as (weak) evidence. There’s a difference in philosophy between “minimize risk of saying a falsehood” and “be as close to accurate as possible”; we strive to do the latter.  So there’s a sense in which our epistemic culture allows us to be more flexible and pragmatic.  But we certainly aren’t basing our business model on a blanket claim of being better than the establishment just because we come from the rationalist community.
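The Bayesian point in that letter is easy to make quantitative: weak evidence is a likelihood ratio only modestly above 1, and it still shifts the odds. A minimal sketch, with hypothetical numbers of my choosing:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# Weak evidence (LR only modestly above 1) still moves the probability.

def update_odds(prior_prob, likelihood_ratio):
    """Return the posterior probability after one piece of evidence."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical: prior 30% that a treatment helps; a plausible chemical
# mechanism counts as weak evidence, say LR = 1.5.
posterior = update_odds(0.30, 1.5)
print(f"posterior: {posterior:.3f}")  # modestly above the 0.30 prior
```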
