#AppliedEconomics #CredibilityRevolution #EconomicResearch #EmpiricalEconomics #NobelPrize
If you follow economics closely, you may have come across the term “credibility revolution” in applied economics. This movement, spearheaded by figures like Angrist, Krueger, Imbens, and Duflo, has long been held up as the gold standard for empirical research in economics. 📈 Recently, however, rumblings in the field have raised questions about whether its prestige can last.
🔍 The Economist recently raised eyebrows with an article highlighting concerns about the credibility revolution’s impact on the field. The article notes that subsequent papers have been overturning earlier results, even those popularized by iconic works like Freakonomics. That claim has cast doubt on the reliability of published research and led economists to adopt a more critical stance. 🤔
Unveiling the Truth Behind the Credibility Revolution
These claims deserve a closer look. The suggestion that the credibility revolution may have “eaten its own children” poses a serious challenge to the integrity of applied economics research. How can the field address these concerns and navigate the evolving landscape of empirical economics?
Seeking Solutions for a Resilient Future
To tackle the credibility crisis in applied economics, it’s essential to embrace practical solutions that uphold the core principles of rigorous research and data-driven analysis. Here are some actionable steps to consider:
1. Embrace Transparency and Reproducibility: Prioritize transparency in research methodologies and data sources to enhance the replicability of findings.
2. Foster Collaboration and Peer Review: Engage in constructive dialogue with peers and subject experts to strengthen the validity of research outcomes.
3. Invest in Robust Methodologies: Uphold the highest standards in research design and statistical analysis to ensure the credibility of research findings.
4. Stay Updated on Emerging Trends: Continuously monitor developments in the field of economics to adapt to changing dynamics and trends.
By adopting these proactive measures, researchers and practitioners in applied economics can navigate the challenges posed by the credibility revolution and uphold the integrity of empirical research. 🌟
In conclusion, the discourse surrounding the credibility revolution in applied economics serves as a poignant reminder of the ever-evolving nature of the field. By remaining vigilant, embracing innovation, and upholding the highest standards of research practice, we can reaffirm the prestige of empirical economics and pave the way for transformative discoveries. 💡
Let’s embark on this journey together and shape a future where credibility and excellence define the landscape of applied economics. 💪🌍 #EconomicExcellence #ResearchIntegrity
The Economist article is paywalled and I can’t see the Federal Reserve article, but I think it’s misunderstanding the credibility revolution. The credibility revolution isn’t really related to Levitt’s papers; it emerged as a response to a feeling that empirical methods in economics weren’t very good.
This was around the 1970s and 1980s; Ed Leamer’s 1983 “Taking the Con Out of Econometrics” and LaLonde’s 1986 paper “Evaluating the Econometric Evaluations of Training Programs with Experimental Data” are the two canonical citations. The TL;DR is that labor economics (and economics more broadly) had trouble replicating results from randomized controlled trials in observational data (the LaLonde paper), and the results economists did have were usually very brittle and not very credible (Leamer’s paper).
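Leamer’s “fragility” point can be illustrated with a small simulation. This is a toy model of my own, not data from either paper: the same regression reports a positive or a negative slope depending purely on whether a confounder is included in the specification.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# A confounder w pushes both x and y up, while the direct effect of
# x on y is negative. Which sign the regression reports depends
# entirely on the specification the researcher happens to choose.
w = rng.normal(size=n)
x = w + rng.normal(size=n)
y = -1.0 * x + 3.0 * w + rng.normal(size=n)

# Specification 1: y on x alone. Slope comes out positive
# (the confounder dominates).
b_no_control = np.polyfit(x, y, 1)[0]

# Specification 2: y on x controlling for w. Slope comes out
# negative (the true direct effect in this simulation).
X = np.column_stack([np.ones(n), x, w])
b_with_control = np.linalg.lstsq(X, y, rcond=None)[0][1]

print(b_no_control, b_with_control)  # opposite signs
```

This is the brittleness Leamer complained about: with observational data and no principled way to choose among specifications, the headline result can flip sign with one extra control variable.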
Enter David Card, Joshua Angrist and Guido Imbens (and others), who pioneered a lot of econometric techniques that made economists much more confident that they were correctly identifying causal effects. Card and Krueger’s 1994 paper on New Jersey’s minimum-wage increase is often considered the genesis of this movement. Note here that Steven Levitt isn’t really a major figure.
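The Card–Krueger design was a difference-in-differences comparison. Here is a sketch in that spirit with entirely simulated numbers (not their data): employment in a “treated” state that raised its minimum wage versus a neighboring “control” state that did not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400              # stores surveyed per state per wave (made up)
trend = 1.5          # common shock hitting both states between waves
true_effect = -0.5   # policy effect assumed in this simulation

control_before = 20 + rng.normal(size=n)
control_after  = 20 + trend + rng.normal(size=n)
treated_before = 18 + rng.normal(size=n)
treated_after  = 18 + trend + true_effect + rng.normal(size=n)

# A naive before/after comparison in the treated state alone mixes
# the policy effect with the common trend.
naive = treated_after.mean() - treated_before.mean()

# Difference-in-differences subtracts the control state's change,
# netting out the common trend and isolating the policy effect.
did = (treated_after.mean() - treated_before.mean()) - (
      control_after.mean() - control_before.mean())

print(naive, did)  # naive is positive; DiD recovers a negative effect
```

The identifying assumption, of course, is that the two states would have followed parallel trends absent the policy; the credibility-revolution literature is largely about when such assumptions are defensible.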
Where Levitt gets credit, in my opinion, is for bringing these techniques to a mainstream audience and for showing that they could be applied in settings not traditionally considered economic, Freakonomics being the big example. Unfortunately, I don’t think a lot of the results in the book have held up super well (the crime-and-police-presence paper and the crime-and-abortion paper being two examples). If you want a fuller critique of Levitt and Freakonomics (and of contrarian thinking more broadly), see the linked Andrew Gelman essay.
I think it’s true that economists are more skeptical of causal identification than we were 10, 20, 30 years ago, but I also think this is normal and good for a relatively young field. We got some tools in the 1990s that helped fix a lot of longstanding problems; then we realized some of those tools have flaws and limitations, updated them, and now we have research we think is more credible. I fully expect this cycle to continue, which of course means some results we have today will end up being wrong. That’s par for the course and IMO healthy for research.
– https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3
– https://www.americanscientist.org/article/freakonomics-what-went-wrong
The article, for those who can’t access it:
—
“Economics is a study of mankind in the ordinary business of life.” So starts Alfred Marshall’s “Principles of Economics”, a 19th-century textbook that helped create the common language economists still use today. Marshall’s contention that economics studies the “ordinary” was not a dig, but a statement of intent. The discipline was to take seriously some of the most urgent questions in human life. How do I pay my bills? What do I do for a living? What happens if I get sick? Will I ever be able to retire?
In 2003 the New York Times published a profile of Steven Levitt, an economist at the University of Chicago, in which he expressed a very different perspective: “In Levitt’s view,” the article read, “economics is a science with excellent tools for gaining answers but a serious shortage of interesting questions.” Mr Levitt and the article’s author, Stephen Dubner, would go on to write “Freakonomics” together. In their book there was little about the ordinary business of life. Through vignettes featuring cheating sumo wrestlers, minimum-wage-earning crack dealers and the Ku Klux Klan, a white-supremacist organisation, the authors explored how people respond to incentives and how the use of novel data can uncover what is really driving their behaviour.
Freakonomics was a hit. It ranked just below Harry Potter in the bestseller lists. Much like Marvel comics, it spawned an expanded universe: New York Times columns, podcasts and sequels, as well as imitators and critics, determined to tear down its arguments. It was at the apex of a wave of books that promised a quirky—yet rigorous—analysis of things that the conventional wisdom had missed. On March 7th Mr Levitt, who for many people became the image of an economist, announced his retirement from academia. “It’s the wrong place for me to be,” he said.
During his academic career, Mr Levitt wrote papers in applied microeconomics. He was, in his own self-effacing words, “a footnote to the ‘credibility revolution’”. This refers to the use of statistical tricks, such as instrumental-variable analysis, natural experiments and regression discontinuity, which are designed to tease out causal relationships from data. He popularised the techniques of economists including David Card, Guido Imbens and Joshua Angrist, who together won the economics Nobel prize in 2021. The idea was to exploit quirks in the data to simulate the randomness that actual scientists find in controlled experiments. Arbitrary start dates for school terms could, for instance, be employed to estimate the effect of an extra year of education on wages.
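The logic of one of those techniques, instrumental variables, can be sketched with simulated data. This is a toy model of the schooling-and-wages example, with made-up parameters, not any published study: unobserved “ability” confounds a naive regression, while an instrument that shifts schooling but is unrelated to ability recovers the causal effect via two-stage least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unobserved ability raises both schooling and wages, so naive OLS
# of wages on schooling is biased upward.
ability = rng.normal(size=n)

# Instrument: an arbitrary quirk (think term start dates) that
# shifts schooling but is independent of ability.
z = rng.normal(size=n)
schooling = 12 + 0.5 * z + 0.8 * ability + rng.normal(size=n)

true_effect = 0.10  # assumed: each extra year raises log wages by 10%
log_wage = 1.0 + true_effect * schooling + 0.5 * ability \
           + rng.normal(scale=0.1, size=n)

def ols_slope(x, y):
    """Slope coefficient from a regression of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Naive OLS: biased upward because ability is omitted.
naive = ols_slope(schooling, log_wage)

# Two-stage least squares: first stage predicts schooling from the
# instrument; second stage regresses wages on that prediction.
Z = np.column_stack([np.ones(n), z])
schooling_hat = Z @ np.linalg.lstsq(Z, schooling, rcond=None)[0]
iv = ols_slope(schooling_hat, log_wage)

print(f"naive OLS: {naive:.3f}")  # well above the true 0.10
print(f"IV (2SLS): {iv:.3f}")     # close to the true 0.10
```

The quality of the answer hinges on the instrument being genuinely unrelated to the confounder; much of the later skepticism the article describes is about how rarely real instruments satisfy that condition.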
Where the Freakonomics approach differed was to apply these techniques to “the hidden side of everything”, as the book’s tagline put it. Mr Levitt’s work focused on crime, education and racial discrimination. The book’s most controversial chapter argued that America’s nationwide legalisation of abortion in 1973 had led to a fall in crime in the 1990s, because more unwanted babies were aborted before they could grow into delinquent teenagers. It was a classic of the clever-dick genre: an unflinching social scientist using data to come to a counterintuitive conclusion, and not shying away from offence. It was, however, wrong. Later researchers found a coding error and pointed out that Mr Levitt had used the total number of arrests, which depends on the size of a population, and not the arrest rate, which does not. Others pointed out that the fall in homicide started among women. No-fault divorce, rather than legalised abortion, may have played a bigger role.
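The count-versus-rate distinction those critics pointed to is simple arithmetic. With invented numbers (not the actual crime data):

```python
# Two survey years for a hypothetical city: the raw arrest count can
# rise even while the arrest *rate* falls, if the population grows.
year1 = {"arrests": 1_000, "population": 100_000}
year2 = {"arrests": 1_100, "population": 125_000}

rate1 = year1["arrests"] / year1["population"] * 100_000  # per 100k people
rate2 = year2["arrests"] / year2["population"] * 100_000

assert year2["arrests"] > year1["arrests"]  # raw count went up...
assert rate2 < rate1                        # ...but the rate went down
```

Using totals where rates are called for conflates a change in behaviour with a change in population size, which is why the choice of denominator mattered for the abortion-and-crime result.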
Other economists, including James Heckman, Mr Levitt’s colleague in Chicago and another Nobel prizewinner, worried about trivialisation. “Cute”, was how he described the approach in one interview. Take a paper on discrimination in “The Weakest Link”, a game show in which contestants vote to remove other contestants depending on whether they think they are costing them money by getting questions wrong (in the early portion of the game) or are competition for the prize pool by getting them right (later on). That provided a setting in which Mr Levitt could look at how observations of others’ competence interacted with racism and sexism. A cunning design—but perhaps of limited relevance in understanding broader economic outcomes.
At the heart of Mr Heckman’s critique was the idea that practitioners of such studies were focusing on “internal validity” (ensuring estimates of the effect of some change were correctly estimated) over “external validity” (whether the estimates would apply more generally). Mr Heckman instead thought that economists should create structural models of decision-making and use data to estimate the parameters that explained behaviour within them. The debate turned toxic. According to Mr Levitt, Mr Heckman went so far as to assign graduate students the task of tearing apart the Freakonomics author’s work for their final exam.
Neither man won. The credibility revolution ate its own children: subsequent papers often overturned results, even if, as in the case of those popularised by Freakonomics, they had an afterlife as cocktail-party anecdotes. The problem has spread to the rest of the profession, too. A recent study by economists at the Federal Reserve found that less than half of the published papers they examined could be replicated, even when given help from the original authors. Mr Levitt’s counterintuitive results have fallen out of fashion and economists in general have become more sceptical.
Yet Mr Heckman’s favoured approaches have problems of their own. Structural models require assumptions that can be as implausible as any quirky quasi-experiment. Sadly, much contemporary research uses vast amounts of data and the techniques of the “credibility revolution” to come to obvious conclusions. The centuries-old questions of economics are as interesting as they always were. The tools to investigate them remain a work in progress.