Essentialism and Traditionalism in Academic Research

Ryan Kyger1 and Blair Fix

Philosophy of science is about as useful to scientists as ornithology is to birds.

— attributed to Richard Feynman2

Most scientists don’t worry much about philosophy; they just get on with doing ‘science’. They run experiments, analyze data, and report results. And by so doing, they fall repeatedly into known philosophical pitfalls.

This essay is about two such pitfalls: essentialism and traditionalism.

‘Essentialism’ is the view that behind real-world objects lie ‘essences’ — a type of eternal category that you cannot observe directly but is nonetheless there. Racial categories are a common type of ‘essence’. To be racist is to attribute to different groups universal qualities that define them as people.3

Given the long history of racism, it’s clear that humans need little impetus to impose categories onto the world. Still, our instinct to categorize is not always bad. In fact, it’s a key part of science. Looking for patterns is how Dmitri Mendeleev created the periodic table. It’s how John Snow discovered that cholera was water-borne. And it’s how Johannes Kepler discovered the laws of planetary motion.

So if categorizing patterns can be helpful, what makes essentialism bad? To be essentialist, in our view, is to reify a category (or theory) into a ‘higher truth’. By so doing, you don’t use evidence to inform a theory. You use theory to interpret evidence … and you don’t consider that you could be wrong.

Like most human activities, ‘essentialism’ is a social affair. Yes, you can do it by yourself, but you’ll be called a ‘crank’. To be essentialist with prestige, you must be part of a tradition. You must think x because your teacher thought x. And because your teacher was prestigious, so are you. When we combine essentialism with traditionalism, we get a powerful recipe for killing science. We interpret the world through our preferred lens, and then reward ourselves for doing it.

In this essay, we’ll look at examples of essentialism and traditionalism in economics, biology, and statistics. But we’ll start with some Greek philosophy.

Plato’s spell

Plato of Athens was perhaps the first scholar to combine essentialism and traditionalism. Unsurprisingly, his goal was political.

Plato was born during the Athenian experiment with democracy — a period marked by turmoil, war, and famine. Perhaps because of this instability (and also because his family claimed royal blood), Plato disliked democracy. Instead, he was staunchly in favor of traditional hierarchical rule:

The greatest principle of all is that nobody … should be without a leader. … [Man] should teach his soul, by long habit, never to dream of acting independently, and to become utterly incapable of it.

— Plato of Athens, quoted by Karl Popper [3]

Given his reactionary politics, Plato was disturbed by the events that surrounded him. During his life, the aristocratic order was in flux. And so Plato searched for something constant on which to cling. He found this constant in his philosophy. Social change, Plato would theorize, was like entropy. Change was never progressive, but was instead an incessant force for corruption, decay and degeneration. Or at least, that’s Karl Popper’s reading of Plato.

In his book The Open Society and its Enemies, Popper lambastes Plato for both his politics and his philosophy [3]. As Popper sees it, Plato resolved his angst about social change by imposing onto the messy real world a higher plane of eternal truth. Plato called this higher plane the ‘Forms’ — a hidden realm in which real-world objects have a perfect and unchanging representation. Behind every real-world triangle, for instance, is the perfect ‘form’ of a triangle.

When applied to mathematics, this idea seems reasonable. An ideal triangle, we all know, is a three-sided shape whose internal angles sum to 180°. Although no real-world triangle has this property exactly, we can imagine one that does. This, Plato would say, is the ‘essence’ of a triangle.

Actually, it’s the definition. You see, in mathematics, we start with a definition and explore the consequences. An example: Euclid postulated that parallel lines never intersect. From this (and other definitions), he derived the rules of Euclidean geometry. From a definition came consequences. If you think like Plato, you’d say that Euclid discovered a higher plane of truth. But that’s a scientific fallacy. The problem is that there’s no guarantee that definitions (and their consequences) have anything to do with the real world.

Case in point: on the curved surface of the Earth, Euclidean geometry is flat out wrong (pun intended). On Earth, parallel lines can intersect (lines of longitude), which means that Euclid’s ‘higher plane of truth’ is invalid. Draw a triangle big enough and you’ll find that the angles sum to more than 180°.
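The failure is easy to check numerically. Consider the triangle formed by the North Pole and two points on the equator separated by 90° of longitude. A short sketch (using only the Python standard library; the vertex coordinates are our own illustrative choice) computes its angles from tangent directions along the great-circle arcs:

```python
import math

def angle_at(v, u, w):
    """Angle at vertex v of a spherical triangle with other vertices u, w.
    Each vertex is a unit vector; the angle is measured between the tangent
    directions of the great-circle arcs v->u and v->w."""
    def tangent(a, b):
        # Component of b orthogonal to a, normalized: the direction of
        # the great-circle arc leaving a toward b.
        dot = sum(a[j] * b[j] for j in range(3))
        d = [b[i] - dot * a[i] for i in range(3)]
        n = math.sqrt(sum(x * x for x in d))
        return [x / n for x in d]
    t1, t2 = tangent(v, u), tangent(v, w)
    return math.acos(sum(t1[i] * t2[i] for i in range(3)))

# North Pole, plus two equator points 90 degrees of longitude apart
p = (0.0, 0.0, 1.0)
a = (1.0, 0.0, 0.0)
b = (0.0, 1.0, 0.0)

total = math.degrees(angle_at(p, a, b) + angle_at(a, p, b) + angle_at(b, p, a))
print(round(total, 6))  # 270.0, not the Euclidean 180
```

Each angle of this particular triangle is a right angle, so the sum is 270°: Euclid’s ‘higher plane of truth’ simply does not hold on a sphere.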

Back to Plato. Karl Popper was unimpressed by Plato’s essentialism because he realized that it was a recipe for pseudoscience. According to Plato, ‘essences’ were accessible only through ‘intuition’. Popper ridiculed this idea, but was not the first to do so. Plato’s spell was broken during the Enlightenment when thinkers like John Locke, David Hume, and Immanuel Kant highlighted the importance of empirical knowledge [4–6]. To do science, they argued, you cannot simply impose ideas and definitions onto the world. Instead, you must use real-world observations to hold ideas in check.4

Of course, like essences, scientific models are idealizations of the real world. The key difference, though, is the attitude surrounding the idea. When science is done well, hypotheses are treated as provisional and incomplete. When new evidence comes along, a good scientist must remain open to revising or discarding their model. With Plato’s essences, it is the reverse. The essence is eternally true … the unquestionable insight of a great mind. Evidence is to be interpreted in light of the insight, never the reverse.

‘Essentialism’, as we see it, is the reification of a theory — a transformation from ‘provisional explanation’ to ‘timeless truth’. The change rarely happens overnight, but is instead reinforced by repetition. Over time, scientists tend to fall in love with their theories, buttressing them from contradictory evidence. Eventually a pet theory becomes a school of thought, passed down from teacher to student. If the tradition becomes ubiquitous, the theory becomes a ‘timeless truth’. Or so it appears to those under the spell.

In this essay, we look at three academic disciplines that are (at least in part) under the spell of essentialism and traditionalism. Obsessed with the idealized free market, the discipline of economics is the worst offender. But it is not the only one. Population biologists often interpret the world through the lens of an equilibrium model that bears little resemblance to reality. And in their haste for ‘rigor’, statisticians have enshrined subjective beliefs as received wisdom.

What unifies these practices, we argue, is the merger of essentialism and traditionalism. It is a potent combination for creating and perpetuating an ideology.

Essentialism and traditionalism in economics

Mainstream economics makes so many false claims that we could write a book debunking them. (Steve Keen did just that [7].) Although economics presents itself as a hard science, under the hood it is essentialist dogma, held in place by tradition.

Take, as an example, economists’ appeal to ‘naturalness’. According to neoclassical economics, there is a ‘natural’ rate of unemployment, and there are ‘natural’ monopolies. Even the distribution of income is supposedly ‘controlled by a natural law’ [8].

Now there is nothing wrong with appealing to natural law. Indeed, scientists do it all the time. But outside economics, the term has an unambiguous meaning: a ‘natural law’ is an empirical regularity with no known exception. The laws of thermodynamics are a prime example. Left alone, objects ‘naturally’ converge to thermodynamic equilibrium. Leave a hot coffee on the table and it will soon cool to room temperature. The outcome is the same today as it was yesterday. It is the same for you as it is for me. It is a ‘natural law’.

By documenting and explaining this empirical regularity, we are following the recipe laid out by Locke, Hume, and Kant. Observe the real world and try to explain consistent patterns. When economists appeal to ‘natural law’, however, they are doing something different. Take the so-called ‘natural’ rate of unemployment. If this rate were like the laws of thermodynamics, unemployment would gravitate towards a single value. Try as you might, it would be impossible to change unemployment from this ‘natural’ rate.

Needless to say, unemployment does not work this way. Instead, it fluctuates greatly, both in the short term, and over the long term. So when economists refer to the ‘natural’ rate of unemployment, they don’t mean an empirical regularity. They mean an essence. As Milton Friedman defined it, the ‘natural’ rate of unemployment is that which is “consistent with equilibrium in the structure of real wages” [9]. So whenever (and wherever) the labor market is in ‘equilibrium’, unemployment is at its ‘natural’ rate.

So how do you tell when the market is in ‘equilibrium’? Good question … nobody knows. That’s because market ‘equilibrium’ is not something economists observe. It is something economists imagine and then project onto the world. It is an essence.

To convince yourself that this is true, pick any economics textbook and search for the part where the authors measure market ‘equilibrium’. Find the section where they construct the ‘laws’ of supply and demand from empirical observations. Look for where they measure demand curves, supply curves, marginal utility curves, and marginal cost curves. Seriously, look for these measurements. You will not find them.5

You won’t find them because they are unobservable. These concepts are essences. The equilibrium-seeking free market is an idea that economists project onto the world, and then use to interpret events. Anything that fits the vision is ‘proof’ of the essence. Anything that seems contradictory is dismissed as a ‘distortion’.

And that brings us to economics education. The core content in Econ 101 has changed little over the last half century (if not longer). And that’s not because the ‘knowledge’ is secure. It’s because the content of Econ 101 is a tradition. The point of Econ 101 is to indoctrinate the next generation in the ‘essence’ of economics. This powerful combination of essentialism and traditionalism has made economics a “highly paid pseudoscience” [10].

Essentialism and traditionalism in population biology

Compared to economics, the foundations of evolutionary biology are on sound footing. Still, we feel that elements of biology appeal to essentialism. We’ll use population biology as an example.

Population biologists study gene frequency (or more properly, genotype frequency) within a group of organisms. Among humans, for instance, most people have brown(ish) eyes, while about 10% of people have blue eyes. The goal of population biology is to explain this proportion and to understand how and why it changes with time.

The foundational hypothesis in evolutionary biology is that organisms evolve. Curiously, then, many population biologists use a model of genotype frequency that excludes evolution. The model, known as the Hardy-Weinberg principle, outlines the conditions where the genotype frequency of a population will stay in equilibrium, thus the population is said to be in ‘genetic equilibrium’. To satisfy the model, a population must reproduce sexually, be infinitely large, mate randomly, produce the same number of offspring per parent, not mutate, not migrate, and not be subject to natural selection [11,12]. In other words, the modeled population must look nothing like what we find in the real world.6
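The equilibrium itself is simple to state: if a recessive allele has frequency q (and the dominant allele p = 1 − q), the genotype frequencies settle at p², 2pq, and q², and stay there under the model’s assumptions. A minimal sketch (the numbers are illustrative, anchored to the roughly 10% blue-eye figure above):

```python
import math

# If q^2 = 0.10 (the share of homozygous-recessive, blue-eyed people),
# then the recessive allele frequency is q = sqrt(0.10).
q = math.sqrt(0.10)   # recessive allele frequency, ~0.316
p = 1.0 - q           # dominant allele frequency

# Hardy-Weinberg genotype frequencies: p^2 + 2pq + q^2 = 1
aa, ab, bb = p * p, 2 * p * q, q * q
assert abs(aa + ab + bb - 1.0) < 1e-12

# Next generation's recessive allele frequency under random mating:
# homozygotes contribute two copies, heterozygotes one.
q_next = bb + 0.5 * ab
print(round(q, 3), round(q_next, 3))  # identical: nothing evolves
```

The algebra behind the last line is just q_next = q² + pq = q(q + p) = q, which is why, under the model, the frequencies never change.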

Now, this unreality is not necessarily a problem if we are clear that we are doing a mathematical thought experiment. The problem, though, is that population biologists often use the Hardy-Weinberg model to interpret reality. They’ll run statistical tests to see if the Hardy-Weinberg model provides a ‘good fit’ to empirical data. If it does, they conclude that the population is in genetic equilibrium.

Can you spot the problem? Like the equilibrium-seeking market, the Hardy-Weinberg principle is an ‘essence’. You cannot directly observe that a population is in genetic equilibrium any more than you can observe that a market is in equilibrium. And like the neoclassical model of the market, the assumptions behind the Hardy-Weinberg principle are systematically violated in the real world. So on the face of it, the model ought never be used.

Here, though, population biologists take a cue from economist Milton Friedman. In his essay ‘The Methodology of Positive Economics’, Friedman argues that you cannot test a theory by comparing its assumptions to reality [13]. Instead, you must judge the assumptions by the predictions they give. If the predictions are sound, says Friedman, so too are the assumptions. (For why this is a bad idea, see George Blackford’s essay ‘On the Pseudo-Scientific Nature of Friedman’s as if Methodology’ [14].)

Here’s how population biologists apply the Friedman trick. They take a model whose assumptions are known to be false in the real world and subject it to a statistical test. The test yields one of two possible results. If the test statistic falls below a specified threshold, the population is declared to be in ‘genetic equilibrium’. If it falls above the threshold, the population is declared to be in ‘genetic disequilibrium’. Either outcome is treated as informative. But this treatment ignores the model’s false assumptions, and so the model itself can never be rejected.
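To make the procedure concrete, here is the sort of goodness-of-fit test that gets run. (The chi-square test against Hardy-Weinberg expectations is standard practice; the genotype counts below are our own made-up illustration.)

```python
import math

# Hypothetical genotype counts for 1000 individuals at a two-allele locus
obs = {"AA": 298, "Aa": 489, "aa": 213}
n = sum(obs.values())

# Allele frequency of A, estimated from the sample itself
p = (2 * obs["AA"] + obs["Aa"]) / (2 * n)
q = 1.0 - p

# Expected counts under Hardy-Weinberg proportions
exp = {"AA": p * p * n, "Aa": 2 * p * q * n, "aa": q * q * n}

chi2 = sum((obs[g] - exp[g]) ** 2 / exp[g] for g in obs)

# One degree of freedom (three classes, minus one, minus one estimated
# parameter). For df = 1, the chi-square upper tail is erfc(sqrt(x / 2)).
p_value = math.erfc(math.sqrt(chi2 / 2))
print(round(chi2, 3), round(p_value, 3))
# A large p-value here is what licenses the verdict 'in genetic
# equilibrium', even though the model's assumptions are known to be false.
```

Note that nothing in this procedure ever puts the model’s assumptions themselves on trial; only the fit of the frequencies is tested.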

To reduce this reasoning to its most absurd, imagine a hypothesis that claims: ‘if average human height is greater than 0, height is in disequilibrium’. Now, suppose we employ a ‘special procedure’ to test this hypothesis. The procedure yields two possible results:

  1. Null result: the average human height is equal to 0. Conclusion: height is in equilibrium.
  2. Alternative result: the average human height is greater than 0. Conclusion: height is in disequilibrium.

With either result, we ‘demonstrate’ something about human height. Or rather, by applying a model that we know to be false, we fool ourselves into thinking so.

Like economists’ faith in the equilibrium-seeking free market, many population biologists have come to treat the Hardy-Weinberg principle as an essential truth. This belief then gets passed from teacher to student as a matter of ‘tradition’. Why use the model? Because your prestigious teacher did.

To be fair to population biologists, the appeal to tradition has never been as great as in economics. And today, a growing number of biologists use non-equilibrium models that do not depend on any of the Hardy-Weinberg assumptions [15]. That said, the enduring appeal of the Hardy-Weinberg model speaks to the ideological potency of essentialism and traditionalism.

Essentialism and traditionalism in statistics

There is an old saying that mathematics brought rigor to economics, but also mortis.7 We might say the same thing of statistics.

Now, on the face of it, this accusation seems unfair, since statistics is mathematics. But the fact is that statistics was developed as a mathematical tool for scientists — a tool to help judge a hypothesis. Above all else, scientists want to know if their hypothesis is ‘correct’. The problem, though, is that making this judgment is inherently subjective. The evidence for (or against) a hypothesis is always contingent and incomplete. And so scientists must make a judgment call.

The purpose of statistics is to put numbers to this judgment call by quantifying uncertainty. It’s a useful exercise, but not one that removes subjectivity. Suppose that I find there is a 90% probability that a hypothesis is correct. I still need to make a judgment call about how to proceed. In other words, statistics can aid decisions, but cannot make them for us.

Unfortunately, in many corners of science, statistical tools have become reified as the thing they were never designed to be: a decision-making algorithm. Scientists apply the tools of (standard) statistics as though they were an essential truth, a ritualistic algorithm for judging a hypothesis.

Let’s have a look at the ritual. Suppose we have a coin, and we want to know if it is ‘balanced’, meaning the ‘innate’ probability of heads or tails is 50:50. We’ve put scare quotes around the word ‘innate’ here because it is an ‘essence’. You cannot observe the probability of heads or tails for a single toss. It is a mathematical abstraction that is forever beyond our grasp. In the real world, all that we can see is the long-run behavior of the coin.

In standard (‘frequentist’) statistics, we assume that the long-run frequency of the coin gives insight into its ‘innate’ probability of heads or tails. Here’s how the algorithm works. First, assume that the coin is ‘balanced’. Second, calculate the probability that a balanced coin would give the observed frequency (or greater) of heads. Third, choose a ‘critical value’ below which you reject your hypothesis that the coin is balanced.

If you’ve ever taken an introductory stats course, you know this algorithm by heart. Now here’s the problem. The algorithm seems to remove the subjective element from evaluating a hypothesis. And yet it does not. Notice that after we compute our statistic (the p-value), we are still left with uncertainty. Suppose, for instance, that we tossed a coin several thousand times and got slightly more heads than tails. Suppose that the probability of a balanced coin giving this proportion of heads (or greater) is about 1 in 100. Given this information, we must still decide between two scenarios:

  1. The coin is balanced, but we have witnessed an improbable outcome.
  2. The coin is unbalanced, and we have witnessed a probable outcome.
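For concreteness, here is the tail probability behind a scenario like this one (the counts are hypothetical; we use exact binomial arithmetic rather than a normal approximation):

```python
import math

# Hypothetical counts: 1052 heads in 2000 tosses
n, heads = 2000, 1052

# One-sided p-value: the probability that a balanced coin gives
# `heads` or more heads in n tosses. Keeping the arithmetic in exact
# integers avoids the underflow of 0.5**2000.
tail = sum(math.comb(n, k) for k in range(heads, n + 1))
p_value = tail / 2 ** n
print(round(p_value, 3))  # roughly 0.01, i.e. about 1 in 100
```

The number itself is unambiguous; what to conclude from it is not.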

The choice is a subjective one. To be fair, most statisticians will admit as much, noting that the choice of a ‘critical threshold’ (below which we reject the null hypothesis) is arbitrary. They will also admit that in many real-world scenarios, the assumptions behind the calculation of p-values are violated, rendering the values meaningless.

Unfortunately, when null-hypothesis tests get applied by scientists (especially social scientists) these problems are forgotten. Instead, scientists appeal to tradition. Never mind that the threshold for ‘statistical significance’ is arbitrary. The traditional value is 5%. If your p-value is less than this magic value, your results are ‘significant’.

With this tradition in hand, we get a seemingly objective procedure for discovering the ‘truth’. But in reality, it is an essence — a series of “ad hoc algorithms that maintain the facade of scientific objectivity” [17]. These algorithms have had a devastating effect on science, since they basically give a recipe for how to get a ‘significant’ result:

  1. Play with your data until you get a p-value less than 5%.
  2. Publish.
  3. Get cited.
  4. Get tenure.
  5. (Never check if your results are valid.)

Fortunately, there is a growing movement to reform hypothesis testing. One option is to pre-register experiments to remove researchers’ ability to game statistics. Another option is to lower the ‘traditional’ level of statistical significance.

While we welcome both changes, we note that they do not solve the fundamental problem, which is that judging a hypothesis is always subjective. For that reason, we favor a transition to Bayesian statistics. We won’t dive into the details here, but in short, Bayesian statistics is up front about the subjective element of judging a hypothesis. In fact, when you use the Bayesian method, this subjectivity gets baked into the calculations (in what Bayesians call a ‘prior probability’).
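Here is a small sketch of what ‘baking in’ the subjectivity looks like for the coin example. (The beta-binomial update is the standard conjugate calculation; the specific priors and data are our own illustration.)

```python
# Beta-binomial updating: a Beta(a, b) prior over the coin's heads
# probability, combined with observed tosses, yields a Beta(a + h, b + t)
# posterior whose mean is (a + h) / (a + b + h + t).
def posterior_mean(a, b, heads, tails):
    return (a + heads) / (a + b + heads + tails)

heads, tails = 60, 40  # hypothetical data: 60 heads in 100 tosses

flat = posterior_mean(1, 1, heads, tails)       # 'know-nothing' prior
skeptic = posterior_mean(50, 50, heads, tails)  # strong prior belief in balance

print(round(flat, 3), round(skeptic, 3))  # 0.598 vs 0.55
# Same data, different priors, different conclusions: the subjective
# judgment is explicit in the calculation instead of hidden in a ritual.
```

Whether this explicitness is a feature or a bug is exactly the debate between the two schools; we think it is a feature.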

Despite what we see as the clear advantages of Bayesian statistics, most students are still taught the ‘frequentist’ approach. Why? Because that is the tradition, which is taught as an essential truth.

Totems of essentialism

In his satirical essay ‘Life Among the Econ’, Axel Leijonhufvud describes economics as if it were a ‘primitive’ society. The ‘Econ’, he observes, are a curious tribe with odd rituals and mysterious ‘totems’ that they worship. The most important ‘totem’ consists of “two carved sticks joined together in the middle somewhat in the form of a pair of scissors” [18]. Leijonhufvud is, of course, describing the intersecting supply and demand curves with ‘market equilibrium’ at their center.

We think that describing this model as a ‘totem’ is appropriate. The totem is not meant to describe reality. It is meant to define it. The ‘Econ’ do not test the totem of the equilibrium-seeking free market. They use it as a ritual to justify social behavior. When ‘reality’ gets in the way, Leijonhufvud observes, one of two things may happen:

Either he [an Econ] will accuse the member performing the ceremony of having failed to follow ritual in some detail or other, or else defend the man’s claim that the gold is there by arguing that the digging for it has not gone deep enough.

— Axel Leijonhufvud [18]

In other words, the totem is ‘true’, the evidence be damned.

While the economic model of the free market is the most brazen totem, there are many others in modern science — models that have been elevated to a ‘higher truth’. We’ve highlighted two such models here: the Hardy-Weinberg model of genetic equilibrium, and the frequentist method of statistical hypothesis testing.

Curiously, these three totems have a similar appearance, as shown below. All three are pleasingly symmetric, drawing the eye to the center.

Figure 1: Essentialist totems in economics, biology and statistics. Clockwise from top left: the neoclassical model of the equilibrium-seeking market, the Hardy-Weinberg model of genetic equilibrium, and the normal distribution — the ‘equilibrium’ behavior of infinitely many random samples.

On the face of it, this resemblance is odd, since the three models deal with unrelated topics. To the Platonist, it may seem that scientists have unearthed an ‘essential form’ to reality. What seems more likely to us, however, is that we have unearthed an aesthetic preference.

Many scientists believe that a good theory ought to be ‘beautiful’. But why should the laws of nature respect human aesthetics? If they always did, it would be astonishing. But as physicist Sabine Hossenfelder shows in her book Lost in Math, the appeal to aesthetics leads scientists astray more often than it leads them to the truth [19].

Appealing to aesthetics, then, is a bad way to do science. But it is a great way to ingrain an idea in the human psyche. Each year, millions of students are shown the central totem of economics — “two carved sticks joined together in the middle somewhat in the form of a pair of scissors”. The totem is pleasingly simple and symmetric — so much so that few students will ever forget it.

And that is the point.

Never mind that the totem of the equilibrium-seeking free market has no contact with reality. What matters is that the totem be memorable … easily imprinted on the psyche, easily imposed onto the world, and easily passed down to the next generation: an eternal truth to be promulgated without question.

When it comes to essentialist totems, economics is the low-hanging fruit. But as we’ve tried to demonstrate, essentialism and traditionalism thrive elsewhere in science. It is an old problem that cuts to the core of how humans think. As Plato’s enduring spell shows, all too often we find ideas more seductive than facts.


Support this blog

Economics from the Top Down is where I share my ideas for how to create a better economics. If you liked this post, consider becoming a patron. You’ll help me continue my research, and continue to share it with readers like you.






This work is licensed under a Creative Commons Attribution 4.0 License. You can use/share it any way you want, provided you attribute it to the authors (Ryan Kyger and Blair Fix) and link to Economics from the Top Down.


Notes

[Cover image: Raphael’s depiction of Plato, surrounded by a Lissajous curve.]

  1. Ryan Kyger is an independent researcher who cares deeply about the integrity of scientific research.
  2. Like so many famous quotes, there’s no evidence that Richard Feynman uttered these words. In 1991, Willis Harman attributed the ornithology phrase to an anonymous scientist. But the sentiment seems to have been borrowed from an earlier (1974) remark about aesthetics:

    … aesthetics is to artists what ornithology is to birds.

  3. On the link between essentialism and racism Laurie Wastell writes:

    Sadly, millions of years of evolution have made humans very good at being tribal; we have an in-group and an out-group, and we are all too adept at convincing ourselves that the out-group is inherently evil, dangerous or other. Essentialist thinking facilitates this hugely because it encourages us to see people as representing an abstract idea associated with their group rather than as an individual human …

    On that front, researchers have found that essentialist attitudes about race correlate strongly with explicit prejudice [1,2]. Our guess is that essentialism goes beyond explicit racism, and is actually a key part of all hierarchical class systems. When one class rules another, the ruling class is inevitably endowed with an imaginary set of superior traits. And the lower class is endowed with an imaginary set of inferior traits.

  4. Summarizing this Enlightenment skepticism of ‘pure reason’, Karl Popper writes:

    … pure speculation or reason, whenever it ventures into a field in which it cannot possibly be checked by experience, is liable to get involved in contradictions or ‘antinomies’ and to produce what [Kant] unambiguously described as ‘mere fancies’; ‘nonsense’; ‘illusions’; ‘a sterile dogmatism’; and ‘a superficial pretension to the knowledge of everything’. [3]

  5. For an accessible introduction to the problems with the neoclassical theory of free markets, we recommend Jonathan Nitzan’s video Neoclassical Political Economy: Skating on Thin Ice.
  6. It’s worth noting the question that led to the Hardy-Weinberg model. Simple intuition suggests that dominant alleles (like those for brown eyes) should drive recessive alleles (like those for blue eyes) to extinction. If this intuition is correct, recessive alleles should never persist in a population. And yet in the real world they do. Why? The Hardy-Weinberg model demonstrates that under certain circumstances, this intuition is incorrect: recessive alleles can persist indefinitely.
  7. The rigor-mortis quotation is attributed, at various times, to both the ecological economist Kenneth Boulding and to the economic historian Robert Heilbroner. We can find no written record for Boulding’s statement. Heilbroner, on the other hand, said the following in 1979: “the prestige accorded to mathematics in economics has given it rigor, but, alas, also mortis” [16].

References

[1] Mandalaywala TM, Amodio DM, Rhodes M. Essentialism promotes racial prejudice by increasing endorsement of social hierarchies. Social Psychological and Personality Science 2018; 9: 461–9.

[2] Chen JM, Ratliff KA. Psychological essentialism predicts intergroup bias. Social Cognition 2018; 36: 301–23.

[3] Popper KR. The open society and its enemies. vol. 119. Princeton University Press; 2020.

[4] Locke J. An essay concerning human understanding. Kay & Troutman; 1847.

[5] Hume D. An enquiry concerning human understanding. Routledge; 2016.

[6] Kant I. Critique of pure reason. 1781. Modern Classical Philosophers, Cambridge, MA: Houghton Mifflin 1908: 370–456.

[7] Keen S. Debunking economics: The naked emperor of the social sciences. New York: Zed Books; 2001.

[8] Clark JB. The distribution of wealth. New York: Macmillan; 1899.

[9] Friedman M. The role of monetary policy. In: Essential readings in economics. Springer; 1995, pp. 215–31.

[10] Levinovitz AJ. The new astrology. Aeon Magazine 2016.

[11] Edwards A. G. H. Hardy (1908) and Hardy–Weinberg equilibrium. Genetics 2008; 179: 1143–50.

[12] Abramovs N, Brass A, Tassabehji M. Hardy-Weinberg equilibrium in the large scale genomic sequencing era. Frontiers in Genetics 2020; 11: 210.

[13] Friedman M. Essays in positive economics. Chicago: University of Chicago Press; 1953.

[14] Blackford G. On the pseudo-scientific nature of Friedman’s as if methodology. Real-World Economics 2016.

[15] Brandvain Y, Wright SI. The limits of natural selection in a nonequilibrium world. Trends in Genetics 2016; 32: 201–10.

[16] Heilbroner RL. Modern economics as a chapter in the history of economic thought. History of Political Economy 1979; 11: 192–8.

[17] Diamond GA, Kaul S. Prior convictions: Bayesian approaches to the analysis and interpretation of clinical megatrials. Journal of the American College of Cardiology 2004; 43: 1929–39.

[18] Leijonhufvud A. Life among the econ. Economic Inquiry 1973; 11: 327–37.

[19] Hossenfelder S. Lost in math: How beauty leads physics astray. Hachette UK; 2018.

9 comments

  1. I am a bit conflicted on the argument here. On the one hand, I feel your critique of dogmatism in the three fields you mentioned is fully warranted and excellent. On the other hand, I’m not sure it’s correct to view natural laws as empirical regularities without exceptions. After all, the perturbations in the orbit of Uranus did not lead scientists to throw out Newton’s laws (see the following interview with Chomsky for this and other examples of how respectable scientists — including Einstein — routinely tried to get rid of the data and keep the theory: https://www.youtube.com/watch?v=8RDB-S22_rA). I’m guessing you don’t regard all of this as instances of dogmatism.

    Also, Bhaskar made an argument (in his ‘A Realist Theory of Science’) to the effect that empirical regularities are neither necessary nor sufficient for a natural law. For him, this is due to the world’s being an open system and not a laboratory, the only place where you could have empirical regularities. He basically regarded natural laws as enduring mechanisms or causal powers which may be counteracted, or which may go unexercised or undetected. Now, in the social sciences, there is no laboratory to speak of. So while the natural sciences can at least have empirical regularities inside the lab, and virtually nowhere else, the social sciences don’t encounter empirical regularities without exceptions.

    Given the detachment of economics from empirical data, your emphasis on the importance of empirical data is understandable. My worry is, though, that you might be painting a bit of a positivist portrait of science here. And there would be nothing wrong with that too, had positivism not been shown to be wrong very long ago.

    • Hi Yigal,

      Thanks for the comments. Yes, perhaps defining ‘natural law’ as something with ‘no known exceptions’ is a bit too restrictive. Perhaps a better definition is an ‘overwhelming empirical regularity’.

      So ‘natural laws’ include things like ‘Kepler’s laws’ of planetary motion, and more recently, flat rotation curves in spiral galaxies (a tendency which conflicts with Newton’s theory of gravity).

      Don’t mistake this usage of ‘natural law’ for what physicists now call the ‘laws of physics’. The latter are theoretical postulates that physicists assume are inviolable. ‘Natural laws’, in contrast, are observations that seem to hold. They don’t need to have (and often do not have) an explanation.

      What is important, though, is that these empirical regularities get used to build theories. As you correctly point out, what inevitably happens is that we find exceptions that don’t fit with the theory. Newton used Kepler’s laws to formulate his theory of gravity. Later on, scientists found apparent aberrations.

      As Chomsky observes, rarely do scientists immediately throw out their theory. Instead, they look for ways to explain the observations but still keep the theory. For theories of gravity, the go-to way to rescue a theory is to look for missing mass. Given that astronomical objects can be difficult to see, this is clearly a reasonable thing to do. It worked spectacularly with Uranus.

      Such rescue attempts, however, don’t always pan out. When it was discovered that Mercury’s orbit deviated from Newton’s laws, scientists proposed that an unobserved planet called ‘Vulcan’ was responsible. But they never found the planet. Eventually Einstein developed a new theory of gravity that explained the problem, and ‘Vulcan’ got thrown into the dustbin of history.

      What is clear from this history is that there is always uncertainty in science. Whether a theory is wrong or the evidence is incomplete is always a matter of judgment. But over time the balance of probability tends to slide in one direction.

      The theory of gravity still serves as a great example. In the 1970s, astronomers discovered that stars in galaxies rotated too quickly to be explained by Newton’s laws. The obvious solution was that there was missing mass — what we now call ‘dark matter’. For the next 50 years, astronomers looked for this matter, but could not find it. The search still continues.

      A few scientists, though, became convinced that the problem was not missing matter. It was that Newton’s theory of gravity was wrong. They found that by modifying Newton’s laws, they didn’t need missing mass. What’s surprising is that these theories of modified Newtonian dynamics (MOND) made many predictions that would later come true.
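      For intuition, here is a minimal sketch of why flat rotation curves are anomalous (the point mass and its value are toy assumptions, not a realistic galaxy model): Newtonian orbits around a central mass slow down with distance, while observed stellar speeds stay roughly constant.

      ```python
      import math

      G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
      M = 2e41        # toy central mass, kg (roughly 1e11 solar masses)
      KPC = 3.086e19  # metres per kiloparsec

      def v_newton(r):
          """Newtonian circular velocity around a point mass: v = sqrt(GM/r)."""
          return math.sqrt(G * M / r)

      # Doubling the radius cuts the predicted speed by sqrt(2).
      for r_kpc in (5, 10, 20, 40):
          print(f"{r_kpc:>2} kpc: {v_newton(r_kpc * KPC) / 1000:.0f} km/s")
      # Observed spiral galaxies instead show roughly constant speeds at
      # large radii -- the anomaly behind both dark matter and MOND.
      ```

      Real galaxies have extended mass distributions, so the fall-off is gentler than this toy case, but the qualitative clash with flat curves is the same.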

      Unfortunately, the dark-matter hypothesis has become the default assumption in many quarters — a theoretical essence that must be true. For a fascinating and thorough investigation of how this relates to the philosophy of science, see David Merritt’s book A Philosophical Approach to MOND. Also check out Stacy McGaugh’s blog Triton Station.

      Here’s my take. Basically, I think the instinct to rescue theories by damning the evidence is reasonable … at first. But as the evidence gets stronger, at some point it becomes untenable to maintain the prevailing theory. When and how this transition happens is difficult to predict. But what we can say is that it usually involves funerals. The old guard rarely abandon their pet theory, even if the evidence is overwhelming.

      Back to the social sciences. In human societies, few things are regular enough to warrant the term ‘natural law’. And its use here is especially pernicious because it implies that we cannot design societies differently. History has shown that to be almost always false.

      • I’ve long been interested in the philosophy of science. I read much philosophy of science while doing my PhD, even though it wasn’t directly relevant. However, I was taking a lot from actor-network theory (ANT), which was rooted in science and technology studies (STS), and that made frequent reference to the philosophy of science. I also read a lot of the works of Manuel DeLanda, who popularized Deleuze, particularly as a philosopher of science and social science. His book “A New Philosophy of Society” was where I learned about the role of ‘essences’ in philosophy and, thereby, common perspectives on society.

        Along the way, I read Ian Hacking’s ‘Representing and Intervening’. It offered a quick summary of the history of debates in the philosophy of science that was extremely useful. But even more useful was his emphasis that what scientists do is less about “representing reality” and more about intervening in it.

        Conjoined to STS and ANT is a domain of economic sociology that adheres to the “performativity thesis.” The primary text on the thesis is “Do Economists Make Markets?”, which is a title that helpfully conveys the central argument. In Hackian terms, economists think they are merely representing ‘The Economy’, when what they are actually doing is intervening in all sorts of relationships that we think of as ‘economic’, or which bear on the economic. That intervention includes the active process of defining the very existence of ‘The Economy.’

        One of the other philosophical informants of the performativity thesis is pragmatism. The short-hand I have for pragmatism and its perspective on truth is “what matters is not whether it is right or wrong, but does it work?” That immediately raises the question: Does it work for what?

        Within the natural sciences, “working” typically means that a theory can explain the phenomenon we’re trying to understand. As you describe above, Blair, scientists frequently end up in situations where a theory is not able to explain some phenomenon. So, they’ll add some element—like dark matter—to allow the theory to cover this unexpected and inexplicable result. In amongst all this you’ll have debates about whether the phenomenon actually exists. While the unpredicted movement of distant galaxies has been firmly established, there are ongoing debates in every discipline and sub-discipline about whether or not an empirical result expresses something that exists in reality. If it does not exist, then we don’t even need to add a kludge to the theory.

        However, whether or not a theory works extends beyond explaining or understanding phenomena, which can be considered accurate (dare we say truthful?) representation. Working also means that creations based on the theory are likely to succeed. This is directly relevant to the issue of the theory of gravity. It has failed to explain some phenomena. Yet, it still works for some interventions. I recall reading that when NASA scientists are calculating rocket trajectories and determining thrust requirements they can account for gravity using Newton’s formula. They do not need relativity. (Forgive any cack-handedness in this description. We are well outside my comfort zone.)

        The truly disturbing aspect of economics is that it is much more about intervention and much less about representation, while denying the former. Yet, most formally trained economists still think of the theory they learned as a representation of an objectively existing thing, despite the fact that most will go on to work in positions that actively intervene in almost every social institution.

        The hard sciences spent centuries focused on describing phenomena. Natural philosophers looked for patterns and then derived some truly stunning theories, many of which were quickly superseded. The iterative process between theorizing and describing was powerful because a theory, even when eventually rejected, helped researchers look at phenomena in new and different ways. Marshall’s conception of equilibrium as expressed in the scissors diagram had much merit as a way to look at certain kinds of social interactions. However, it never should have been elevated to an essential truth of human relating. Marshall himself knew his model of the market was overly simplistic. He expected that it would be superseded as further research of actual markets progressed. Unfortunately, that was not to be. Far from it. His model of equilibrium, and the assumptions needed to make it work, got reified.
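        To make the scissors concrete, here is a toy sketch with hypothetical linear curves (all coefficients invented for illustration): the ‘equilibrium’ is just the point where the two lines cross.

        ```python
        def equilibrium(a, b, c, d):
            """Cross Marshall's scissors: demand Qd = a - b*P against
            supply Qs = c + d*P, returning (price, quantity) where they meet."""
            p = (a - c) / (b + d)
            return p, a - b * p

        # Hypothetical coefficients, purely illustrative.
        p_star, q_star = equilibrium(a=100, b=2, c=10, d=1)
        print(p_star, q_star)  # 30.0 40.0
        ```

        The model’s simplicity is the point: everything interesting lies in what the straight lines assume away.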

        At the same time, empirical study of markets, like that of Means in his work on ‘administered prices’, got sidelined to such a degree that we have more generalized understanding of the interior of the atom than we do of the pricing decisions made along supply chains.

        The development of a useful economics cannot just look to the natural sciences with hope or envy. Social scientific practices are much more interventionist, simply by being fully entangled in society. That brings different responsibilities than the natural sciences bear. The Sun does not change when we understand it to be comprised primarily of hydrogen rather than iron. But if we think markets seek an equilibrium, that will end up influencing the operation of institutions necessary for markets to actually exist. So, we’re back to the “not right or wrong, but does it work” assessment of economic theories, with the corollary of “does it work for what?”

        For me, the philosophy of science ultimately delivers us into philosophy proper. While worshippers of ‘Science’ often mock philosophy as no longer necessary (*cough* Neil deGrasse Tyson *cough*), their naivety is dangerous because it contends that we have, or that we can, achieve unmediated access to ‘Truth’. Even in the natural sciences the blinkered perspective on truth rooted in representation is dangerous, as it provides cover to interventions that are made possible by science: thalidomide, chlorofluorocarbons, biological weapons. Ultimately, there is no escaping ethics. And ethics needs philosophy like a human needs water.

      • Well said. I especially like the distinction between ‘representing’ and ‘intervening’. I have not read Ian Hacking, but will add it to my list.

        A few other things. Yes, you’re correct that NASA uses Newtonian mechanics for rocket science. It works beautifully. The only instance where we need general relativity (on Earth) is for GPS, which works by precisely timing radio signals from multiple satellites. This requires very sensitive measurements of time that must correct for the time dilation that happens near a massive body (Earth).
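        To put rough numbers on this (standard textbook approximations; constants from memory, so treat it as a sketch): the general-relativistic speed-up of satellite clocks outweighs the special-relativistic slow-down, leaving a net drift of a few tens of microseconds per day.

        ```python
        import math

        GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
        c = 2.99792458e8     # speed of light, m/s
        R_EARTH = 6.371e6    # mean Earth radius, m
        R_SAT = 2.6561e7     # GPS orbital radius (~26,561 km), m
        DAY = 86400.0        # seconds per day

        # General relativity: weaker gravity at altitude runs satellite clocks fast.
        gr_drift = GM * (1 / R_EARTH - 1 / R_SAT) / c**2 * DAY   # ~ +46 us/day

        # Special relativity: orbital speed runs them slow.
        v = math.sqrt(GM / R_SAT)            # circular orbital speed, ~3.9 km/s
        sr_drift = -v**2 / (2 * c**2) * DAY  # ~ -7 us/day

        net_drift = gr_drift + sr_drift      # ~ +38 us/day
        print(f"net drift: {net_drift * 1e6:.1f} microseconds per day")
        ```

        Left uncorrected, a drift of that size corresponds to ranging errors of kilometres per day, which is why GPS clocks are pre-adjusted before launch.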

        About philosophy. There are numerous famous scientists who have dismissed the philosophy of science as unnecessary. Stephen Hawking is perhaps the most famous. Today, many string theorists echo the sentiment, largely because they want to downplay the fact that they’re now doing metaphysics. Their theories make no testable predictions.

        Back to economics. I often wonder what economics would look like if it were a ‘progressive’ rather than ‘degenerative’ research program. I actually find it hard to imagine, largely because economics is so central to shaping social behavior.

        As soon as an economics theory becomes popular, it becomes the de facto ideology. This happened from the beginning with neoclassical economics. It happened to Marxism after the communist revolutions of the 20th century. I fear that if the theory of capital as power became popular, the same thing would happen.

        The one thing I do admire about neoclassical economics is the tenacity to find the exact assumptions required for the theory to work. It is a testament to human stubbornness, since at every turn the required assumptions were absurd. And Milton Friedman’s inversion (assumptions don’t matter) was the icing on the cake — the ultimate f-you to reality.

        The result was permanent intervention in the ‘economy’ to promote the ‘natural laws’ of the market which (despite what the theory claimed) did not occur on their own.

      • “The result was permanent intervention in the ‘economy’ to promote the ‘natural laws’ of the market which (despite what the theory claimed) did not occur on their own.”

        Great summation of this process, Blair!

  2. There is a lot of confusion in economics and social sciences in general. There are a number of issues:

    1. There is an all-other-things-being-equal (ceteris paribus) assumption behind every economic theory, which never holds true in reality.
    2. Most economic models have some merit and explain some phenomena, often under certain circumstances.
    3. Predictions have limited value for two reasons:
       a. small changes in initial conditions can completely alter the outcome (chaos theory);
       b. knowledge of the future will change the future; for instance, if I know I am going to have a car accident tomorrow, I will stay at home tomorrow.
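    Point 3a can be illustrated with the logistic map, a standard toy model of chaos (parameters chosen purely for illustration): two starting values that differ by one part in ten billion soon follow completely different paths.

    ```python
    def logistic(x, r=4.0):
        """One step of the logistic map x -> r*x*(1 - x); chaotic at r = 4."""
        return r * x * (1.0 - x)

    a, b = 0.2, 0.2 + 1e-10  # two nearly identical initial conditions
    max_gap = 0.0
    for _ in range(60):
        a, b = logistic(a), logistic(b)
        max_gap = max(max_gap, abs(a - b))

    print(max_gap)  # the initial gap of 1e-10 grows to order 1
    ```

    The gap roughly doubles each step, so no finite measurement precision rescues long-range prediction.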

  3. comments:

    1. i hadn’t seen ref 15 on nonequilibrium departures from h-l equilibrium, but my sense from surveying pop geneticists is that the field has been concerned with this since the 1960s, if not before.
    in a way, your characterization of pop gens’ views of h-l is like u.s. republicans saying all democrats are in the kkk, and republicans are anti-racists, because lincoln was a republican and the early kkk were democrats.

    of course intro textbooks often don’t deal with this, just as intro econ texts don’t deal with ‘market imperfections’ etc. much—and this does become dogma/ideology. people learn and teach caricatures.

    in daily life you end up with things i’ve heard—eg a highly successful school friend of a relative committed suicide because it was ‘genetic’; as are obesity, other addictions, depression, poverty and wealth.

    2. chomsky’s own linguistics theory is essentialist turned traditionalist/dogma but losing adherents—people couldn’t find the ‘language organ’—he has many terms for this essence found nowhere else in nature—and even his empirical studies on why it must be there have been sort of contradicted by other ones.

    turns out babies do seem to learn aspects of language—you don’t just add water and end up with shakespeare.

    despite views of top behavioral geneticists who claim environment is irrelevant for education, parents still try to send kids to ‘good schools’ even if they all have the same computers.

    —————–

    it’s sort of like mond/dark matter. you can put your essence wherever you want. einstein even ‘revised’ his rejection of the ether—said in GR it just has a different name and is slightly different.

    -in many cases there’s no reason to prefer any specific hidden variable or essence.

    3. i view bayesianism as a subset of frequentism—your prior is your revised essence.
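    The quip can be made concrete with a beta-binomial sketch (numbers invented for illustration): a near-dogmatic prior barely budges under the same evidence that pulls a flat prior straight to the data.

    ```python
    def posterior_mean(prior_a, prior_b, heads, tails):
        """Beta(a, b) prior plus binomial data gives a Beta posterior;
        return its mean estimate of the success probability."""
        return (prior_a + heads) / (prior_a + prior_b + heads + tails)

    flat = posterior_mean(1, 1, heads=10, tails=10)        # open prior -> 0.5
    dogmatic = posterior_mean(1000, 1, heads=10, tails=10) # 'essence' -> ~0.99
    print(flat, dogmatic)
    ```

    In principle a Bayesian treats the prior as itself revisable; the sketch just shows how a strong enough prior dominates modest data, behaving much like an unexaminable essence.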

    4. the laws of thermo have as many exceptions as the h-l eqn.
