Let’s Celebrate Being Wrong

In science, being wrong is cause for celebration.

Now that you think I’m crazy, let me explain. Science is about the search for truth. Logically, this means we should celebrate when we are right. But here’s the catch. How do we know when we are right? We can’t.

Science is about conditional truth. We have ideas that we think are right because they have not yet been proven wrong.

Science is a bit like evolution. Of all the species that have ever existed, the vast majority went extinct. The best we can say of living species is that they are not yet extinct. The same goes for scientific theories. Of all the hypotheses ever proposed, the vast majority are wrong. The ones that have survived are not yet wrong. (The bad ones are not even wrong. More on this in a bit.)

So when we have an idea that turns out to be wrong, we should celebrate. It means we know we are on the wrong path. Time to ditch our idea and try another one. This is how science should work.

The problem is that this is uncomfortable. Being wrong is not something we like to admit. Even the best scientists can have trouble admitting their failures. Take Albert Einstein. His theory of general relativity predicted something called gravitational waves: ripples in the fabric of space and time caused by the acceleration of massive objects. One of the recent triumphs of empirical science is that we have detected these waves. So we’re quite sure they exist.

But in 1936, Einstein wrote a paper in which he argued that gravitational waves do not exist. The paper was rejected during peer review. Einstein was furious. He would later admit that the peer reviewer was correct and that his own arguments were wrong. But it took some time for emotions to subside. Read the story here. It’s a fascinating case study of science at work.

If a scientist like Einstein could be unaware of his own mistakes (at least at first), then the rest of us (with far lesser minds) must be extra skeptical of our ideas. As a species, we’re prone to self-delusion about our own competence. This may be an evolutionary adaptation. When you are running from a lion or battling an enemy, it’s best not to wallow in your mistakes. That’s a surefire way to die. It’s far better to be overconfident in the moment and make decisions with gusto. Reflection can come later.

This adaptation may explain a curious part of our psychology. We’re attracted to people who are confident. The number one trick of the con artist is to exude a calm, easy confidence while boldly lying to your face. This confidence puts you at ease — it makes you trust them. The con artist is the polar opposite of a good scientist. A good scientist should be racked with doubt. But a con artist must be supremely confident. (If you are interested in the psychology of the con, I highly recommend Maria Konnikova’s book The Confidence Game).

On the spectrum between extreme confidence and extreme doubt, politicians fall somewhere between the con artist and the scientist. OK, most politicians are pretty close to con artists (the angry orange man in the Oval Office comes to mind). Think of political debates where the moderator asks, “Describe a time when you were wrong.” Has a politician ever given a straight answer to this question? Doubtful. They usually find a way to dodge the question and emphasize their strengths. Confidence, not skepticism (or self-doubt), is what we seem to look for in leaders.

The secret to supreme confidence

Let me tell you the secret to supreme confidence. It’s simple. Your ideas must be immune to evidence. When your ideas have this property, they can never be wrong. By extension, you can never be wrong! This means you can act with confidence. There is no need for self-doubt or skepticism. You have the answers! The world needs to listen to you!

It’s no accident that this type of thinking is the hallmark of political ideologies. They cannot be wrong. This property is what makes ideologies so attractive to us. We can be supremely confident in our ideology because there is no conceivable way that it could be proven wrong. So if you are a scientist and you find that you’re rarely wrong, you have a problem. It probably means you’re not doing science. You’re doing politics.

But be in good spirits. You’re not alone. Many (if not most) economists have deluded themselves in the same way. Most of their ideas can never be wrong. That’s why economists are so confident.

Let’s look at an example. Utility maximization is probably the most important example of ideology masquerading as science. Neoclassical economists propose that humans are rational actors. When confronted with any decision, humans choose the path that maximizes their utility.

Economists who buy into this idea find it immensely seductive. It explains everything about human behaviour! What a stunning achievement of science. Except it’s not.

The catch is that utility maximization is completely immune to evidence. It cannot be wrong. Since utility is unobservable, there is no conceivable way to show that humans do not maximize utility. So neoclassical microeconomics is an ideology masquerading as science. You can use utility maximization to justify any claim you like. That’s a nice property to have in an ideology. You can preach the miracles of the free market with confidence.

Not convinced by this critique? Let’s apply the same reasoning to thermodynamics. Suppose we have a theory that explains the internal combustion engine in terms of invisible little green people. Inside every engine is an invisible person who ignites the fuel at the correct time. This theory is a towering achievement because it explains how all internal combustion engines work.

But it is conveniently immune to evidence. Since the little green people are invisible, we can never observe them. We can never show that the green people are not the cause of the fuel ignition. What a shame.

If we look closely at our ideologies (neoclassical economics, Marxism, all religions) we find invisible green people everywhere. Ideology and superstition preserve these green people for future generations. We fool ourselves over and over. Science should be about killing invisible green people. Science needs to purge ideas that cannot be wrong.

Other people are wrong a lot … but not me

It’s easy to find problems in other people’s ideas. It’s far harder to find problems in our own theories. I think we need to train ourselves to do it. This should be a basic part of science education.

I feel fortunate to have had Jonathan Nitzan as a teacher. Jonathan fondly tells stories of interviewing prospective candidates in the Department of Political Science at York University. His favourite question: “Give an example of when you were wrong.” The candidate’s response, according to Jonathan, is usually complete surprise. Few give concrete examples.

I’m probably lucky that I have never had Jonathan Nitzan as an interviewer. But his story stays with me. Every now and then I ask myself to name a time when I was wrong. I admit it’s surprisingly hard to do. But it’s an essential exercise for maintaining skepticism about your own work. As Richard Feynman famously said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.”

So let me practice what I preach. Here is a time when I was wrong.

Some years ago, I was studying the firms owned by members of the Forbes 400. These are the 400 richest individuals in the United States. The Forbes 400 list provides the companies from which these individuals earned their wealth.

As a little research project, I tracked down the employment size of these companies. I was interested in how the size of these Forbes 400 firms related to the size of firms associated with the general population. Here was my initial result, posted on the capitalaspower.com forum in 2015:

[Figure: The size distribution of firms owned by Forbes 400 individuals, compared to the US firm size distribution. In hindsight, this comparison is wrong.]

In this figure, I compare the size distribution of firms owned by Forbes 400 members to the size distribution of all US firms. My conclusion was that Forbes 400 members own firms that are far larger than those associated with the general US public.

But in hindsight, I made an embarrassing error. Can you spot it? If you can’t, it makes me feel better. Because it took me years to figure out my mistake.

Let’s begin with the Forbes 400 firms. These are firms that come from sampling individuals. We track down the 400 richest people, then we go and look at the firms they own. We plot the size distribution of these Forbes 400 firms.

My intention was to compare these firms to the ones associated with the average person. The correct way to do this is to sample individuals from the general US public. We then record the size of the firm associated with each individual. We compare this firm size distribution to the Forbes 400 distribution.

But this is not what I did in the figure above. Instead, I sampled from the firm population. I reported the size distribution of all US firms and compared it to the firm size distribution associated with Forbes 400 members. That was a mistake. Sampling firms is completely different from sampling individuals and recording the firm size associated with each person. Because large firms employ far more people than small ones, sampling individuals weights each firm by its employment, which gives a very different distribution (see the sketch below). The analysis in the figure above is just wrong.
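To make the distinction concrete, here’s a toy sketch (my own made-up numbers, not the original data). When we sample firms, every firm counts once. When we sample people, each firm counts once per employee, so the few giant firms dominate:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A made-up firm population: many small firms, a few giants
firm_sizes = np.concatenate([
    np.full(10_000, 5),        # 10,000 firms with 5 employees each
    np.full(100, 1_000),       # 100 firms with 1,000 employees each
    np.full(2, 500_000),       # 2 firms with 500,000 employees each
])

# Scheme 1: sample firms uniformly (what my original figure did)
firm_sample = rng.choice(firm_sizes, size=10_000)

# Scheme 2: sample *people*, then record the size of their firm.
# A random person is more likely to work at a big firm, so each firm
# is weighted by its share of total employment.
weights = firm_sizes / firm_sizes.sum()
person_sample = rng.choice(firm_sizes, size=10_000, p=weights)

print("Typical firm size, sampling firms: ", np.median(firm_sample))
print("Typical firm size, sampling people:", np.median(person_sample))
```

With numbers like these, the firm-level median sits at the small firms while the person-level median sits at the giants.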

Years later I realized my mistake. To make the correct comparison, we need to find the size distribution of firms associated with the general US public. As far as I know, there is no raw data on this distribution. But we can estimate it from the firm size distribution. I won’t go into the details here. The derivation is in the supplementary materials of a recent paper.
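I won’t reproduce the derivation, but here is the flavour of the idea as I understand it (a sketch with made-up numbers, not the paper’s actual method): to get the distribution of firm sizes seen by a randomly chosen person, you reweight the firm size distribution by employment, so a firm of size s counts s times.

```python
import numpy as np

def person_weighted(sizes, firm_counts):
    """Share of people associated with firms of each size.

    sizes:       firm sizes (employees per firm)
    firm_counts: how many firms there are at each size
    """
    employment = np.asarray(sizes) * np.asarray(firm_counts)
    return employment / employment.sum()

# The same toy population as above
sizes = np.array([5, 1_000, 500_000])
firm_counts = np.array([10_000, 100, 2])

print(firm_counts / firm_counts.sum())       # share of firms at each size
print(person_weighted(sizes, firm_counts))   # share of people at each size
```

In this toy population, nearly 99% of firms are tiny, yet about 87% of the people work at the two giants. That inversion is why the two sampling schemes disagree.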

Here is the correct comparison. The figure below again shows the size distribution of firms associated with Forbes 400 individuals. It also shows the ‘Execucomp 500’. These are the firms associated with the 500 highest paid executives in the United States (according to the Execucomp database). I compare these firm size distributions to a model. I won’t go into the model here, but it’s about hierarchy.

[Figure: Size distribution of firms associated with the Forbes 400 and Execucomp 500. The null effect is what we expect if we sampled individuals from the general population. (Source)]

What is important here is the ‘null effect’, shown by the dotted line. This is the size distribution of firms that we expect if we sampled individuals from the US public and recorded the size of the firm each person is associated with. This distribution is uniform when plotted on a log scale. Compare this to my original analysis on the capitalaspower.com forum. It’s very different, isn’t it?
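Why is the null effect flat on a log scale? Here’s my back-of-the-envelope reasoning. It leans on an assumption that isn’t stated in the figure: that US firm sizes roughly follow a power law with an exponent near 2. Reweighting such a density by employment gives something proportional to 1/s, and a 1/s density puts an equal share of people in every logarithmic size decade:

```python
import numpy as np

# Assume a firm-size density p(s) ~ s^(-2), a stylized fact often
# reported for US firms. The person-weighted density is then
# q(s) ~ s * p(s) ~ 1/s, so the share of people in a decade [lo, hi]
# is proportional to the integral of 1/s, i.e. log(hi) - log(lo).
decades = [(10.0**k, 10.0**(k + 1)) for k in range(6)]
shares = np.array([np.log(hi) - np.log(lo) for lo, hi in decades])
shares /= shares.sum()

print(np.round(shares, 3))   # equal shares in every decade: flat on a log axis
```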

Fortunately (for me) the original conclusion still stands. Forbes 400 members own firms that are larger than those associated with the general population. But this difference is far less spectacular than my original analysis suggested. The original results were plain wrong. It’s embarrassing, but also cause for celebration. It meant I was doing science.

Of course, I’m assuming that my new results are correct. But if I heed my own advice, I should say only that they are not yet wrong.








