Book Summary: Rationality – What It Is, Why It’s Scarce, and How to Get More

Rationality (2021) explores the faculty that sets us apart from other species: reason. The ability to think rationally drives individual and social progress. It allows us to attain our goals and create a fairer world. But rationality isn’t just something we do as individuals – it also sustains our best institutions.

Content Summary

Who is it for?
What’s in it for me? Learn how reason really works.
Rationality is a means to an end.
Rationality helps you decide between passions.
Ignorance and self-constraint can be rational choices.
Science applies rationality to the real world.
Institutions make us less partial – and more rational.
Punishing people for their own good creates a more rational commons.
Our most important moral idea is compelling because it’s rational.
About the author

Who is it for?

  • Would-be rationalists
  • Philosophical thinkers
  • Anyone who loves big ideas

What’s in it for me? Learn how reason really works.

It’s safe to assume that no human has ever been perfectly rational, but the conviction that objective truth exists has allowed us to create rules that help us approach it.

When you follow those rules, the author argues, you can change the world.

It’s rationality, for example, that helped us reach the moon, eradicate diseases like smallpox, invent computers, and, more recently, develop a vaccine against a deadly global pandemic in just one year.

So what are those rules – how does rationality work, and how can you cultivate it? That’s exactly what we’ll be exploring in this summary!

Along the way, you’ll learn

  • why ignorance can be a rational choice;
  • what counting horse teeth can teach us about science; and
  • why our fear of being suckers erodes public services.

Rationality is a means to an end.

Let’s start at the beginning: what is rationality? Dictionaries tell us that rational means “having reason.” And reason comes from the Latin word ratio, which means – you guessed it – “reason.”

So, if etymology leads us in circles, could philosophy help? Philosophers state that rationality is the ability to use knowledge to attain goals. Better?

Let’s break that down. The term knowledge refers to justified true beliefs. We wouldn’t credit someone with being rational if they knowingly acted on false beliefs – like looking for a misplaced wallet someplace they knew it couldn’t be. But there’s more to rationality than simply thinking true thoughts like “1 + 1 = 2.” It also helps us do things.

The key message here is: Rationality is a means to an end.

In 1890, the American philosopher William James wrote an essay on the difference between rational beings and nonrational entities.

James observed that if you scatter iron filings on a table and place a magnet near them, they fly toward the magnet and stick to its surface. But cover the magnet with a card and they press against the card – it never occurs to them to go around it and come into direct contact with the object that attracts them. Now consider Shakespeare’s Romeo and Juliet.

Juliet is Romeo’s “magnet.” When there’s no obstacle in his path, he moves toward her in a straight line just as the filings moved toward the real magnet. But here’s the difference. When Romeo finds his path blocked, he alters course. Romeo and Juliet don’t remain on either side of this obstacle, “idiotically pressing their faces against its opposite sides like the magnet and the filings with the card.”

In the play, Romeo and Juliet use their knowledge of the world to overcome hurdles. Romeo scales walls to touch Juliet’s lips, and the couple scheme to deceive their hostile families.

For James, this is what sets nonrational entities and rational beings apart. Iron filings move in a straight line toward their goal. But their path to that goal is fixed – that’s why the card impedes them so easily. It’s the other way round for rational beings. Romeo and Juliet’s desired end – being together – is fixed, but they’re highly flexible about how they achieve that goal.

This is human rationality in action: when one path is blocked, we can always try another.

Rationality helps you decide between passions.

We’ve just argued that reason is a means to an end. But where do the goals which it helps us pursue come from? One answer comes from the eighteenth-century Scottish philosopher David Hume.

Goals, Hume claimed, come from the passions. That is, from desires, drives, and emotions like love, anger, pride, envy, and fear. Reason, meanwhile, is the “slave of the passions.”

Although it may look like it, this isn’t an argument for behaving irrationally.

Hume was simply making the point that reason can’t tell us which goals we should pursue. Logically speaking, they’re neither rational nor irrational – they’re arational.

But sometimes, we have to choose between conflicting goals. That’s when we call upon reason.

The key message here is: Rationality helps you decide between passions.

If people only wanted one thing, life would be simple: hedonists could eat, drink, and love with carefree abandon and the ambitious could pursue fame and fortune without a thought for their kids or fellows.

Of course, life isn’t like that. We crave pleasure and comfort, but we also want to be healthy, liked, and to have flourishing children. Too much cake makes us fat. No one wants to work or be friends with a ruthless Machiavellian. Unattended kids get into trouble and cause headaches.

In short, goals sometimes clash – you can’t always get what you want. But how can you decide which goals to pursue and which to abandon? Enter rationality.

Reason helps us prioritize by giving us a yardstick to compare the relative worth of goals over time.

Take the trade-off between hedonism and health. One of the best cases for prioritizing short-term pleasure can be found in a cartoon published in the New Yorker. The problem with prolonging your life, a man sitting at a bar proclaims, is that “all the extra years come at the end, when you’re old.”

But our knowledge of the world tells us that, say, eating healthy and exercising doesn’t just extend life expectancy – it also keeps us in good shape, which means we’re more likely to enjoy those extra years. That’s likely to give us much more satisfaction. Health, then, looks to be a better goal than hedonism.

It’s the same with ambition. Having sharp elbows can advance careers in the short term, but it also alienates people on whom you may later rely for help. When you stop to think about your goals, you usually realize that your future self will thank you for making wise choices today.

Ignorance and self-constraint can be rational choices.

Knowing something doesn’t guarantee that you’ll be rational about it.

Often, your willpower just isn’t up to the task of resisting temptation. Take it from the Odyssey, an epic poem composed in ancient Greece almost 3,000 years ago.

To return home, the hero, Odysseus, must sail past an island inhabited by sirens – mythical creatures whose enchanting songs lure sailors onto jagged rocks which sink their ships and drown their crews.

Luckily for Odysseus, a sorceress tells him how he can avoid succumbing to this fatal temptation: he must tie himself to his ship’s mast and plug his sailors’ ears with wax.

Odysseus follows her advice and survives the passage. It’s a strategy we’d do well to learn from.

The key message here is: Ignorance and self-constraint can be rational choices.

One way of resisting temptation is to prevent yourself from ever acting on it.

For example, it’s much easier to resist the siren songs of unhealthy snacks if you go shopping after you’ve eaten. Similarly, you can’t spend money you know you should be saving if you’ve told your employer to set aside a portion of your paycheck for retirement. This kind of Odyssean self-control isn’t about willpower – it’s about tying yourself to the metaphorical mast.

Odysseus’s sailors didn’t even hear the sirens through their wax earplugs. At first glance, this looks like an odd tactic. Isn’t knowledge power, after all? Surely it’s better to know something than not, since you can always decide not to act on it anyway? Paradoxically, it’s sometimes rational to choose ignorance.

One example is some people’s decision not to find out whether they’ve inherited a dominant gene for an incurable disease from a parent. This knowledge won’t prevent them from developing the disease, but it will cast a shadow over the rest of their lives. And banks put up notices informing would-be robbers that staff don’t know the combinations to safes: no one can reveal what they don’t know, so there’s no point in threatening them. Ignorance, in short, can protect us from harm.

It can also counteract bias. That’s why jury members are forbidden from seeing inadmissible evidence gleaned from forced confessions or hearsay. Good scientists also ring-fence their work against partiality by conducting double-blind studies in which they’re kept in the dark about which patients received a drug and which a placebo. In both cases, ignorance helps keep people objective.

Science applies rationality to the real world.

We can make different kinds of statements about the world.

Take the claim that “All bachelors are unmarried.”

Is this statement true? Logic says it must be – a bachelor can’t be married, after all, since the concept refers to an unmarried adult male. This statement is thus unfalsifiable: there’s no way of disproving it.

Other kinds of statements are falsifiable. “All bachelors are unhappy,” for instance, is an empirical claim. To determine its truth, you have to get out of your armchair and ask flesh and blood bachelors whether they’re happy or not. If you find one contented bachelor, you’ve disproven, or “falsified,” the statement.

Logic helps us unpack the first kind of statement but we need science to verify the second.

The key message here is: Science applies rationality to the real world.

In the year 1432, a group of English monks began arguing about how many teeth there are in a horse’s mouth. Their debate, which was as learned as it was ill-tempered, raged for two weeks.

One faction appealed to the works of Aristotle to argue that it must be 30; a second camp refuted this by citing obscure ancient texts which proved that it was 50. A third group settled on 45, a conclusion which they said both Plato and the Bible supported. Finally, on the 14th day, a young friar spoke up. “Why don’t we go outside,” he suggested, “and look at a horse’s mouth?”

This story, which is commonly attributed to the sixteenth-century English philosopher and scientist Francis Bacon, is most likely fictitious. But it’s easy to see why it stuck around over the centuries – it’s a vivid illustration of the differences between Bacon’s outlook and that of the scholastics.

This second group, which consisted of Church-educated intellectuals, believed that only logical models of the world derived from trusted texts could help us understand the world. Bacon, like the young friar, believed that you have to go outside and start counting horse teeth.

Bacon’s point was that if you’re not gathering empirical evidence, you can’t escape “superstition” – or, as we’d mostly call it today, confirmation bias: the tendency to notice and recall things that confirm our theories and to ignore those that don’t. That insight remains crucial to scientific rationality. How do you tell science and pseudoscience apart? For most scientists, it’s a question of falsifiability. Are you looking for evidence that could falsify your hypothesis, or are you cocooning yourself in unfalsifiable theories?

Institutions make us less partial – and more rational.

According to the American psychologist David Myers, monotheism, the belief in a single God, is based on two claims. The first is that there is a God. The second is that neither of us is that God.

Okay, but what does that have to do with rationality?

Well, rationalism, the belief in objective truth, has a similar structure. It states, firstly, that there is an objective truth and, secondly, that neither you nor I know it.

Rationality, then, is anything but an arrogant claim to know it all – it’s an aspiration.

No mortal can say that they’ve attained objective truth, but the conviction that it’s “out there” helps us develop rules which get us closer to it as a society than we could as individuals.

The key message here is: Institutions make us less partial – and more rational.

In 1788, James Madison wrote that if humans were perfect, government wouldn’t be necessary. But we’re not, which is why the American statesman saw human nature as a political problem.

We’re selfish and ambitious, and often overly partial to our own needs and blind to our neighbors’. We can also be vicious: left to our own devices, we may end up trampling on others to get ahead.

Madison’s answer to this problem wasn’t to suppress human nature – it was to create a political system which worked with its grain. “Ambition,” in his words, “must be made to counteract ambition.” Let people be selfish and ambitious, but create a system of checks and balances to prevent any single individual or faction from tyrannizing others.

Institutional checks and balances don’t just prevent political tyranny – they also stop flawed individuals imposing their follies on the rest of us. Take the adversarial system in law, which pits lawyer against lawyer and leaves decision-making to impartial juries and judges. Anonymous peer review plays a similar role in academia, ensuring that ideas are analyzed on merit rather than through the lens of grudges and rivalries. In the public sphere, freedom of speech ensures that both popular ideas, which are often wrong, and unpopular ones, which are often right, get a fair hearing.

To paraphrase Madison, if humans were perfectly rational, these institutions wouldn’t be necessary. Since no mortal has a direct line to objective truth, we need them. The more we disagree with one another, the likelier it is that at least one of us is right.

Punishing people for their own good creates a more rational commons.

If you want a better view of the stage at a concert, it makes sense to stand up. But that obscures the view of others, so they also get up. Soon, everyone is on their feet and no one has a better view.

Arms races follow the same logic. If one country spends lots of money developing long-range missiles, it becomes rational for its hostile neighbor to do the same thing, leaving both countries poorer.

Both of these cases illustrate one of the paradoxes of rationality. When we all act rationally, the outcome can be worse for everybody – a phenomenon known as the tragedy of the commons.

This isn’t an inescapable fact of nature – it can be solved by creating the right kind of rules.

The key message here is: Punishing people for their own good creates a more rational commons.

As members of communities, we benefit from public goods like roads, sewers, and schools. These goods are a kind of commons: everyone can access them and we’re all responsible for their upkeep.

But as individuals, we benefit more if we can use these goods while letting others pay for them. The rational choice, in other words, is to free ride.

If everyone makes this choice, our community won’t have any money to maintain its public goods. No one desires this outcome, but no one wants to pay taxes when others don’t – that’s a sucker’s payoff.

This is a lose-lose scenario – it’ll leave us all worse off. So how can we solve this dilemma? Let’s take a look at a lab experiment used by economists and psychologists.

Participants are given a sum of money and then offered a chance to throw some of it into a communal pot. For every dollar they chip in, the experimenter adds another dollar. Collectively, participants’ best bet is to maximize their contributions. As individuals, they’re better off hoarding their cash and letting others contribute their money. And that’s the strategy participants typically choose.

Once others notice what they’re up to, they also stop contributing to the pot – unless the experimenter gives them the option of fining free riders, in which case contributions stay high and everyone wins.
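The logic of this public goods experiment can be sketched in a few lines of code. The payoff numbers below are illustrative assumptions, not the experimenters’ actual protocol; the setup just mirrors the summary’s description, with the experimenter matching every contributed dollar and the pot split equally:

```python
# Toy public goods game: each player starts with an endowment and may
# contribute some of it to a communal pot; the experimenter matches
# contributions dollar for dollar, and the pot is split equally.
# Payoff numbers are illustrative assumptions.

def play_round(contributions, endowment=10):
    """Return each player's payoff for one round."""
    pot = sum(contributions) * 2          # experimenter matches 1:1
    share = pot / len(contributions)      # pot is split equally
    return [endowment - c + share for c in contributions]

# Everyone contributes fully: collectively the best outcome.
print(play_round([10, 10, 10, 10]))   # every cooperator ends with 20

# One player free rides while the rest contribute:
# the free rider ends with 25, the cooperators with only 15.
print(play_round([0, 10, 10, 10]))

# Everyone free rides: each player keeps just the original 10.
print(play_round([0, 0, 0, 0]))
```

With these numbers, full cooperation pays each player 20 and universal free riding only 10, yet a lone free rider pockets 25 – which is exactly why a fine that makes free riding unprofitable keeps contributions high.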

It’s the same in the real world.

When we know that rule-breakers will be punished, we’re much happier paying our taxes. It’s not just that we don’t want to end up in jail ourselves – we also hate the idea of being suckers!

Our most important moral idea is compelling because it’s rational.

As we’ve seen, secular laws can force us to take care of the public good. Many believe religious laws do something even more important: they force us to be moral.

That argument has been around for a long time – so long, in fact, that it was familiar to Plato 2,400 years ago. But does it hold up? The Greek philosopher didn’t think so.

If something is moral simply because God commands it, Plato argued, there’s no reason for God’s commandments – they’re whims.

But if God does have reasons for his commandments – that is, if he commands something because it’s moral – it’s unclear why we can’t skip the middleman and appeal directly to those reasons.

Plato’s conclusion? Morality can be grounded in reason.

The key message here is: Our most important moral idea is compelling because it’s rational.

Humans are selfish and ambitious: we desire what’s good for us even if it harms others’ interests.

But we’re also social animals. We live in societies and depend on others to help us when we’re in need and refrain from harming us for no good reason. How, then, can we get along with one another?

Well, first off, we’ll need to have a rational conversation and agree on some rules.

Now, nothing is more fatal to reasoning than inconsistency. If a set of beliefs contains a contradiction, it can be used to deduce anything and everything. In short, it’s a recipe for anarchy.
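Logicians call this the principle of explosion: from a contradiction, any statement Q whatsoever follows. A standard classical derivation (using disjunction introduction and disjunctive syllogism) makes the point concrete:

```latex
\begin{align*}
&1.\; P \land \lnot P        && \text{(the contradictory belief set)} \\
&2.\; P                      && \text{(from 1)} \\
&3.\; P \lor Q               && \text{(from 2, disjunction introduction)} \\
&4.\; \lnot P                && \text{(from 1)} \\
&5.\; Q                      && \text{(from 3 and 4, disjunctive syllogism)}
\end{align*}
```

Since Q can be anything at all – “the moon is made of cheese,” “robbery is permitted” – a contradictory set of rules really does license everything.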

Imagine that I defend my right to rob you while insisting you don’t rob me. This “rule” is contradictory. Everyone is simultaneously an “I” for themselves and a “you” for someone else. That means any argument which says that I can do something you can’t because I’m me and you’re not is nonsensical.

At this point, we’re likely to agree that our rules must apply to all equally. Reason has thus led us to humanity’s most compelling moral idea – the Golden Rule of treating others as you’d wish to be treated. Put differently, if I wouldn’t like to be robbed by you, I shouldn’t rob you.

Every major world religion, from Hinduism and Buddhism to Confucianism, Christianity, Judaism, and Islam has its own variant of this rule. It’s also the concept we intuitively reach for when we want to teach our children about morality. “How would you like it,” we ask, “if she did that to you?”


Rationality is a tool that allows us to pursue our goals in life. It also helps us prioritize our aims by comparing short- and long-term benefits. Reason is paradoxical, though. In some situations, ignorance is a more rational choice than knowledge. In others, the worst outcome is a result of everyone being rational about their self-interest and neglecting the common good. It’s because of these paradoxes that we embed rationality in institutions. When our rules force us to be reasonable, our lives are much better and fairer.

About the author

Steven Pinker is the Johnstone Professor of Psychology at Harvard University and an award-winning author. A member of the National Academy of Sciences, he’s been named one of Time’s 100 Most Influential People and one of Foreign Policy’s 100 Leading Global Thinkers. His previous books include Enlightenment Now and The Better Angels of Our Nature.