Contra Hanson on Punishment


Hanson asks “why should we punish something only moderately?” He gives some possible explanations:

  1. People like the symbolism of being against things they don’t really want to stop. It is more about wanting to look like the sort of person who doesn’t fully approve of such things.
  2. Having more rules that are only weakly enforced allows the usual systems more ways to arbitrarily punish some folks via selective enforcement. You might like this if you share such system’s tastes re who to arbitrarily punish. Or if you want to signal submission to authorities who want to use such power.
  3. If these things were actually legal and licit, people might sometimes publicly suggest that you are engaging in them. But if they are illicit or illegal, there’s a norm against accusing someone of doing them without substantial evidence. So if you want to discourage others from lightly accusing you of such things, you may want those activities to be officially disapproved, even if you don’t actually want to discourage them.
  4. We mainly want these norms and laws to help us deal with some disliked “criminal class” out there, a class that we don’t actually interact with much. So when we see real cases in our familiar world, they seem like they are not in that class, and thus we don’t want our norms or laws to apply to them. We only want less enforcement for folks in our world.
  5. What else?

1, 2, and 4 seem plausible to me: 1 is the classic Hansonian signalling argument, while 2 and 4 should be familiar to anyone with the concept that police are something suburbanites hire to keep down lower-class African-Americans.

However, Hanson neglects the most obvious explanation: people don’t commit crime deterministically. We could choose to think that individuals weigh the costs and benefits of an action, and then take it if the benefits outweigh the costs. However, a more plausible model is that, whenever we make a decision, there’s an element of whimsy, emotion, and circumstantial impact on our choice from other aspects of our environment. Did you enjoy your breakfast? Did you fight with your mother? These all impact your decisions in ways that aren’t remotely rational.

Now, these random influences don’t tend to make us do things we would otherwise never do: I’m not going to deliberately throw my laptop out of my window. But they can exert a strong influence on decisions where we are uncertain: do I read this book or that one? Do I stay home or go to that party? They even matter in cases where we wouldn’t normally do something, but feel inspired: on my way home from work I pause to listen to a good rendition of Amazing Grace, so I feel inspired and positive, and I clean a common area unasked. Or perhaps, when my friend asks me to be his getaway driver, I say yes, because I don’t conceive of myself as doing anything harmful, my boyfriend broke up with me earlier today, and I don’t want to turn down a friend. In my model, crime has an error term:

    Value = Benefit − P(capture) × log(PrisonTerm) + e

If this value is positive, you commit the crime. Perhaps in an ideal model we could fully account for all possible sources of error, but in practice we tend to have levers like “increase likelihood of being caught” and “lengthen prison stays”, and don’t have access to “make sure that nobody ever breaks up with anybody else” (which could, it’s worth noting, have other undesirable effects). I don’t think that e is normal: more likely it has a long right tail. Most people don’t commit crimes on normal days (aside from your three felonies a day).
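The model can be sketched as a small simulation. This is a minimal illustration, not the author’s: the lognormal shape of the error and every specific number here are my assumptions.

```python
import random

# Minimal sketch of the decision model above (the lognormal error and
# these numbers are assumptions for illustration): an agent commits the
# act on a given day when Benefit - Cost + e > 0, where e is a random
# "bad day" term with a long right tail.
def commits(benefit, cost, rng):
    e = rng.lognormvariate(0, 1) - 1.0  # median 0, heavy right tail
    return benefit - cost + e > 0

rng = random.Random(0)  # fixed seed for reproducibility
trials = 100_000
rate = sum(commits(benefit=1.0, cost=3.0, rng=rng) for _ in range(trials)) / trials
print(f"fraction of days on which the act is committed: {rate:.3f}")
```

Because the tail is long, most draws of e sit near zero and the act is rare, but a bad enough day can tip anyone over the threshold.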

If this is a reasonable model, then criminal policy should strike a balance between deterrence and over-punishment of people who happened to have a high-error day, for efficiency and fairness reasons. A long right tail means that doubling total punishment can have a rather small impact on likelihood of committing a crime, as this (very rough) graph demonstrates.

Moving from x1 to x2 requires doubling the likelihood of capture (or equivalent changes), but only decreases the likelihood of crime from 50% to 33%. That might be correct as a tradeoff, but it’s obvious that there is some point at which marginal punishment, even though it will deter marginal crime, mostly increases punishments for those who were already going to commit the crime. Given that putting someone in prison is more expensive than providing food, shelter, and a college education, increasing imprisonment of people who were already going to commit a crime is a substantial cost.
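The shape of that tradeoff can be made concrete with a rough numeric version of the graph. This is my construction, not the author’s actual curve: I assume the error term is lognormal and pick the spread (sigma = 1.6) so the numbers match the 50% to 33% example above.

```python
import math
from statistics import NormalDist

# Assumed: the error term e is lognormal, so the act happens whenever e
# exceeds the total deterrence level x. With sigma = 1.6, doubling
# deterrence from 1 to 2 cuts the crime probability from 50% to about
# 33%, and each further doubling buys less.
ERROR_DIST = NormalDist(mu=0.0, sigma=1.6)

def crime_probability(deterrence):
    """P(commit) = P(e > deterrence) for lognormal e."""
    return 1 - ERROR_DIST.cdf(math.log(deterrence))

for x in (1, 2, 4, 8):
    print(f"deterrence x{x}: crime probability {crime_probability(x):.2f}")
```

Each doubling of deterrence removes a smaller slice of crime than the previous one, which is the long-right-tail point in numeric form.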

On the other side, people often have a fairness intuition. It is unfair if, for the same action, one person is handed a heavy punishment and the other a light one. It would be unfair if, after sentencing someone to jail, we flipped a coin: heads we double the sentence, tails we let them go.

As such, if people are frequently given the option to commit or not commit a crime, and they repeatedly choose crime, this suggests that error was not the cause of their actions. If your actions were caused by the error term (your meds interacted badly with what you didn’t realize was grapefruit; you didn’t get any sleep last night because the person upstairs held a loud party), it is unfair to punish you, at least not very harshly. Combining this with a belief that inherent criminality is a dominant factor in propensity to commit crime has led to three-strikes laws. Similarly, your first drunk-driving or speeding ticket, in every jurisdiction I’m aware of, is treated substantially more leniently than your fifth. The first could have been because you rushed your partner to a hospital and happened to get caught; your fifth is probably because you speed on a regular basis.

Note also that I have the criminal consider log(PrisonTerm) as their cost, while society has to pay linearly for each year. log(PrisonTerm) may not be the correct model, but there’s no question that the cost of prison to the imprisoned is non-linear in sentence length. Each additional year therefore buys less deterrence at the same cost, which implies that we should, at some point, stop. I believe there is substantial evidence that we imprison people far longer than makes sense, but I won’t defend that point here.
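This asymmetry is easy to see with toy numbers. The log-shaped perceived cost follows the text; the specific figures are mine.

```python
import math

# Assumed: the deterrence a criminal feels grows like log(term), while
# society pays roughly linearly for each year served. The deterrence
# added by each additional year therefore shrinks as sentences lengthen,
# while the social cost of that year stays constant.
def perceived_cost(years):
    return math.log(1 + years)

for years in (1, 2, 4, 8, 16):
    marginal = perceived_cost(years) - perceived_cost(years - 1)
    print(f"year {years:2d}: total deterrence {perceived_cost(years):.2f}, "
          f"deterrence added by this year {marginal:.2f}")
```

The first year of prison carries far more deterrent weight per dollar than the sixteenth.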

Finally, we apply different levels of punishment to different crimes, and for good reasons. One is marginal deterrence. Scott Alexander says it well:

Chen Sheng was an officer serving the Qin Dynasty, famous for their draconian punishments. He was supposed to lead his army to a rendezvous point, but he got delayed by heavy rains and it became clear he was going to arrive late. The way I always hear the story told is this:

Chen turns to his friend Wu Guang and asks “What’s the penalty for being late?”

“Death,” says Wu.

“And what’s the penalty for rebellion?”

“Death,” says Wu.

“Well then…” says Chen Sheng.

And thus began the famous Dazexiang Uprising, which caused thousands of deaths and helped usher in a period of instability and chaos that resulted in the fall of the Qin Dynasty three years later.

The moral of the story is that if you are maximally mean to innocent people, then eventually bad things will happen to you. First, because you have no room to punish people any more for actually hurting you. Second, because people will figure if they’re doomed anyway, they can at least get the consolation of feeling like they’re doing you some damage on their way down.

Yes, being late is bad and something we want to deter. But once you’ve used your strongest possible deterrent, you have no further options to incentivize marginal compliance. As such, we don’t want to use the death penalty on illegal immigration: for the offender, this would make the marginal cost of every other crime zero, which would be bad.

The other is spread. It is costly to society to imprison people: we have limited total resources for deterrence, which must be spread over all possible bad things. Police officers and jail cells focused on illegal immigrants are ones that aren’t focused on speeding drivers. Arpaio’s famous reign of terror led to substantially increased violent crime: focusing on one sort of crime (illegal immigration) meant focusing less on everything else. When we reduce the punishment and enforcement effort for one crime, we (typically) free up resources for everything else. If I think that we are over-punishing a particular crime, and focusing too many resources on it, I should push for lowered punishment.



Short Review: The Big Nine

Overall: Not strongly recommended to anyone familiar with artificial intelligence and futurism. Possibly useful to see what other people are reading.

Analysis: The present-day material seemed relatively accurate, with small econ errors (trash collection is both rivalrous and excludable, as is clear to anyone who thinks about it). The forecasted future is overly specific and confident. The causes of the different futures are implied to be largely about how aggressively we check China, and how left-wing we are on social/data issues.

Politics: Very left-wing on social issues. If you’re familiar with the AI bias literature, there’s not much new here. Explicitly and surprisingly nationalist, and calls China “Communist” for some reason I don’t understand. The book is pitched very strongly at existing large US tech companies, trying to get them on board with the author’s vision.