Think Like a Freak - Steven D. Levitt, Stephen J. Dubner (2014)

Chapter 7. What Do King Solomon and David Lee Roth Have in Common?

King Solomon built the First Temple in Jerusalem and was known throughout the land for his wisdom.

David Lee Roth fronted the rock band Van Halen and was known throughout the land for his prima-donna excess.

What could these two men conceivably have in common? Here are a few possibilities:

1. Both of them were Jewish.

2. They both got a lot of girls.

3. They both wrote the lyrics to a number-one pop song.

4. They both dabbled in game theory.

As it happens, all four of these statements are true. Some confirmatory facts:

1. David Lee Roth was born into a Jewish family in Bloomington, Indiana, in 1954; his father, Nathan, was an ophthalmologist. (It was while preparing for his bar mitzvah that David learned to sing.) King Solomon was born into a Jewish family in Jerusalem, circa 1000 BCE; his father, David, had also been king.

2. David Lee Roth “slept with every pretty girl with two legs in her pants,” he once said. “I even slept with an amputee.” King Solomon “loved many foreign women,” according to the Bible, including “seven hundred wives, princesses, and three hundred concubines.”

3. David Lee Roth wrote the lyrics for most of Van Halen’s songs, including its sole number-one hit, “Jump.” King Solomon is thought to have authored some or all of the biblical books Proverbs, Song of Songs, and Ecclesiastes. The folk singer Pete Seeger used several verses from Ecclesiastes as lyrics to his song “Turn! Turn! Turn!”—which, when recorded by the Byrds in 1965, became a number-one hit.*

4. One of the most famous stories about each man involves a clever piece of strategic thinking that anyone who wishes to think like a Freak would do well to mimic.

Solomon, a young man when he inherited the throne, was eager to prove his judgment was sound. He was soon given a chance to do that when two women, prostitutes by trade, came to him with a dilemma. The women lived in the same house and, within the space of a few days, had each given birth to a baby boy. The first woman told the king that the second woman’s baby died, and that the second woman “arose at midnight, and took my son from beside me … and laid the dead child in my bosom.” The second woman disputed the story: “Nay; but the living is my son, and the dead is thy son.”

One of the women was plainly lying, but which one? How was King Solomon supposed to tell who was the mother of the living child?

“Fetch me a sword,” he said. “Divide the living child in two, and give half to the one, and half to the other.”

The first woman begged the king to not hurt the baby, and instead give it to the second woman.

The second woman, however, embraced the king’s solution: “It shall be neither mine nor thine,” she said. “Divide it.”

King Solomon promptly ruled in favor of the first woman. “Give her the living child,” he said. “She is the mother thereof.” The Bible tells us that “all Israel heard of the judgment” and they “saw that the wisdom of God was in him, to do justice.”

How did Solomon know the true mother?

He reasoned that a woman cruel enough to go along with his baby-carving plan was cruel enough to steal another’s child. And, further, that the real mother would rather give up her child than see it die. King Solomon had set a trap that encouraged the guilty and the innocent to sort themselves out.*

As clever as that was, David Lee Roth may have been a bit cleverer. By the early 1980s, Van Halen had become one of the biggest rock-and-roll bands in history. They were known to party particularly hard while on tour. “[N]o matter where Van Halen alights,” Rolling Stone reported, “a boisterous, full-blown saturnalia is bound to follow.”

The band’s touring contract carried a fifty-three-page rider that laid out technical and security specs as well as food and beverage requirements. On even calendar days, the band was to be served roast beef, fried chicken, or lasagna, with sides of Brussels sprouts, broccoli, or spinach. Odd days meant steak or Chinese food with green beans, peas, or carrots. Under no circumstances was dinner to be served on plastic or paper plates, or with plastic flatware.

On page 40 of the exhaustive Van Halen rider was the “Munchies” section. It demanded potato chips, nuts, pretzels, and “M&M’s (WARNING: ABSOLUTELY NO BROWN ONES).”*

What was up with that? The nut and chip requests weren’t nearly so nitpicky. Nor the dinner menu. So why the hang-up with brown M&M’s? Had someone in the band had a bad experience with them? Did Van Halen have a sadistic streak and take pleasure in making some poor caterer hand-sort the M&M’s?

When the M&M clause was leaked to the press, it was seen as a classic case of rock-star excess, of the band “being abusive of others simply because we could,” Roth said years later. But, he explained, “the reality is quite different.”

Van Halen’s live show was an extravaganza, with a colossal stage set, booming audio, and spectacular lighting effects. All this equipment required a great deal of structural support, electrical power, and the like. But many of the arenas they played were outdated. “[T]hey didn’t have even the doorways or the loading docks to accommodate a super-forward-thinking, gigantor, epic-sized Van Halen production,” Roth recalled.

Thus the need for a fifty-three-page rider. “Most rock-and-roll bands had a contract rider that was like a pamphlet,” Roth says. “We had one that was like the Chinese phone book.” It gave point-by-point instructions to ensure that the promoter at each arena provided enough physical space, load-bearing capacity, and electrical power. Van Halen wanted to make sure no one got killed by a collapsing stage or a short-circuiting light tower.

But every time the band pulled into a new city, how could they be sure the local promoter had read the rider and followed all the safety procedures?

Cue the brown M&M’s. When Roth arrived at the arena, he’d immediately go backstage to check out the bowl of M&M’s. If he saw brown ones, he knew the promoter hadn’t read the rider carefully—and that “we had to do a serious line check” to make sure the important equipment had been properly set up.

He also made sure to trash the dressing room if he did find brown M&M’s. This would be construed as nothing more than rock-star folly, thereby keeping his trap safe from detection. But we suspect he enjoyed it all the same.

And so it was that David Lee Roth and King Solomon both engaged in a fruitful bit of game theory—which, narrowly defined, is the art of beating your opponent by anticipating his next move.

There was a time when economists thought that game theory would take over the world, helping to shape or predict all sorts of important outcomes. Alas, it proved to be not nearly as useful or interesting as promised. In most cases, the world is too complicated for game theory to work its supposed magic. But again, thinking like a Freak means thinking simply—and as King Solomon and David Lee Roth showed, a simple version of game theory can work wonders.

As disparate as their settings were, the two men faced a similar problem: a need to sift the guilty from the innocent when no one was stepping forward to profess their guilt. In economist-speak, there was a “pooling equilibrium”—the two mothers in Solomon’s case, and all the tour promoters in Van Halen’s case—that needed to be broken down into a “separating equilibrium.”

A person who is lying or cheating will often respond to an incentive differently than an honest person. How can this fact be exploited to ferret out the bad guys? Doing so requires an understanding of how incentives work in general (which you gained in the last chapter) and how different actors may respond differently to a given incentive (as we’ll discuss in this one). Certain tools in the Freak arsenal may come in handy only once or twice in your lifetime. This is one such tool. But it has power and a certain elegance, for it can entice a guilty party to unwittingly reveal his guilt through his own behavior.

What is this trick called? We have scoured history books and other texts to find a proper name for it, but came up empty. So let’s make up something. In honor of King Solomon, we’ll treat this phenomenon as if it were a lost proverb: Teach Your Garden to Weed Itself.

Imagine you’ve been accused of a crime. The police say you stole something or beat up someone or perhaps drunkenly drove your vehicle through a park and mowed down everyone in sight.

But the evidence is murky. The judge assigned to your case does her best to figure out what happened, but she can’t be sure. So she comes up with a creative solution. She decrees that you will plunge your arm into a cauldron of boiling water. If you come away unhurt, you will be declared innocent and set free; but if your arm is disfigured, you will be convicted and sent to prison.

This is precisely what happened in Europe for hundreds of years during the Middle Ages. If the court couldn’t satisfactorily determine whether a defendant was guilty, it turned the case over to a Catholic priest who would administer an “ordeal” that used boiling water or a smoking-hot iron bar. The idea was that God knew the truth and would miraculously deliver from harm any suspect who had been wrongly accused.

As a means of establishing guilt, how would you characterize the medieval ordeal?

1. Barbaric

2. Nonsensical

3. Surprisingly effective

Before you answer, let’s think about the incentives at play here. Picture a shepherd living in the north of England some one thousand years ago. We’ll call him Adam. He has a next-door neighbor, Ralf, who is also a shepherd. The two of them don’t get along. Adam suspects that Ralf once stole a few of his sheep. Ralf spreads word that Adam packs his wool bales with stones to drive up their weight at market. The two men regularly quarrel over rights to a communal grazing meadow.

One morning, Ralf’s entire flock of sheep turns up dead, apparently poisoned. He promptly accuses Adam. While Adam may indeed have an incentive to kill Ralf’s flock—less wool from Ralf means a higher price for Adam—there are certainly other possibilities. Maybe the sheep died of disease or a natural poison. Maybe they were poisoned by a third rival. Or perhaps Ralf poisoned the sheep himself in order to get Adam sent to prison or fined.

Evidence is collected and brought before the court, but it is hardly conclusive. Ralf claims he spotted Adam lurking near his flock the night before the incident, but given the rivals’ acrimony, the judge wonders if Ralf is lying.

Imagine now that you are the judge: How are you supposed to determine whether Adam is guilty? And imagine further that instead of one such case, there are 50 Adams before the court. In each instance, the evidence is too weak to convict, but you also don’t want to set a criminal free. How can the innocent be weeded from the guilty?

By letting the garden weed itself.

The judge gives each Adam two choices. He can either plead guilty or submit to a trial by ordeal, putting his fate in God’s hands. From our modern perspective, it’s hard to imagine an ordeal as an effective way to separate the guilty from the innocent—but was it?

Let’s take a look at the data. The economist Peter Leeson, whose research has covered topics like Gypsy law and pirate economics, did just that. One set of church records from thirteenth-century Hungary included 308 cases that entered the trial-by-ordeal phase. Of these, 100 were aborted before producing a final result. That left 208 cases in which the defendant was summoned by a priest to the church, climbed the altar, and—after his fellow congregants were ushered in to observe from a distance—was forced to grab hold of a red-hot iron bar.

How many of those 208 people do you think were badly burned? All 208? Don’t forget, we’re talking about red-hot iron here. Maybe 207 or 206?

The actual number is 78. Which means that the remaining 130—nearly two-thirds of the defendants who underwent the ordeal—were miraculously unharmed and thereby exonerated.
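The arithmetic behind these figures is easy to check. The counts below come straight from the passage; nothing else is assumed:

```python
# Outcomes of the 308 trial-by-ordeal cases in the
# thirteenth-century Hungarian church records.
total_cases = 308
aborted = 100                              # settled before the ordeal
underwent_ordeal = total_cases - aborted   # 208 faced the hot iron
burned = 78                                # disfigured, hence convicted
exonerated = underwent_ordeal - burned     # 130 "miraculously" unharmed

share_exonerated = exonerated / underwent_ordeal
print(f"{exonerated} of {underwent_ordeal} defendants exonerated")
```

The share works out to 62.5 percent, the "nearly two-thirds" in the text.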

Unless these 130 miracles were in fact miracles, how can they be explained?

Peter Leeson thinks he knows the answer: “priestly rigging.” That is, a priest somehow tinkered with the setup to make the ordeal look legitimate while ensuring that the defendant wouldn’t be disfigured. This wouldn’t have been difficult, since the priest had ultimate control over the entire situation. Maybe he swapped out the red-hot bar of iron for a cooler one. Or, when using the boiling-water ordeal, maybe he dumped a pail of cold water into the cauldron before the congregants entered the church.

Why would a priest do this? Was he simply exercising a bit of human mercy? Did he perhaps accept bribes from certain defendants?

Leeson sees a different explanation. Let’s think back to those 50 Adams on which the court is undecided. We’ll assume that some are guilty and some innocent. As noted earlier, a guilty person and an innocent one will often respond to the same incentive in different ways. What are the guilty Adams and the innocent Adams thinking in this case?

A guilty Adam is probably thinking something like this: God knows I am guilty. If, therefore, I undergo the ordeal, I will be horribly scalded. Not only will I then be imprisoned or fined, but I’ll spend the rest of my life in pain. So perhaps I should go ahead and confess my guilt in order to avoid the ordeal.

And what would an innocent Adam think? God knows I am innocent. I will therefore undergo the ordeal, since God would never allow this fiery curse to harm me.

So the belief that God would intervene in their trial by ordeal, Leeson writes, “created a separating equilibrium in which only innocent defendants were willing to undergo ordeals.” This helps explain why 100 of the 308 ordeals were aborted: the defendants in these cases settled with the plaintiff—presumably, at least in many instances, because the defendant was guilty and figured he’d be better off accepting his punishment without the additional penalty of being burned.
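Leeson's separating logic can be sketched in a few lines. The payoff numbers below are invented for illustration; only their ordering matters. A believing defendant expects God to expose guilt at the ordeal, so the guilty prefer to confess while the innocent prefer the cauldron:

```python
def best_choice(guilty: bool, believes: bool) -> str:
    """Pick the option with the higher expected payoff for a
    defendant. Payoff values are illustrative, not from the text."""
    confess = -10  # fine or prison, but no burns
    if believes:
        # A believer expects God to reveal the truth at the ordeal:
        # burned and punished if guilty, unharmed and freed if not.
        ordeal = -30 if guilty else 0
    else:
        # A nonbeliever just sees a scalding gamble either way.
        ordeal = -15
    return "ordeal" if ordeal > confess else "confess"

# Among believers, the choices sort guilt from innocence by themselves.
assert best_choice(guilty=True, believes=True) == "confess"
assert best_choice(guilty=False, believes=True) == "ordeal"
```

Note that the nonbeliever confesses regardless of guilt, which is why, as discussed below, the mechanism depends on widespread belief in a justice-seeking God.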

And what about our shepherd Adam? Let’s say for the sake of argument that he did not poison Ralf’s flock and was framed by Ralf. What would Adam’s fate be? By the time he stood in the church before the bubbling cauldron, praying for mercy, the priest would likely have reckoned that Adam was innocent. So he’d rig the ordeal accordingly.

Let’s not forget that 78 defendants in this data set were scalded and then fined or sent to prison. What happened in those cases?

Our best explanation is that either (1) the priests believed these defendants really were guilty; or (2) the priests had to at least keep up appearances that a trial by ordeal really worked, or else the threat would lose its power to sort the innocent from the guilty—and so these folks were sacrificed.

We should also note that the threat would lose its power if the defendants didn’t believe in an all-powerful, all-knowing God who punished the guilty and pardoned the innocent. But history suggests that most people at the time did indeed believe in an all-powerful, justice-seeking God.

Which leads us to the most bizarre twist in this bizarre story: if medieval priests did manipulate the ordeals, that might make them the only parties who thought an all-knowing God didn’t exist—or if he did, that he had enough faith in his priestly deputies to see their tampering as part of a divine quest for justice.

You too can play God once in a while if you learn to set up a self-weeding garden.

Let’s say you work for a company that hires hundreds of new employees each year. Hiring takes a lot of time and money, especially in industries in which workers come and go. In the retail trade, for instance, employee turnover is roughly 50 percent annually; among fast-food workers, the rate can approach 100 percent.

So it isn’t surprising that employers have worked hard to streamline the application process. Job seekers can now fill out online applications in twenty minutes from the comfort of their homes. Great news, right?

Maybe not. Such an easy application process may attract people with only minimal interest in the job, who look great on paper but aren’t likely to stick around long if hired.

So what if employers, rather than making the application process ever easier, made it unnecessarily onerous—with, say, a 60- or 90-minute application that weeds out the dilettantes?

We’ve pitched this idea to a number of companies, and have gotten exactly zero takers. Why? “If we make the application process longer,” they say, “we’ll get fewer applicants.” That, of course, is exactly the point: you’d immediately get rid of the applicants who are more likely to not show up on time or quit after a few weeks.

Colleges and universities, meanwhile, have no such qualms about torturing their applicants. Think about how much work a high-school student must do to even be considered for a spot at a decent college. The difference between college and job applications is especially striking when you consider that a job applicant will be paid upon acceptance while a college applicant pays for the privilege of attending.

But this does help explain why a college degree remains so valuable. (In the United States, a worker with a four-year degree earns about 75 percent more than someone with only a high-school degree.) What sort of signal does a college diploma send to a potential employer? That its holder is willing and able to complete all sorts of drawn-out, convoluted tasks—and, as a new employee, isn’t likely to bolt at the first sign of friction.

So, absent the chance to make every job applicant work as hard as a college applicant, is there some quick, clever, cheap way of weeding out bad employees before they are hired?

Zappos has come up with one such trick. You will recall from the last chapter that Zappos, the online shoe store, has a variety of unorthodox ideas about how a business can be run. You may also recall that its customer-service reps are central to the firm’s success. So even though the job might pay only $11 an hour, Zappos wants to know that each new employee is fully committed to the company’s ethos. That’s where “The Offer” comes in. When new employees are in the onboarding period—they’ve already been screened, offered a job, and completed a few weeks of training—Zappos offers them a chance to quit. Even better, quitters will be paid for their training time and also get a bonus representing their first month’s salary—roughly $2,000—just for quitting! All they have to do is go through an exit interview and surrender their eligibility to be rehired at Zappos.

Doesn’t that sound nuts? What kind of company would offer a new employee $2,000 to not work?

A clever company. “It’s really putting the employee in the position of ‘Do you care more about money or do you care more about this culture and the company?’” says Tony Hsieh, the company’s CEO. “And if they care more about the easy money, then we probably aren’t the right fit for them.”

Hsieh figured that any worker who would take the easy $2,000 was the kind of worker who would end up costing Zappos a lot more in the long run. By one industry estimate, it costs an average of roughly $4,000 to replace a single employee, and one recent survey of 2,500 companies found that a single bad hire can cost more than $25,000 in lost productivity, lower morale, and the like. So Zappos decided to pay a measly $2,000 up front and let the bad hires weed themselves out before they took root. As of this writing, fewer than 1 percent of new hires at Zappos accept “The Offer.”
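The back-of-the-envelope case for “The Offer” follows from the figures just quoted. The cohort size below is hypothetical, and treating a declined offer as costless is our simplifying assumption; the dollar amounts and the sub-1-percent take rate are from the passage:

```python
# Cost figures quoted in the passage.
offer_cost = 2_000       # paid to each employee who quits
bad_hire_cost = 25_000   # survey estimate of one bad hire's cost

new_hires = 1_000        # hypothetical annual cohort
take_rate = 0.01         # "fewer than 1 percent" accept The Offer
quitters = int(new_hires * take_rate)

# Total paid out to the self-selected quitters...
offer_total = quitters * offer_cost
# ...versus how many bad hires that outlay needs to avert to break even.
break_even_bad_hires = offer_total / bad_hire_cost
print(quitters, offer_total, break_even_bad_hires)
```

In this sketch, paying ten quitters $20,000 total pays for itself if it averts even a single $25,000 bad hire.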

The Zappos weeding mechanism is plainly different from those employed by medieval priests, David Lee Roth, and King Solomon. In this case, Zappos is operating with utter transparency; there is no trick whatsoever. The other cases are all about the trick. It is the trick that makes one party reveal himself, unaware that he is being manipulated. The Zappos story therefore may strike you as more virtuous. But using a trick is—let’s be honest—more fun. Consider the case of a secret bullet factory in Israel.

After World War II, the British government declared it would relinquish its rule of Palestine. Britain was depleted from the war and weary of refereeing the fractious co-existence of Arabs and Jews.

For the Jews living in Palestine, it seemed inevitable that a war with their Arab neighbors would break out as soon as the British left. So the Jewish paramilitary group Haganah began to stockpile arms. Guns were not in terribly short supply—they could be smuggled in from Europe and elsewhere—but bullets were, and it was illegal to manufacture them under British rule. So the Haganah decided to build a clandestine bullet factory on a hilltop kibbutz near Rehovot, some fifteen miles from Tel Aviv. Its code name: The Ayalon Institute.

The kibbutz had a citrus grove, a vegetable farm, and a bakery. The Institute would be located in the secret basement of a laundry building. The laundry was meant to drown out the noise of bullet-making and provide a cover story: kibbutz workers reported there for work and then, pushing aside one of the huge washing machines, descended a ladder to the factory below. Using equipment bought in Poland and smuggled in, the Institute began cranking out 9-millimeter bullets for the Sten submachine gun.

The bullet factory was so secret that women who worked there weren’t allowed to tell their husbands what they were doing. The operation had to be hidden from not only the Arabs but the British too. This was especially tricky since British soldiers stationed nearby liked to have their laundry done at the kibbutz. They also dropped by to socialize—some of the kibbutzniks had fought alongside the British during World War II, as members of the Jewish Brigade.

Already there had been one close call: a British officer showed up just as a bullet-making machine was being lowered through the floor into the factory. “The fellows escorted him into the dining hall, served him beer, and we managed to get the machine down, close the opening, and conceal it,” the former plant manager recalled.

Still, they were rattled. Had the British officer not been tempted by a glass of beer, the Institute likely would have been shut down, its ringleaders sent to prison. They needed to protect against another surprise visit.

The solution, the story goes, was in the beer. The British officers had complained that the beer at the kibbutz was too warm; they preferred it chilled. Their Jewish friends, eager to please, made a proposal: The next time you plan to visit, call us beforehand and we will put some beer on ice for you. Done and done! According to kibbutz legend at least, this warm-beer alarm worked like a charm: the British officers never again pulled a surprise visit to the factory, which went on to produce more than two million bullets for use in Israel’s War of Independence. The kibbutzniks had cannily appealed to the Brits’ narrow self-interest in order to satisfy their own much broader one.

There are plainly a variety of ways to teach a garden to weed itself (or, if you prefer, to create a separating equilibrium). The secret bullet factory and Zappos each dangled some bait—cold beer in one case, $2,000 in the other—that helped sort things out. The priestly ordeals relied on the threat of an omniscient God. David Lee Roth and King Solomon, meanwhile, each had to make themselves look bad in order to flush out the truth—Roth by posing as an even bigger prima donna than he was and Solomon by suggesting he was a bloodthirsty tyrant, eager to settle a maternity dispute by hacking the baby to pieces.

The method notwithstanding, seducing people to sort themselves into different categories can be all sorts of useful. It can also be extraordinarily profitable. Consider the following e-mail:

Dear Sir/Madam, TOP SECRET:

I am one of the officials in the Energy management board in Lagos, Nigeria. I got your information in a business directory from the Chamber of Commerce and Industries when I was searching for a RELIABLE, HONEST, AND TRUSTWORTHY person to entrust this business with.

During the award of a contract to bring Electrification to Urban centres, a few of my colleagues and I had inflated the amount of this contract. The OVER-INVOICED AMOUNT is being safeguarded under our custody.

However, we have decided to transfer this sum of money, $10.3 million USA Dollars, out of Nigeria. Hence, we seek for a reliable, honest and not greedy foreign partner whom we shall use his or her account to transferring the fund. And we agreed that THE ACCOUNT OWNER SHALL BENEFIT 30% of the total amount of money.

If you are capable to handle the transaction without hitches and flaws, then we have confidence in the deal. Please, make it TOP SECRET and avoid every channel of implicating us here thereby endanger our career.

If this is of interest to you please do contact me immediately through this email address for more details and for easier communication.

Have you ever received an e-mail like this? Of course you have! There is probably one worming its way into your in-box at this very moment. If not from a government official, it purports to be from a deposed prince or a billionaire’s widow. In each case, the author has the rights to millions of dollars but needs help extracting it from a rigid bureaucracy or uncooperative bank.

That’s where you come in. If you will send along your bank-account information (and perhaps a few sheets of blank letterhead from said bank), the widow or prince or government official can safely park the money in your account until everything is straightened out. There is a chance you will need to travel to Africa to handle the sensitive paperwork. You may also need to advance a few thousand dollars to cover some up-front fees. You will of course be richly rewarded for your trouble.

Does such an offer tempt you? We hope not. It is a stone-cold scam, variations of which have been practiced for centuries. An early version was known as the Spanish Prisoner. The scammer pretended to be a wealthy person who’d been wrongly jailed and cut off from his riches. A huge reward awaited the hero who would pay for his release. In the old days, the con was played via postal letter or face-to-face meetings; today it lives primarily on the Internet.

The generic name for this crime is advance-fee fraud, but it is more commonly called the Nigerian letter fraud or 419 fraud, after a section of the Nigerian criminal code. While advance-fee fraud is practiced in many places, Nigeria seems to be its epicenter: more e-mail scams of this sort invoke Nigeria than all other countries combined. Indeed, the connection is so famous that if you type “Nigeria” into a search engine, the auto-fill function will likely supply you with “Nigerian scam.”

Which might lead you to wonder: If the Nigerian scam is so famous, why would a Nigerian scammer ever admit he is from Nigeria?

That was the question Cormac Herley asked himself. Herley is a computer scientist at Microsoft Research who has long been interested in how fraudsters abuse technology. In a previous job, at Hewlett-Packard, one of his concerns was that increasingly sophisticated desktop printers could be used to counterfeit money.

Herley hadn’t thought much about the Nigerian scam until he heard two people mention it from opposite angles. One talked about the millions or even billions of dollars the scammers earn. (Firm numbers are hard to come by, but Nigerian scammers have been successful enough for the U.S. Secret Service to set up a task force; one California victim lost $5 million.) The other person noted how stupid these Nigerians must be to send out letters full of such outlandish stories and leaps of illogic.

Herley wondered how both of these statements could be true. If the scammers are so dumb and their letters so obviously a scam, how could they be successful? “When you see an apparent contradiction,” he says, “you start digging, see if you can figure out a mechanism by which it does make sense.”

He began to examine the scam from the scammers’ perspective. For anyone wishing to commit fraud, the Internet has been a wondrous gift. It makes it easy to obtain a huge batch of e-mail addresses and instantaneously send out millions of bait letters. So the cost of contacting potential victims is incredibly low.

But converting a potential victim into a real one will require a good deal of time and effort—typically a long series of e-mails, perhaps some phone calls, and ultimately the bank paperwork.

Let’s say for every 10,000 scam e-mails you send, 100 people take the initial bait and write back. The 9,900 who trashed your e-mail haven’t cost you anything. But now you start to invest significantly in those 100 potential victims. For every one of them who wises up or gets scared off or simply loses interest, your profit margin decreases.

How many of these 100 will end up actually paying you? Let’s say one of them goes all the way. The other 99 are, in the parlance of statistics, false positives.
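Herley's argument can be caricatured in a few lines. The funnel numbers (10,000 e-mails, 100 replies, 1 payer) are the passage's hypothetical; the payoff and per-reply follow-up cost are invented. The point is only that profit hinges on the quality of the reply pool, not its size:

```python
def expected_profit(emails, reply_rate, convert_rate, payoff,
                    cost_per_email, cost_per_reply):
    """Scammer's profit: sending is nearly free, but every reply,
    true positive or false, demands costly follow-up effort."""
    replies = emails * reply_rate
    victims = replies * convert_rate
    return (victims * payoff
            - emails * cost_per_email
            - replies * cost_per_reply)

# The passage's example: 10,000 e-mails, 100 replies, 1 victim.
base = expected_profit(10_000, 0.01, 0.01, payoff=20_000,
                       cost_per_email=0.001, cost_per_reply=100)

# A less outlandish letter: twice the replies, but diluted with
# skeptics who balk later, so a smaller share of repliers ever pays.
diluted = expected_profit(10_000, 0.02, 0.004, payoff=20_000,
                          cost_per_email=0.001, cost_per_reply=100)

assert base > diluted  # more replies, yet lower overall profit
```

Under these made-up costs, the outlandish letter nets a profit while the broader one loses money, which is exactly the mechanism Herley describes below.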

Internet fraud is hardly the only realm haunted by false positives. Roughly 95 percent of the burglar alarms that U.S. police respond to are false alarms. That makes for a total of 36 million false positives a year, at a cost of nearly $2 billion. In medicine, we rightly worry about false negatives—a fatal ailment, for instance, that goes undetected—but false positives are also a huge problem. One study found an astonishingly high rate of false positives (60 percent for men, 49 percent for women) among patients who were regularly screened for prostate, lung, colorectal, or ovarian cancer. One task force went so far as to argue that ovarian screening for healthy women should be eliminated entirely since it’s not very effective to begin with and because false positives lead too many women “to unnecessary harms, such as major surgery.”

One of the most disruptive false positives in recent memory occurred in Cormac Herley’s own field of computer security. In 2010, the McAfee antivirus software identified a malevolent file on vast fleets of computers running Microsoft Windows. It promptly attacked the file, either deleting or quarantining it, depending on how a given computer was configured. Only one problem: the file wasn’t malevolent—and, in fact, was a key component of the Windows start-up function. The antivirus software, by falsely attacking a healthy file, sent “millions of PC’s into never-ending reboot cycles,” says Herley.

So how can a Nigerian scammer minimize his false positives?

Herley used his mathematical and computing skills to model this question. Along the way, he identified the most valuable characteristic in a potential victim: gullibility. After all, who else but a supremely gullible person would send thousands of dollars to a faraway stranger based on a kooky letter about some misbegotten fortune?

How can a Nigerian scammer tell, just by looking at thousands of e-mail addresses, who is gullible and who is not? He can’t. Gullibility is in this case an unobservable trait. But, Herley realized, the scammer can invite the gullible people to reveal themselves. How?

By sending out such a ridiculous letter—including prominent mentions of Nigeria—that only a gullible person would take it seriously. Anyone with an ounce of sense or experience would immediately trash an e-mail like this. “The scammer wants to find the guy who hasn’t heard of it,” Herley says. “Anybody who doesn’t fall off their chair laughing is exactly who he wants to talk to.”

Here’s how Herley put it in a research paper: “The goal of the e-mail is not so much to attract viable users as to repel the non-viable ones, who greatly outnumber them. … A less-outlandish wording that did not mention Nigeria would almost certainly gather more total responses and more viable responses, but would yield lower overall profit. … [T]hose who are fooled for a while but then figure it out, or who balk at the last hurdle, are precisely the expensive false positives that the scammer must deter.”

If your first instinct was to think that Nigerian scammers are stupid, perhaps you have been convinced, as Cormac Herley was, that this is exactly the kind of stupid we should all aspire to be. Their ridiculous e-mails are in fact quite brilliant at getting the scammers’ massive garden to weed itself.

That said, these men are crooks and thieves. As much as one might admire their methodology, it’s hard to celebrate their mission. And so now that we understand how their game works, is there a way to turn their methodology against them?

Herley believes there is. He notes with approval a small online community of “scambaiters” who intentionally engage Nigerian scammers in time-wasting e-mail conversations. “They do this mostly for bragging rights,” he says. Herley would like to see this effort broadened by automation. “What you want is to build a chatbot,” he says, “a computer program that can have a conversation with you. There are examples out there—there’s a chatbot psychotherapist, for instance. You’d want to build something that engages the scammer on the other side, pulls him in a bit. You don’t need to keep him talking for 20 round-trip e-mails, but if every time he has to put in some effort, that’d be great.”

In other words, Herley would like to see a smart computer programmer pretend to be dumb in order to outwit a smart scammer who is also pretending to be dumb in order to find a victim who is, if not dumb, then extremely gullible.

Herley’s chatbot would flood a scammer’s system with false positives, making it virtually impossible to pick out a real victim. You might think of it as carpet-bombing the scammers’ gardens with millions upon millions of weeds.
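A toy sketch of the kind of scambaiting bot Herley describes might look like the following. Everything here is invented for illustration—the canned replies, the function names—and a real system would need e-mail plumbing and far smarter text generation; the point is only the mechanism: never send money, just keep generating plausible stalling replies so the scammer burns effort on a false positive.

```python
# Minimal illustrative sketch of a "scambaiter" reply generator.
# All reply text and names are hypothetical; a real chatbot would
# need e-mail handling and genuine conversational ability.
import itertools

STALL_REPLIES = [
    "This sounds wonderful! Can you explain the transfer fees again?",
    "My bank asked for more documents. Could you resend the certificate?",
    "I tried to wire the money but the form was rejected. What now?",
    "I'm traveling this week -- can we continue next Monday?",
]

def make_scambaiter():
    """Return a reply function that cycles through stalling messages,
    costing the scammer a round-trip of effort each time."""
    cycle = itertools.cycle(STALL_REPLIES)
    def reply(_incoming_message: str) -> str:
        # The incoming message is ignored in this sketch; every reply
        # simply keeps the conversation alive.
        return next(cycle)
    return reply
```

Each bot instance is cheap to run, which is what makes the carpet-bombing image apt: thousands of these could be answering scam e-mails at once.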

We too thought it might be nice to attack some bad guys before they were able to attack innocent people.

In SuperFreakonomics, published in 2009, we described an algorithm that we built with a fraud officer at a large British bank. It was designed to sift through trillions of data points generated by millions of bank customers to identify potential terrorists. It was inspired by the irregular banking behavior of the 9/11 terrorists in the United States. Among the key behaviors:

They tended to make a large initial deposit and then steadily withdraw cash over time, with no steady replenishment.

Their banking didn’t reflect normal living expenses like rent, utilities, insurance, and so on.

Some of them routinely sent or received foreign wire transfers, but the amount invariably fell below the reporting limit.

Markers like these are hardly enough to identify a terrorist, or even a petty criminal. But by starting with them, and culling more significant markers from the British banking data, we were able to tighten the algorithm’s noose.
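The logic of marker-based scoring can be sketched in a few lines. Every threshold and field name below is invented for illustration—the book does not disclose the real algorithm—but the shape is the same: each marker adds to a suspicion score, and only accounts above some cutoff would merit a closer look.

```python
# Hypothetical marker-based scorer; all thresholds and field names
# are invented for this sketch, not taken from the actual algorithm.
def suspicion_score(account: dict) -> int:
    score = 0
    # Marker 1: large initial deposit, never replenished.
    if account["initial_deposit"] > 10_000 and account["deposits_after_first"] == 0:
        score += 1
    # Marker 2: no ordinary living expenses (rent, utilities, insurance).
    if not account["pays_living_expenses"]:
        score += 1
    # Marker 3: foreign wires that always stay just under the reporting limit.
    if account["foreign_wires"] and account["max_wire"] < 10_000:
        score += 1
    return score
```

No single marker proves anything; the power comes from stacking many weak signals until only a handful of accounts score high on all of them.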

And tight it had to be. Imagine that our algorithm turned out to be 99 percent accurate at predicting that a given bank customer was connected to a terrorist group. That sounds pretty good until you consider the consequences of a false-positive rate of 1 percent in a case like this.

Terrorists are relatively rare in the United Kingdom. Let’s say there are 500 of them. An algorithm that is 99 percent accurate would flush out 495 of them—but it would also wrongly identify 1 percent of the other people in the data. Across the entire population of the U.K., roughly 50 million adults, that would translate into some 500,000 innocent people. What would happen if you hauled in half a million non-terrorists on terrorism charges? You could brag all you wanted about how low a false-positive rate of 1 percent is—just look at the false positives that Nigerian scammers have to deal with!—but you’d still have a lot of angry people (and, likely, lawsuits) on your hands.
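The arithmetic above can be checked in a few lines of Python, using the same round numbers from the text:

```python
# Base-rate arithmetic from the text: 500 actual terrorists among
# roughly 50 million U.K. adults, and an algorithm that is 99 percent
# accurate (so a 1 percent false-positive rate).
terrorists = 500
population = 50_000_000
true_positive_rate = 0.99
false_positive_rate = 0.01

caught = terrorists * true_positive_rate                      # ~495 flushed out
innocents_flagged = (population - terrorists) * false_positive_rate  # ~500,000

print(f"Terrorists correctly flagged: {caught:.0f}")
print(f"Innocent people wrongly flagged: {innocents_flagged:.0f}")

# Of everyone the algorithm flags, what fraction are actually terrorists?
precision = caught / (caught + innocents_flagged)
print(f"Chance a flagged person is a terrorist: {precision:.2%}")
```

The last number is the sting: under 0.1 percent of the people flagged would actually be terrorists, which is why the algorithm had to be far tighter than 99 percent.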

So the algorithm had to be closer to 99.999 percent accurate. That’s what we strove for as we loaded the algorithm with marker upon marker. Some were purely demographic (known terrorists in the U.K. are predominantly young, male, and, at this point in history, Muslim). Others were behavioral. For instance: a potential terrorist was unlikely to withdraw money from an ATM on a Friday afternoon, during Muslim prayer services.

One marker, we noted, was particularly powerful in the algorithm: life insurance. A budding terrorist almost never bought life insurance from his bank, even if he had a wife and young children. Why not? As we explained in the book, an insurance policy might not pay out if the holder commits a suicide bombing, so it would be a waste of money.

After several years of tightening and tweaking, the algorithm was unleashed on a mammoth trove of banking data. It ran all night on the bank’s supercomputer so as not to disturb regular business. The algorithm seemed to work pretty well. It generated a relatively short list of names that we were quite sure included at least a handful of likely terrorists. The bank gave us this list in an envelope protected by a wax seal—privacy law prevented us from seeing the names—and we in turn met with the head of a British national-security unit to hand him the envelope. It was all very James Bond-y.

What happened to the people on that list? We’d like to tell you, but we can’t—not because of national-security issues but because we have no idea. While the British authorities seemed happy to take our list of names, they didn’t feel compelled to let us tag along when—or if—they went knocking on suspects’ doors.

That would seem to be the end of the story. But it’s not.

In SuperFreakonomics, we described not only how the algorithm was built but how a would-be terrorist could escape its reach: by going down to the bank and buying some life insurance. The particular bank we’d been working with, we noted, “offers starter policies for just a few quid per month.” We called further attention to this strategy in the book’s subtitle: Global Cooling, Patriotic Prostitutes, and Why Suicide Bombers Should Buy Life Insurance.

Upon arrival in London for a book tour, we found the British public did not appreciate our giving advice to terrorists. “I’m not sure why we’re telling the terrorists this secret,” wrote one newspaper critic. Radio and TV interviewers were less polite. They asked us to explain what sort of idiot would go to the trouble of building a trap like this only to explain precisely how to evade it. Plainly we were dumber than even a Nigerian scammer, vainer than David Lee Roth, more bloodthirsty than King Solomon.

We hemmed, we hawed, we rationalized; occasionally we hung our heads in contrition. But we were smiling on the inside. And we got a little happier every time we were blasted for our stupidity. Why?

From the outset of the project, we recognized that finding a few bad apples out of millions would be difficult. Our odds would improve if we could somehow trick the bad apples into revealing themselves. That is what our life-insurance scam—yes, it was a scam all along—was meant to accomplish.

Do you know anyone who buys life insurance through their bank? No, we don’t either. Many banks do offer it, but most customers use banks for straight banking and, if they want insurance, they buy it through a broker or directly from an insurer.

So as these American idiots were being skewered in the British media for giving advice to terrorists, what kind of person suddenly had a strong incentive to run out and buy life insurance from his bank? Someone who wanted to cover his tracks. And our algorithm was already in place, paying careful attention. Having learned from the great minds described in this chapter, we laid out a trap designed to ensnare only the guilty. It encouraged them to, in the words of King Solomon, “ambush only themselves.”