The End of Average: How We Succeed in a World That Values Sameness - Todd Rose (2016)


An individual is a high-dimensional system evolving over place and time.



By the mid-2000s, Google was already well on its way to becoming an era-defining Internet juggernaut and one of the most innovative and successful corporations in history. To sustain its extraordinary levels of growth and innovation, Google had a voracious appetite for talented employees. Fortunately, the company was flush with cash, and the combination of high salaries, generous perks, and the chance to work on innovative products made Google one of the world’s most desirable places to work.1 By 2007, it was receiving a hundred thousand job applications every month, ensuring that Google would have its choice of top talent—as long as it could figure out how to identify that top talent.2

At first, Google made its hiring decisions the same way that most Fortune 500 companies did: by looking at each job applicant’s SAT scores, academic GPA, and diploma and hiring those applicants who ranked at the very top.3 Before long, the Google campus at Mountain View was full of employees with near-flawless SATs, valedictorian-level grades, and advanced degrees from the likes of Caltech, Stanford, MIT, and Harvard.4

Ranking individuals on a handful of metrics—or even a single metric—is not only common practice when recruiting new employees, it is the most prevalent method of evaluating existing employees, too.5 In 2012, Deloitte, the largest professional services firm in the world, assigned every one of its more than sixty thousand employees a numerical rating based on their performance on work projects, and then at the end of the year at a “consensus meeting” these project ratings were combined into a final rank ranging from 1 to 5. Each employee, in other words, was evaluated using a single number. It is hard to imagine a more straightforward method for comparing employees’ value than assessing them on a simple, one-dimensional scale.6

According to the Wall Street Journal, an estimated 60 percent of Fortune 500 firms still used some form of single-score ranking systems to evaluate employees in 2012.7 Perhaps the most extreme version of these systems is what is known as “forced ranking,” a method pioneered by General Electric in the 1980s where it was known as “rank and yank.”8 In a forced ranking system, employees are not merely ranked on a one-dimensional scale; a certain predetermined percentage of employees must be designated as above average, a certain percentage must be designated as average—and a certain percentage must be designated as below average. Those employees assigned to the top ranks receive bonuses and promotions. Those at the bottom receive warnings or, in some cases, are simply let go.9 By 2009, 42 percent of large companies were using forced ranking systems, including Microsoft, whose well-publicized version was known as “stack ranking.”10

Of course, it’s easy to understand why so many businesses have adopted single-score systems for hiring and performance evaluation: they are easy and intuitive to use, and they carry the imprint of objectivity and mathematical certitude. If an applicant is ranked higher than average, then hire her or reward her. If she ranks lower, pass on her or let her go. If you want more talented employees, simply “raise the bar”—increase the score you use as your cutoff for hiring or promotions.

Ranking individual talent and performance on a single scale, or a few scales, seems to make perfect sense. And yet, by 2015, Google, Deloitte, and Microsoft had each modified or abandoned their rank-based hiring and evaluation systems.

Despite Google’s continued growth and profitability, by the mid-2000s there were signs that something was wrong with the way it was selecting talent. Many of its hires were not performing the way management had imagined, and there was a growing sense within Google that company recruiters and managers were ignoring many candidates whose talent was not getting captured by the familiar metrics used by most companies, such as grades, test scores, and diplomas.11 As Todd Carlisle, the human resources director for product quality operations at Google, explained to me, “We began to spend a lot of time and money analyzing the ‘missed talent’ that we felt we should have hired, but didn’t.”12

By 2014, Deloitte, too, was beginning to realize that its single-score employee evaluation method was not working as well as expected. Deloitte was devoting more than two million hours each year to the process of calculating employee performance rankings—a tremendous amount of time—but the value of these rankings was being questioned.13 In a Harvard Business Review article coauthored with Marcus Buckingham, Ashley Goodall, former director of leader development at Deloitte, wrote that what gave them pause was research suggesting that a single-score rating might not capture the true performance of an employee so much as reveal the idiosyncratic tendencies of the person rating that performance. “Both internally and externally it was clear that people were starting to recognize that traditional single-score performance reviews don’t work, so there was a sense of clarity about what we needed to get away from,” Goodall told me.14

Meanwhile, at Microsoft, stack ranking was an unmitigated disaster. A 2012 Vanity Fair article called the era when Microsoft relied on stack ranking “the lost decade.” The performance rating system forced employees to compete for rankings, killing collaboration among employees and, worse, leading employees to avoid working with top performers, since doing so threatened to lower their own rankings. While stack ranking was in effect, the article reports, the company had “mutated into something bloated and bureaucracy-laden, with an internal culture that unintentionally rewards managers who strangle innovative ideas that might threaten the established order of things.”15 In late 2013, Microsoft abruptly jettisoned stack ranking.16 So where did Google, Deloitte, and Microsoft go wrong?

Each of these innovative companies initially followed the averagarian notion that you can effectively evaluate individuals by ranking them—a notion rooted in Francis Galton’s belief that if you are good or eminent at one thing, you are good or eminent at most things.17 And to most of us, it seems like this approach ought to have worked. After all, isn’t it obvious that some people are generally more talented than others, and therefore it should be possible to rank talent on a single scale and make assumptions about their potential based on that ranking? Google, Deloitte, and Microsoft, however, discovered that the idea that talent can be boiled down to a number that we can compare to a neat average simply doesn’t work. But why? What is at the root of the unexpected failure of ranking?

The answer is one-dimensional thinking. The first principle of individuality—the jaggedness principle—explains why.


Our minds have a natural tendency to use a one-dimensional scale to think about complex human traits, such as size, intelligence, character, or talent. If we are asked to assess a person’s size, for example, we instinctively judge an individual as large, small, or an Average Joe. If we hear a man described as big, we imagine someone with big arms and big legs and a big body—someone who is large all over. If a woman is described as smart, we assume she is likely good at solving problems across a wide range of domains, and is probably well educated, too. During the Age of Average, our social institutions, particularly our businesses and schools, have reinforced our mind’s natural predilection for one-dimensional thinking by encouraging us to compare people’s merit on simple scales, such as grades, IQ scores, and salaries.18

But one-dimensional thinking fails when applied to just about any individual quality that actually matters—and the easiest way to understand why is to take a closer look at the true nature of human size. The image below portrays the measurements of two men on nine dimensions of physical size, the same dimensions analyzed by Gilbert Daniels in his breakthrough study of pilots.

[Figure: the measurements of two men on nine dimensions of physical size]

Which man is bigger? It seems like there should be an easy answer, but when you compare the two men on each dimension, the answer turns out to be more elusive than we might expect. The man on the right is tall but has narrow shoulders. The man on the left has a large waist but nearly average-size hips. You might attempt to determine which man is bigger by simply taking the average of all nine of each man’s dimensions—except if you performed this calculation, you would discover that each man’s average size is nearly identical. At the same time, we can see that it would be misleading to say they are the same size—or to describe either one of them as average: the man on the left is average on two dimensions (reach and chest), while the man on the right is barely average on only one dimension (waist). There is no simple answer to the question, “Which man is bigger?”
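The averaging trap described above is easy to sketch in code. The numbers here are hypothetical percentile scores invented for illustration (the actual measurements are not given in the text), but they show how two very different nine-dimension profiles can share a virtually identical average:

```python
# Hypothetical percentile scores on nine size dimensions (height, reach,
# chest, waist, hips, and so on) for two imaginary men. Illustrative only.
man_left  = [55, 48, 62, 50, 45, 58, 40, 60, 47]
man_right = [70, 35, 50, 52, 44, 61, 38, 65, 50]

avg_left = sum(man_left) / len(man_left)
avg_right = sum(man_right) / len(man_right)

print(round(avg_left, 1), round(avg_right, 1))  # 51.7 51.7 -- identical averages
print(max(abs(a - b) for a, b in zip(man_left, man_right)))  # 15 -- yet dimensions differ widely
```

The average collapses the disagreement: the single number says the men are the same size, while the dimension-by-dimension profiles say otherwise.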

While this may seem obvious once you think about it, don’t let this statement fool you, because the fact that there is no answer to the question—the reason it is not possible to rank individuals on size—reveals an important truth about human beings and the first principle of individuality: the jaggedness principle. This principle holds that we cannot apply one-dimensional thinking to understand something that is complex and “jagged.” What, precisely, is jaggedness? A quality is jagged if it meets two criteria. First, it must consist of multiple dimensions. Second, these dimensions must be weakly related to one another. Jaggedness is not just about human size; almost every human characteristic that we care about—including talent, intelligence, character, creativity, and so on—is jagged.

To understand these criteria, let’s return to our example of human size. If the question we were trying to answer was “Which man is taller?,” the answer would be easy. Height is one-dimensional, so it is perfectly acceptable to rank people according to how tall they are. But human size is a different story: it is composed of many different dimensions that are not strongly linked. Look again at the figure. The vertical band in the center of the image represents the range of measurements of the “average pilot” as Daniels had once defined it. For decades, the Air Force presumed that the bodies of most pilots would lie within that vertical band, because they assumed that someone with average-size arms would also have average-size legs and an average-size torso. But, because size is jagged, it turns out this is not true at all. In fact, Daniels discovered that less than 2 percent of pilots measured were average on four or more of these nine dimensions, and nobody was average on all of them.19

What if we were to expand the average band to include the middle 90 percent of each dimension, instead of the middle 30 percent? You might guess that most people’s bodies would surely lie within such a wide range. In actuality, less than half of all people would.20 It turns out that most of us have at least one body part that is rather large or rather small. That is why a cockpit designed for the average is a cockpit designed for nobody. Jaggedness also explains why the Norma-Look-Alike contest organizers could not find a woman who was a perfect match. Women have long protested the artificially exaggerated dimensions of Mattel’s Barbie doll, but the principle of jaggedness tells us that an average-size doll—a Norma-size doll—is just as phony.

Of course, it is reasonable to sometimes pretend size is one-dimensional if the trade-off is worth it, like when it comes to mass-produced clothing: in return for a lack of great fit for any one person, we get inexpensively manufactured shirts and pants for everyone. But if the stakes are high—if you’re altering an expensive wedding gown or designing a safety feature like an automobile airbag, or engineering the cockpit of a jet—then ignoring the multidimensionality of size is never a good compromise. When it matters, there are no shortcuts: you can only produce a good fit if you think about size in terms of all its dimensions.

Just about any meaningful human characteristic—especially talent—consists of multiple dimensions. The problem is that when trying to measure talent, we frequently resort to the average, reducing our jagged talent to a single dimension like the score on a standardized test or grades or a job performance ranking. But when we succumb to this kind of one-dimensional thinking, we end up in deep trouble. Take, for example, the New York Knicks.

In 2003, Isiah Thomas, a former NBA star, took over as president of basketball operations for the Knicks with a clear vision of how he wanted to rebuild one of the world’s most popular sports franchises. He evaluated players using a one-dimensional philosophy of basketball talent: he acquired and retained players based solely on the average number of points they scored per game.21

Thomas figured that since a team’s basketball success was based on scoring more points than your opponent, if your players had the highest combined scoring average, you would expect—on average—to win more games. Thomas was not alone in his infatuation with top-ranked scoring. Even today a player’s scoring average is usually the most important factor in determining salaries, postseason awards, and playing time.22 But Thomas had made this single metric the most important factor for selecting every member of the team, and the Knicks had the financial resources to make his priority a reality. In effect, the Knicks were assembling a team using the same one-dimensional approach to talent that companies use when making academic rankings the primary criteria for hiring employees.

At great expense, the Knicks managed to assemble a team with the highest combined scoring average in the NBA … and then suffered through four straight losing seasons, losing 66 percent of their games.23 These one-dimensional Knicks teams were so bad that only two teams had a worse record during the same stretch. The jaggedness principle makes it easy to see why they failed so badly: because basketball talent is multidimensional. One mathematical analysis of basketball performance suggests that at least five dimensions have a clear effect on the outcome of a game: scoring, rebounds, steals, assists, and blocks.24 And most of these five dimensions are not strongly related to one another—players who are great at steals, for instance, are usually not so great at blocking. Indeed, it is exceptionally rare to find a true “five-tool player.” Out of the tens of thousands of players who have come through the NBA since 1950, only five have ever led their team in all five dimensions.25

The most successful basketball teams are composed of players with complementary profiles of basketball talent.26 In contrast, Thomas’s Knicks teams were terrible at defense and, perhaps surprisingly, they were not even particularly great at offense despite the talented scorers on the team, since each individual player was more intent on getting his own shots than facilitating anyone else’s. The Knicks—like Google, Deloitte, and Microsoft—eventually realized that a one-dimensional approach to talent was not producing the results they wanted. After Thomas left in 2009, the Knicks returned to a multidimensional approach to evaluating talent and started winning again, culminating in a return to the playoffs in 2012.27


For a human trait like size or talent to be considered jagged, however, it’s not enough to be multidimensional. Each of the dimensions also must be relatively independent. The mathematical expression of this independence is a weak correlation between the dimensions.

Francis Galton helped develop the statistical method of correlation more than a century ago as a way of assessing the strength of the relationship between two different dimensions, like height and weight.28 Galton began applying an early version of correlation to people with the intention of demonstrating the validity of his idea of rank: that a person’s talent, intelligence, health, and character were closely related to one another.29 Today, we express the strength of a correlation as a value between 0 and 1 (correlations can also run negative, but what matters here is their magnitude), where 1 is a perfect correlation (like the correlation between your height in inches and your height in centimeters) and 0 is no correlation at all (like the correlation between your height in inches and the temperature on Saturn).30 Across many scientific fields, a correlation of 0.8 or higher is considered strong, while a correlation of 0.4 or lower is considered weak, although the precise cutoffs for “strong” and “weak” are ultimately arbitrary.

If the correlations between all the dimensions in a system are strong, then that system is not jagged and you are justified in applying one-dimensional thinking to make sense of it. Consider the Dow Jones Industrial Average. The Dow is a single numerical score that represents the combined stock value of thirty large and famous “blue-chip” companies. At the close of each American business day, the financial news dutifully reports the value of the Dow to the nearest hundredth (it was 17,832.99 on January 2, 2015) and whether this number has moved up or down. Investors use the Dow to evaluate the overall strength of the stock market, and for good reason—between 1986 and 2011 (twenty-five years), the average correlation between the Dow and four other leading stock market indices was extremely high: 0.94.31 Even though the stock market is multidimensional (there are thousands of publicly traded companies in the United States), its general vitality can be reasonably captured with a single number: using the Dow to assess the overall strength of the stock market is one-dimensional thinking at its most reasonable.

Human size, however, is a different matter. In 1972, in a follow-up to Daniels’s study of pilots, U.S. Navy researchers calculated the correlations between ninety-six dimensions of Naval aviators’ size. They found that only a few correlations were stronger than 0.7, while many were lower than 0.1. The average correlation among all ninety-six dimensions of body size for Naval aviators was only 0.43.32 This means that knowing someone’s height or neck thickness or grip width is unlikely to tell you much about the rest of their dimensions. If you want to truly understand a person’s body size, there is no simple way to summarize it. You need to know the details of their jagged profile.

What about our minds? Are mental abilities jagged? When Galton first introduced correlation into the social sciences, he did so with the expectation that scientists would find strong correlations between our mental abilities—that our minds, in other words, were not very jagged.33 One of the very first scientists to systematically test this hypothesis was James McKeen Cattell, the first American to obtain a Ph.D. in psychology and an early pioneer of testing theory who coined the term “mental test.”34 He was also an enthusiastic disciple of Galton’s idea of rank. In the 1890s, Cattell set out to prove, once and for all, that a one-dimensional view of mental ability was justified.35

Cattell administered a battery of physical and mental tests to hundreds of incoming freshmen at Columbia University across several years, measuring such things as their reaction time to sound, their ability to name colors, their ability to judge when ten seconds passed, and the number of letters in a series they could recall. He was convinced he would discover strong correlations between these abilities—but, instead, he found the exact opposite. There was virtually no correlation at all.36 Mental abilities were decidedly jagged.

For a devout believer in ranking, there was worse to come. Cattell also measured the correlations between students’ grades in college courses and their performance on these mental tests and discovered very weak correlations between them. And not only that—even the correlations between students’ grades in different classes were low. In fact, the only meaningful correlation Cattell found at all was between students’ grades in Latin classes and their grades in Greek classes.37

At the dawn of our modern educational system, when our schools were first becoming standardized around the mission of sorting students into average, above-average, and below-average bins of “general talent,” the first scientific investigation of this assumption revealed that it was false. But psychologists were so convinced that one-dimensional mental talent must exist, even if it was hidden, that most of Cattell’s colleagues rejected his results, suggesting that something was wrong with the way he conducted his experiments or analyzed his results.38

Meanwhile, psychologists—and then education, and then business—all doubled down on the notion that mental abilities are highly correlated and could be represented with a one-dimensional value like an IQ score.39 Ever since Cattell, study after study has revealed that individual intelligence—not to mention personality and character—is jagged.40 Even Edward Thorndike, who fashioned our modern education system around the notion that if you are good at one thing, then you are good at most things, conducted his own research to examine the correlation between school grades, standardized test scores, and success at professional jobs. He also found weak correlations between all three—yet he still rationalized that he could safely ignore this fact because he believed in a hypothetical (though unproven) one-dimensional “learning ability” that undergirded success in both school and work.41

Even today, scientists, physicians, businesspeople, and educators rely on the one-dimensional notion of an IQ score to evaluate intelligence. Even if we’re willing to concede that, yes, there are multiple kinds of intelligence—like musical intelligence, or artistic intelligence, or athletic intelligence—it’s hard to shake the feeling that there must be some kind of “general intelligence” a person possesses that can be applied to a great many domains. If we hear that one person is smarter than another, we assume that the smarter person is probably going to do better at just about any intellectual task that we set before him or her.

Consider, though, the following two jagged profiles of intelligence. They show scores for two different women on the Wechsler Adult Intelligence Scale (WAIS),42 one of the two most commonly used contemporary tests of intelligence.43 Each woman’s profile represents her score on ten subtests from the WAIS test, each measuring a different dimension of intelligence, such as vocabulary or puzzle solving. All of the subtest scores are combined to generate an individual’s IQ score.

Which woman is smarter? According to the WAIS, they are equally intelligent—each has an IQ of 103—and each is close to average intelligence, defined as an IQ of 100. If we were tasked with hiring the smartest candidate for a job, we might rate each woman equally. Yet each of these women clearly possesses different mental strengths and weaknesses, and if the goal is to understand these women’s talents, it is obvious that relying on an IQ score is misleading.44

[Figure: WAIS subtest profiles of two women, each with an IQ of 103]

As with physical size, the correlations between each of the dimensions of mental ability assessed by the WAIS are for the most part not particularly strong,45 indicating that mental talent is jagged and cannot be described or understood by a one-dimensional value like an IQ score. Yet to this day few of us can resist the lure of evaluating a person’s intelligence with a single ranking or number. But a one-dimensional evaluation of mental abilities is even more misguided than these intelligence profiles portray. If you subdivide intelligence even further and compare, for instance, short-term memory for words to short-term memory for images, scientists have shown that these “microdimensions” also exhibit weak correlations.46 No matter how fine you slice your mind, you are jagged all the way down.

All of this leads to one obvious question: If human abilities are jagged, why do so many psychologists, educators, and business executives continue to use one-dimensional thinking to evaluate talent? Because most of us have been trained in averagarian science, which implicitly prioritizes the system over the individual. It is entirely possible to build a functional evaluation system upon weak correlations: if you select employees based on a one-dimensional view of talent, while you may be wrong about any one individual, on average you will do better than someone who selects employees randomly.

As a result, we have managed to convince ourselves that weak correlations mean something that they do not. In most fields of psychology and education, if you find a correlation of, say, 0.4 (the correlation between SAT scores and first-semester college grades47), it is usually assumed you have found something important and meaningful. Yet, according to the mathematics of correlation, if you find a 0.4 correlation between two dimensions, that means you have managed to explain 16 percent of the behavior of each dimension.48 Do you really understand something if you can explain 16 percent of it? Would you hire a mechanic who said he could explain 16 percent of what was wrong with your car?
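The 16 percent figure above comes from squaring the correlation coefficient, which yields the coefficient of determination (r²): the share of variance in one dimension accounted for by the other. A quick sketch:

```python
def variance_explained(r: float) -> float:
    """Share of one variable's variance accounted for by the other: r squared."""
    return r ** 2

# The SAT-to-grades correlation from the text:
print(round(variance_explained(0.4), 2))   # 0.16, i.e., 16 percent
# The Dow example from earlier in the chapter:
print(round(variance_explained(0.94), 2))  # 0.88, i.e., about 88 percent
```

Squaring is unforgiving of middling correlations: a “meaningful” 0.4 leaves 84 percent of the variance unexplained, while the Dow’s 0.94 leaves only about 12 percent.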

Of course, if we care more about the efficiency of the system than about individuality, then understanding 16 percent, on average, is undeniably better than nothing. It may even be enough to set policy for groups of people. But if our goal is to identify and nurture individual excellence, then weak correlations tell us something different: we will only succeed if we pay attention to the distinct jaggedness of every individual.


In 2004, Todd Carlisle became an analyst in the Google human resources department, where he helped facilitate interactions between Google project managers who needed to hire new employees and recruiters who put together “hiring packets” about job candidates that the managers could use to make their hiring decisions. At the time, a candidate’s GPA and standardized test scores held a prominent place in these packets. But Carlisle noticed a very curious phenomenon: increasingly, project managers were asking recruiters to include additional information about the candidates.49 Some wanted to know whether the candidates had competed in programming competitions. Others wanted to know if their hobbies included chess or playing in a band. It seemed every project manager had a pet idea about what extra information was salient when making a hiring decision.

“One day I just realized, if the traditional metrics—the grades and test scores—were really so great, why was everyone supplementing them with additional, clearly nontraditional metrics?” Carlisle told me. “That’s when I decided to do the experiment.”50 Carlisle harbored the private feeling that there were probably a lot of talented people out there that Google was missing out on, and he thought part of the problem was an overemphasis on a small set of familiar metrics. He believed he could change the way the company approached recruiting so that it instead looked at the whole applicant in all her complexity. Since big decisions at Google operate primarily by consensus rather than decree, Carlisle knew that if he was going to convince project managers of the value of his multidimensional vision of talent evaluation, he would need a study that systematically tested not only his own ideas about which dimensions of talent predicted success at Google, but all the dimensions that managers and executives believed were related to being a great employee.

First, Carlisle collected an enormous list of more than three hundred dimensions (he called them “factors”) that included traditional dimensions like standardized test scores, diplomas, alma mater rankings, and GPAs, as well as more idiosyncratic factors that other managers had identified as being significant. (One prominent Google executive, for example, suggested that the age someone first became interested in computers might be important.) Next, Carlisle ran test after test to analyze which of these factors was actually related to employee success. The results were startling and unequivocal.51

It turned out that SAT scores and the prestige of a candidate’s alma mater were not predictive at all. Neither was winning programming competitions. Grades mattered a little, but only for the first three years after you graduated. “But the real surprise for me and for a lot of people at Google,” Carlisle told me, “was that when we analyzed the data we couldn’t find a single variable that mattered for even most of the jobs at Google. Not one.”52

In other words, there were many different ways to be talented at Google, and if the company wanted to do the best possible job of recruiting employees, it needed to be sensitive to all of them. Carlisle had discovered the jaggedness of Google talent and, as a result, made changes to the way Google recruits new employees. They rarely ask candidates for their GPAs if they’ve been out of school for more than three years and no longer require test scores for any candidate. “We no longer look at school selectivity the same way, either,” Carlisle explained to me. “The challenge now is not only what information to collect, but how to present it—you have to focus on which factors you emphasize as most important in a hiring packet. The experiment has helped to create a more complete picture of candidates that managers can use to make better hires.”53

Taking into consideration the jagged talent of job applicants is not some kind of sophisticated luxury that only giants like Google can afford to undertake. It’s also a way for smaller companies to identify and attract top talent in a competitive job market. IGN is a popular website devoted to video games and other media, but it has fewer than 1 percent as many employees as Google, and an even smaller fraction of its sales.54 Initially, IGN approached hiring using the same one-dimensional thinking as other tech companies. Of course, if every company in the entire tech industry is evaluating employees using identical one-dimensional criteria like grades and standardized test scores, there is going to be a very small set of candidates at the top of these rankings—and these “top-ranked” candidates are far more likely to sign with a big fish like Google or Microsoft than with a small one like IGN.

IGN executives realized they simply couldn’t compete with all the other tech firms for the employees they considered talented. They had only two choices: offer higher pay—not feasible—or change the way they thought about talent. So in 2011, IGN created Code-Foo, a “no résumé allowed” recruitment program aimed at finding untapped programming talent.55 The six-week program paid aspiring programmers to learn new programming languages and then work on actual software engineering projects at IGN.56 What was so unusual about Code-Foo was the way IGN managers evaluated applicants. They completely ignored applicants’ educational background and previous experience. Instead of submitting a résumé, candidates submitted a statement of passion for IGN and answered four questions that tested their coding ability. In essence, IGN was saying, “We don’t care what you’ve done or how you learned to program, we just want you to be good—and excited about putting your skills to work.”

In 2011, 104 people applied to the Code-Foo program; 28 were accepted, and only half of them had earned a college degree in a technical field. IGN president Roy Bahat told Fast Company magazine that he hoped Code-Foo would eventually lead to one or two hires. IGN ended up hiring eight.57 “It’s not like if you looked at their résumés, you would have said it’s impossible that they would be qualified for the jobs,” Bahat reported to Fast Company. “But if you only looked at their résumés … there wouldn’t necessarily be a reason to say yes. They’re the kind of people we would have overlooked.”58

Often, when organizations embrace jaggedness for the first time, they feel like they have found a way to uncover diamonds in the rough, to identify unorthodox or hidden talent. But the jaggedness principle says otherwise: while we may have identified overlooked talent, there is nothing unorthodox or hidden about it. It is simply true talent, as it has always existed, as it can only exist in jagged human beings. The real difficulty is not finding new ways to distinguish talent—it is getting rid of the one-dimensional blinders that prevented us from seeing it all along.

Of course, the blinders that are most important to eliminate are the ones we use to look at ourselves.


As I approached the end of my degree requirements at Weber State University, I decided to apply to graduate schools in fields related to neuroscience. If I could get admitted, I would become the first person on either side of my family to attend graduate school. I had managed to turn things around in college and obtain strong grades as well as enthusiastic letters of recommendation from a couple of professors. Only one thing was standing in my way: a standardized test.

I needed to perform well on the GRE, the Graduate Record Examination, a test required by every one of the graduate science programs I was applying to.59 At the time, the test consisted of three parts: a math part, a verbal part, and the so-called analytical reasoning part, which is supposedly designed to evaluate your ability to think logically. It consisted of knotty word problems along the lines of, “Jack, Jenny, Jeanie, Julie, Jerry, and Jeremy are all attending a dinner party. Jack doesn’t like Jenny, Jeanie doesn’t like Jeremy, Julie loves Jerry, and Jenny always steals Julie’s dinner rolls. If they are sitting at a round table, who would you seat to Jeremy’s left?”

I started preparing for the GRE six months before I needed to take the exam, but with just two weeks to go things were looking grim. I had taken about twenty practice tests. I consistently did well on the math and verbal sections, but the analytical reasoning section was a disaster. I never scored above the 10th percentile. Each time, I got almost every question wrong. My tutor, who got a perfect score on the analytical reasoning section, had shared his method with me, and I figured that if I simply practiced his method enough times, eventually my performance would improve. It didn’t. The Julies and Jennys and Jeanies all blurred together and I could never seem to reason my way through to the answer. I was once again staring at the possibility of all my dreams coming to an unceremonious end, because it was hard to imagine any graduate program would admit someone who scored in the 10th percentile on any test.

While studying at my parents’ house, I got so frustrated that I chucked my pencil across the room, nearly spearing my father as he unexpectedly walked by. Lucky for me, he came over and asked what was going on. I told him I was failing the analytical section and showed him the method I was using to solve the problems.

“That requires you to do most of the problem in your head,” he pointed out.

“Sure,” I responded. “That’s the way the problems are supposed to be done.” After all, I thought to myself, my teacher had gotten a perfect score with that method, and most of the other kids in my test preparation class were also scoring above the 80th percentile using it.

“But you don’t have great working memory. Why would you try to use a method that places demands on your working memory?” he said. He also knew that I had done well in my geometry classes. “You’re pretty good at visual thinking, though, so why don’t you use a problem-solving method that relies on that?”

He sat down and proceeded to show me a way to convert each problem into a kind of visual table that allowed me to draw the precise relationships between Jerry and Jenny and Julie in a clear and reliable fashion. At first, I was completely skeptical that this technique, which was indeed very easy for me, could possibly work. But I tried it out on problem after problem, and each time it gave me the right answer. I couldn’t believe it. Two weeks later I took the GRE and I got my highest score on the analytical reasoning section.

My GRE instructor had figured out a way to solve problems that suited his jagged mental abilities—but not necessarily mine. Fortunately, my father had a clearer sense of my jaggedness. He helped me see that my problem was not that I had weak analytical skills—the one-dimensional view I had settled on after failing practice test after practice test using my instructor’s method—but rather that I was relying on one of my weakest mental abilities, working memory, to solve the problems. Once my father helped me identify a strategy that played to my strengths, I could finally answer the test questions correctly and demonstrate my true talent.

I owe a debt of gratitude to my dad. His thoughtful consideration of my jagged profile—my individuality—led him to offer invaluable advice that changed the course of my life. If I had not switched to a visual way of analyzing the GRE problems, I would have performed poorly on the test and, as a result, would probably never have gotten into Harvard. That is the power of the first principle of individuality. When we are able to appreciate the jaggedness of other people’s talents—the jagged profile of our children, our employees, our students—we are more likely to recognize their untapped potential, to show them how to use their strengths, and to identify and help them improve their weaknesses, just like my dad did.

And when we become aware of our own jaggedness, we are less likely to fall prey to one-dimensional views of talent that limit what we are capable of. Had I failed the test, it’s likely that I would have concluded that I didn’t have what it takes to succeed in graduate school—after all, that’s what the test is supposed to tell you—and lowered the expectations that I had for myself.

Recognizing our own jaggedness is the first step to understanding our full potential and refusing to be caged in by arbitrary, average-based pronouncements of who we are expected to be.