Leaders Eat Last

wealthier, as individuals and as a country. Though the wealthiest Americans were getting wealthier at a disproportionately higher rate than the rest of the country, the incomes of even the poorest Americans at least held steady or rose by a small degree. The point is, no segment of the population got significantly poorer.

With the 1970s coming to a close, Americans started to replace their bell-bottom jeans with Members Only jackets and to rip up their shag carpeting. The Baby Boomers were finally coming of age. They started to work at more senior levels at companies and in government. The coddled Boomers, the ones who didn’t have to suffer much, the ones who grew up in a society that could afford for them to put themselves first, were now starting to take positions in which they could affect political, business and economic theory en masse.

It’s worth noting that it was when the Boomers arrived that relationships in Congress really started to suffer. Until the early 1990s, members of the opposing parties, while still prone to the same theatrics they are today, were able to sit down together with the goal of reaching a compromise. They may not have agreed, but they tried. And for the most part they behaved with civility. Their children went to school together, and their families knew each other. They even socialized on weekends. And as a result, Congress functioned.

The Boomer generation would emerge bigger and more powerful than any opposing force that could help keep things in check. Without a balancing tension, the impulses and desires of one group would prove hard to restrain. Like the unchecked power of America after the fall of the Soviet Union, like the dictator who overthrows his predecessor, like legislation passed when one party has a supermajority in Congress, the Boomers would start to impose their will on the world around them, surrounded only by outnumbered voices telling them they couldn’t.

By the 1980s and 1990s, this “shockwave,” this “pig in the python,” as the Baby Boom is sometimes described because of its sheer size and force, this demographic bulge able to remodel society as it passed through, was fully in charge.


CHAPTER 12
The Boomers All Grown Up

The 1980s were now upon us and we were no longer a country trying to figure out how to rally a population and win a war; we were now trying to figure out how to capitalize on the amazing boom years in which we were living—the Roaring Eighties. During this period, new economic theories were being proposed to protect the wealth the Boomers were accumulating—a classic symptom of excess.

Where the radio, automobile and electric refrigerator were the “must have” items of the 1920s, another new technology became all the rage in the 1980s. The IBM PC, MS-DOS, Apple’s Macintosh and Microsoft Windows all contributed to the rise and spread of the personal computer. “A PC on every desk,” as Bill Gates, the young cofounder of Microsoft, envisioned. We no longer needed to go to work to have power—we could have power alone at home too. The individual could compete against the corporation. Even the new technologies of the day supported the desire for more individualism.

We were also becoming more and more comfortable with products having shorter lifespans. Other inventions of the 1980s included the disposable camera and disposable contact lenses. Disposability, another symptom of our excess, was now an industry to be pioneered. We were actually looking for more things we could throw out. And there was one other thing we started to view as disposable: people.

The Day We Embraced Layoffs

AUGUST 5, 1981. That’s the date it became official. It’s rare that we can point to an exact date when a business theory or idea becomes an accepted practice. But in the case of mass layoffs, we can. August 5, 1981, was the day President Ronald Reagan fired more than 11,000 air traffic controllers.
Demanding more pay and a shorter workweek, PATCO, the air traffic controllers’ union at the time, was embroiled in a vicious labor dispute with the Federal Aviation Administration. When the talks broke down, PATCO threatened to go on strike, effectively shutting down airports and causing the cancellation of thousands of flights during one of the busiest travel periods of the year.

Such a strike is illegal under the sometimes controversial Taft-Hartley Act of 1947. The act essentially prohibits any labor strike that would cause unfair harm to those not involved in the dispute or damage commerce in a way that would negatively affect the general welfare. This is the reason police and emergency room nurses are forbidden to strike. The damage such a strike would cause is believed to outweigh any grievances over unfair pay or hours.

Without an acceptable deal and, worse, without the ability to find common ground, on August 3, PATCO’s members refused to go to work. Given the strike’s impact on the country, President Reagan got personally involved, ordering the air traffic controllers back to work. Meanwhile, contingency plans were put into place, with supervisors (who were not members of the union), a small group of controllers who had chosen not to strike and military air traffic controllers enlisted to cover the losses. Though not a perfect solution, these temporary workers were able to keep the majority of flights going.

The effect of the strike was not as severe as expected, and so, on August 5, 1981, President Reagan fired 11,359 air traffic controllers, nearly every controller working for the FAA at the time. And it didn’t stop there. Reagan banned every one of the strikers from ever working for the FAA again, a ban that remained in effect until President Clinton lifted it in 1993.

Many of the air traffic controllers who were fired that day were war veterans (which is where they learned the trade) or civil servants who had worked hard to earn their middle-class incomes. Because of the ban, and because their skills were hardly transferable to other industries (there’s not a huge demand for air traffic controllers outside of the FAA), many of them found themselves in poverty.

This is not a story about whether Reagan should or should not have fired the air traffic controllers. This is not a story about labor disputes and the right of unions to stand up to management. This is a story of something quite diabolical. This is a story about the long-term repercussions when a leader sets a new tone about what is acceptable or unacceptable behavior inside an organization. In an attempt to alleviate one short-term strain on our country, President Reagan inadvertently created a new, longer-lasting one.
By firing all the air traffic controllers, he sent a message to business leaders across the nation. He unwittingly blessed the swift and even aggressive decision to use mass layoffs to guard against a short-term economic disruption. Though I am certain Reagan never intended it as such, some eager CEOs interpreted his actions as permission to do the same. There was now a precedent for protecting commerce before protecting people. And so, for the first time, the social conventions that had restrained many a CEO from doing what some had long wished they could do were instantly gone. With tacit approval from on high, the practice of laying people off in mass numbers to balance the books started to happen with greater frequency. Layoffs had existed before the eighties, but usually as a last resort, not an early option.

We were now entering a time in which even meritocracy mattered less. How hard someone worked or how much they sacrificed or contributed to the company no longer necessarily translated into job stability. Now anyone could be laid off simply to help balance the books for that year. Careers ended to make the numbers work. Protecting the money, as economic theory, replaced protecting the people.

Under such conditions, how can we ever feel safe at work? How can we ever feel committed to the jobs we have if the leaders of our companies aren’t committed to us? The very concept of putting a number or a resource before a person flies directly in the face of the protection our anthropology says leaders are supposed to offer. It’s like parents putting the care of their car before the care of their child. It can rip apart the very fabric of the family. Such a redefining of the modern leader wreaks the same havoc on relationships in our companies (or even our society) as it does in our families.

Starting in earnest in the 1980s, public institutions and industries succumbed to this new economic perspective. The consumer products industry, the food industry, the media, banking, Wall Street, even the Congress of the United States have all, to varying degrees, abandoned the people they exist to serve in favor of more selfish priorities. Those in positions of authority and responsibility more readily allow outside constituents—sometimes unengaged constituents—to influence their decisions and actions. By agreeing to supply whatever outsiders demand, these leaders who act like followers may make the profit they expect while harming the people they claim to be serving. Long-term thinking gives way to short-term thinking, and selfish replaces selfless, sometimes even in the name of service. But it’s service in name only.
This new leadership priority rattles the very foundation upon which trust and cooperation are built. This has nothing to do with restricting a free market economy. This has to do with forgetting that people—living, breathing people, the very ones on whom our ability to innovate, make progress and beat our competition depends—are no longer viewed as our most valuable asset when we aim to compete on the numbers. If anything, prioritizing performance over people undermines the free market economy.

The better the products, services and experiences a company is able to offer its customers, the more it can drive demand for those products, services and experiences. And there is no better way to compete in a market economy than by creating more demand and having greater control over the supply—which all boils down to the will of those who work for us. Better products, services and experiences are usually the result of the employees who invented, innovated or supplied them. As soon as people are put second on the priority list, differentiation gives way to commoditization. And when that happens, innovation declines and the pressure to compete on things like price, and other short-term strategies, goes up.

In fact, the more financial analysts who cover a company, the less innovative the company. According to a 2013 study in the Journal of Financial Economics, companies covered by a larger number of analysts file fewer patents than companies covered by fewer analysts, and the patents those companies do generate tend to have lower impact. The evidence supports the idea that “analysts exert too much pressure on managers to meet short-term goals, impeding firms’ investment in long-term innovative projects.” Put simply, the more pressure the leaders of a public company feel to meet the expectations of an outside constituency, the more likely they are to reduce their capacity to deliver better products and services.

When Leaders Eat First

SINCE THE BOOMERS took over the running of business and government, we have experienced three significant stock market crashes. One in 1987 that corrected for a period of excessive speculation and, some argue, an overreliance on computer programs to make trades instead of people. One in 2000, after the burst of the dot-com bubble. And one in 2008 that followed the collapse of the overvalued housing market.
Before 1987, there hadn’t been a stock market crash since the Great Depression, which itself followed the excess and overvaluations of the 1920s. If we do not find ways to correct the imbalance ourselves, the laws of nature will always balance it for us.

For a species born in a time when resources were limited and dangers were great, our natural inclination to share and cooperate is complicated when resources are plentiful and outside dangers are few. When we have less, we tend to be more open to sharing what we have. A Bedouin tribe or nomadic Mongolian family doesn’t have much, yet they are happy to share because it is in their interest to do so. If you happen upon them in your travels, they will open up their homes and give you their food and hospitality. It’s not just because they are nice people; it’s because their survival depends on sharing, for they know that they may be the travelers in need of food and shelter another day. Ironically, the more we have, the bigger our fences, the more sophisticated our security to keep people away and the less we want to share. Our desire for more, combined with our reduced physical interaction with the “common folk,” starts to create a disconnection, a blindness to reality.

Unfortunately, too many of the environments in which we work today do more to frustrate than to foster our natural inclinations to trust and cooperate. A new set of values and norms has been established for our businesses and our society—a system of dopamine-driven performance that rewards us for individual achievement at the expense of the balancing effects of serotonin and oxytocin, which reward us for working together and building bonds of trust and loyalty.

It is this imbalance that causes stock markets to crash. It is this imbalance in corporate cultures that affects the stability of large organizations. (Enron, Tyco, WorldCom and Lehman Brothers are just a few examples of large, “stable” organizations that collapsed because of imbalances in their cultures.) The seeming lack of will to change this system only creates a greater imbalance of the chemicals. And so the vicious cycle continues.

Our health is at risk. Our economy is at risk. The stability of our companies is at risk. And who knows what else. The big Boomer generation has, by accident, created a world quite out of balance.
And imbalance, as history has proven over and over, will self-correct suddenly and aggressively unless we are smart enough to correct it ourselves slowly and methodically. Given our inclination for instant gratification and the weak Circles of Safety in our organizations, however, our leaders may not have the confidence or patience to do what needs to be done.

Obviously, we can’t simply blame an entire generation for the ills we face today. Nor can we blame an industry, any particular CEO or “the corporations.” There aren’t comic book–style archenemies running companies, trying to take over the world, whom we can simply set our sights on overthrowing to right all that is wrong. But there is a lack of empathy and humanity in the way we do business today. There are smart executives running companies and managing systems, but there seems to be a distinct lack of strong leaders to lead the people. As Bob Chapman, CEO of Barry-Wehmiller, is fond of saying, “No one wakes up in the morning to go to work with the hope that someone will manage us. We wake up in the morning and go to work with the hope that someone will lead us.” The problem is, for us to be led, there must be leaders we want to follow.

Dehumanization

OUR INTERNAL WIRING, though complicated and messy in practice, is pretty straightforward in intention. Designed during a time when we lived in small groups with limited resources and great dangers around us, our chemical incentive system was built to help us manage and thrive in what was a very tangible world. We knew all the people with whom we lived and worked. We saw the things we needed and we worked together to get them. We saw the things that threatened us and we worked together to protect each other from them.

The problem now is that we have produced an abundance of nearly everything we need or want. And we don’t do well with abundance. It can short-circuit our systems and actually do damage to us and to our organizations. Abundance can be destructive not because it is bad for us, per se, but because it abstracts the value of things. The more we have, the less we seem to value what we’ve got. And if the abstraction of stuff makes us value it less, imagine what it does to our relationships.
The scale at which we are able to operate today is sometimes too big for us to wrap our heads around. By its very nature, scale creates distance, and at distance, human concepts start losing their meaning. A consumer is just that: an abstraction of a person who we hope will consume whatever we have to offer. We try to guess what this “consumer” wants so that they will consume more of what we have. And if they do, we will keep track of lots of metrics so that we may better manage the process. And as our processes, metrics and scale continue to grow, we employ technology to help us operate at greater speed and scale. In other words, the human beings, the end users of all this, become so far removed from the people who mean to serve them that they simply become just another metric to be managed.

The more distance there is between us, and the more things we do that amplify the abstraction, the harder it becomes to see each other as human. It is not the abundance we need to manage or restrict, it is the abstraction. We no longer see each other as people; we are now customers, shareholders, employees, avatars, online profiles, screen names, e-mail addresses and expenses to be tracked. The human being really has gone virtual. Now more than ever, we are trying to work and live, be productive and happy, in a world in which we are strangers to those around us. The problem is, abstraction can be more than bad for our economy . . . it can be quite deadly.
[ THE ABSTRACT CHALLENGE ]

CHAPTER 13
Abstraction Kills

“Let me out of here!” he shouted. “Let me out! Let me out!” Kept in a small room with no windows, he started banging on the wall to get the attention of the others. “You have no right to hold me here!” he screamed.

The man enlisted to help that day sat at the control console. He started to get nervous. He could hear the muffled pleas from the other room. He looked up at the man in charge and, as if stating something not already terribly obvious, said, “He’s in pain.” But the man in charge showed no emotion. Nothing. He said only one thing: “The experiment requires that you continue.” And so the man enlisted to help that day turned back to the control panel, muttering to himself, “It’s got to go on. It’s got to go on.” He flipped the switch and administered another electric shock to the stranger in the other room.

“You have no right to hold me here!” shouted the man in the other room again. But no one answered him and the experiment continued. “Let me out!” he continued to scream hysterically. “My heart’s bothering me! Let me out!” Then suddenly, the screaming stopped and the experiment was over.

As World War II was moving toward its conclusion, the main architects of the Nazi movement—Adolf Hitler, Heinrich Himmler and Joseph Goebbels—managed to escape capture by committing suicide. Others were not able to avoid justice. They were rounded up and put on trial for their roles in the systematic genocide committed during the war. Crimes against humanity was one of the charges levied against the twenty-four most senior Nazis captured, most of whom were found guilty for their respective roles. But there was one man who was conspicuously absent from the Nuremberg Trials.

Nazi SS-Obersturmbannführer, or lieutenant colonel, Adolf Eichmann played a significant role in organizing the Holocaust.
He was responsible for managing the logistics of rounding up and deporting mass numbers of Jews and other unwanted groups to the ghettos and concentration camps across Eastern Europe. He was the one who oversaw the process that sent innocent men, women and children, young and old, to the death camps. But after the war, using falsified papers, he was able to escape Germany and make his way to Argentina. For fifteen years Eichmann lived a relatively normal, suburban life under the name Ricardo Klement, until he was captured by Israeli agents in 1960 and brought back to face trial in Jerusalem.

Eichmann’s capture reignited debate over how the Holocaust could have happened in the first place. It wasn’t possible for just a few warped minds to have committed genocide on such a remarkable scale. That amount of planning and organization and logistics required the help of thousands, if not millions, of people. It required the involvement of soldiers at all levels perpetrating the actual crimes and millions of ordinary Germans willfully turning a blind eye. Some believed that there was a collective intent, that an entire population had abandoned all humanity and morality. Others saw it differently.

The common defense that many Nazis and Germans offered after the war was less dramatic. “We had no choice,” they said, “we were just following orders.” That was the mantra. Whether they were senior officials held accountable for their roles or ordinary soldiers and civilians who tried to rebuild a sense of normalcy after the upheaval of the war, they were able to rationalize their actions, avoiding personal responsibility by holding their superiors accountable. This is what they would tell their grandchildren: “We were just following orders.”

Stanley Milgram, a Yale psychologist, wanted to understand more. Were we humans such lemmings that if someone who outranked us, someone in a position of authority, ordered us to do something entirely counter to our moral code, our sense of right and wrong, we would simply obey? Sure, it’s possible on a small scale, but on such a mass scale? So in 1961, just a few months after Adolf Eichmann’s trial began in Israel, Milgram designed an experiment to understand our obedience to authority.

The experiment was relatively simple. In each enactment, there were two volunteers. One would play the role of the teacher and the other would play the role of the student. The person who played the student was actually another scientist involved in the experiment. (To assign the roles, the real volunteer was asked to pick a piece of paper out of a hat that indicated whether they would be the teacher or the student. In fact, both folded pieces of paper said “teacher” on them, giving the volunteer the illusion that their role was picked by chance.)
The volunteers who played the role of the teacher, recruited through a newspaper ad and told they were taking part in an investigation into memory and learning, sat at a console with a series of switches. Each one was told that a series of questions would be asked of the student. If the student got the wrong answer or refused to answer a question, the teacher was to flip a switch on the console to administer an electric shock to the student. In fact, the only electric shocks administered during the entire experiment were mild, 15-volt shocks given to the teachers, just so they could have a sense of what it felt like.

There were thirty switches on the console, labeled from 15 volts to 450 volts in 15-volt increments, making it very clear to the teacher that with each switch the shocks would get increasingly more severe. To make sure the teacher understood the implications of that increasing severity, labels were also placed above certain ranges. The range of 15 to 75 volts, for example, was labeled “Slight Shock.” Written above the 75-to-120-volt range was “Moderate Shock.” The 135-to-180-volt range was labeled “Strong Shock.” “Very Strong Shock,” “Intense Shock” and “Extreme Intensity Shock” covered the next few ranges, until the voltages reached “Danger: Severe Shock” above the 375-to-420-volt switches. The final range, 435 to 450 volts, was painted red and marked simply “XXX.” There was no confusion as to what the switches meant.

The 160 volunteers were put through the experiment in four variations, 40 volunteers for each setup. In one variation, the scientist playing the student sat right next to the teacher, and the teacher had to physically place the student’s hand onto a shock plate. In another variation, the student was in the room with the teacher, who could see and hear the student’s reactions after each shock was administered. There was no uncertainty about the impact of each successive decision to flip a switch.

In another variation, the student was kept in a separate room. Though the teacher was unable to see the effects of the shocks, they could clearly hear the student’s protests and screams through the walls. In all of these variations, the teacher could hear the scientist playing the role of the student pretend to express discomfort at first, then shout and plead for the experiment to end as it progressed. “Stop!” they would scream. “This hurts!” In yet one more variation, the student was kept in another room, and but for thumping on the walls, the teacher could neither see nor hear the student’s reactions to the shocks.
As expected, all the volunteers expressed concern. As they realized, or believed, they were causing pain to the student, they would look up at the scientist, standing next to them in a white lab coat with clipboard in hand, and ask if they should continue despite the pain they were knowingly inflicting. The first time a volunteer expressed a desire to stop the experiment or no longer be a part of it, the scientist would say, “Please continue.” If the volunteer expressed a desire to stop a second time, the scientist would always say, “The experiment requires that you continue.”

As they went further and further down the line of switches, some of the volunteers started to get nervous. Very nervous. They started sweating and shaking. Although extremely uncomfortable, most went on with the experiment. Upon the third request to halt the experiment, the scientist replied coldly, “It is absolutely essential that you continue.” After a fourth protest, the scientist responded simply, “You have no other choice, you must go on.” If any further protest was expressed, the experiment would immediately end.

How far do you think you would go? How much pain could you cause someone before you would stop? Most of us would say we would not go very far, that we would have quit long before we believed we had caused any serious harm. And the scientists expected the same thing. Before the experiment, they predicted that 2 to 3 percent would go all the way, and that those people would exhibit psychopathic tendencies. But the actual results were horrifying.

When the volunteers had to physically place the student’s hand on the shock plate, 70 percent quit the experiment without going very far. When the volunteers were in the same room but didn’t have to physically touch the student, the number went down slightly, with 60 percent refusing to continue. But when they could neither see the students in pain nor hear their cries, only 35 percent refused to continue. That means 65 percent of the volunteers were able to go through the entire experiment, reach the final switch and, for all intents and purposes, kill someone.

The experiment has been criticized for being unethical, and for good reason. Nearly eighty people who woke up that morning believing they were good people went home that day with the knowledge that they could kill someone. Though they expressed concern, though they were nervous, though they had a sense that what they were doing could have a negative impact, even a seriously negative impact, the majority still went all the way.
Upon the conclusion of the experiment, despite believing that the student might be hurt or worse, the volunteers expressed concern for their own culpability, insisting that they should not be held responsible. Not a single volunteer showed any concern for the student’s well-being. None asked to look in the other room. They were more concerned with their own skins.

Eventually, the volunteers were debriefed and shown that the student, who was played by a scientist, was fine and unhurt. They were assured that no shocks were given and that no pain was caused at any time. Some of those who obeyed, who went all the way, now felt remorse for what they had done. They had a sense of personal responsibility. Others who went all the way, in contrast, justified their actions by blaming the scientists. If there were any repercussions, they reasoned, it would be the guys in charge, not them, who would be held responsible. After all, they were just doing as they were told. Some even went so far as to transfer blame to the student. “He was so stupid and stubborn,” said one volunteer trying to come to terms with his actions, “he deserved to be shocked.”

Interestingly, nearly all the volunteers who refused to continue once they realized they were causing pain to someone else felt accountable to a greater moral imperative. Some were religious, but all of them felt answerable to a higher authority than the scientists in the room.

The reality is, Milgram’s experiment is being carried out every single day in offices across the country and around the world. The cycle of abstraction endemic to our brand of capitalism is easily seen when we take a broader view of Milgram’s conclusions. Abstraction is no longer restricted to physical space; it also includes the abstracting nature of numbers. The bigger our companies get, the more physical distance is created between us and the people who work for us or buy our products. At such scale, we can no longer just walk down the aisles and count the cans of soup on the shelf. Now we rely on documents that report the numbers of what we’ve sold and how much we’ve made.

When we divorce ourselves from humanity through numerical abstraction, we are, like Milgram’s volunteers, capable of inhuman behavior. Just as in the conditions Milgram set in his experiment, the physical separation between us and those on the receiving end of our decisions can have a dramatic impact on lives . . . the lives of people who cannot be seen or heard. The more abstract people become, the more capable we are of doing them harm.
CHAPTER 14
Modern Abstraction

Milgram’s Findings Come to Life

IN 2009, THE New York Times and nearly every other major news outlet carried a story about an outbreak of salmonella that killed nine people and sickened more than seven hundred others. The outbreak triggered the biggest food recall in American history. The contamination was traced to products made by over three hundred companies using peanuts and peanut meal supplied by the Peanut Corporation of America (PCA) of Lynchburg, Virginia.

Did the head of PCA do everything in his power to make sure the people who trusted him and his company were safe? Sadly, no. FDA investigators concluded that PCA knowingly shipped tainted products (charges the company denies). And the extensive evidence that company executives put enormous pressure on employees to meet targets is hard to ignore. Stewart Parnell, the president of the Peanut Corporation of America, sent an e-mail to one of his plant managers complaining that the positive salmonella tests were “costing us huge $$$$$, causing obviously a huge lapse in time from the time we pick up peanuts until the time we can invoice,” according to court documents. (Four years later, as this book was going to press, federal prosecutors filed criminal charges against Mr. Parnell and his team. The company went out of business in 2009.)

When our relationships with customers or employees become abstract concepts, we naturally pursue the most tangible thing we can see—the metrics. Leaders who put a premium on numbers over lives are, more often than not, physically separated from the people they serve.

Putting Mr. Parnell aside, what about all the people who worked in the company who did as they were told? In a weak culture, employees see their employer just as Milgram’s subjects saw the scientist—as the final authority figure. A leader who presides over a weak culture does not invest in programs to build the confidence of their people so that they will do the right thing. Instead, command and control perpetuates a system in which people will more likely do the thing that’s right for them.
Uncertainty, silos and politics—all of which thrive in a command-and-control culture and work counter to the concept of a Circle of Safety—increase our stress and hurt our ability to form relationships, to the point where self-preservation becomes our primary focus.

Anything that separates us from the impact our words and actions have on other people has the potential to lead us down a dangerous path. As Milgram showed us, when we cannot see the impact of our decisions, when the lives of people become an abstraction, 65 percent of us have the capacity to kill someone. When we are unable to see or hear the people we are hurting, fears of getting in trouble, losing our jobs, missing the numbers or disturbing our place in the pecking order become the primary drivers of our decisions. And just as the German soldiers defended their actions by pleading that they were “just following orders,” and just as Milgram’s subjects muttered to themselves that “the experiment must continue,” we have our own modern mantras to defend ourselves or pass on accountability when our decisions harm others. We work to “provide shareholder value” or “fulfill our fiduciary duty,” all the while defending our actions as “within the law” or claiming that the decisions were made above our pay grade.

During the time I was researching this book, I had an argument with an investment banker at a dinner I attended. With my new understanding in hand, I pressed and pressed him on his responsibility to the people who are impacted by his decisions. I was stunned by how he parroted Milgram’s volunteers. “I don’t have the authority to make those kinds of decisions,” he said to me. “It’s not my job. My job is to find the best value for my clients,” he offered in his defense.

When we do not feel safe from each other in the environments in which we work, our instincts drive us to protect ourselves at all costs instead of sharing accountability for our actions. Faced with the reality of what the banking industry did to the economy, some bankers went beyond simply blaming the mortgage companies. Just like Milgram’s executioners trying to distance themselves from any role they played in the harm caused, even blaming the student, some bankers went so far as to blame the American homeowner. Jamie Dimon, CEO of JPMorgan Chase, told his shareholders in 2010, “We’re not evicting people who deserve to stay in their house.”

The Responsibility of Business
“THERE IS ONE and only one social responsibility of business,” said Milton Friedman in 1970, six years before winning the Nobel Prize in Economics, “to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game.” By the “rules,” I believe Friedman was referring to the law, a well-intentioned yet imperfect set of guidelines filled with accidental or sometimes political loopholes, designed by well-intentioned or sometimes political people.

Friedman’s words seem to have become the standard for American capitalism today. Over and over, companies demonstrate a preference for adhering to the letter of the law in their aim to drive profit over any moral responsibility they may have to the people they serve or the country or economy within which they operate. Translated to Milgram’s experiment, too many leaders of companies prefer to obey the scientist instead of a higher moral authority. They can justify their actions as within the law while ignoring the intention of the laws they aim to uphold.

Apple Inc. managed to sidestep paying tens of billions of dollars in taxes by setting up subsidiaries in Ireland, where companies are taxed based on where they are incorporated (Apple is incorporated in the United States). The U.S. tax code, in contrast, calculates a company’s tax liability based on where a company makes or keeps its money (Apple was keeping all the money it made in Asia and Europe in Ireland). This distinction allowed Apple to fall between the cracks of the two countries’ tax laws, and in so doing, between 2009 and 2012, to keep $74 billion out of the reach of the IRS, or any taxing authority for that matter. This is a fact Apple does not deny. As befits one of the great innovators of our day, Apple pioneered the technique of routing profits through Irish subsidiaries and the Netherlands and then to the Caribbean to avoid American taxes, a technique many other companies have since copied. Yet Apple, according to Friedman’s thinking, broke no rules.

We have an absolute need to form bonds of trust. Our survival depends on it. To that end, our primitive brain constantly evaluates the words and behaviors of companies exactly the same way it evaluates the words and behaviors of individuals. On a biological level, trust is trust, regardless of with whom it is formed. If someone says or does something that makes us feel we couldn’t trust them with our lives, then we keep our distance. If simply following the law were enough, we would have to trust cheating boyfriends or girlfriends, because they broke no laws of marriage. As social animals, morality also matters. Our (or indeed a company’s) sense of right or wrong, despite the letter of the law, matters on a social level. This is the very foundation of civil society.
Timothy Cook, Apple’s CEO, raised the question of responsibility at a congressional hearing on the matter. “Unfortunately, the tax code has not kept up with the digital age,” he said. Is it the governing authority’s responsibility to close all the loopholes, or do companies bear some responsibility too? Is this an act of civil disobedience by Apple to force the government to do better? Apple is a good company that does good things, like giving to education, but because most people are unaware of those things, when they hear about Apple’s tax avoidance, it can affect how much we trust the company. But the problem is bigger than Apple. It seems to be the standard for doing business today—to exploit the loopholes until the rules catch up (and sometimes to lobby against changing the rules). And if that’s the case, then no one should have any problem with the decisions made by the Oceanic Steam Navigation Company.

Within the Law

THE LARGEST SHIPS in the period before the turn of the twentieth century were predominantly ferries. They moved huge numbers of people from one place to another within close proximity to the shore. Logically, the regulations that outlined the responsibilities of ship owners were based on how ships were used at the time—as ferries. By the time the Titanic set sail in 1912, however, the regulations had not yet been updated to reflect this new breed of oceangoing vessel (the equivalent of Timothy Cook’s “digital age”).

The Titanic carried as many lifeboats as the law required: sixteen. The problem was, the Titanic was four times larger than the largest legal classification of ships of the day. The Oceanic Steam Navigation Company, the Titanic’s owner, adhered to the outdated regulation (in fact, they actually added four more inflatable rafts). Unfortunately, as we all know, on April 14, 1912, just four days after leaving port on its maiden voyage, the Titanic struck an iceberg far from any shoreline. There were not enough lifeboats for everyone, and more than 1,500 of the 2,224 passengers and crew on board died as a result. A ship four times bigger than the largest classification carried only a quarter of the lifeboats it actually needed. Not surprisingly, only a little more than a quarter of the passengers and crew survived that day.

The entire shipping industry was fully aware that the outdated regulation would soon be updated.
In fact, additional space was added aboard the deck of the Titanic in expectation of a “lifeboats for all” requirement. But lifeboats were expensive. They required maintenance and could affect a ship’s stability, so executives at the Oceanic Steam Navigation Company decided not to add the lifeboats until the regulation said they had to. Though there were not enough lifeboats for all the passengers on board the Titanic, the company was in full compliance with the applicable rules.

The disturbing correlation between Apple’s arguments against paying taxes and the decision of the Titanic’s owners not to add lifeboats doesn’t stop there. Just as the shipping industry lobbied against the change in regulations in the early twentieth century, arguing that having so many lifeboats sitting visibly on the decks would hurt business because people would think their vessels were unsafe, Apple and others contend that paying their actual tax liability would hurt their ability to compete. (Incidentally, this is the same argument car manufacturers used in the 1950s when seat belt requirements were being considered. They feared that the existence of a seat belt would lead people to think their cars were unsafe.) It may be worth noting that, as reported by the Congressional Budget Office, in 2011 individual American taxpayers contributed $1.1 trillion to the government, whereas corporate taxes totaled just $181 billion.

Though lives may not be at stake in this shell game many companies play, on a strictly biological level such behavior makes it very hard for the rest of us to really trust them. Being a company of high moral standing is the same as being a person of high moral character—a standard not easily determined by the law but easily felt by anyone.

Given the scale at which so many companies now operate, the leaders of many large companies seem to have no choice but to manage their businesses on spreadsheets and screens, often far removed from the people their decisions will ultimately impact. But if Milgram’s numbers play out, it would mean that 650 of the leaders of the Fortune 1000, the largest companies in America, are able to make decisions without consideration for their impact on the lives of human beings.

This goes straight back to the conditions in which we, the human animal, operate best. If we are to reduce the damaging effects of abstraction on our decision making, then, based on Milgram’s experiment, a sense of a higher authority—God, a noble cause, a compelling vision for the future or some other moral code, and not a shareholder, customer or market demand—is essential. When our leaders give us something noble to be a part of, offer us a compelling purpose or reason why we should come to work, something that will outlive us, it seems to give us the power to do the right thing when called upon, even if we have to make sacrifices to our comfort in the short term.
And when a leader embraces their responsibility to care for people instead of caring for numbers, people will follow, solve problems and see to it that that leader’s vision comes to life the right way, a stable way, and not the expedient way.

It is not about good people or bad people. Like Milgram’s volunteers, many of us work out of sight of the people our decisions affect. That means we are working at a significant disadvantage if we have any desire to do the right thing (which is different from doing what’s legal). One cannot help but recall Johnny Bravo, who, above the clouds and unable to make visual contact with the Special Operations Forces below, felt it necessary to fly down just so he could see those he was there to protect. When we opt to stay above the clouds, relying only on information fed to us instead of going down to see for ourselves, not only is it harder to make the right moral decisions, it is even harder to take responsibility when we fail to do so. The good news is, there are things we can do to help us manage the abstraction and keep our Circles strong.
CHAPTER 15
Managing the Abstraction

Numbers of People Aren’t People, They’re Numbers

“THE DEATH OF one man is a tragedy,” Joseph Stalin reportedly said. “The death of a million is a statistic.” Stalin was a man who well understood statistics. As General Secretary of the Communist Party of the Soviet Union from 1922 to 1952, he is said to have been responsible for the deaths of millions of people, most of whom were Soviet citizens. Like so many dictator types, he had a cult of personality, operated with extreme brutality, trusted very few people and was very, very paranoid. But he was also absolutely right about how we perceive a tragedy that befalls one person compared with one that befalls hundreds, thousands or even millions. Here are two stories to show you what I mean. Both stories are completely true.

STORY 1

When I wrote this book, the country of Syria was being torn apart by what was basically a civil war. Inspired by the Arab Spring that swept across the region, the Syrian people rose against the dictatorship of Bashar al-Assad, who took control of the country in 2000 when his father, Hafez al-Assad, died after twenty-nine years of equally brutal rule. In over forty years of Assad rule, two generations of Syrian men and women have known nothing else. This is a modern media world, however, and as much as the Syrian government tried to suppress news of uprisings in neighboring nations, word of these rebellions made it through. But in stark contrast to the peaceful uprising in Tunisia, the Syrian rebellion was met with extreme and intense brutality by the Assad government.
World opinion did nothing to affect the Assad regime as it continued to pound a disorganized and ill-equipped rebellion with the full might of its army. United Nations estimates at the time of this book were that over 100,000 Syrians had been killed by the Syrian military, including nearly 1,500 in a single chemical attack. A good many of them were innocent civilians.

STORY 2

An eighteen-year-old girl was lying in the middle of the street in San Clemente, California. She had been hit by a car driven by a seventeen-year-old girl. Unconscious, with one of her legs broken and pointing sideways at an unnatural angle, she was in bad shape. Cami Yoder, an Army reservist who happened to be driving past, pulled over to see if she could help. Kneeling down beside the injured young woman, Cami took her vitals. The girl wasn’t breathing and her pulse was faint at best. Immediately, Cami began CPR and mouth-to-mouth resuscitation to try to keep the young woman alive. Not much later an ambulance arrived and the paramedics took over. They stabilized the young woman and took her to the hospital.

A few days after the incident, Cami wondered how the girl was faring. She was able to find the news story online and learned what had happened. The girl had died. This young woman, her whole life ahead of her, was gone.

Which story evoked a stronger feeling, the first one or the second one? A story about tens of thousands of people struck down by their own military as they stood up for something noble does not have the same emotional impact on us as the story of one person does. We mourn the death of one young woman with an empathy that we are seemingly unable to muster for thousands of young women and children and others struck down as senselessly and even more brutally.

This is one of the shortcomings of using numbers to represent people. At some point, the numbers lose their connection to the people and become just numbers, void of meaning. We are visually oriented animals. We can pursue things we can see.
If it is a person in need, we can rush to their aid. If there is a clear vision of a future state brighter than our world today, we can work to build it. And if the goal is to advance a metric from one number to another, we can do that too. But when numbers are the only thing we can see, our ability to perceive the distant impact our decisions may have is frustrated.

It’s one thing for big numbers to represent money or products. But when big numbers start representing human beings, as Stalin told us, our ability to empathize starts to falter. If your sister, the major breadwinner in her family, loses her job, it will have a significant impact on the lives of your niece and nephew. And that loss would be a deep emotional burden on your sister, her family and probably you too. But a decision made on a spreadsheet to lay off four thousand people at some large corporation loses tangibility and becomes something that just needs to be done to meet certain goals. The numbers no longer represent people who support their families; they are simply abstractions to be calculated.

Be it a politician or someone working in a company, perhaps the most valuable thing we can do if we are to truly serve our constituents is to know them personally. It would be impossible to know all of them, but knowing the name and the details of the life of someone we are trying to help with our product, service or policy makes a huge difference. The moment we are able to make tangible that which had previously been a study or a chart, the moment a statistic or a poll becomes a real living person, the moment abstract concepts are understood to have human consequences, is the moment our ability to solve problems and innovate becomes remarkable.

Rule 1. Keep It Real—Bring People Together

AS IF THE abstracting qualities of numbers and scale aren’t enough to deal with when trying to run an organization, these days we have the added complication of the virtual world. The Internet is nothing short of awe inspiring. It gives the power to operate at scale or spread ideas to anyone, be it a small business or a social movement. It gives us the ability to find and connect with people more easily. And it is incredible at speeding the pace of commercial transactions. All of these things are good. But, just as money was developed to expedite and simplify transactions by allowing payment to be rendered without barter, we often use the Internet to expedite and simplify communication and the relationships we build.
And just as money can’t buy love, the Internet can’t buy deep, trusting relationships. What makes a statement like that somewhat tricky or controversial is that the relationships we form online feel real. We can, indeed, get bursts of serotonin when people “like” our pictures, pages or posts, or when we watch ourselves go up in a ranking (you know how much serotonin loves a ranking). But the feelings of admiration we get from virtual “likes” or from the number of followers we have are not like the feelings of admiration we get from our children, or that a coach gets from their players. A “like” is simply a public display of affinity with no sacrifice required—a new kind of status symbol, if you will. Put simply, though the love may feel real, the relationship is still virtual. Relationships can certainly start online, but they only become real when we meet face-to-face.

Consider the impact that Facebook and other online communication tools have had on teen bullying. One quarter of all teenagers in the U.S. say they have experienced “cyberbullying.” What we’ve learned is that abstraction can lead people to abhorrent behavior, to act as if they’re not accountable. An online community gives shy people a chance to be heard, but the flip side is that it also allows some to act out in ways they probably never would in real life. People say horrible things to each other online, things they would probably never say in person. The ability to maintain distance, even complete anonymity, has made it easier to stop acting as humans should—with humanity. And despite the positive feelings we can have when meeting people online, unlike real friendships based on love and trust, those feelings don’t last long after we’ve logged off, and they rarely if ever stand the test of time.

It seems to stir controversy when I point out that no matter how great social media is, it is not as effective for building strong bonds of trust as real human contact. Social media fans will tell me about all the close friends they’ve made online. But if social media were the end-all-be-all, why would over thirty thousand bloggers and podcasters descend on Las Vegas every year for a huge conference called BlogWorld? Why don’t they just meet online? Because nothing can replace face-to-face meetings for social animals like us.

A live concert is better than the DVD, and going to a ball game feels different from watching it on TV, even though the view is better on television. We like to actually be around people who are like us. It makes us feel like we belong. It is also the reason a video conference can never replace a business trip. Trust is not formed through a screen; it is formed across a table. It takes a handshake to bind humans . . . and no technology yet can replace that. There is no such thing as virtual trust.
On the website for NMX (the official name for the BlogWorld event), there’s a promotional video in which people talk about what is so great about going to the event. “Sharing ideas” is a frequently mentioned advantage. “Getting to meet so many different people,” “bringing everyone together” and “meeting people who do what I do, who are on the same journey” are also frequent themes. And of course, my personal favorite, said by someone who follows many of the bloggers who attend the conference: “I got to shake their hands and that was awesome!” Even bloggers have to appreciate the irony of bringing together the champions of the blogosphere to meet in person to share ideas about the supremacy of the blogosphere.

Real, live human interaction is how we feel a part of something, develop trust and have the capacity to feel for others. It is how we innovate. It is why telecommuters never really feel like they are a part of the team as strongly as the ones who go to work every day. No matter how many e-mails they send or receive, no matter how kept in the loop they are, they are missing all the social time, the gaps, the nuance . . . the humanity of being around other humans. But what do we do in hard times, when we need good ideas most? We cut back on conferences and business trips because video conferencing and webinars are cheaper. Perhaps. But only in the short term. Given how relatively new social media is, the long-term impact of all this dehumanizing is yet to be fully realized. Just as we are feeling today the effects of the policies and practices implemented in the 1980s and 1990s that prioritized profit over people, we will have to wait a generation before we feel the full effects of our modern bias to replace real interactions with virtual ones.

Rule 2. Keep It Manageable—Obey Dunbar’s Number

IN 1958, BILL Gore quit his job at DuPont to pursue his belief in the possibilities of the polymer polytetrafluoroethylene, or PTFE, commonly known as Teflon. That same year, he and his wife, Vieve, started W. L. Gore & Associates in their basement. It was a friendly place, and everyone knew everyone else. But the discovery of a new polymer—expanded polytetrafluoroethylene (ePTFE)—by their son Bob changed the course of Bill and Vieve’s company forever.
ePTFE, or GORE-TEX, as it’s more commonly known, had nearly infinite applications in medical, fabric and industrial markets. It was only a matter of time before the humble, family-oriented company outgrew its basement headquarters and moved into a factory. Business was booming, and as demand grew, so did the factory and the number of people in its employ.

As the story goes, one day Bill Gore walked out onto the floor of his factory and realized he didn’t recognize many of the people. Things had gotten so big that he simply did not know who was working for him anymore. Something told him that this couldn’t be good for him, his employees or the company. After doing some counting, Gore concluded that to maintain the sense of camaraderie and teamwork he felt was essential for the factory to run smoothly, it should have only about 150 people. That was the magic number.

Instead of trying to eke out more efficiencies by increasing the size of the existing factory, Gore would simply build an entirely new factory, sometimes right next door to an old one. Each factory was capped at 150 people. It turned out Bill Gore was onto something. Business continued to boom under this model and, as important, the relationships among the employees stayed strong and cooperative. Today the still privately held company has sales of $3.2 billion per year, employs more than 10,000 people around the world and still attempts to organize its plants and offices into working groups of about 150 people.

Though Bill Gore was trusting his gut based on his own observations, it’s no coincidence that he arrived at a limit of 150 people. Robin Dunbar, a British anthropologist and professor in the Department of Experimental Psychology at Oxford University, arrived at the same conclusion. Professor Dunbar figured out that people simply cannot maintain more than about 150 close relationships. “Putting it another way,” he likes to say, “it’s the number of people you would not feel embarrassed about joining uninvited for a drink if you happened to bump into them in a bar.”

The earliest groups of Homo sapiens lived in hunter-gatherer tribes that maxed out at between 100 and 150 people. Amish and Hutterite communities are about 150 in size. The Bushmen of South Africa and Native Americans also live in groups that cap out at about 150. Even the size of a company of Marines is about 150 people. That magical number is the number of close relationships we are naturally designed to manage. Any more than that starts to cause a breakdown unless rigid social systems, or effective hierarchy and bureaucracy, are implemented to help manage the scale.
to help manage the scale. This is the reason senior leaders must trust midlevel leaders: no one person can effectively manage large numbers of people if there is to be a strong sense of trust and cooperation.

The reasons groups function best when they do not get bigger than about 150 people make perfect sense when you look closely. The first reason is time. Time is a constant—there are only twenty-four hours in a day. If we gave only two minutes to everyone we knew, we wouldn’t get to know people very well and deep bonds of trust would likely never form. The other is brain capacity. We simply can’t remember everyone, which is why Dunbar’s Number is about 150; some of us can maintain a few more relationships and some a few fewer.

In addition, as Dunbar has noticed in his research, when groups get bigger than about 150, the people are less likely to work hard and less likely to help each other out. This is a significant finding, as so many businesses work to manage their growth by focusing on cost efficiencies while ignoring the efficiencies of human relationships. And ultimately, it is the strength of those human relationships that will help an organization manage at scale.

Many people thought that, with the introduction of the Internet, Dunbar’s Number would be rendered obsolete. The ability to communicate with large numbers of people would become more efficient, giving us the capacity to maintain more relationships. It turns out not to be the case. Our anthropology wins again. Even though you may have eight hundred friends on Facebook, odds are high that you do not personally know them all and they may not all personally know you. If you were to sit down and try to contact all of them directly, as the journalist Rick Lax wrote about on wired.com, you would learn very quickly that Dunbar’s Number wins. Lax was surprised by how few of his two thousand “friends” he actually knew or who actually knew him.

In small organizations, where we are able to know everyone, it is much easier for us to do the work necessary to look after them. We are, for all the obvious reasons, more likely to look after people we personally know than those we don’t. If a person on a factory floor knows who the accountant is and the accountant knows who the machinists are, they are more likely to help each other.

When a leader is able to personally know everyone in the group, the responsibility for their care becomes personal. The leader starts to see those for whom they are responsible as if they were their own family. Likewise, those in the group start to express ownership of their leader. In a Marine platoon of about forty people, for example, they will often refer to the officer as “our” lieutenant,
whereas the more distant and seldom-seen senior officer is simply “the” colonel.

When this sense of mutual ownership between the leader and those being led starts to break down, when informality is replaced by formality, it is a sure sign the group may be getting too big to lead effectively. This means, for larger organizations, the only way to manage the scale and keep the Circle of Safety strong is to rely on hierarchies. A CEO can “care” about their people in the abstract, but not until that abstraction is mitigated can the care be real. The only way to truly manage at scale is to empower the levels of management. They can no longer be seen as managers who handle or control people. Instead, managers must become leaders in their own right, which means they must take responsibility for the care and protection of those in their charge, confident that their leaders will take care of them.

Professor Dunbar learned that in bigger companies, ones with many hundreds or thousands of employees who are not organized into groups of fewer than 150, employees tend to have more friends outside their jobs than inside. The larger the group of people we work with, the less likely we are to develop
any kind of trusting relationship with them.

I had the opportunity to take a tour of the old offices of a large social media company in Northern California. (I can’t say which one it was because the company requires that every visitor sign a restrictive nondisclosure agreement before they let them in the building.) The office was a large, loft-style open space with rows of people working together. The goal of the open space was to encourage open communication and a cross-pollination of ideas.

The manager giving the tour made a comment that I found interesting, given Dunbar’s own findings. This company grew in part because of a culture of amazing cooperation, sharing and open communication, he told me. The company believed it was because of the open-plan layout. And so, as the company grew, they kept that same layout—the one that I was being shown. But for reasons they couldn’t quite explain, cooperation and open communication did not improve as the company grew. In fact, as my tour guide admitted, it got worse. Dunbar wins again.

Rule 3. Meet the People You Help

IN 2010, ADAM Grant, a management professor at the Wharton School of Business at the University of Pennsylvania and author of Give and Take: A Revolutionary Approach to Success, set out to study the effectiveness of his university’s fund-raising department and to understand what worked and what didn’t. The job was straightforward: employees called alumni and tried to persuade them to donate money to a scholarship fund for exceptional students whose families couldn’t afford to pay for college. The fund-raisers were instructed to describe the university’s dire financial position and the impressive accomplishments of the prospective recipients. The alumni would hear about the university’s need to increase its investment in computer science, say, or business administration, to help create the next generation of leaders. This was, after all, the future workforce of the new economy, the callers would tell them. By all accounts, the pitch was pretty inspiring.

Yet as hard as they tried, the fund-raisers had only moderate success. Their numbers didn’t improve even with an arsenal of research about the sting of the recession on university budgets. Furthermore, the job had all the
characteristics of any mundane work—repetitive tasks, long hours sitting still and occasionally rude customers. Needless to say, turnover in the fund-raising department was extraordinarily high, leading to even worse morale. So Grant came up with an idea to improve the effectiveness of the fund-raisers . . . and it only took five minutes.

Professor Grant arranged for students who had received the scholarships to come to the office and spend five minutes describing to the fund-raisers how the scholarships had changed their lives. The students told them how much they appreciated the hard work of the fund-raising department. Even though the people impacted by the work of the fund-raisers were only there for a short time, the results were astounding. In the following month, the fund-raisers increased their average weekly revenue by more than 400 percent. In a separate, similar study, callers showed an average increase of 142 percent in the amount of time they spent on the phone and a 171 percent increase in the amount of funds they raised.

As social animals, we need to see the actual, tangible impact of our time and effort for our work to have meaning and for us to be motivated to do it even better. The logic seems to follow Milgram’s findings, except in this case the effect is positive. When we are able to physically see the positive impact of the decisions we make or the work we do, not only do we feel that our work was worth it, but it also inspires us to work harder and do more.

A control group that had not received a visit from a student showed no improvement in sales or time spent on the phone. A third group that simply listened to a manager describe how much a scholarship meant to a student also showed no increase in performance. In other words, hearing from our bosses that our work is important is nowhere near as powerful as seeing that impact for ourselves.

The loan department of Wells Fargo Bank had a similar experience. When they invited a customer to come into the bank and describe how a loan had changed their life—how it allowed them to buy a house or pay off a debt—it had a dramatic effect on the motivation of bank employees to help more people do the same. They could see for themselves the impact their work was having on someone’s life. This was a significant shift in how the employees perceived their jobs, and it is foundational to having a sense of purpose in the work we do. Without necessarily being aware of it, many of the employees stopped coming to work to sell loans and started coming to work to help people.

Further proof of how much the quality of our work improves when we can attach a human being
to the results was seen in a study that found that simply showing radiologists a photograph of a patient led to a dramatic improvement in the accuracy of their diagnostic findings.

Adam Grant conducted another study, this one on lifeguards at a community recreation center. One group of lifeguards was given testimonies to read from other lifeguards about how the work had helped them advance their personal goals. A second group was given firsthand accounts from lifeguards who had actually saved the lives of swimmers. Those who read about lifeguards saving people’s lives were far more motivated at work and devoted more time to helping swimmers than those who read about how the job could help them personally.

Many of us would say we’re not surprised by these findings. After all, they seem rather obvious. Or do they? Grant surveyed several thousand executives to find out how important it was to them to feel that their work had value. The results: only 1 percent of the executives said managers should bother showing employees that their work makes a difference. If anything, many companies try to explain the value our work will have in our own lives, the benefits we will reap if we hit a goal, as opposed to the benefit that others will derive. But remember our biology: we are naturally cooperative animals that are biologically more inspired and motivated when we know we are helping others.

This is one of the reasons I love the organization charity: water. If you give them a donation (which you can do at charitywater.org), besides the fact that 100 percent of that donation goes to the cause they are championing, to bring clean drinking water to the 700 million people who don’t have it, they will actually send you a photograph and the GPS coordinates of the well your money paid for. Though going to Africa and meeting the people yourself is even better, it is quite powerful to see the actual result of the donation you gave.

Most of us, unfortunately, never see the people whose lives our work touches. For the vast majority, the closest we come to “seeing” results is evaluating numbers on spreadsheets or reading about what “customers” like in a report. If the line on the graph goes up, we are told we’ve done well and we should feel proud of what we’ve accomplished. We are expected to feel something for the numbers and think about the people. Our willingness to invest more time and energy is, however, biologically tied to the opposite—to feeling for the people and thinking about the numbers. It makes sense for social animals that our sense of purpose is always human.

Rule 4. Give Them Time, Not Just Money

LET’S SAY YOU’RE moving to a new house. To help you out, one of your friends pays for the moving company. A very generous offer worth $5,000. Another friend comes to your house and helps you pack the boxes, load the truck, travel with you to the new house, unload and unpack the boxes. Two weeks later, both friends need a favor from you on the same day. Which one would you feel more inclined to help: the one who wrote a check or the one who committed time and energy?

Money is an abstraction of tangible resources or human effort. It is a promissory note for future goods or services. Unlike the time and effort that people spend on something, it is what money represents that gives it its value. And as an abstraction, it has no “real” value to our primitive brains, which judge the real value of food and shelter or the behavior of others against the level of protection or safety they can offer us. Someone who gives us a lot of money, as our brains would interpret their behavior, is not necessarily as valuable to our protection as someone willing to commit their time and energy to us. Given our obsessive need to feel safe among those in our tribe—our communities and our companies—we inherently put a premium value on those who give us their time and energy.

Whereas money has relative value ($100 to a college student is a lot; $100 to a millionaire is a little), time and effort have an absolute value. No matter how rich or poor someone is, or where or when they were born, we all have 24 hours in a day and 365 days in a year. If someone is willing to give us something of which they have a fixed and finite amount, a completely nonredeemable commodity, we perceive greater value. If we waste money, we can make more (especially in our society). But we’ve all had the experience of sitting in a meeting or watching a movie . . . or maybe even reading this book . . . and thinking to ourselves, “I will never get this time back.” You can save time if you stop reading now, but I cannot give back the time you spent to get here. Sorry.

And it’s not just time. The energy we give also matters. If a parent goes to watch their kid’s soccer game but looks up from their mobile device only when there is cheering, they may have given their time, but they haven’t given their energy. The kid will look over to see their parent’s head down most of the game, busy texting or e-mailing the office or something. Regardless of that parent’s intentions, without their attention the time is basically wasted for both parent and child. The same is true in our offices when we talk to someone while
reading our e-mails or sit in a meeting with one eye on our phone. We may be hearing all that is said, but the person speaking will not feel we are listening, and an opportunity to build trust—or be seen as a leader who cares—is squandered.

Just as a parent can’t buy the love of their children with gifts, a company can’t buy the loyalty of its employees with salaries and bonuses. What produces loyalty, that irrational willingness to commit to the organization even when offered more money elsewhere, is the feeling that the leaders of the company would be willing, when it matters, to sacrifice their time and energy to help us. We will judge a boss who spends time after hours to help us as more valuable than a boss who simply gives us a bonus when we hit a target.

If a colleague told you that over the weekend they gave $500 to charity, what would you think of them? We’d think they were nice, but we would probably wonder why they were telling us. Did they want a medal or something? If another colleague told us that over the weekend they volunteered their time to paint a school in the inner city, what would you think of them? “That’s cool,” we’d think to ourselves. “I should do more.” Simply hearing about the time and energy someone gave to others can inspire us to want to do more for others too (remember your oxytocin).

Though we may get a shot of chemical feel-good from the money we give, it doesn’t last long and it isn’t likely to affect how others view us. Someone who participates in a walk-a-thon finds it personally fulfilling and does more to raise their status than someone who simply donates to the effort. Giving time and energy actually does more to shape the impression others have of us than giving money. This is the reason a CEO with a bad reputation can’t redeem themselves by writing checks to charity. That’s not behavior we would view as valuable to the tribe. It is also the reason we are more tolerant of the missteps or occasional bad decisions made by a CEO whom we believe to be genuinely committed to the protection of their people.

A leader of an organization can’t simply pay their managers to look out for their reports. A leader can, however, offer their time and energy to those in their care, and in turn those managers will be more willing to give their time and energy to their subordinates. Their subordinates, in turn, will be more inclined to give time and energy to their own direct reports. And, at the end of the chain, the people with outward-facing jobs are more likely to treat the customer better. It’s just biology. The oxytocin and serotonin make us feel good when time and energy are given to us, which inspires us to give more of ourselves to others.

Business is a human enterprise. It may even be why we call a business a
“company”—because it is a collection of people in the company of other people. It’s the company that matters.

Rule 5. Be Patient—The Rule of Seven Days and Seven Years

I WENT ON a first date with a woman recently. It was an amazing first date. We spent nearly eight hours together. We went for brunch and strolled around the city. We went to a museum, then went for dinner. We talked and talked the whole time. We were both smiling and giggling; we even started holding hands a few hours in. As a result of that amazing first date, we’ve decided to get married. Needless to say, we are both very excited.

You flinched a bit when you read that last bit, didn’t you? It’s normal. When we hear stories like that, our immediate reaction is “that’s crazy.” But you weren’t on my date with me. We’re in love . . . I swear. The fact is, we instinctively know that the strong bond of human trust cannot be formed after one date or even after one week. In contrast, if I told you I’ve been dating the same woman for seven years and we’re not married yet, you might think, “What’s wrong, then?”

The strong positive feeling we may have after a great first date, or even a great job interview, is not love or trust. It’s a predominantly dopamine-fueled feeling telling us that we think we’ve found what we’re looking for. Because it feels good, we can sometimes mislabel it as something more stable than it is, even if both parties feel it. This helps explain why that love-at-first-date may crumble soon after. It also helps us understand why someone we loved in an interview may turn out, a few months into the job, not to be a good fit for the organization. It’s because we didn’t actually spend enough time to learn whether we can, indeed, rely on the person. Jumping straight in, even if it “feels right,” is nothing short of gambling. It may work out, but the odds are against you.

It is just as bad if we stay too long without ever feeling like we belong. If we’ve been at a job for seven years and still don’t feel it . . . well . . . maybe it’s time to move on.

Our internal systems are trying to help us navigate the social world so that we can find people who may be more willing to give of themselves to help us and be a part of our Circle of Safety. It takes time to get to know someone and
build the trust required to sustain a relationship, personal or professional. Our world is one of impatience. A world of instant gratification. A world ruled by dopamine. Google can give us the answer we want now. We can buy online and get what we want now. We can send and receive information instantaneously. We don’t have to wait a week to see our favorite show; we can watch it now. We have gotten used to getting what we want when we want it.

This is all fine and good for movies or online shopping, but it’s not very helpful when we are trying to form the bonds of trust that can withstand storms. That takes time, and there’s no app that can speed that up.

I have no data to say exactly how long it takes to feel like we trust someone. I know it takes more than seven days and I know it takes fewer than seven years. I know it is quicker for some and slower for others. No one knows exactly how long it takes, but it takes patience.

CHAPTER 16

Imbalance

FOR AN ANIMAL designed to live and work in conditions in which resources were relatively scarce, having too much of anything can create some inherent problems for the forces that influence our behavior. For 40,000 years, we lived in a predominantly subsistence economy. We rarely had significantly more than we needed. It was only about 10,000 years ago, when we first became farmers instead of hunters and gatherers, that we started to move into a surplus economy. Able to produce more than we needed, we could now grow our populations beyond about 150 people. We could trade our surplus with others. We could afford to waste more than was thought prudent in an earlier age. And we could afford to have standing armies and intellectual and ruling classes.

Whenever a group moves from subsistence to surplus, those with the greatest surplus work hardest to mold society to meet their expectations. The question is, are they using their surplus to effect change that is good for society or only for themselves?

It should come as no surprise that the richest companies work so hard to lobby legislators to make (or eliminate) regulations to suit their interests. They have more resources to use, protect and further accumulate. And if not properly managed, the cultures of these organizations can fall out of balance.

“Destructive Abundance” is what I call the result of this imbalance. It is what happens when selfish pursuits are out of balance with selfless pursuits. When the levels of dopamine-incentivized behaviors overwhelm the social protections afforded by the other chemicals. When protecting the results is prioritized above protecting those who produce the results. Destructive Abundance happens when the players focus almost exclusively on the score and forget why they set out to play the game in the first place.

For all the organizations that have suffered from Destructive Abundance, there is a clear pattern that provides lessons for the rest of us. In nearly all those organizations, the cultures weren’t managed properly. There was almost always a leader who didn’t take their responsibility as a leader to heart. Once the
destructive forces of the abundance really set in, integrity started to falter and cooperation gave way to politics, until the people themselves became just another commodity to be managed, like the electricity bill. Destructive Abundance almost always follows when challenge is replaced by temptation.

[ DESTRUCTIVE ABUNDANCE ]

CHAPTER 17

Leadership Lesson 1: So Goes the Culture, so Goes the Company

A Culture Sacrificed

“LONG-TERM GREEDY.” These were the words Gustave “Gus” Levy, the venerable senior partner at Goldman Sachs, would use to describe the way the company operated. The year was 1970, and Goldman was a “gentleman’s” organization, one that believed in partnership and doing what was best for the client and the firm. Given their reputation these days, it sounds funny, but Goldman bankers were known as “billionaire Boy Scouts” for their seeming desire to always try to do the right thing for clients.

“Long-term greedy” meant that sometimes it was worth taking a short-term hit to help a client because the loyalty and trust it produced would in time pay back in spades. And pay back it did. Like so many organizations with a strong culture, Goldman Sachs grew while rivals struggled or failed. Starting in the 1970s and lasting until the early 1990s, it seemed Goldman could do no wrong. “Up until the 1990s, their reputation was very high,” writes Suzanne McGee, a journalist and author of the book Chasing Goldman Sachs. “If an IPO was underwritten by Goldman Sachs, that was akin to Good Housekeeping’s seal of approval.”

While we must be careful not to romanticize Goldman’s culture (just as we must not romanticize the Greatest Generation), there is no question that it was considered the gold standard on Wall Street. And as with all strong cultures, it was hard to get in. By hard, I don’t mean the academic standards—I mean something even more difficult. There was a time when even the most academically qualified candidates could not count on getting a position at
Goldman. They had to be a good fit for the culture. They were expected to put the needs of the firm above their own. The partners had to sense that they could trust their people even more than their people could make them rich. The people, in turn, had to believe in long-term greed. It was because their culture was built on these high standards of character that Goldman did well in hard times. While other crews were busy trying to save themselves, sometimes even abandoning ship, Goldman’s people came together to see their ship through rough waters.

But something happened. Starting in the 1990s, and certainly accelerating after the company went public in 1999, there’s evidence that the partnership culture started to break down. The time was ripe for a new mentality to take hold at Goldman. “The regulations that had kept finance boring had all but disappeared by the time Goldman’s IPO was issued,” wrote Harvard Law professor Lawrence Lessig in a column for CNN.com. “Bold (and sometimes reckless) experiments (‘financial innovations’) created incredible opportunities for firms like Goldman to profit.”

In this atmosphere, the quickly expanding firm began to embrace a new kind of trader, a decidedly more aggressive personality than the investment bankers who had previously occupied the firm’s ranks. The standards by which new people were brought in now put academic pedigree and prior success before cultural fit. The arrival of this new breed of trader caused resentment among those who were proud of the company they had built and of the culture they had devoted their lives to upholding and protecting. And the company split into two distinct camps: the old Goldman and the new Goldman. One culture was built on loyalty and long-term greed, the other on numbers and short-term targets. One was built on a balance of social chemicals, the other on an imbalance tilted decidedly toward dopamine. The more people Goldman let in who were driven to maximize their own wealth and status, sometimes at the expense of the firm or the client’s long-term advantage, the more damage was done to the culture of the company, its overall reputation and, ultimately, the decisions the firm made.

William Cohan highlights this in his book Money and Power: How Goldman Sachs Came to Rule the World. “The first time Goldman had actual layoffs, as in fired people because the firm was having a bad year (as opposed to for individual performance reasons), was in the early 1990s, and it was highly traumatic,” Cohan writes. Think about that. Goldman Sachs did not embrace the concept of layoffs until the 1990s. Something had clearly changed.
By 2010, with Goldman Sachs’ role in the mortgage-backed securities crisis, coupled with the huge bonuses it gave out just months after receiving a government bailout, the company’s tarnished reputation was at its lowest point. It was no longer the most trusted firm on Wall Street but rather a symbol of its excess and greed. Its CEO, Lloyd Blankfein, even issued an apology: “We participated in things that were clearly wrong and we have reasons to regret and apologize for,” he said in November 2009. But it was too late (and halfhearted, many felt). No longer called Boy Scouts, the Goldman Sachs leaders were considered something closer to crooks.

This story is not unique to Goldman Sachs. I use Goldman to illustrate what is happening in a good many of our companies across all sorts of industries.

Every culture has its own history, traditions, languages and symbols. When we identify with a culture, we articulate our belonging to that group and align ourselves with a shared set of values and beliefs. We may define ourselves, in part, by the culture of our country of citizenship—for example, I am an American—or by the culture of an organization—such as, I am a Marine. This doesn’t mean we think about our cultural identity on a daily basis. But when we are away from the group or if our tribe is threatened from the outside, it becomes more important. It can even become our primary focus. Remember how the country came together as one after the events of September 11?

In strong corporate cultures, employees will form similar attachments. They will identify with the company in a very personal way. The employees of WestJet, Canada’s rebellious populist airline akin to America’s Southwest Airlines, don’t say they work for WestJet—that would make it a job. They call themselves WestJetters. It’s an identity. When we don’t have a sense of belonging, we wear a T-shirt stamped with the company logo to sleep in or while painting the house. When we have a sense of belonging, however, we wear the company schwag in public and with pride.

When cultural standards shift from character, values or beliefs to performance, numbers and other impersonal dopamine-driven measurements, our behavior-driving chemicals fall out of balance and our will to trust and cooperate dilutes. Like adding water to a glass of milk, eventually the culture
becomes so watered down it loses all that makes it good and healthy, and by then it only looks like or vaguely tastes like milk. We lose our sense of history, of responsibility to the past and of shared tradition. We care less about belonging. In this kind of weak culture, we veer away from doing “the right thing” in favor of doing “the thing that’s right for me.”

To work for Goldman Sachs used to mean something more. It wasn’t just a description of a place of employment. For those who fit the culture, it said something about what kind of person they were. It told the outside world what they could expect from them. And it was largely positive. A person could take pride in the association. But the leaders of the company didn’t protect what took so long to build. As Goethe, the great nineteenth-century thinker, reportedly summed up, “You can easily judge the character of a man by how he treats those who can do nothing for him.”

If character describes how an individual thinks and acts, then the culture of an organization describes the character of a group of people and how they think and act as a collective. A company of strong character will have a culture that promotes treating all people well, not just the ones who pay them or earn them money in the moment. In a culture of strong character, the people inside the company will feel protected by their leaders and feel that their colleagues have their backs. In a culture of weak character, the people will feel that any protection they have comes primarily from their own ability to manage the politics, promote their own successes and watch their own backs (though some are lucky enough to have a colleague or two to help). Just as our character defines our value to our friends, so too does the culture of a company define its value to those who know it. Performance can go up and down; the strength of a culture is the only thing we can truly rely on.

It’s always fascinating to pay attention to the words people choose when describing their relationship with their jobs. Words like “love” and “pride” are feelings associated with oxytocin and serotonin, respectively. Or, in the case of Goldman Sachs, the lack thereof. “I don’t feel safe,” a current employee at Goldman Sachs told me. “I could lose my job at any moment. Goldman has no heart,” she said. That she would say the company has “no heart” is a recognition of the lack of empathy in the culture. And when empathy is lacking, aggression, fear and other destructive feelings and actions dominate.

A former Goldman employee who worked at the firm in the 2000s, well into the cultural transformation, described an atmosphere of ruthlessness, with managers pitting one team of advisers against another as they fought for a
project or client. He described an environment with no trust, no mutual respect and, above all, no accountability when things went wrong. The environment was one of win-at-all-costs, even if it meant squashing a coworker (not to mention a client). Not surprisingly, despite the status one got from working at Goldman (a status probably built from the venerable years before), the former employee and nearly all his colleagues left for other companies within two years. It was just too much for a human to put up with if they wanted to maintain their sanity and be happy, let alone successful. But the leaders allowed this culture to continue.

On March 14, 2012, the New York Times carried an op-ed by Greg Smith, then an executive director at Goldman Sachs, in which he announced his immediate resignation from the firm, where he had worked for twelve years. In it, he wrote about the firm’s “toxic” culture:

The culture was the secret sauce that made this place great and allowed us to earn our clients’ trust for 143 years. It wasn’t just about making money; this alone will not sustain a firm for so long. It had something to do with pride and belief in the organization. I am sad to say that I look around today and see virtually no trace of the culture that made me love working for this firm for many years. I no longer have the pride, or the belief. Leadership used to be about ideas, setting an example and doing the right thing. Today, if you make enough money for the firm (and are not currently an ax murderer) you will be promoted into a position of influence. . . . When the history books are written about Goldman Sachs, they may reflect that the current chief executive officer, Lloyd C. Blankfein, and the president, Gary D. Cohn, lost hold of the firm’s culture on their watch.

When we assess how we “feel” about our jobs, we are very often responding to the environments in which we work, not just the work we are doing, per se. And when a culture changes from a place where people love to work into a place where they go to work simply to take something for themselves, the finger gets pointed at the people who run the company. People will respond to the environment in which they operate, and it is the leaders who decide what kind of environment they want to build. Will they build an inner circle around those closest to them or will they extend the Circle of Safety to the
outer edges of the organization?

The vast majority of people who work at Goldman Sachs, despite what some critics would like to believe, are neither bad nor evil. However, the environment their leaders have created for them to work in makes it possible for them to do bad or evil things. As humans, our behavior is significantly influenced by the environments in which we work . . . for better and for worse.

In November 2008, terrorists armed with automatic weapons attacked various sites in Mumbai, India, killing over 160 people. The Taj Mahal Palace Hotel was one of those sites. What makes the story of the Taj extraordinary, however, is that its employees risked their lives to save the guests. There are stories of telephone operators who, after having made it out safely, ran back into the hotel to call guests to help them get out. There are other stories of kitchen staff who formed a human shield to protect guests as they tried to escape the carnage. Of the 31 people who died at the hotel that day, nearly half were staff members.

Rohit Deshpande, a Harvard business professor who researched the events at the Taj, was told by senior management at the hotel that they couldn’t explain why their people had acted so bravely. But the reason is not elusive—it was the result of the culture those leaders had cultivated. One of the finest hotels in the world, the Taj insists that its people put the interests of guests before those of the company; in fact, they are often rewarded for doing so. Unlike the culture of Goldman Sachs these days, at the Taj grades and pedigree play less of a role in how they select their people. They’ve learned that graduates from second-tier business schools, for example, often treat others better than those from top-tier business schools . . . and so they prefer to hire from the second tier. Respect and empathy are valued over talent, skill or motivation for personal advancement. Once hired, the staff’s inclinations are reinforced and encouraged, which in turn builds a strong culture in which people can be trusted to improvise rather than do things by the book. The Taj knows its people will “do the right thing,” not the thing that’s right for them. So goes the culture, so go the people.

I am always struck when the CEO of a large investment bank is shocked to learn that there was a “rogue trader” in their midst who, in pursuit of personal gain or glory, made decisions that caused damage to the rest of the company. What else should we expect from a culture that reinforces and rewards self-interested behavior? Under these conditions, a CEO is basically gambling that their people will “do the right thing.” But it’s not the people who set the course.
It’s the leadership.

Bad Cultures Breed Bad Leaders

KIM STEWART WAS just one of the many employees who suffered as a result of a toxic environment. She knew on her first day at Citigroup that there was something wrong with the culture. “I remember I came home and told my husband, ‘I have to limit the number of smart things I say.’” The problem wasn’t that she thought her boss or her colleagues were stupid, but rather that they felt threatened (a perfectly valid feeling to have in an organization with a weak Circle of Safety). There always seemed to be an air of suspicion and mistrust at the office.

Stewart recalls that when she first joined the investment banking division in 2007, she immediately set out to understand the way the company closed certain kinds of deals. She went to her boss and asked him to confirm her understanding of the process, which he did. So why was her first deal an embarrassing disaster? Stewart later found out that her boss, concerned that her success might threaten his own status, had intentionally left out a key part of the deal-making process, ensuring she would bomb. It was as if he wanted her to fail in order to make his own performance look better.

“At Citi,” Stewart says, “the feeling was ‘I don’t want anybody to know as much as I do, because then I am expendable.’” This is behavior designed for nothing but self-preservation. It is a classic symptom of a cortisol-rich, unsafe culture in which valuable information is hidden to advance or protect an individual or a small group, even though sharing it would benefit the others in the group and the organization as a whole.

Everybody feared being one-upped by a colleague, Stewart recalls. Nobody felt safe. And not because the company needed to make cutbacks; it was simply the culture. It would be another year before the company suffered the enormous financial losses that led to its rescue by the federal government, in large part due to an atmosphere of hoarding information rather than sharing it. One cannot but wonder how the financial crisis would have turned out had more of the banks had healthier, chemically balanced cultures in which the people didn’t feel threatened by each other.

Of course, cutbacks did come eventually. In November 2008, the company
had one of the largest single rounds of layoffs on record in any industry. On one day, Citi issued 52,000 pink slips, amounting to about 20 percent of its workforce. Stewart’s department was cut by more than half, down from 190 people to 95, and bonuses were slashed. Once the dust settled, you would think the leaders of the organization would have been humbled. But they weren’t. Instead, the atmosphere got worse.

Stewart recalls that in late 2011, a few years after the crisis, when the company was back in the black, her new boss at Citi, a managing director, arrived to introduce himself. He told the employees he was interested in only three things: revenue, net income and expenses. Then he added privately to Stewart, “If you think I’m going to be your mentor and give you career advice, you’re wrong.” So goes the leadership, so goes the culture.

A Culture Protected

MOST PEOPLE ARE familiar with Post-it Notes. But what most people do not know is how they came to be. Unlike so many companies that develop products by imagining them and then trying to build them, 3M owes the development of Post-it Notes, and so many of its other products, to one simple thing: its culture of sharing.

Spencer Silver, the scientist who is partially credited with the creation of the Post-it, was working in his lab at the Minnesota-based company, actually trying to develop a very strong adhesive. Unfortunately, he wasn’t successful. What he accidentally made was a very weak adhesive. Based on the job specs given to him, he had failed. But Silver didn’t throw his “failure” in the trash out of embarrassment. He didn’t keep his misstep a secret out of fear for his job or guard it closely in the hope of someday profiting from it. In fact, the unintentional invention was shared with others at the company . . . just in case someone else could figure out a way to use it.

And that’s exactly what happened. A few years later, Art Fry, another scientist at 3M, was at church choir practice, getting frustrated that he couldn’t get his bookmark to stay in place. It kept falling off the page, off the music stand and onto the floor. He remembered Silver’s weak adhesive and realized he could use it to make the perfect bookmark! And that was the birth of what would become one of the best-recognized brands in history, with four thousand varieties sold in over a hundred countries.

Innovation at 3M is not simply the result of educational pedigree or technical
expertise. Innovation is the result of a corporate culture of collaboration and sharing. In stark contrast to the mind-set of leaders at some investment banks, 3M knows that people do their best work when they work together, share their ideas and comfortably borrow each other’s work for their own projects. There’s no notion of “mine.” In another company, Silver’s botched formula might never have made its way into Fry’s hands. But not at 3M. “At 3M we’re a bunch of ideas,” Fry is known to have said. “We never throw an idea away because you never know when someone else will need it.”

The cross-pollination of ideas—combined with an emphasis on sharing across product lines—has led to an atmosphere of collaboration that makes 3M a place where employees feel valued. “Innovation from interaction” is one of the company’s favorite mottos. Employees are encouraged to present new ideas at internal Tech Forums, regular gatherings of peers from other divisions. One sure sign that all this collaborating is working is that more than 80 percent of 3M’s patents have more than one inventor.

This kind of culture has nothing to do with the kind of industry 3M is in. Even an industry that is less collaborative by the nature of its product or service can benefit from sharing. Huge improvements can happen just by getting a fresh set of eyes on the work. Hearing one person’s solution to a problem can inform someone else how to solve a problem of their own. Isn’t this the idea of learning—to pass on our knowledge to others?

Take a look at the products 3M develops and you will be amazed at how its innovation leaps from one division to another. Scientists in a 3M lab developing products for the automotive industry set out to create a substance that would help auto body shops mix the filler they used to fix dents. The technology they used came from a 3M lab for dental products, from a substance dentists use to mix the putty for dental impressions. In another example, a 3M technology used to brighten highway signs was later used to invent “microneedle patches,” which allow injections to be delivered painlessly.

The cross-pollination of ideas produces innovation to a degree that would make most people’s heads spin. The company holds over twenty thousand patents, with over five hundred awarded in 2012 alone. In 2009, in the middle of a very tough economy, when other companies were slashing their R&D budgets to save money, 3M still managed to release over a thousand new products. 3M’s products are ubiquitous, though typically unnoticed—and almost always taken for granted. If everyday products had a “3M inside” sticker on them, the way computers have an “Intel Inside”
sticker, the average consumer would see that sticker sixty to seventy times a day.

3M has succeeded not because they hire the best and the brightest (though I am sure they would argue that they do), but because they have a corporate culture that encourages and rewards people for helping each other and sharing everything they learn. Though 3M surely has its share of problems and bureaucracy, they work very hard to foster collaboration. Inside a Circle of Safety, when people trust and share their successes and failures, what they know and what they don’t know, the result is innovation. It’s just natural.

CHAPTER 18

Leadership Lesson 2: So Goes the Leader, so Goes the Culture

I Before You. Me Before We.

HE WANTED TO be in charge. He wanted to be the leader. And no one was going to stand in his way . . . not even the current leader. This is how Saddam Hussein came to power in Iraq. Even before he took power, he formed strategic alliances that would bolster his position and help ensure his own rise. And once in power, he showered his allies with wealth and position to keep them “loyal.” He claimed to be on the side of the people. But he wasn’t. He was in it for himself, for the glory, fame, power and fortune. And all his promises to serve were part of his strategy to take.

The problem with such transitions is that they create a culture of mistrust and paranoia. Though things may be functional while the dictator is in power, once he is ousted, the whole country is left on shaky ground for years to come. These stories are not exclusive to the rise of dictators in unstable nations or the plots of HBO series. All too often, similar scenarios play out in modern corporations. Stanley O’Neal’s ascent at Merrill Lynch in 2001 is just one example.

Born during the heart of the Baby Boom in the small town of Wedowee, in eastern Alabama, O’Neal, the grandson of a former slave, went to Harvard Business School on a scholarship from General Motors. He later took a job at GM and quickly rose through the ranks of the firm’s treasury department. But he had his sights set on other things, bigger things. And so, despite having no real interest or experience in the brokerage business, he moved on to Wall Street. One of only a handful of African Americans to make it to the top rungs of the banking industry, O’Neal had the opportunity to become one of the great leaders of our day, a symbol of what’s possible in America. But he chose a different path.

In 1986, he joined Merrill Lynch, and within a few years had become head of