Highlights from Stolen Focus by Johann Hari
Last read on November 3, 2022

Cover of Stolen Focus

Highlights from this book

  • We cannot put off living until we are ready…. Life is fired at us point-blank.

  • One of the most effective tools we have is called “pre-commitment”. It’s right there in one of the oldest surviving human stories, Homer’s Odyssey. Homer tells of how there was once a patch of sea that sailors would always die in, for a strange reason: living in the ocean, there were two sirens, a uniquely hot blend of woman and fish, who would sing to the sailors to join them in the ocean. Then, when they clambered in for some sexy fish-based action, they’d drown. But then, one day, the hero of the story—Ulysses—figured out how to beat these temptresses. Before the ship approached the sirens’ stretch of sea, he got his crew members to tie him to the mast, hard, hand and foot. He couldn’t move. When he heard the sirens, no matter how much Ulysses yearned to dive in, he couldn’t.

  • I chose Provincetown because I found it charming but not complex. If I had chosen (say) Bali, I know that I would have soon started trying to figure out how Balinese society worked, and begun interviewing people, and soon I would be back to my manic information-sucking. I wanted a pretty purgatory where I could decompress, and nothing more.

  • Twitter makes you feel that the whole world is obsessed with you and your little ego — it loves you, it hates you, it’s talking about you right now. The ocean makes you feel like the world is greeting you with a soft, wet, welcoming indifference. It’s never going to argue back, no matter how loud you yell.

  • I was struck again by a big difference — between standing in a group of strangers singing with them, and interacting with groups of strangers through screens. The first dissolves your sense of ego; the second jabs and pokes at it.

  • Sune and his team decided to analyze books that were written between the 1880s and the present day using a mathematical technique—the scientific term for it is “detecting n-grams”—that can spot the rise and fall of new phrases and topics in the text. It’s the equivalent of finding hashtags from the past. The computers could detect new phrases as they appear—think of, say, “the Harlem Renaissance,” or “no-deal Brexit”—and they could see how long they were discussed for, and how quickly they faded from discussion. It was a way of finding out how long the people who came before us talked about a fresh topic. How many weeks and months did it take for them to get bored and move on to the next thing? When they looked at the data, they found that the graph looked remarkably similar to Twitter’s. With each decade that passed, for more than 130 years, topics have come and gone faster and faster. When he saw the results, Sune told me, he thought: “Goddammit, it really is true…. Something is changing. It’s not just the same-old, same-old.” This was the first proof gathered anywhere in the world that our collective attention spans have been shrinking.
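
    The counting approach described here is simple to picture in code. What follows is a minimal illustrative sketch in TypeScript, not Sune’s actual method: it tallies how often a chosen phrase (an n-gram) appears in texts grouped by year, so you can watch a topic surge and fade. The corpus layout, the threshold, and the function names (ngrams, hotYears) are assumptions made up for illustration.

    ```typescript
    // Minimal sketch (assumption-laden, not the study's real pipeline):
    // count how often a phrase appears per year to see how fast a topic
    // rises and falls in a corpus of texts.

    // Break a text into n-word phrases ("n-grams").
    function ngrams(text: string, n: number): string[] {
      const words = text.toLowerCase().split(/\s+/).filter(Boolean);
      const grams: string[] = [];
      for (let i = 0; i + n <= words.length; i++) {
        grams.push(words.slice(i, i + n).join(" "));
      }
      return grams;
    }

    // Years in which `phrase` is mentioned at least `threshold` times.
    function hotYears(
      corpusByYear: Map<number, string[]>,
      phrase: string,
      threshold = 5
    ): number[] {
      const target = phrase.toLowerCase();
      const n = target.split(/\s+/).length;
      const years: number[] = [];
      for (const [year, texts] of corpusByYear) {
        let count = 0;
        for (const text of texts) {
          for (const gram of ngrams(text, n)) {
            if (gram === target) count++;
          }
        }
        if (count >= threshold) years.push(year);
      }
      return years.sort((a, b) => a - b);
    }
    ```

    The shorter the run of “hot” years for fresh phrases, the faster the collective conversation is moving on, which is the accelerating pattern the study reports decade by decade.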

  • When people think they’re doing several things at once, they’re actually—as Earl explained—“juggling. They’re switching back and forth. They don’t notice the switching because their brain sort of papers it over, to give a seamless experience of consciousness, but what they’re actually doing is switching and reconfiguring their brain moment to moment, task to task—[and] that comes with a cost.”

  • I felt like everywhere I went, I was surrounded by people who were broadcasting but not receiving. Narcissism, it occurred to me, is a corruption of attention — it’s where your attention becomes turned in only on yourself and your own ego.

  • He decided to become a psychologist, but it turned out there were no psychology degrees in Europe. He learned, though, that the subject existed in a distant country he had only seen in the movies: the United States.

    Finally, after years of saving, he made it there—only to get a nasty shock when he arrived. American psychology was dominated by one big idea, epitomized by a famous scientist. A Harvard professor named B.F. Skinner had become an intellectual celebrity by discovering something strange. You can take an animal that seems to be freely making up its own mind about what to pay attention to—like a pigeon, or a rat, or a pig—and you can get it to pay attention to whatever you choose for it. You can control its focus, as surely as if it was a robot and you had created it to obey your whims. Here’s an example of how Skinner did it that you can try for yourself. Take a pigeon. Put it in a cage. Keep it until it is hungry. Then introduce a bird feeder that releases seed into the cage when you push a button. Pigeons move around a lot—so wait until the pigeon makes a random movement that you have chosen in advance (like, say, jerking its head up high, or sticking out its left wing), and at that precise moment, release some pellets. Then wait for it to make the same random movement again, and give it more pellets.

    If you do this a few times, the pigeon will quickly learn that if it wants pellets, it should carry out the random gesture you have chosen—and it will start to do it a lot. If you manipulate it correctly, its focus will come to be dominated by the twitch that you chose to reward. It will come to jerk up its head or stick out its left wing obsessively. When Skinner discovered this, he wanted to figure out how far you could take this. How elaborately can you program an animal using these reinforcements? He discovered you can take it really far. You can teach a pigeon to play ping-pong. You can teach a rabbit to pick up coins and put them into piggy banks. You can teach a pig to vacuum. Many animals will focus on very complex—and, to them, meaningless—things, if you reward them right.

    Skinner became convinced that this principle explained human behavior almost in its entirety. You believe that you are free, and that you make choices, and you have a complex human mind that is selecting what to pay attention to—but it’s all a myth. You and your sense of focus are simply the sum total of all the reinforcements you have experienced in your life. Human beings, he believed, have no minds—not in the sense that you are a person with free will making your own choices. You can be reprogrammed in any way that a clever designer wants. Years later, the designers of Instagram asked: If we reinforce our users for taking selfies—if we give them hearts and likes—will they start to do it obsessively, just like the pigeon will obsessively hold out its left wing to get extra seed? They took Skinner’s core techniques, and applied them to a billion people.

    Mihaly learned that these ideas ruled American psychology, and they were hugely influential in American society too. Skinner was a star, featured on the front page of Time magazine. He was so famous that by 1981, 82 percent of the American college-educated public could identify who he was.

    To Mihaly, this seemed like a bleak and limited view of human psychology. It clearly yielded some results, but he believed it was missing most of what it means to be human. He decided he wanted to explore the aspects of human psychology that were positive, and nourishing, and generated something more than hollow mechanical responses. But there weren’t many people in American psychology that thought like this. To begin, he decided to study something that seemed to him to be one of the great achievements of human beings—the making of art. He had seen destruction; now it was time to study creation. So, in Chicago, he persuaded a group of painters to let him witness their process over many months, so he could try to figure out the underlying psychological processes that were driving the unusual kind of focus they had chosen to dedicate their lives to. He watched one artist after another focusing on a single image and attending to it with great care.

    Mihaly was struck by one thing above all else—for the artists, when they were in the process of creation, time seemed to fall away. They almost appeared to be in a hypnotic trance. It was a deep form of attention that you rarely see elsewhere.

    Then he noticed something puzzling. After investing all this time in creating their paintings, when they were finished, the artists didn’t triumphantly gaze at what they had made and show it off and seek out praise for it. Almost all of them simply put the painting away and started working on another one. If Skinner was right—that human beings do things just to gain rewards and avoid punishments—this made no sense. You’d done the work; now here’s the reward, right in front of you, for you to enjoy. But creative people seemed mostly uninterested in rewards; even money didn’t interest most of them. “When they finished,” Mihaly said to an interviewer later, “the object, the outcome was not important.”

  • This disagreement seemed to me to lay the groundwork for one of the defining conflicts in the world today. We now live in a world dominated by technologies based on B.F. Skinner’s vision of how the human mind works. His insight—that you can train living creatures to desperately crave arbitrary rewards—has come to dominate our environment. Many of us are like those birds in cages being made to perform a bizarre dance to get rewards, and all the while we imagine we are choosing it for ourselves—the men I saw in Provincetown obsessively posting selfies to Instagram started to look to me like Skinner’s pigeons with a six-pack and a piña colada. In a culture where our focus is stolen by these surface-level stimuli, Mihaly’s deeper insight has been forgotten: that we have within us a force that makes it possible to focus for long stretches and enjoy it, and it will make us happier and healthier, if only we create the right circumstances to let it flow.

    Once I knew this, I understood why, when I felt constantly distracted, I didn’t just feel irritated — I felt diminished. We know, at some level, that when we are not focusing, we are not using one of our greatest capacities. Starved of flow, we become stumps of ourselves, sensing somewhere what we might have been.

  • As you read fiction, you see inside other people’s experiences. That doesn’t vanish when you put down the novel. When you later meet a person in the real world, you’ll be better able to imagine what it’s like to be them. Reading a factual account may make you more knowledgeable, but it doesn’t have this empathy-expanding effect.

  • James Williams told me I had made a fundamental mistake in Provincetown. He was a senior Google strategist for many years, and he left, horrified, to go to Oxford University, to study human attention, and figure out what his colleagues in Silicon Valley have done to it. He told me a digital detox is “not the solution, for the same reason that wearing a gas mask for two days a week outside isn’t the answer to pollution. It might, for a short period of time, keep, at an individual level, certain effects at bay. But it’s not sustainable, and it doesn’t address the systemic issues.” He said our attention is being deeply altered by huge invasive forces in the wider society. Saying the solution is primarily to personally abstain is just “pushing it back onto the individual,” he said, when “it’s really the environmental changes that will really make the difference.”

  • With each month that passed, Tristan became more startled by the casualness with which the attention of a billion people was being corroded at Google and the other Big Tech companies. One day he would hear an engineer excitedly saying: “Why don’t we make it buzz your phone every time we get an email?” Everyone would be thrilled—and a few weeks later, all over the world, phones began to buzz in pockets, and more people found themselves looking at Gmail more times a day. The engineers were always looking for new ways to suck eyeballs onto their program and keep them there. Day after day, he would watch as engineers proposed more interruptions to people’s lives—more vibrations, more alerts, more tricks—and they would be congratulated.

  • But whenever he came up with a specific proposal for how Google’s own products could be less interrupting and presented it to people above him, he was told, in effect: “This is hard, it’s confusing, and it’s often at odds with our bottom line.” Tristan realized he was bumping up against a core contradiction. The more people stared at their phones, the more money these companies made. Period. The people in Silicon Valley did not want to design gadgets and websites that would dissolve people’s attention spans. They’re not the Joker, trying to sow chaos and make us dumb. They spend a lot of their own time meditating and doing yoga. They often ban their own kids from using the sites and gadgets they design, and send them instead to tech-free Montessori schools. But their business model can only succeed if they take steps to dominate the attention spans of the wider society. It’s not their goal, any more than ExxonMobil deliberately wants to melt the Arctic. But it’s an inescapable effect of their current business model.

  • You probably haven’t heard of Aza Raskin, but he has directly intervened in your life. He will, in fact, probably affect how you spend your time today. Aza grew up in the most elite sliver of Silicon Valley, at the height of its confidence that it was making the world better. His dad was Jef Raskin, the man who invented the Apple Macintosh for Steve Jobs, and he built it around one core principle: that the user’s attention is sacred. The job of technology, Jef believed, was to lift people up and make it possible for them to achieve their higher goals. He taught his son: “What is technology for? Why do we even make technology? We make technology because it takes the parts of us that are most human and it extends them. That’s what a paintbrush is. That’s what a cello is. That’s what language is. These are technologies that extend some part of us. Technology is not about making us superhuman. It’s about making us extra-human.”

    Aza became a precocious young coder, and he gave his first talk about user interfaces when he was ten years old. By the time he was in his early twenties, he was at the forefront of designing some of the first internet browsers, and he was the creative lead on Firefox. As part of this work, he designed something that distinctly changed how the web works. It’s called “infinite scroll.” Older readers will remember that it used to be that the internet was divided into pages, and when you got to the bottom of one page, you had to decide to click a button to get to the next page. It was an active choice. It gave you a moment to pause and ask: Do I want to carry on looking at this? Aza designed the code that means you don’t have to ask that question anymore. Imagine you open Facebook. It downloads a chunk of status updates for you to read through. You scroll down through it, flicking your finger—and when you get to the bottom, it will automatically load another chunk for you to flick through. When you get to the bottom of that, it will automatically load another chunk, and another, and another, forever. You can never exhaust it. It will scroll infinitely.
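
    To make the mechanism concrete, here is a minimal illustrative sketch in TypeScript, not Aza’s actual code: when the reader scrolls near the bottom of the feed, the page fetches the next chunk and appends it, so there is never a natural stopping point. The helper names fetchNextChunk and appendToFeed are hypothetical stand-ins for an app’s own data-loading and rendering code.

    ```typescript
    // Minimal sketch of the "infinite scroll" pattern (illustrative only).
    // fetchNextChunk and appendToFeed are hypothetical helpers standing in
    // for an app's own data-loading and rendering code.
    declare function fetchNextChunk(): Promise<string[]>;
    declare function appendToFeed(items: string[]): void;

    let loading = false;

    async function maybeLoadMore(): Promise<void> {
      // Are we within ~300px of the bottom of the page?
      const nearBottom =
        window.innerHeight + window.scrollY >= document.body.offsetHeight - 300;
      if (!nearBottom || loading) return;

      loading = true;
      const items = await fetchNextChunk(); // ask the server for the next batch
      appendToFeed(items);                  // append it below the current feed
      loading = false;
    }

    // Instead of a "next page" button, every scroll can trigger another load,
    // so the feed never presents a natural stopping point.
    window.addEventListener("scroll", () => { void maybeLoadMore(); });
    ```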

  • Aza explained it to me by saying that I should imagine that “inside of Facebook’s servers, inside of Google’s servers, there is a little voodoo doll, [and it is] a model of you. It starts by not looking much like you. It’s sort of a generic model of a human. But then they’re collecting your click trails [i.e., everything you click on], and your toenail clippings, and your hair droppings [i.e., everything you search for, every little detail of your life online]. They’re reassembling all that metadata you don’t really think is meaningful, so that doll looks more and more like you. [Then] when you show up on [for example] YouTube, they’re waking up that doll, and they’re testing out hundreds of thousands of videos against this doll, seeing what makes its arm twitch and move, so they know it’s effective, and then they serve that to you.”

  • Raull decided he couldn’t just watch his friends being killed one by one—so, as the years passed, he decided to do something bold. He set up a Facebook page named Coletivo Papo Reto, which gathered cellphone footage from across Brazil of the police killing innocent people and planting drugs or guns on them. It became huge, their videos regularly going viral. Even some people who had defended the police began to see their real behavior and oppose it. It was an inspiring story about how the internet made it possible for people who have been treated like third-class citizens to find a voice, and to mobilize and fight back.

    But at the same time as the web was having this positive effect, the social-media algorithms were having the opposite effect—they were supercharging anti-democratic forces in Brazil. A former military officer named Jair Bolsonaro had been a marginal figure for years. He was way outside the mainstream because he kept saying vile things and attacking large parts of the population in extreme ways. He praised people who had carried out torture against innocent people when Brazil was a dictatorship. He told his female colleagues in the senate that they were so ugly he wouldn’t bother raping them, and that they weren’t “worthy” of it. He said he would rather learn his son was dead than learn his son was gay. Then YouTube and Facebook became two of the main ways people in Brazil got their news. Their algorithms prioritized angry, outrageous content—and Bolsonaro’s reach dramatically surged. He became a social-media star. He ran for president openly attacking people like the residents of Alemão, saying the country’s poorer, blacker citizens “are not even good for breeding,” and should “go back to the zoo.” He promised to give the police even more power to launch intensified military attacks on the favelas — a license for wholesale slaughter.

    Here was a society with huge problems that urgently needed to be solved—but social-media algorithms were promoting far-right-wingers and wild disinformation. In the run-up to the election, in favelas like Alemão, many people were deeply worried about a story that had been circulating online. Supporters of Bolsonaro had created a video warning that his main rival, Fernando Haddad, wanted to turn all the children of Brazil into homosexuals, and that he had developed a cunning technique to do it. The video showed a baby sucking a bottle, only there was something peculiar about it—the teat of the bottle had been painted to look like a penis. This, the story that circulated said, is what Haddad will distribute to every kindergarten in Brazil. This became one of the most-shared news stories in the entire election. People in the favelas explained indignantly that they couldn’t possibly vote for somebody who wanted to get babies to suck these penis-teats, and so they would have to vote for Bolsonaro instead. On these algorithm-pumped absurdities, the fate of the whole country turned.

  • Tristan and Aza started to believe that all these effects, when you add them together, are producing a kind of “human downgrading.” Aza said: “I think we’re in the process of reverse-engineering ourselves. [We discovered a way to] open up the human skull, find the strings that control us, and start pulling on our own marionette strings. Once you do that, an accidental jerk in one direction causes your arm to jerk further, which pulls your marionette string farther…. That’s the era that we’re headed into now.” Tristan believes that what we are seeing is “the collective downgrading of humans and the upgrading of machines.” We are becoming less rational, less intelligent, less focused.

  • I understood my growing discomfort with Nir’s approach more fully when I talked it over with several other people. One was Ronald Purser, who is professor of management at San Francisco State University. He introduced me to an idea I hadn’t heard before—a concept named “cruel optimism.” This is when you take a really big problem with deep causes in our culture—like obesity, or depression, or addiction—and you offer people, in upbeat language, a simplistic individual solution. It sounds optimistic, because you are telling them that the problem can be solved, and soon—but it is, in fact, cruel, because the solution you are offering is so limited, and so blind to the deeper causes, that for most people, it will fail.

  • Using the same model, our governments could acknowledge that social media is now an essential public utility, and explain that when it is run according to the wrong incentives, it causes the psychological equivalents of cholera outbreaks. It would be a bad idea for the government to run it—it’s easy to imagine how authoritarian leaders could abuse that. Fortunately, there’s a better option: you can have public ownership, independent of the government. In Britain, the BBC is owned and funded by the British public, and it is run in the interests of the British public—but its day-to-day running is independent of the government. It’s not perfect, but this model works so well that it is the most respected media organization in the world.

  • After carefully analyzing all the options, Facebook’s scientists concluded there was one solution: they said Facebook would have to abandon its current business model. Because their growth was so tied up with toxic outcomes, the company should abandon attempts at growth. The only way out was for the company to adopt a strategy that was “anti-growth”—deliberately shrink, and choose to be a less wealthy company that wasn’t wrecking the world.

    I realized that if Facebook won’t stop promoting fascism—promoting Nazism in Germany—they will never care about protecting your focus and attention. These companies will never restrain themselves. The risks of letting them continue behaving the way they have are greater than the risks of overreacting. They have to be stopped. They have to be stopped by us.

  • Many of us have built our identities around working to the point of exhaustion. We call this success. In a culture built on ever-increasing speed, slowing down is hard, and most of us will feel guilty about doing it. That’s one reason why it’s important we all do it together—as a societal, structural change.

  • From that point on, Nicholas began to believe you could solve the problems of many animals by responding to them in ways that, until then, had only been applied to humans. For example, he was consulted by the Calgary Zoo about a polar bear that was endlessly pacing, and he recommended giving it a massive dose of Prozac. It stopped pacing and began to sit docilely in its cage. Today, thanks in part to Nicholas’s shift in perspective, there are parrots on Xanax and Valium, there are many species from chickens to walruses being given antipsychotics, and there are cats on Prozac. One of the staff at the Toledo Zoo told a reporter that psychiatric drugs are “definitely a wonderful management tool, and that’s how we look at them. To be able to just take the edge off puts us a little more at ease.” Nearly half of all zoos in the U.S. now admit to giving psychiatric drugs to their animals, and 50 to 60 percent of the owners who come to Nicholas’s clinic are seeking psychiatric meds for their pets. At times, it sounds like One Flew over the Cuckoo’s Nest for actual cuckoos.