Can you "futureproof" your life?
Examining the life advice distilled in 9 rules from the book Futureproof
Quick summary:
A recent book looks at the impact of AI and automation on employment trends
The author Kevin Roose contends our conventional wisdom on AI is misguided
Automation takes many forms, and AI headlines often distract from the real story
Seemingly “boring” technologies like RPA are having profound impacts on work
While no one can predict the future, some trends are emerging to help guide us
Life advice on how to become “futureproof”
The last two editions of Forestview have examined the process of guiding prospective college students and the frustration and stress that automation and AI can bring to high-skilled professions such as medicine. Both topics center on the idea that jobs are evolving quickly as the pace of technological change accelerates. In his recent book Futureproof, New York Times tech journalist Kevin Roose examines what the rapid development and deployment of AI-based automation means for the future of work. Roose draws on his experiences, his knowledge of the technology landscape, and candid conversations with top tech executives to develop “9 Rules For Humans in the Age of Automation”. I recently read the book and will summarize those rules and share my impressions in this article.
Below are the 9 rules developed by Kevin Roose, followed by a quick summary of each:
1. Be surprising, social, and scarce
Roose opens this section by discussing the focus on improving human efficiency, hustle culture, and “life hacks” that have been in vogue throughout much of this century. This intense focus on personal productivity runs contrary to the advice he received from many AI and automation experts: no matter how hard we try, we will never become more efficient than machines at many types of tasks. (I’m reminded of the episode of The Office where Dwight Schrute tries to personally outsell the new Dunder Mifflin website on its first day.) In researching past periods of rapid technological change, Roose found that it wasn’t always the engineers and technologists who thrived; often, it was people doing low-tech, high-touch work that machines had a hard time replicating.
In keeping with the idea of maximizing human strengths relative to machines, Roose advises us to “be surprising, social, and scarce”. Roose points out that AI works best in stable environments with well-defined rules, consistent inputs, and lots of data. This does not bode well for people whose jobs are highly structured and repetitive. By contrast, people in roles that make others feel something rather than produce something, such as hair stylists or massage therapists, are hard to replace. AI thrives on large data sets, huge numbers of users, and global systems. Roose states that AI is often designed to solve a narrow range of problems and is poor when asked to perform transfer learning - applying its knowledge from one domain to another. Humans, in comparison, are great connectors. Roose notes that humans excel when we “spot a problem in one area of our life and use information we learned doing something completely different to fix it”. Maria Popova refers to this trait as combinatorial creativity, and Roose calls it a “uniquely human skill”. Another type of scarce work that Roose believes will be hard to automate involves rare or high-stakes situations with low fault tolerance, such as a 911 operator. (I would argue airline pilot is another such role, although autopilot already flies the majority of miles in the world today.)
2. Reset machine drift
One of the most impactful sections of Futureproof is Roose’s passionate call to pay more attention to the power of algorithms and recommendations in our daily lives. As Roose puts it:
Our entire information ecosystem is wrapped around the recommendation engines that power social media platforms…all of which rely on algorithms to tell us what voices matter, which stories are important, and what deserves our attention.
Roose recounts his personal experiences with lifestyle automation and his acceptance of recommendations from these systems. He notes that initially this accumulation of advice seemed harmless, but eventually he began feeling that “surrendering my daily decisions wasn’t making me happier or more productive”. Roose says that he felt himself becoming a “shallower” person and started describing this feeling as machine drift. Roose asks, “How many of my beliefs and preferences were actually mine…and how many had been put there by machines?”
Roose cites the concept of choice architecture, well understood by the product designers who use design to shape consumer preferences and experiences. Roose outlines a distinction made by the French researcher Camille Roth between “read our minds” algorithms that attempt to anticipate our needs and “change our minds” algorithms that seek to alter our preferences. Roose worries that the latter category goes beyond assisting us in our daily lives and instead takes away some of our autonomy and free will - without our awareness. He states:
The injection of algorithmic recommendations into every facet of our modern life has gone mostly unnoticed, and yet, if we consider how many of our daily decisions we outsource to machines, it’s hard not to think that a historic, species-level transformation is taking place.
To combat machine drift, Roose encourages us to periodically take time to formally document our own preferences as an inventory list and keep it handy as a reference guide. He also describes adopting a “human hour”, in which he spends an hour each day away from all screens. Roose also intentionally added more friction to some of his routines to make himself more aware of the decisions he makes.
3. Demote your devices
Building on the idea of having a “human hour”, Roose talks about his personal addiction to his smartphone, starting with a BlackBerry in the mid-2000s. He describes himself as “less of a user of these devices than a servant to them” and says that at some point his phone “got a promotion and became my demanding, hard-driving nightmare of a boss”. Roose goes on to argue that smartphones, tablets, laptops, wearables, and other connected devices have “fundamentally changed what it means to use a device”. Instead of the metaphor of a computer as a “bicycle for the mind”, to use a Steve Jobs expression, Roose states that these days our devices are “more like runaway trains”. He asks the rhetorical question: who is really in charge?
To succeed as humans in a work environment alongside machines, Roose contends that we must recapture the ability to focus and direct our attention without the constant pull of technological distractions. He goes on to state that humans also need to understand the ways that devices harm our relationships with other people. Roose discusses the ways that AI makes our devices addictive:
Smartphone and social apps have real benefits, but they are also fundamentally extractive tools that exploit our cognitive weaknesses to get us to click on more posts, scroll through more videos, and view more targeted ads. They do this with the help of AI, which allows them to more accurately predict our preferences, steer our attention and activate our brain’s pleasure center with flashy and exciting rewards.
Not only does resetting your personal relationship with technology give you a more balanced perspective, it also helps your children, who have been surrounded by devices their entire lives. It’s not just what children are doing online that should concern parents; it’s also what happens offline, measured by their inability to concentrate on anything for long periods of time: watching a movie, reading a book, writing an essay. If the ability to form and maintain lasting social connections marks our humanity, we cannot let our devices take away this strength.
4. Leave handprints
Similar to his critique of the focus on human productivity and “life hacks”, Roose takes a negative view of “hustle culture” and goes so far as to argue that hustling is counterproductive because “no matter how hard you work, you simply cannot outwork an algorithm”. As an alternative, Roose pushes us to focus instead on making a “distinctly human mark” on our work, much like an artist or creator. Items such as handmade furniture, bespoke clothing, or custom art are valued in part because of the amount of human effort involved. According to Roose, the handprints principle says that “the more obvious the human effort behind something, the higher its perceived value”. Sometimes this can simply be a matter of making the invisible visible: for example, when hotel maids leave a mint on the pillow or a handwritten note on the nightstand. Roose advocates that people look for opportunities to make small but high-impact gestures, where “a little humanity can go a long way”. This could be baking cookies for your project team as a token of appreciation or giving everyone a personalized thank-you card alongside gift cards. Companies can also embrace the handprints strategy by showcasing the dedication and hard work of their employees.
5. Don’t be an endpoint
In the last edition of Forestview, I focused on the stress that health care professionals faced when adopting electronic medical records. This ties into one of Roose’s rules that, in the words of former Google designer Chris Messina, “humans are quickly becoming expensive API endpoints”. Roose explains that endpoints are special kinds of web addresses that allow systems to communicate with other systems through what’s known as an application programming interface, or API for short. Broadening the concept, Roose advises that humans should never seek to be in a position where they are simply bridging the gap between two or more technologies.
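To make the endpoint metaphor concrete, here is a minimal sketch (my own illustration, not from the book) of two systems exchanging data directly through API endpoints; the URLs and field names are hypothetical. When a connection like this doesn’t exist, a person often fills the gap by reading from one screen and re-keying into another - exactly the “human endpoint” role Roose warns against.

```python
# A hypothetical illustration of system-to-system integration via API endpoints.
# The URLs and field names below are invented for the example.
import requests

# System A pulls a claim record directly from System B's API endpoint...
claim = requests.get("https://claims.example.com/api/v1/claims/12345").json()

# ...and posts the relevant fields to System C, with no human re-keying data
# between screens. The person who used to do that copy-and-paste step was
# acting as the "endpoint" connecting these two systems.
requests.post(
    "https://billing.example.com/api/v1/invoices",
    json={"claim_id": claim["id"], "amount": claim["approved_amount"]},
)
```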
One clear lesson from history is that people don’t remain endpoints for long. There are simply too many incentives to finish automating these processes, and too many technologists working on taking humans out of the loop. - Kevin Roose
Roose recommends that remote workers in particular avoid being perceived as endpoints: it is easy to fall into the “out of sight, out of mind” trap, so remote workers must redouble their efforts to network with colleagues and be seen as active participants in their work, so that their humanity is readily apparent to all.
6. Treat AI like a chimp army
Throughout Futureproof, Roose shines a light on the extent to which automation technologies have seeped into workplaces in virtually every industry, even automating tasks such as writing sports summaries based on statistics or financial write-ups of the day’s market activity. Many automation and AI technologies have gone unheralded and haven’t made headlines, but adoption has been relatively rapid, to the point where Roose is concerned some firms have gone too far. Roose describes overautomation as “giving machines tasks and authority they really aren’t equipped to handle and being surprised when things go horribly wrong”. Roose compares today’s AI-based technologies to “an army of chimps” in the sense that they can follow directions if properly trained and supervised, but can be erratic and destructive if not managed well. Roose contends that “faulty and untested AI and automated systems are being entrusted with incredibly important decisions” by governments, businesses, and organizations around the globe, and that more needs to be done in the area of regulation and standards for the ethical use of AI.
7. Build big nets and small webs
The reality is this: no matter how much we attempt to maintain a competitive edge in the workplace by maximizing our humanity, we simply cannot predict with certainty which jobs will be automated and how widespread the impact will be on the economy. As a result, Roose advocates for two strategies to help ensure that the damage to people and regions is limited: 1) build big nets in the form of sweeping policy changes and social programs, and 2) create small webs that support communities by providing a sense of calm and purpose during a time of major social upheaval. He states:
As a society, we can build more big nets to help people who are knocked off-balance by technological change. And as individuals, we can choose to create and strengthen small webs so that, if change comes to our doorstep, we’ll have what we need to get by.
Roose is in favor of strong collective action by government leaders and by the private firms at the forefront of adopting automation and AI. Universal basic income (UBI) and government-funded healthcare for all are two policies that Roose advocates. However, Roose doesn’t place much faith in large institutions coming to the rescue, which is why he also advocates for small webs of support at the local level through schools, churches, and other groups.
8. Learn machine-age humanities
Roose is a big proponent of a traditional liberal education in the humanities, but he goes beyond the standard offerings such as literature and philosophy at today’s colleges and universities to advocate for what he terms the machine-age humanities. Here are the new skills Roose contends we need to master:
attention guarding (the ability to achieve and maintain focus)
room reading (the emotional intelligence to work with diverse groups of people)
resting (reframing our value of naps and sleep, valuing rest as a critical skill)
digital discernment (learning to navigate a hazy, muddled information ecosystem)
analog ethics (building up strong social skills, cultivating kindness and fairness)
consequentialism (discipline of anticipating unintended consequences of new tech)
9. Arm the rebels
Roose concludes by outlining two potential paths for reacting to the harm caused by rapid technological change. The first path is to resist it - the famous tale of the Luddites comes to mind - and the second path is to shape it for good. Roose falls squarely into the latter camp. He argues that the history of resisting the tide of technological change is replete with examples of limited short-term success, but none of lasting success. Roose instead argues that:
It’s on us - the people who love technology but worry about its use - to explore this adjacent possible and push for the best version of it. It’s also important not to get too discouraged, and to remember, despite all of our worries, that AI and automation could be unbelievably good for humankind, if we do it right.
In order to shape the future of automation and AI, Roose argues that we need to “arm the rebels”. Roose showcases several technologists who have concerns about how automation and AI are currently being deployed and who advocate for proper supervision and ethical use. He states that those of us who are not technologists have a responsibility to educate ourselves and to support the work and the organizations that are pushing to make “AI and automation a liberating force rather than just a vehicle for wealth creation”. Roose goes on to share that he believes “it’s important to support the people fighting for ethics and transparency inside our most powerful tech institutions by giving them ammunition in the form of tools, data, and emotional support”.
Reactions to the 9 rules for humans
In reading through Futureproof, I found myself oscillating between being a bit dismissive of Roose’s sense of alarm and raising my own level of concern. Most of my personal experience with automation has been with VBScript macros in the early 2000s, then Automation Anywhere in combination with offshoring efforts in the early 2010s, and then robotic process automation (RPA) bots in the late 2010s. The RPA bots we implemented greatly helped back-office workers who were overwhelmed with manual tasks (they were the “endpoints” helping to connect disparate systems together) as part of broader insurance processes. I’ve certainly been personally influenced by algorithms and recommendation engines and, like many, have been all too attentive to my phone rather than to people on too many occasions.
I’ve struggled to have a “healthy” relationship with my devices over the years, but I would argue I’ve gotten wiser. I’ve learned to turn off most notifications, treat recommendations with measured skepticism, and carve out blocks of time to accomplish tasks such as writing this newsletter, or even watching sports while “single-tasking”. As for automation replacing me or impacting my children’s ability to find meaningful employment as adults, it could happen, but it is difficult to conceptualize in the abstract. As Roose and many others have highlighted, when technology destroyed jobs in the past, it also created new ones that didn’t exist previously. It’s never a one-for-one proposition, and concerns about skill mismatches and the lack of broad-scale success of retraining programs are important factors, but a future without work for all who want it is still hard to imagine while unemployment rates continue to hit new lows.
The 9 rules that Futureproof lays out are important to consider but not ironclad - think of them more as “9 things to think about” than rules. As a society, we will continue to wrestle with the key question of where to put the machines and where to put the humans, with technology’s tendency to speed up societal change, and with machines moving beyond mere tools to become an integral part of our communities. It’s important to continue having conversations on this topic within the companies we work for, with our customers, and in our communities.
What are your impressions of the 9 rules for humans that Futureproof lays out? Which ones most resonate with you and why? Are there any rules you disagree with? Are there any rules you would add to the list? On a scale of 1 to 10, how concerned are you about automation and AI? Which aspects are most concerning or frightening? Which are most exciting?