Machines and the illusion of knowledge
Where technology goes beyond serving as a tool and part of our "community"
Quick summary:
The “illusion of knowledge” says that we think we know more than we really do
One reason for this illusion is that humans are good at finding information
Humans are not good at distinguishing between individual and group wisdom
The Internet acts as a store of knowledge for both humans and machines
The ability of machines to “learn” makes them more complex, like a living being
Today, technology is less a “tool” and more part of a “community of knowledge”
This shift makes us less in control and has profound implications for our future
The wisdom of crowds and illusion of knowledge
How smart are you? We are all fairly knowledgeable about a wide range of subjects, a few in much more detail than others. The limits of our expertise show when tested. For example, can you explain how a zipper works? How about drawing a functional bicycle complete with chain, gears, and brakes? Most of us can provide a surface explanation, but our “illusion of knowledge” is exposed when we are pressed for more details. This is the subject of a 2017 book called The Knowledge Illusion by Steven Sloman and Philip Fernbach. The authors contend that rather than possessing detailed knowledge about many topics, people actually retain limited information but are great at using their surrounding environment to augment their memory. In short, much of what is in our brains consists of “pointers” that tell us where to go to find specific information when it is needed. For example, you may know that the Great Sphinx is located in Egypt and that it is a popular tourist destination. You likely don’t know exactly where it is or how to get there, how and when it was built, or what it is intended to symbolize. But as long as you have a vague mental picture of what the Great Sphinx looks like and know that it is in a country called Egypt, you can use those facts to fill in the rest of the details whenever you need to.
In addition to using our external environment and techniques such as remembering landmarks or making a to-do list, humans are unique in their ability to encapsulate knowledge and communicate it effectively to other humans, both directly in real time and indirectly across generations. Storytelling and culture are important mechanisms, as is our ability to read the intentions of other humans. One of our unique abilities as humans is to place ourselves in the shoes of other people, consider their motivations and perspectives, and predict how they might behave in certain situations or use context clues to understand their meaning. This special ability to communicate and pass along wisdom gives us a collective intelligence that goes beyond our individual intelligence. It also allows us to specialize in a particular domain while outsourcing other knowledge to our partners, colleagues, and associates. The book mentions studies showing that couples who have been together for a minimum of three months do this intuitively: they understand each other’s strengths and areas of expertise, and when both people are presented with new information, the one less familiar with the topic retains less of it because they subconsciously assume the other partner is more likely to remember and contextualize it. Why recall a bunch of detailed information when you can just ask someone else whenever you actually need it?
Technology is becoming more than simply a tool
The oft-cited phrase “none of us is smarter than all of us” certainly rings true. Leveraging our collective knowledge is second nature to us as humans. In fact, studies show that when groups think together, it is difficult for participants to later recall which ideas originated with which people. This inability to separate out who thought of what makes it difficult to evaluate individual performance and gives all of us the illusion of knowledge: we overestimate our own contribution. Studies have shown this extends to technology as well: people who perform Google searches on particular topics later believe they knew much of the information all along and underestimate how much they actually gained from the search. In both areas, whether working with other people or using the Internet, we suffer from the knowledge illusion.
People are quite comfortable leveraging the collective wisdom of others, shaping their individual expertise by choosing which subjects to know intimately and which to delegate to outside resources. As the adoption of “smart technology” increases, we face an unprecedented era where technology goes beyond serving merely as a tool to being an integral part of our community of knowledge. One consequence is that humans are used to being “in control” of technology - using it as a tool - but increasingly technology is improving itself without our help. For example, a software update to your Tesla or iPhone provides it with new capabilities, but we as users did not make any modifications ourselves. The technology has “learned”: it was connected to a network, an update was pushed, and now it has new features and capabilities. Some of these new features we will learn about and actively use, but others, such as a security patch, may operate in the background without our awareness.
As complexity increases, so do unforeseen events
Sloman and Fernbach argue that one consequence of the rapid, Internet-fueled change in technology today is that humans are starting to treat machines more like people. Put another way, we now often treat technology as a full-fledged member of our community of knowledge, similar to the value we place on the expertise of others. This has profound implications in a number of ways. One is that as we continue to grow our collective expanse of knowledge, it will no longer be encapsulated only in humans and tools passed down through generations, but also in our technology. We will not be able to advance knowledge further without machines, because they are an integral part of our collective wisdom. We must “think with machines” going forward, and it is the nexus where technology and humans interact that will be a key part of advancing know-how in the future.
Another aspect of humans treating machines as human-like is that we can no longer fully trust technology the way we trust simple tools. For example, machines can now act as humans and scam us, causing harm. As technology moves from helpful tools doing our bidding to more immersive experiences whose behaviors we cannot fully predict, new layers of complexity and new dynamics are added to human-machine interactions. Furthermore, as we have experienced with technologies such as self-driving vehicles, machines are increasingly in control, and while a human may be “in the loop” to act if something unexpected arises, we are increasingly unfamiliar with how the machines work and what to do if something malfunctions. Sloman and Fernbach describe the automation paradox: as machines get more sophisticated and our dependence on them increases, the contribution of the human operator is undermined, leading to greater potential for danger if and when the unexpected occurs.
While both humans and machines contribute to the collective knowledge, there is one major distinction between them. Machines cannot read our intentions the way other humans can. Machines can be programmed to achieve certain ends, but they lack the flexibility to handle unforeseen situations the way humans can. Sloman and Fernbach state:
We are at an awkward moment in the history of technology. Almost everything we do is enabled by intelligent machines. Machines are intelligent enough that we rely on them as a central part of our community of knowledge. Yet no machine has that singular ability so central to human activity: No machine can share intentionality.
Since machines do not share intentionality - the ability of humans to read each other’s actions, infer a state of mind, formulate a read on their goals and objectives, and possibly align efforts to work together towards communal outcomes - they will never be full members of the community of knowledge. The authors state further that:
[T]echnology doesn’t understand what the system is trying to accomplish - because it doesn’t share humans’ intentionality - there’s always a danger that something will go wrong. And when the human part of the system isn’t ready for technology to fail, disaster can ensue.
Sloman and Fernbach provide a warning for all of us to consider in our operations as we continue to adopt more complex and sophisticated technology. Their fear is that no one - individually or collectively - has all of the knowledge necessary to fully understand and control modern technology, because it learns outside of standard programming, where humans provide commands and the machine executes them. Rolling back the clock to earlier times, when technology was simpler and more analogous to a standard tool used by our ancestors, is not realistic. Instead, it is crucial to appreciate the new realities that come with today’s complex technologies and rapid advancements, adjust as best we can, and anticipate unexpected outcomes.
Who is responsible in your organization for thinking about human-machine interactions? How many times do you tell people you cannot do what they are asking because “the system won’t let me”? Is it clearly documented in your processes where machines make decisions and where humans do? Do you have formally documented and codified business rules that your technology applies but that humans can quickly change? How agile are you at fixing technology when it is not operating as expected? Have you been impacted by any scams that use technology? As more software and systems are outsourced, how much knowledge of them should your IT experts retain? Do you lack knowledge or expertise in key technologies that enable your organization?