The Annoying Power of Technology: Why We Still Keep Making the Same Mistakes

Category Artificial Intelligence

tldr #

We keep repeating our mistakes because we forget or ignore the embarrassing outcomes of past decisions, and technology may not always help us learn from them. A good advisor, human or artificial, is difficult to find, but design decisions like idiot-proofing can limit the mistakes we are able to make.


content #

It is a cliché that not knowing history makes one repeat it. As many people have also pointed out, the only thing we learn from history is that we rarely learn anything from history. People engage in land wars in Asia over and over. They repeat the same dating mistakes, again and again. But why does this happen? And will technology put an end to it?

One issue is forgetfulness and "myopia": we do not see how past events are relevant to current ones, overlooking the unfolding pattern. Napoleon ought to have noticed the similarities between his march on Moscow and the Swedish king Charles XII’s failed attempt to do likewise roughly a century before him.

Technology may not always be able to help us learn from mistakes.

We are also bad at learning when things go wrong. Instead of determining why a decision was wrong and how to avoid it ever happening again, we often try to ignore the embarrassing turn of events. That means that the next time a similar situation comes around, we do not see the similarity—and repeat the mistake.

Both reveal problems with information. In the first case, we fail to remember personal or historical information. In the second, we fail to encode information when it is available.

We tend to forget facts and information that is relevant to current happenings.


But surely technology can help us? We can now store information outside of our brains and use computers to retrieve it. That ought to make learning and remembering easy, right?

Storing information is useful when it can be retrieved well. But remembering is not the same thing as retrieving a file from a known location or date. Remembering involves spotting similarities and bringing things to mind.
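The difference between retrieval and remembering can be sketched in code. A minimal, illustrative example (the memories, cue phrases, and word-overlap similarity measure are all invented for this sketch, not taken from any real system): exact lookup fails unless the cue matches a stored key precisely, while associative recall finds stored items that merely resemble the cue.

```python
# Illustrative sketch: exact retrieval vs. associative remembering.
# The stored "memories" and the crude word-overlap similarity are
# hypothetical, chosen only to make the contrast visible.

memories = {
    "dinner date at the italian place": "went badly",
    "hiking trip in the rain": "went badly",
}

def exact_recall(cue):
    """File-retrieval style: only an exact key match succeeds."""
    return memories.get(cue)

def similarity(a, b):
    """Crude word-overlap (Jaccard) similarity between phrases."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def associative_recall(cue, threshold=0.2):
    """Memory style: return stored items that resemble the cue."""
    return [(k, v) for k, v in memories.items()
            if similarity(cue, k) >= threshold]

# An exact lookup with a slightly different phrasing finds nothing...
print(exact_recall("dinner date downtown"))
# ...but similarity-based recall still surfaces the relevant memory.
print(associative_recall("dinner date downtown"))
```

The point of the sketch is that remembering requires a notion of similarity, and any such notion can be tuned too loose or too strict, which is exactly the problem the next paragraphs raise.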

Learning from mistakes can help us repeat them less often.

An artificial intelligence also needs to be able to spontaneously bring similarities to our mind—often unwelcome similarities. But if it is good at noticing possible similarities (after all, it could search all of the internet and all our personal data), it will also often notice false ones.

For failed dates, it may note that they all involved dinner. But it was never the dining that was the problem. And it was a sheer coincidence that there were tulips on the table—no reason to avoid them.

We often choose to ignore the embarrassing outcomes of events rather than discover why the decisions were wrong.

That means it will warn us about things we do not care about, possibly in an annoying way. Tuning its sensitivity down means increasing the risk of not getting a warning when it is needed.

This is a fundamental problem, and it applies just as much to any advisor: the cautious advisor will cry wolf too often; the optimistic advisor will miss risks.
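This trade-off can be made concrete with a toy simulation. Everything here is invented for illustration (the similarity scores, which situations count as truly risky, and the thresholds): a low warning threshold makes the advisor cautious and flood us with false alarms, while a high threshold makes it optimistic and lets real risks slip through.

```python
# Hypothetical sketch of the advisor's sensitivity trade-off.
# Scores and "truly risky" labels are made-up illustrative data.

def flagged(similarities, threshold):
    """Indices of situations the advisor would warn about."""
    return {i for i, s in enumerate(similarities) if s >= threshold}

scores = [0.9, 0.2, 0.6, 0.4, 0.8, 0.3]  # similarity to past mistakes
truly_risky = {0, 4}                      # ground truth, for the toy

for threshold in (0.3, 0.7, 0.85):
    warned = flagged(scores, threshold)
    false_alarms = warned - truly_risky   # cried wolf
    missed = truly_risky - warned         # risk slipped through
    print(f"threshold={threshold}: "
          f"{len(false_alarms)} false alarms, {len(missed)} missed")
```

Running the loop shows the two failure modes from the text: the lowest threshold warns about harmless situations, and the highest one stays silent about a genuine risk; no single setting escapes the trade-off in general.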

A good advisor is somebody we trust. They have about the same level of caution as we do, and we know that they know what we want. Such an advisor is difficult to find among humans, and even more so among AIs.

It is difficult to find a trustworthy advisor, human or artificial.

Where does technology stop mistakes? Idiot-proofing works. Cutting machines require you to hold down buttons, keeping your hands away from the blades. A "dead man’s switch" stops a machine if the operator becomes incapacitated.

Microwave ovens turn off the radiation when the door is opened. To launch missiles, two people need to turn keys simultaneously across a room. Here, careful design renders mistakes hard to make. But we don’t care enough about less important mistakes—such as how to order dinner with a date—to make similar efforts to prevent them.
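The two safety patterns mentioned above can be sketched in code. This is a toy model, not a real control system: the class names, timeouts, and the two-second key window are all assumptions made for illustration.

```python
import time

# Hypothetical sketches of two idiot-proofing patterns from the text:
# a dead man's switch and a two-key launch interlock. All timing
# values and names are illustrative.

class DeadMansSwitch:
    """The machine runs only while the operator keeps signalling."""
    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.last_signal = time.monotonic()

    def heartbeat(self):
        """Operator signal: e.g. a button that must stay pressed."""
        self.last_signal = time.monotonic()

    def running(self):
        """Stops automatically if the operator goes silent."""
        return time.monotonic() - self.last_signal < self.timeout

class TwoKeyInterlock:
    """Both operators must turn their keys within a short window."""
    def __init__(self, window=2.0):
        self.window = window
        self.turned = {}  # operator -> time their key was turned

    def turn_key(self, operator):
        self.turned[operator] = time.monotonic()

    def armed(self):
        if len(self.turned) < 2:
            return False
        times = sorted(self.turned.values())
        return times[-1] - times[0] <= self.window
```

In both patterns the design makes the dangerous state require active, deliberate input, so a lapse or a single person's mistake defaults to the safe state.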

Idiot-proofing objects and features can limit our potential to make mistakes.
