The Unintended Ramifications of 'Move Fast and Break Things'
Category: Business | Saturday, 25 November 2023, 03:05 UTC

As we mark the first anniversary of the release of the AI chatbot ChatGPT, it's worth considering whether the big tech companies could do with moving slowly and taking care not to break anything. Social media, cryptocurrencies and the personal computer all brought unintended consequences. The US justice system's use of AI for more than a decade to assist bail decisions exposes the interplay between human and machine as the root of the problem. Moving slowly might be better for people and for the economy.
The unofficial motto of Silicon Valley has long been "move fast and break things". It rests on the assumption that to create cutting-edge technology and stay ahead of the competition, companies must accept that things will get damaged in the process. However, this approach can have implications beyond economics. It can endanger people and be unethical. As we mark the first anniversary of the release of the AI chatbot ChatGPT, it's worth considering whether the big tech companies could do with moving slowly and taking care not to break anything.
ChatGPT's impressive capabilities caused a sensation. But some commentators were quick to point to issues such as the potential it presented for students to cheat on assignments. More widely, the chatbot intensified a debate over how to control AI, a transformative technology with huge potential benefits—and risks of comparable significance.
Let's look at Silicon Valley's record on other technology too. Social media was supposed to bring us together. Instead, it has threatened democracy and produced armies of trolls. Cryptocurrencies, touted as challenging the financial status quo, have been an environmental disaster and have proved vulnerable to fraud.
The advent of the personal computer was supposed to make our working lives easier. It did, but at the price of massive job losses from which the job market took more than a decade to recover.
It's not that technologies in themselves are bad. However, the ideology within which they are developed can be a problem. And as technology permeates more and more of our daily lives, the "things" that break could potentially end up being human lives.
Change of approach
"Move fast and break things" could also prove to be economically wrong, pushing investors to chase novelty instead of value, as they did in the dot-com bubble of the early 2000s. The idea assumes that although things might go wrong, we will be able to fix them quickly, and so the harms will be limited. Yet the history of Silicon Valley shows this assumption to fail on several counts.
Identifying that there is a problem is not the same as finding its cause. Once a technology has been deployed, the environment in which it is used may be so complex that it takes years to understand what exactly is going wrong.
The US justice system, for instance, has been using AI for more than a decade to assist bail decisions: determining who should be released before trial, and on what cash bond.
AI was introduced not just to reduce flight risk (the chance of defendants going on the run), but also to tackle racial bias, the tendency of white judges to be more likely to release white defendants. However, the algorithms produced the opposite result, with fewer black defendants being released.
Engineers kept on introducing new versions of the AI algorithms, hoping to reduce bias. Nothing worked. Then, in 2019—17 years after the system was first introduced—a researcher found that the problem was not the AI itself, but the way judges were using it.
The judges were more likely to overrule decisions that did not fit their stereotypes; the problem was the interplay between human and machine.