The Power and Perils of Artificial Intelligence: A Closer Look at the Digital Services Act
Category: Machine Learning | May 3, 2023, 03:29 UTC
The European Commission is asking 19 tech giants to explain the artificial intelligence (AI) algorithms they use, under the Digital Services Act (DSA). This is an important step towards making AI more transparent and accountable. AI affects every aspect of our lives and can be highly beneficial, but it also raises problems of transparency and accountability. The DSA gives us the right to appeal automated decision-making and to opt out when we disagree.
The European Commission is forcing 19 tech giants, including Amazon, Google, TikTok and YouTube, to explain their artificial intelligence (AI) algorithms under the Digital Services Act. Asking these businesses (platforms and search engines with more than 45 million EU users) for this information is a much-needed step towards making AI more transparent and accountable, and it stands to make life better for everyone who uses these services.
AI is expected to affect every aspect of our lives: from healthcare, to education, to what we look at and listen to, and even how well we write. But AI also generates a lot of fear, often revolving around a god-like computer becoming smarter than us, or the risk that a machine assigned an innocuous goal may inadvertently destroy humanity. More pragmatically, people often wonder whether AI will make them redundant. We have been here before: machines and robots have already replaced many factory workers and bank clerks without leading to the end of work.
But AI-based productivity gains come with two novel problems: transparency and accountability. And everyone will lose if we don't think seriously about the best way to address these problems.
Of course, by now we are used to being evaluated by algorithms. Banks use software to check our credit scores before offering us a mortgage, as do insurance and mobile phone companies. Ride-sharing apps make sure we are pleasant enough before offering us a ride. These evaluations use a limited amount of information, selected by humans: your credit rating depends on your payment history, and your Uber rating depends on how previous drivers felt about you.
--- Black box ratings ---
But new AI-based technologies gather and organize data without human supervision. This makes it much harder to hold anybody accountable, or even to understand what factors were used to arrive at a machine-made rating or decision.
What if you begin to find that no one is calling you back when you apply for a job, or that you are not allowed to borrow money? This could be because of some error about you somewhere on the internet. In Europe, you have the right to be forgotten and to ask online platforms to remove inaccurate information about you. But it will be hard to find out what the incorrect information is if it comes from an unsupervised algorithm. Most likely, no human will know the exact answer.
If errors are bad, accuracy can be even worse. What would happen, for instance, if you let an algorithm look at all the data available about you and evaluate your ability to repay a loan? A high-performance algorithm could infer that, all else being equal, a woman, a member of an ethnic group that tends to be discriminated against, a resident of a poor neighborhood, or somebody who speaks with a foreign accent or isn't "good looking" is less creditworthy. Research shows that people in these groups can expect to earn less than others and are therefore less likely to repay their credit, and algorithms will "know" this too. While there are rules to stop people at banks from discriminating against potential borrowers, an algorithm acting alone could deem it accurate to charge these people more to borrow money. Such statistical discrimination could create a vicious circle: if you must pay more to borrow, you may struggle to make these higher repayments, so you can't access competitive rates in the future.
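To see how this can happen without any human intent, consider a minimal simulation. The sketch below is purely illustrative, not from the article or any real lender: every variable name, group size and effect size is invented. It trains an ordinary scikit-learn credit model on synthetic historical data in which a hypothetical protected group earns less on average, withholds the group label from the model, and shows that a correlated proxy feature (here, an invented "poor neighborhood" flag) still leads the model to score that group as less creditworthy.

    # Illustrative sketch: statistical discrimination emerging from accurate data.
    # All names and numbers are invented for the demo; requires numpy and scikit-learn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    # Hypothetical protected attribute (e.g. membership of a group that tends
    # to be discriminated against).
    group = rng.binomial(1, 0.3, n)

    # Assumption, echoing the research the article cites: the group earns less
    # on average, and income drives repayment. Neighborhood is a proxy that is
    # correlated with group membership.
    income = rng.normal(50 - 10 * group, 12, n)
    poor_neighborhood = rng.binomial(1, np.where(group == 1, 0.7, 0.2), n)

    # Repayment depends only on income, never on group membership itself.
    p_repay = 1 / (1 + np.exp(-(income - 40) / 8))
    repaid = rng.binomial(1, p_repay)

    # Train WITHOUT the protected attribute: the model sees only the proxy
    # flag and a noisy income signal.
    X = np.column_stack([poor_neighborhood, income + rng.normal(0, 8, n)])
    model = LogisticRegression().fit(X, repaid)

    # Predicted repayment probability still differs by group, because the
    # proxy feature "knows" about group membership.
    scores = model.predict_proba(X)[:, 1]
    print(f"mean predicted repayment, group 0: {scores[group == 0].mean():.3f}")
    print(f"mean predicted repayment, group 1: {scores[group == 1].mean():.3f}")
    # A lender pricing loans off these scores would charge group 1 more on
    # average, even though no human ever told the model about the group.

The point of the sketch is that the model never sees the protected attribute: the discrimination emerges from statistically accurate correlations in the training data, which is exactly why transparency obligations like the DSA's matter.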