The Need for AI Regulation and How to Achieve It


tldr #

OpenAI CEO Sam Altman urged Congress to consider regulating AI during his Senate testimony on May 16, 2023. The solutions Altman proposed of creating an AI regulatory agency and requiring licensing for companies need further exploration, as do the other ideas suggested, such as requiring transparency on training data and establishing clear frameworks for AI-related risks. Rather than creating a new agency, Congress can support the private and public adoption of the NIST risk management framework and pass bills such as the Algorithmic Accountability Act. Regulating AI should involve collaboration among academia, industry, policy experts and international agencies.


content #

OpenAI CEO Sam Altman urged lawmakers to consider regulating AI during his Senate testimony on May 16, 2023. That recommendation raises the question of what comes next for Congress. The solutions Altman proposed – creating an AI regulatory agency and requiring licensing for companies – are interesting. But what the other experts on the same panel suggested is at least as important: requiring transparency on training data and establishing clear frameworks for AI-related risks.

A Pew Research survey found that 56 percent of Americans think regulations on AI are necessary, while 37 percent believe that the technology should have little or no regulation.

As a researcher who studies social media and artificial intelligence, I believe that Altman’s suggestions have highlighted important issues but don’t provide answers in and of themselves. Regulation would be helpful, but in what form? Licensing also makes sense, but for whom? And any effort to regulate the AI industry will need to account for the companies’ economic power and political sway. Another point left unsaid was that, given the economics of building large-scale AI models, the industry may be witnessing the emergence of a new type of tech monopoly.

AI technology is becoming increasingly pervasive, with more companies introducing new applications that are transforming multiple sectors.

Lawmakers and policymakers across the world have already begun to address some of the issues raised in Altman’s testimony. The European Union’s AI Act is based on a risk model that assigns AI applications to three categories of risk: unacceptable, high risk, and low or minimal risk. This categorization recognizes that tools for social scoring by governments and automated tools for hiring pose different risks than, say, the use of AI in spam filters.

Reports indicate that AI-generated content is already being deployed in educational settings.

The U.S. National Institute of Standards and Technology likewise has an AI risk management framework that was created with extensive input from multiple stakeholders, including the U.S. Chamber of Commerce and the Federation of American Scientists, as well as other business and professional associations, technology companies and think tanks. Federal agencies such as the Equal Employment Opportunity Commission and the Federal Trade Commission have already issued guidelines on some of the risks inherent in AI.

AI models can be used in election processes, such as developing targeted campaigns, to influence public attitudes.

The Consumer Product Safety Commission and other agencies have a role to play as well. Rather than create a new agency that runs the risk of becoming compromised by the technology industry it’s meant to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills such as the Algorithmic Accountability Act. That would have the effect of imposing accountability, much as the Sarbanes-Oxley Act and other regulations transformed reporting requirements for companies.

AI technology can be used to aggregate vast amounts of personal data.

Congress can also adopt comprehensive laws around data privacy. Regulating AI should involve collaboration among academia, industry, policy experts and international agencies. Experts have likened this approach to international organizations such as the European Organization for Nuclear Research, known as CERN, and the Intergovernmental Panel on Climate Change. The internet has been managed by nongovernmental bodies involving nonprofits, civil society, industry and policymakers, such as the Internet Corporation for Assigned Names and Numbers and the International Telecommunication Union.

Research suggests that AI will play a central role in humanity's future, with numerous potential applications.

We stand at an inflection point in the history of AI and machine learning. Regulating AI to ensure fairness, safety and public trust in machine learning technologies and applications is a priority for government institutions and the private sector alike. To regulate AI effectively, legislators would do well to seek the input of experts, weigh the economic implications of regulation, and create a reasonable framework for risk assessment.

