Navigating the Regulation Landscape for Generative AI Technologies
Category: Science · Sunday, January 21, 2024, 19:29 UTC

The rise of generative AI has sparked worries about its impact on society, leading to calls for regulation. Some governments are actively addressing these concerns, while others are taking a more hands-off approach. Potential methods of regulation include limiting AI's training data, attributing output to creators for compensation, and distinguishing between human-created and AI-generated works. However, the feasibility of these approaches varies and continues to be explored.
The rise of generative artificial intelligence (AI) has raised concerns about the spread of disinformation, job displacement, and even the potential extinction of the human species. These concerns have prompted calls for regulation of AI technologies, with governments and companies struggling to find a balance between control and innovation. Some jurisdictions, such as the European Union, have responded to public pressure and are actively regulating generative AI.
Others, like the U.K. and India, are taking a more hands-off approach. In the United States, the White House has taken action in the form of an executive order titled 'Safe, Secure, and Trustworthy Artificial Intelligence,' which aims to reduce risks posed by AI technologies. One of its guidelines calls for AI vendors to share safety test results with the government, and the order also urges Congress to pass consumer privacy legislation in light of the vast amount of data being collected by AI programs.
When it comes to regulating AI, one major question is: what is feasible? There are two aspects to this question: what is technologically feasible right now, and what is economically feasible? It's important to consider not only the training data used in AI models but also the output they produce. One possible regulation tactic is to limit AI's training data to public-domain material and copyrighted content whose use has been granted permission.
This would allow AI companies to control exactly what data they use, but it may not be economically feasible, since the quality of AI-generated content depends heavily on the richness and volume of the available training data. Some companies, such as Adobe with its Firefly image generator, have marketed their exclusive use of permissioned content as a feature. Another approach is to attribute the output of AI to specific creators, or groups of creators, so they can be compensated for their work.
However, the complexity of AI algorithms makes it nearly impossible to determine which input samples were used in the creation of a given output, let alone the extent to which each sample contributed. This is a crucial issue, as it will likely determine whether creators and their license holders embrace or reject AI technology. The 2023 Hollywood screenwriters' strike, which resulted in new protections for writers against AI technologies, is a prime example of this issue.
Finally, there is the idea of distinguishing between human-created works and those generated by AI. This approach can help settle issues of ownership and accountability, as it would be clear who is responsible for a given creation. However, it focuses solely on the output of AI and may not address concerns about the use of AI in the first place.

In conclusion, finding the right balance between regulation and innovation is crucial in managing concerns around generative AI technologies.
While some approaches may be technologically and economically feasible, others may not be practical in the current landscape. It is important to continue exploring potential solutions and regulations as the use of AI continues to evolve and expand.