Balancing Innovation and Responsibility in AI: The Case for Open Responsible Licensing
Category: Business · Tuesday, February 13, 2024, 05:40 UTC

The EU's AI Act categorizes AI systems by risk level, but Open Responsible AI Licenses (OpenRAILs) could provide a complementary solution. The model functions much like open-source software licensing, but with added conditions for responsible use. Proprietary licenses may hinder the progress of AI, but there are already examples of successful open approaches.
Artificial intelligence (AI) has become a buzzword in recent years, with the potential to revolutionize industries and improve daily life. Along with these benefits, however, comes growing concern about the dangers of advanced AI systems. From discrimination to job displacement to existential threats, society has valid reasons to want limits on how AI is developed and used.
One proposed approach to tackling this issue is the EU's AI Act, which categorizes AI systems by risk level, ranging from minimal and limited risk up to high and unacceptable risk, with separate obligations for general-purpose and generative AI. This approach is novel and bold, and could mitigate some of the potential harms of AI technology.
However, there may be another solution that could work alongside the EU's AI Act. By adapting existing tools, such as software licensing, we could create a framework for the responsible use of AI: Open Responsible AI Licenses (OpenRAILs).
An OpenRAIL functions similarly to an open-source software license. A developer or company releases their AI system publicly under the license, which allows anyone to freely use, adapt, and re-share the original technology. The difference with OpenRAILs is the inclusion of conditions that promote responsible use, such as prohibitions on using the system to break the law or to discriminate against others. These conditions can also be tailored to the specific technology, letting the developer forbid particular uses it deems irresponsible.
A key benefit of this model follows from the versatility of AI technologies: because they can be put to so many purposes, it is difficult to predict every way they might be exploited. OpenRAILs let developers push forward with open innovation while reducing the risk of their work being used irresponsibly.
Proprietary licenses, by contrast, prioritize protecting the interests of creators and investors, and can hinder the openness that drives progress in AI. Many large tech companies currently operate under proprietary licenses, charging for access to their AI systems. Such a restrictive approach may not suit a technology with AI's broad reach and potential.
However, there are already examples of companies taking an open approach to AI. Meta's generative AI system Llama 2 and the image generator Stable Diffusion have both been released openly; Stable Diffusion, notably, was released under a CreativeML OpenRAIL-M license. Additionally, the French AI startup Mistral, now valued at $2 billion USD, is set to openly release its latest AI model, rumored to rival the performance of GPT-4 (the model behind ChatGPT).
It is clear that openness is a crucial factor in driving progress and innovation in AI. However, we must also ensure that AI is used responsibly for the benefit of society. As AI technology becomes more integrated into our daily lives and societal infrastructure, we must consider the implications and challenges of a society run by AI. Open but responsible licensing could be one way to balance the need for innovation with the responsibility to mitigate potential harms.