Responding to the Future of Life Institute: Why the AI Pause Should be More Precise and Targeted

Category: Artificial Intelligence

tldr #

Ray Kurzweil, co-founder and board member of the Singularity Group, responded to the Future of Life Institute's open letter by arguing that its "more powerful than GPT-4" criterion is too vague to be practical and that the proposal faces a coordination problem: those who agree to a pause may fall behind those who do not. Kurzweil believes the signers' safety concerns should be addressed in a more tailored way that does not compromise vital lines of research.


content #

Editor's Note: The following is a brief letter from Ray Kurzweil, co-founder and member of the board at Singularity Group, Singularity Hub's parent company, in response to the Future of Life Institute's recent letter, "Pause Giant AI Experiments: An Open Letter."

The FLI letter addresses the risks of accelerating progress in AI and the ensuing race to commercialize the technology, and calls for a pause in the development of algorithms more powerful than OpenAI's GPT-4, the large language model behind the company's ChatGPT Plus and Microsoft's Bing chatbot. The FLI letter has thousands of signatories—including deep learning pioneer Yoshua Bengio, University of California Berkeley professor of computer science Stuart Russell, Stability AI CEO Emad Mostaque, Elon Musk, and many others—and has stirred vigorous debate in the AI community.

Regarding the open letter to "pause" research on AI "more powerful than GPT-4," this criterion is too vague to be practical. And the proposal faces a serious coordination problem: those who agree to a pause may fall far behind corporations or nations that disagree. There are tremendous benefits to advancing AI in critical fields such as medicine and health, education, the pursuit of renewable energy sources to replace fossil fuels, and scores of other fields. I didn't sign, because I believe we can address the signers' safety concerns in a more tailored way that doesn't compromise these vital lines of research.

I participated in the Asilomar AI Principles Conference in 2017 and was actively involved in the creation of guidelines to create artificial intelligence in an ethical manner. So I know that safety is a critical issue. But more nuance is needed if we wish to unlock AI's profound advantages to health and productivity while avoiding the real perils.

—Ray Kurzweil

Ray Kurzweil is the co-founder and a member of the board of the Singularity Group, the parent company of Singularity Hub. He is an inventor, best-selling author, and futurist.

