The State of AI: A Week in Cambridge
Category: Artificial Intelligence | Thursday, May 23, 2024, 19:53 UTC

MIT Technology Review's flagship AI conference, EmTech Digital, will focus on harnessing the power of AI while mitigating its risks, and world leaders will gather in Seoul for the second AI Safety Summit. Google and OpenAI have recently released their new AI models, Astra and GPT-4o. AI advances in medical image analysis and communication continue to transform health care. Prominent researchers are advocating for more investment in AI safety and stricter regulation.
I’m excited to spend this week in Cambridge, Massachusetts. I’m visiting the mothership for MIT Technology Review’s annual flagship AI conference, EmTech Digital, on May 22-23. Between the world leaders gathering in Seoul for the second AI Safety Summit this week and Google and OpenAI’s launches of their supercharged new models, Astra and GPT-4o, the timing could not be better. AI feels hotter than ever.
This year’s EmTech will be all about how we can harness the power of generative AI while mitigating its risks, and how the technology will affect the workforce, competitiveness, and democracy. We will also get a sneak peek into the AI labs of Google, OpenAI, Adobe, AWS, and others.

AI in the ER

Advances in medical image analysis are now enabling doctors to interpret radiology reports and automate incident documentation.
This session by Polina Golland, associate director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), will explore both the challenges of working with sensitive personal data and the benefits of AI-assisted health care for patients.

Future Compute

On Tuesday, May 21, we are also hosting Future Compute, a day looking at how business and technical leaders navigate adopting AI. We have tech leaders from Salesforce, Stack Overflow, Amazon, and more discussing how they are managing the AI transformation and what pitfalls to avoid.
Deeper Learning

To kick off this busy week in AI, heavyweights such as Turing Award winners Geoffrey Hinton and Yoshua Bengio, along with a slew of other prominent academics and writers, have just published an op-ed in Science calling for more investment in AI safety research. The op-ed, timed to coincide with the Seoul AI Safety Summit, represents the group’s wish list for the leaders meeting to discuss AI.
Many of the researchers behind the text have been heavily involved in consulting with governments and international organizations on the best approach to building safer AI systems. They argue that tech companies and public funders should invest at least a third of their AI R&D budgets in AI safety, and that governments should mandate stricter AI safety standards and assessments rather than relying on voluntary measures.
The piece calls for governments to establish fast-acting AI oversight bodies and provide them with funding comparable to the budgets of safety agencies in other sectors. It also says governments should require AI companies to prove that their systems cannot cause harm.

Even Deeper Learning

Last Monday, OpenAI released GPT-4o, an AI model you can communicate with in real time via live voice conversation, video streams from your phone, and text.
But just days later, Chinese speakers started to notice that something seemed off about it: the tokens it uses to parse text were full of phrases related to spam and porn.
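Because a BPE tokenizer's vocabulary tends to merge strings that appeared very frequently in its training data, long Chinese tokens that turn out to be spam or porn phrases hint at poorly filtered Chinese data. Here is a minimal sketch of how one could poke at the vocabulary themselves, assuming (as OpenAI's open-source tiktoken library documents) that GPT-4o uses the o200k_base encoding; this is purely illustrative, not the method those Chinese speakers used, and the length cutoff and "mostly Chinese" heuristic are arbitrary choices for the example.

```python
# Illustrative sketch: scan GPT-4o's public tokenizer vocabulary (o200k_base)
# for long tokens made up mostly of Chinese characters.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # encoding tiktoken associates with GPT-4o


def is_mostly_chinese(text: str, threshold: float = 0.5) -> bool:
    """Heuristic: count characters in the basic CJK Unified Ideographs block."""
    if not text:
        return False
    cjk = sum(1 for ch in text if "\u4e00" <= ch <= "\u9fff")
    return cjk / len(text) >= threshold


long_chinese_tokens = []
for token_id in range(enc.n_vocab):
    try:
        raw = enc.decode_single_token_bytes(token_id)
    except KeyError:
        continue  # some ids in the range are unused
    text = raw.decode("utf-8", errors="ignore").strip()
    # Long multi-character tokens usually mean the string was very common
    # in the data used to train the tokenizer.
    if len(text) >= 6 and is_mostly_chinese(text):
        long_chinese_tokens.append((token_id, text))

print(f"Found {len(long_chinese_tokens)} long, mostly-Chinese tokens")
for token_id, text in long_chinese_tokens[:20]:
    print(token_id, text)
```

Printing the longest such tokens is roughly how observers spotted the spam- and porn-related phrases baked into the vocabulary.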
Bits and Bytes

What’s next in chips

Thanks to the boom in artificial intelligence, the world of chips is on the cusp of a huge tidal shift. We outline four trends to look for in the year ahead that will define what the chips of the future will look like, who will make them, and which new technologies they’ll unlock. (MIT Technology Review)