2023: What AI Taught Me

Category Artificial Intelligence

tldr #

This has been a wild year in the AI world: Big Tech went all in on generative AI, a race began to find the next killer app, and debates continued about whether AI poses an existential risk to humans. Optimistically, generative AI could be embedded in tools that boost our productivity and in applications that benefit humankind. But it is important to weigh the potential risks of AI advancement before moving forward.


content #

This has been one of the craziest years in AI in a long time: endless product launches, boardroom coups, intense policy debates about AI doom, and a race to find the next big thing. But we’ve also seen concrete tools and policies aimed at getting the AI sector to behave more responsibly and hold powerful players accountable. That gives me a lot of hope for the future of AI.

Here’s what 2023 taught me: The year started with Big Tech going all in on generative AI. The runaway success of OpenAI’s ChatGPT prompted every major tech company to release its own version. This year might go down in history as the year we saw the most AI launches: Meta’s LLaMA 2, Google’s Bard chatbot and Gemini, Baidu’s Ernie Bot, OpenAI’s GPT-4, and a handful of other models, including one from a French open-source challenger, Mistral.

But despite the initial hype, we haven’t seen any AI application become an overnight success. Microsoft and Google pitched powerful AI-powered search, but it turned out to be more of a dud than a killer app. The fundamental flaws in language models, such as the fact that they frequently make stuff up, led to some embarrassing (and, let’s be honest, hilarious) gaffes. Microsoft’s Bing would frequently reply to people’s questions with conspiracy theories, and suggested that a New York Times reporter leave his wife. Google’s Bard generated factually incorrect answers for its marketing campaign, which wiped $100 billion off the company’s share price.

In 2023, generative AI was released by many tech companies, but none became an overnight success.

There is now a frenetic hunt for a popular AI product that everyone will want to adopt. Both OpenAI and Google are experimenting with allowing companies and developers to create customized AI chatbots and letting people build their own applications using AI—no coding skills needed. Perhaps generative AI will end up embedded in boring but useful tools to help us boost our productivity at work. It might take the form of AI assistants—maybe with voice capabilities—and coding support. Next year will be crucial in determining the real value of generative AI.

The existential risk hypothesis is championed by many in Silicon Valley, including OpenAI’s chief scientist, Ilya Sutskever, who played a pivotal role in the ousting of OpenAI CEO Sam Altman.

Chatter about the possibility that AI poses an existential risk to humans became familiar this year. Hundreds of scientists, business leaders, and policymakers have spoken up, from deep-learning pioneers Geoffrey Hinton and Yoshua Bengio to the CEOs of top AI firms, such as Sam Altman and Demis Hassabis, to the California congressman Ted Lieu and the former president of Estonia Kersti Kaljulaid. Existential risk has become one of the biggest memes in AI. The hypothesis is that one day we will build an AI that is far smarter than humans, and this could lead to grave consequences. It’s an ideology championed by many in Silicon Valley, including Ilya Sutskever, OpenAI’s chief scientist, who played a pivotal role in ousting OpenAI CEO Sam Altman (and then reinstating him a few days later).

Generative AI could be embedded in useful tools to help with productivity in the form of voice-capable AI assistants and coding support.

But not everyone agrees with this idea. Meta’s AI leaders Yann LeCun and Joelle Pineau have said that these fears are "ridiculous" and that the conversation about AI risks has become "unhinged." Many other power players in AI, such as researcher Joy Buolamwini, say that focusing on hypothetical risks distracts from the very real issues and practical concerns of today, such as misinterpretation of model outputs and biased algorithms.

Fears of existential risk from AI have been met with skepticism by many power players in AI, such as Yann LeCun and Joelle Pineau.

In conclusion, 2023 was a wild year for AI, with a wave of generative AI products, debates over existential risk, and a frantic search for the next AI killer app. Yet caution is necessary: we must weigh the potential risks of AI advancement, and consider how this technology could be used to benefit humankind, before moving forward.

