By Kenneth Li
(Reuters) – ChatGPT was well on its way to becoming a household name even before 2023 kicked off.
Just weeks after the Nov. 30 launch of the generative artificial intelligence-powered chatbot, OpenAI, the non-profit behind ChatGPT, was projected to rake in as much as $1 billion in revenue in 2024, sources told Reuters at the time.
The so-called large language model’s ability to turn prompts into poetry, song, and high school essays enchanted 100 million users within two months, a milestone that took Facebook four and a half years and Twitter five to reach, making ChatGPT the fastest-growing consumer app ever.
Sometimes, the answers were wrong, despite being delivered with conviction. This happened often enough that “hallucinate,” in the sense of AI producing false information, was selected as Dictionary.com’s word of the year, a reflection of the technology’s deep imprint on society.
Such mistakes did not sap the euphoria or stop the existential dread this new technology inspired. Investors, led by Microsoft’s multibillion-dollar bet on OpenAI, injected $27 billion into generative AI startups in 2023, according to PitchBook. The battle for AI supremacy, stewing in the background among big tech firms for years, was suddenly in focus, with Alphabet, Meta and Amazon.com all announcing new milestones.
By March, thousands of scientists and AI experts, including Elon Musk, had signed an open letter demanding a pause on training more powerful systems so their impact on, and potential danger to, humanity could be studied. The move drew parallels to “Oppenheimer,” Christopher Nolan’s box office hit about the titular atomic bomb maker’s warnings that the relentless pursuit of progress could lead to human extinction.
“This is an existential risk,” said one of the “godfathers of AI,” Geoffrey Hinton, who quit Alphabet in May. “It’s close enough that we ought to be working very hard right now, and putting a lot of resources into figuring out what we can do about it.”
WHY IT MATTERS
Consultancy PwC estimated AI-related economic impact could reach $15.7 trillion globally by 2030, nearly the gross domestic product of China.
Powering this optimism is the fact that nearly every industry, from finance and legal to manufacturing and entertainment, has embraced AI as part of its strategy for the foreseeable future.
The winners and losers of the AI era are only just emerging. As in other eras, beneficiaries will likely be drawn along socio-economic lines. Civil rights advocates have raised concerns over potential bias in AI in fields such as recruitment, while labor unions have warned of deep disruptions to employment as AI threatens to reduce or eliminate some jobs, including writing computer code and drafting entertainment content.
Chipmaker Nvidia, whose graphics processors are the hottest commodity in the global AI race, has emerged as a big early winner, with its market capitalization soaring into the trillion-dollar club alongside Apple and Alphabet.
In the final months of the year, another winner appeared unexpectedly out of turmoil. In November, the board of OpenAI fired CEO Sam Altman for “not being consistently candid with them,” according to its terse statement.
In the absence of explanation, the spectacle became a referendum on AI evangelism: Altman’s push to commercialize the technology on one side, versus the skeptics and doomsayers who sought a slower, more careful approach on the other.
The optimists – and Altman – won. The ousted CEO was restored just days later, thanks in no small part to OpenAI employees who threatened a mass exodus without him at the helm.
In explaining what brought the company to the brink, Altman said people were fretting over the high stakes of developing AI that could surpass human intelligence. “I think that all exploded,” he said at a New York event in December.
Some OpenAI researchers had warned of a new AI breakthrough ahead of Altman’s ouster, through a top-secret model called Q* (pronounced Q-Star), Reuters reported in November.
WHAT DOES IT MEAN FOR 2024?
One question provoked by the OpenAI saga: will the future of AI and its societal impact continue to be deliberated behind closed doors, by a privileged few in Silicon Valley?
Regulators led by the EU are determined to play a lead role in 2024 with a comprehensive plan to establish guardrails for the technology in the form of the EU AI Act. The details of the draft are due to be disclosed in the coming weeks.
These rules, and others being drafted in the U.K. and U.S., come as the world heads into the biggest election year in history, raising concern about AI-generated misinformation targeting voters. In 2023 alone, NewsGuard, a company that rates the reliability of news and information websites, tracked 614 “unreliable” AI-generated sites in 15 languages, from English to Arabic and Chinese.
Good or bad, expect AI, which has already been conscripted to make campaign calls in the U.S., to play an outsize role in many of the elections taking place this year.