Google is in its Gemini era, according to CEO Sundar Pichai.
Pichai took the stage at Google’s Tuesday I/O event to share new capabilities of the tech giant’s year-old Gemini AI model, which was officially released in December as Google’s largest AI model, capable of reasoning multimodally across text, code, audio, video, and images.
Google also introduced a prototype of a new AI assistant, Project Astra, and said that AI-written search summaries would be rolling out in the U.S. starting today.
Pichai said that more than 1.5 million developers have used the Gemini AI model since its launch and that Google has incorporated it into all of its products with two billion users.
Though Google is “in the very early days of the AI platform shift,” per Pichai, the multitude of AI announcements at the event shows that the company has been hard at work, not only adding new AI products but also infusing familiar, existing services like Google Search with AI.
Google CEO Sundar Pichai. (Photo by Andrej Sokolow/picture alliance via Getty Images)
Google now has an updated AI image generator called Imagen 3, a new AI video generator called Veo, and a music AI sandbox developed in collaboration with YouTube artists, musicians, and creators.
It also announced a cost-effective AI model, Gemini 1.5 Flash, which is faster than Google’s previous models and can, for example, generate images and write responses more quickly.
The New: Project Astra AI Assistant
At the event, Google introduced the Project Astra AI assistant, which matches the capabilities of OpenAI’s GPT-4o, revealed the day before.
Much like GPT-4o, which will be widely available to free and paid users in the coming weeks, Google’s AI assistant unites audio, visual, and written cues to respond quickly to users.
Related: OpenAI Launches New AI Chatbot, GPT-4o, Which Sounds Almost Like a Friend Would
For example, a user could show Project Astra their surroundings with their phone camera and ask the AI agent to process code on a screen or talk about the components of a speaker on their desk.
The AI agent responds with little lag, which makes for a smoother back-and-forth.
We’re sharing Project Astra: our new project focused on building a future AI assistant that can be truly helpful in everyday life.
Watch it in action, with two parts – each was captured in a single take, in real time. ↓ #GoogleIO pic.twitter.com/x40OOVODdv
— Google DeepMind (@GoogleDeepMind) May 14, 2024
Project Astra is still a prototype, but it signals Google’s intention to work toward a single AI assistant.
AI Is Taking Over Google Search
AI Overviews, or written summaries of results that appear at the top of a Google search page, are rolling out to all U.S. users starting today.
Google says that by the end of the year, AI Overviews will be available to more than a billion people using Search.
This is Search in the Gemini era. #GoogleIO pic.twitter.com/JxldNjbqyn
— Google (@Google) May 14, 2024
Google has already been testing AI in Search to help answer billions of queries.
Pichai stated that users were asking longer and more complex questions, using search more, and reporting increased satisfaction with the results.
Some “coming soon” AI features in Google Search include the ability to ask questions with video directly in the search bar, as well as multi-step reasoning.
Related: Google Is Reportedly Considering a Subscription Fee for AI-Enhanced Internet Searches