Google Went After OpenAI But Ended up Rattling NVIDIA


Two years ago, no one could have imagined that Google would suddenly leap ahead of OpenAI in the AI race. 

The search giant has come a long way since Bard’s rocky debut in 2023. In its inaugural demo, the chatbot incorrectly stated that the James Webb Space Telescope had taken the first-ever image of an exoplanet, even though the first such image was captured in 2004. This mistake proved embarrassing for Google. 

Then, in 2024, Gemini's image generation feature came under fire for producing historically inaccurate and racially biased visuals.

Between 2022 and 2024, OpenAI surged ahead as ChatGPT became a household name. Feeling the pressure, Google launched Gemini in December 2023 and tweaked its benchmark methodology to assert an edge over GPT-4. 

However, with the release of Gemini 3 and Nano Banana Pro, the search giant has finally demonstrated its technical strength—so much so that its market cap is now edging towards $4 trillion, joining NVIDIA, Microsoft and Apple in the club.

Google’s stock is up nearly 20% over the past month, while NVIDIA and SoftBank have fallen roughly 7% and 35% respectively over the same period.

Google says Gemini 3 Pro outperforms OpenAI’s GPT-5.1 and Anthropic’s Claude Sonnet 4.5 on major independent benchmarks, including LMArena, Humanity’s Last Exam, GPQA Diamond and MathArena Apex. These benchmarks measure how well LLMs handle complex, human-level tasks that test reasoning, problem-solving and real-world capability.

Since the launch of these models, social platforms have been filled with infographics and images generated by Nano Banana Pro, with users experimenting across artistic styles.

For instance, OpenAI co-founder Andrej Karpathy said he asked Gemini 3 to design a personalised workout schedule along with accompanying posters he could print and hang on the wall as reminders.

Meanwhile, Google DeepMind CEO Demis Hassabis attributes the company’s strength to how its research, engineering and infrastructure teams work together. In a post on X, he said this deep integration, driven by relentless focus and intensity, is the company’s “real secret”.

But Google isn’t alone in the race. Anthropic remains close behind, most recently launching Claude Opus 4.5, which the company claims beats Gemini 3 Pro on key coding and agentic benchmarks.

While Anthropic is doubling down on coding excellence, Google is pushing ahead on multimodality and ecosystem integration.

Soumith Chintala, co-creator of PyTorch, said on X that the launch of Gemini 3 feels “closer to the GPT-4 moment than any other in recent times”, describing the sudden burst of progress, especially with Nano Banana, as “overwhelming.”

He added that although Google now looks invulnerable with Gemini 3 backed by Tensor Processing Units (TPUs), Android and Chrome, the race is far from over. 

Chintala also pointed out that Anthropic continues to dominate coding tasks, suggesting that real-world usage will ultimately determine the winner.

His comments capture a divide among users. While some prioritise strong coding performance, where Anthropic leads, others are drawn to Google’s multimodal features.

Vikrant Patankar, founding filmmaker at Composio, told AIM that Gemini 3 feels like the first time Google shipped a model family that is both powerful and practical. “The quality is noticeably stable across text, images and real-time tasks, and the Nano Banana efficiency jump makes on-device multimodal feel real instead of theoretical,” he said.

That reaction sets the tone for how the broader industry is responding. For several leaders, Gemini 3 stands out not just for performance but for how usable it feels. Salesforce CEO Marc Benioff said he was stunned by Gemini 3’s capabilities, calling the leap in reasoning, speed, images and video “insane”. Having used ChatGPT “every day for three years”, he revealed that after spending just two hours with Gemini 3, he decided he’s “not going back”.

OpenAI in Crisis?

The positive response to Gemini 3 even prompted OpenAI CEO Sam Altman to congratulate Google publicly. “Looks like a great model,” he posted on X.

However, in a recent internal memo accessed by The Information, Altman acknowledged that Google’s recent AI progress could create temporary economic headwinds for OpenAI. “Google has been doing excellent work recently in every aspect,” he wrote, complimenting the tech giant while adding that OpenAI is catching up fast.

“It sucks that we have to do so many hard things at the same time—the best research lab, the best AI infrastructure company, and the best AI platform/product company—but such is our lot in life. And I wouldn’t trade positions with any other company,” Altman wrote.

AI consultant and tech influencer Ashutosh Shrivastava told AIM that OpenAI faces two significant structural challenges: it is heavily dependent on Microsoft and a growing list of external deals, and its massive upcoming data-centre investments could create significant long-term financial pressure.

Despite these challenges, OpenAI continues to lead in user adoption. ChatGPT has around 800 million weekly active users as of late 2025, Altman revealed during his keynote address at OpenAI DevDay 2025.

Meanwhile, Google CEO Sundar Pichai said during the Q3 2025 earnings call that Gemini had crossed 650 million monthly active users.

In a blog post, Google said that 65% of its Cloud customers are already using its AI products, a category that includes the Gemini Enterprise offering. Meanwhile, OpenAI has announced that it has surpassed one million business customers globally.

According to Patankar, Google’s new stack finally feels like a unified AI layer. Search, Android and Workspace all gain tightly integrated capabilities, from richer reasoning to real-time multimodal understanding. 

He said that while OpenAI still dominates the story around a single powerful app, Google is trying to build AI into the way people use their devices every day. “If they can keep this stable at scale, it changes the battlefield completely.”

Shrivastava said many people believed “Google Search was finished”, but AI Mode shows it is “far from over and has actually got better.” He also pointed out that more than 13 million developers are now using Google’s generative AI models.

Notably, AI Mode is now accessible directly from the Google search bar, with AI Overviews also surfacing automatically in results.

Sergey is Back

Part of Google’s resurgence can be linked to co-founder Sergey Brin’s return. In an interview earlier this year, Brin recalled how, at a party, an OpenAI employee named Dan urged him to get back to work at Google. “What are you doing? This is the greatest transformative moment in computer science,” Dan had told him. Brin returned to active work at Google in 2023 to focus on developing AI products, particularly Gemini.

Meanwhile, Pichai said in May this year, “I think Sergey is definitely spending time with the Gemini team in a pretty hardcore way, sitting and coding and spending time with the engineers.” He added that this involvement brings unmatched momentum to the group.

Google’s NVIDIA Alternative

Besides LLMs and the software ecosystem, Google also holds an advantage in hardware. Gemini 3 was trained on Google TPUs, while OpenAI is currently building compute partnerships with Oracle, Amazon, and NVIDIA. Google’s TPUs are engineered to excel at inference workloads by offering high throughput, low latency and power-efficient compute. 

According to Paras Chopra of Lossfunk, Google could “eat NVIDIA’s lunch”. He explained that Google should capitalise on the fact that Gemini 3 was trained entirely on TPUs and build on that advantage by expanding JAX and cutting TPU costs on its cloud platform. 

Notably, Anthropic recently announced plans to expand its use of Google Cloud services, including the deployment of one million TPUs. Recent reports also suggest that Meta is considering adopting them.

This shift prompted a slide in NVIDIA’s stock, leading the company to issue a statement saying, “NVIDIA is a generation ahead of the industry—it’s the only platform that runs every AI model and does it everywhere computing is done.”

Currently, Google’s TPUs pair best with JAX, the newer framework that has largely supplanted TensorFlow inside Google. Support for PyTorch, the most popular framework in the wider industry, remains less mature on TPUs.
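To illustrate the point, here is a minimal JAX sketch (an illustrative example, not code from Google or from the article): the same jit-compiled function runs unchanged on CPU, GPU or TPU because XLA handles the backend, which is a large part of why JAX and TPUs fit together so naturally.

```python
# Minimal, illustrative JAX sketch: a jit-compiled function that XLA
# compiles for whatever backend is available (CPU, GPU or TPU).
import jax
import jax.numpy as jnp

@jax.jit  # XLA traces and compiles this once per input shape
def predict(w, b, x):
    # A single dense layer: matrix multiply plus bias
    return jnp.dot(x, w) + b

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (128, 10))
b = jnp.zeros(10)
x = jax.random.normal(key, (32, 128))

print(jax.devices())           # lists TpuDevice entries when run on a TPU VM
print(predict(w, b, x).shape)  # (32, 10), computed on the default device
```

The code itself never mentions the hardware; JAX dispatches to whichever accelerator is present, whereas running PyTorch on TPUs typically requires the separate PyTorch/XLA integration.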

At the same time, OpenAI has partnered with Broadcom and Foxconn to build AI chips and accelerators, as well as develop data-centre networking technologies.

Tech analyst Beth Kindig captured the market’s shift in sentiment towards Google. In a post on X, she said that just nine months ago, investors believed Google was “toast” because ChatGPT’s rise threatened its dominance in search. Today, she said, sentiment has flipped so dramatically that the market now thinks Google is strong enough to challenge NVIDIA in custom silicon. 

What’s Next for OpenAI?

In response to Google’s rapid advancements, Altman has reportedly told employees that OpenAI will close the gap. However, the company’s latest model, GPT-5.1, has so far struggled to capture users’ interest.

Reports further state that the company is working on a new language model, internally referred to as ‘Shallotpeat’, which aims to address issues that surfaced during pre-training.

Besides that, OpenAI recently launched GPT-5.1-Codex-Max, a new agentic coding model that can run for over 24 hours.
