Google’s latest AI model Gemini 2.0 is here


Google introduced Gemini 1.0 last year, followed by Gemini 1.5 this February, and both generated considerable buzz in the market. Now, the company is back with its most advanced artificial intelligence model yet, Gemini 2.0. Google describes the model as ushering in “a new agentic era” in AI development. Let’s explore what this new model has to offer.


What’s new?

Gemini 2.0 introduces several powerful features, including an improved ability to understand context, think several steps ahead, and even take supervised actions on behalf of users. The technology is backed by advanced hardware, such as Trillium, Google’s sixth-generation Tensor Processing Unit (TPU).

“If Gemini 1.0 was about organizing and understanding information, Gemini 2.0 is about making it much more useful,” said Sundar Pichai, CEO of Google.



Along with this, the new model features multimodal input with text output, available to all developers, as well as text-to-speech and native image generation, initially available to early-access partners.

An experimental model, for now

Gemini 2.0 Flash is available now to users as an experimental model. It is said to deliver faster response times, outperforming Gemini 1.5 Pro on key benchmarks while running at twice the speed. In addition to supporting multimodal inputs like images, video, and audio, the new model also offers multimodal outputs, such as text combined with natively generated images and steerable, multilingual text-to-speech (TTS) audio.

Image generated by Gemini 2.0 Flash

Furthermore, it can natively call tools like Google Search, execute code, and integrate third-party user-defined functions.

Google is trying its hand at something else too

In addition to Gemini 2.0, Google has teased an exciting update to Project Astra, a smartphone digital assistant designed to respond to both images and verbal commands, much like Apple’s Siri. Wait, there’s more! Google has also launched a feature called “Deep Research,” which uses advanced reasoning and long-context capabilities to act as a research assistant. It allows Gemini 2.0 to explore complex topics and compile detailed reports on behalf of users.

Conclusion on Gemini 2.0

While Gemini 2.0 is currently rolling out to developers and trusted testers, a broader release is expected in early 2025. For now, the Gemini 2.0 Flash experimental model is available to all Gemini users, and we eagerly await the full release, which promises to bring powerful new AI capabilities to a wider audience.