Google has announced new updates to its Gemini AI platform, giving developers more powerful tools and expanded capabilities. The company is introducing code execution, adding Gemma 2 to Google AI Studio, and opening the two-million-token context window for Gemini 1.5 Pro to all developers.

Once reserved for developers on a waitlist, the two-million-token context window is now accessible to everyone using Gemini 1.5 Pro. Developers can use this large window to run in-depth analyses and generate content from very long inputs.
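To get a feel for how much a two-million-token window holds, here is a minimal sketch that estimates whether a piece of text fits. The four-characters-per-token ratio is a common rough heuristic, not Google's tokenizer, and the reserved output budget is an assumed figure for illustration:

```python
# Rough check of whether text fits Gemini 1.5 Pro's two-million-token window.
# The 4-characters-per-token ratio is a heuristic, not Google's tokenizer;
# the Gemini API offers a token-counting endpoint for exact figures.

CONTEXT_WINDOW = 2_000_000  # tokens available to Gemini 1.5 Pro
CHARS_PER_TOKEN = 4         # rough heuristic for English text

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, reserved_for_output: int = 8_192) -> bool:
    """True if the estimated prompt tokens leave room for the reply."""
    return estimate_tokens(text) + reserved_for_output <= CONTEXT_WINDOW

# A very long document (~3 million characters) still fits comfortably:
book = "x" * 3_000_000
print(estimate_tokens(book))   # 750000
print(fits_in_context(book))   # True
```

By this estimate, entire books or large codebases can be passed to the model in a single prompt, which is what makes the in-depth analysis use cases practical.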

Google has added context caching to Gemini 1.5 Pro and 1.5 Flash to ease concerns about the cost of larger inputs. The feature is designed to make tasks that reuse the same tokens across multiple prompts cheaper.
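The saving comes from not re-billing a shared prefix at the full input rate on every call. The sketch below uses made-up per-token prices (and omits any cache storage fees) purely to show the structure of the comparison; Google's actual rates are on its pricing page:

```python
# Illustrative cost comparison for context caching. The rates below are
# placeholders, not Google's actual prices; the point is how the saving
# grows when the same large prefix backs many prompts.

INPUT_RATE = 3.50 / 1_000_000    # hypothetical $/token for fresh input
CACHED_RATE = 0.875 / 1_000_000  # hypothetical discounted $/token for cached input

def cost_without_cache(prefix_tokens: int, prompt_tokens: int, calls: int) -> float:
    # The full prefix is re-sent (and re-billed) on every call.
    return calls * (prefix_tokens + prompt_tokens) * INPUT_RATE

def cost_with_cache(prefix_tokens: int, prompt_tokens: int, calls: int) -> float:
    # The prefix is billed once at the full rate, then at the cached rate.
    first = (prefix_tokens + prompt_tokens) * INPUT_RATE
    rest = (calls - 1) * (prefix_tokens * CACHED_RATE + prompt_tokens * INPUT_RATE)
    return first + rest

# A one-million-token document queried fifty times:
prefix, prompt, calls = 1_000_000, 2_000, 50
print(round(cost_without_cache(prefix, prompt, calls), 2))
print(round(cost_with_cache(prefix, prompt, calls), 2))
```

Under these assumed rates, caching cuts the bill by roughly three quarters, and the gap widens with every additional prompt against the same prefix.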


Code execution capabilities

Google has enabled code execution for Gemini 1.5 Pro and 1.5 Flash to improve accuracy on mathematical and data-reasoning tasks. With this capability, the model can generate and run Python code, then learn iteratively from the results.

The execution environment is sandboxed, includes several numerical libraries, and has no internet access. Developers are billed based on the model’s output tokens.

According to Google, code execution is now available as a model capability through the Gemini API and under “advanced settings” in Google AI Studio.
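In the API, the capability is switched on by declaring a tool in the request. The sketch below builds such a request locally; the field names follow my reading of Google's public REST examples and should be treated as an assumption to verify against the official Gemini API reference:

```python
import json

# Sketch of a generateContent request with code execution enabled.
# Field names are an assumption based on public Gemini REST examples;
# check Google's API reference before relying on them.

def build_code_execution_request(prompt: str) -> dict:
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        # Declaring the code_execution tool lets the model write and run
        # Python in Google's sandbox and feed the results back to itself.
        "tools": [{"code_execution": {}}],
    }

req = build_code_execution_request(
    "What is the sum of the first 50 prime numbers? Generate and run code."
)
print(json.dumps(req, indent=2))
```

The model's reply would then interleave its generated Python and the sandbox's execution output with the final answer, which is where the iterative-learning loop described above happens.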

Gemma 2 integration and Gemini 1.5 Flash in production

With the release of Gemma 2, an open model, Google is further democratising AI development by making it available in Google AI Studio for experimentation. Developers can now explore Gemma 2 and incorporate it alongside the Gemini models.

Furthermore, Google showcased several production use cases of Gemini 1.5 Flash that demonstrate its affordability and speed:


- A mobile app that helps visually impaired users by describing their surroundings in real time.
- An automated policy-analysis platform that summarises intricate laws.
- Video-editing automation built on Zapier’s video-reasoning capabilities.
- Dot, an artificial intelligence system that uses 1.5 Flash to compress data into long-term memories (LTMs).

The company also announced that it will progressively roll out text tuning for Gemini 1.5 Flash to developers; the feature is currently in the red-teaming phase. Google anticipates that full access to 1.5 Flash tuning will be available through the Gemini API and Google AI Studio by mid-July.
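Text tuning works from pairs of example inputs and desired outputs. The sketch below prepares and sanity-checks such a dataset; the text_input/output field names mirror what Google has published for Gemini model tuning, but they are an assumption to verify once 1.5 Flash tuning is fully released:

```python
# Sketch of a text-tuning dataset for Gemini 1.5 Flash. The
# text_input/output pair format follows Google's published Gemini tuning
# examples, but treat the field names as an assumption to verify.

examples = [
    {"text_input": "one", "output": "two"},
    {"text_input": "three", "output": "four"},
    {"text_input": "ninety-nine", "output": "one hundred"},
]

def validate_examples(rows: list[dict]) -> int:
    """Basic sanity checks before uploading a tuning dataset."""
    for row in rows:
        assert set(row) == {"text_input", "output"}, f"unexpected keys: {row}"
        assert row["text_input"].strip() and row["output"].strip()
    return len(rows)

print(validate_examples(examples))  # 3
```

A handful of consistent pairs like these teaches the model a pattern (here, "answer with the next number") without any change to its weights being visible to other users.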

Google hosts a developer forum where interested developers can discuss these new features. The company touts Vertex AI as the most enterprise-ready generative AI platform and encourages enterprise developers to explore it.