Google Launches Gemini 1.5 Flash Model to Enhance Speed and Efficiency

TapTechNews reported on May 15 that Google expanded its Gemini family of models today with the launch of Gemini 1.5 Flash, a new model optimized for speed and efficiency.

Demis Hassabis, CEO of Google DeepMind, wrote in a blog post that Gemini 1.5 Flash excels at summarization, chat applications, image and video captioning, and extracting data from long documents and tables.

Hassabis added that Google created Gemini 1.5 Flash because developers needed a lighter and cheaper model than Gemini 1.5 Pro, which Google released in February.

TapTechNews notes that Gemini 1.5 Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano in size and capability. Google says the model was created through a process called distillation, which transfers the core knowledge and skills of Gemini 1.5 Pro into a smaller, faster model.
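
Google has not published the details of its distillation recipe, but the general technique is well established: a smaller "student" model is trained to reproduce the output distribution of a larger "teacher" model. The sketch below, written in PyTorch as an assumed framework, shows the standard soft-label distillation loss only as a generic illustration, not Google's actual training code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-label distillation: train the student to match the teacher's
    softened output distribution via KL divergence."""
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2
```

In practice this loss is usually combined with the ordinary next-token cross-entropy on the training data, so the student learns both from the ground truth and from the teacher's richer probability distribution.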

As a result, Gemini 1.5 Flash retains the multimodal capabilities of the Pro model as well as its large context window of one million tokens (the context window is the amount of data the model can ingest at once).

Google states that Gemini 1.5 Flash will be able to analyze roughly 1,500 pages of documents or more than 30,000 lines of code in a single request.

Gemini 1.5 Flash is not aimed at consumers; instead, it gives developers a faster and cheaper way to build their own AI products and services on top of Google's technology.
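
To illustrate what that developer-facing access looks like, the sketch below uses Google's google-generativeai Python SDK to send a long document to the model in one request, relying on the large context window described above. The API key, file name, and prompt are placeholders, and the exact model identifier available to a given account should be checked against the official Gemini API documentation.

```python
# pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# "gemini-1.5-flash" is assumed here as the model identifier exposed
# through the Gemini API; confirm the exact name in Google's model list.
model = genai.GenerativeModel("gemini-1.5-flash")

# Hypothetical long document; the 1M-token window allows it to be sent whole.
with open("long_report.txt", encoding="utf-8") as f:
    document = f.read()

response = model.generate_content(
    ["Summarize the key findings in this document:", document]
)
print(response.text)
```

The same pattern extends to chat sessions, image and video inputs, and structured extraction, which are the workloads Google highlights for this model.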
