Google releases a new multimodal AI model "Gemini"
ChainCatcher news: Google has announced the launch of its artificial intelligence model "Gemini," developed by the Google DeepMind team. It is reported that Gemini is designed to be multimodal, able to understand and process different types of information, including text, code, audio, images, and video.

The Gemini model comes in three sizes: Ultra, Pro, and Nano. The Ultra version targets highly complex tasks, the Pro version is suited to a broad range of tasks, and the Nano version is optimized for on-device tasks on mobile phones.

Gemini is already being rolled out across several Google products and platforms. Google Bard will use an improved version of Gemini Pro, and the Pixel 8 Pro will be the first smartphone to ship with Gemini Nano. In addition, starting December 13, developers and enterprise customers will be able to access the Gemini Pro API through Google AI Studio or Google Cloud Vertex AI to build and deploy Gemini-based applications.
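For illustration, the following is a minimal sketch of what a Gemini Pro call through Google AI Studio might look like, assuming the google-generativeai Python SDK and an API key issued via Google AI Studio; the prompt and placeholder key are illustrative and not taken from the announcement.

    # Minimal sketch (assumptions: google-generativeai SDK installed,
    # API key obtained from Google AI Studio; prompt is illustrative).
    import google.generativeai as genai

    # Authenticate with an API key from Google AI Studio
    genai.configure(api_key="YOUR_API_KEY")

    # Load the Gemini Pro model and generate a text response
    model = genai.GenerativeModel("gemini-pro")
    response = model.generate_content("Summarize the Gemini announcement in one sentence.")
    print(response.text)

Enterprise customers following the Google Cloud Vertex AI route would instead use the Vertex AI SDK and project-level credentials rather than an AI Studio API key.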