On February 15, 2024, Google announced Gemini 1.5, a new AI model with improved capabilities and efficiency. It is built on research advances including a new Mixture-of-Experts (MoE) architecture, in which only a subset of the model's expert subnetworks is activated for a given input. The model can also process far more data at once, offering a context window of up to 1 million tokens.
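The core idea behind MoE routing can be illustrated with a minimal sketch. This is not Gemini's implementation (which is not public); it is a toy, single-vector version of the general technique: a gating function scores all experts, only the top-k experts run, and their outputs are combined with renormalized gate weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, experts, gate_w, top_k=2):
    """Toy Mixture-of-Experts routing: score every expert with a gate,
    run only the top-k experts, and mix their outputs by the
    softmax-renormalized gate scores. Real MoE layers route per token
    inside a Transformer; this sketch routes one vector."""
    scores = x @ gate_w                   # one score per expert
    top = np.argsort(scores)[-top_k:]     # indices of the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()              # softmax over the selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, top))

dim, n_experts = 8, 4
# Each "expert" here is just a small linear map, standing in for a feed-forward block.
expert_mats = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]
experts = [lambda x, m=m: x @ m for m in expert_mats]
gate_w = rng.normal(size=(dim, n_experts))

x = rng.normal(size=dim)
y = moe_layer(x, experts, gate_w, top_k=2)
print(y.shape)  # (8,)
```

Because only `top_k` of the experts execute per input, total parameter count can grow without a proportional increase in compute per step, which is the efficiency argument behind MoE designs.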
Gemini 1.5 Pro, the first 1.5 model released for early testing, performs at a level comparable to Gemini 1.0 Ultra, the largest model of the previous generation, while adding new capabilities. It can analyze extensive content such as videos, audio, code, and text, making it useful across a wider range of tasks.
The model’s efficiency allows it to learn complex tasks faster while maintaining quality. With a larger context window, it can better understand and reason across different types of content, such as transcripts, movies, and codebases.
Gemini 1.5 Pro outperforms its predecessor on most evaluations, including "needle in a haystack" tasks that test whether the model can find a specific piece of information planted inside a very long text. It also shows strong in-context learning: given enough examples in its prompt, it can translate a new language or solve new problems without additional fine-tuning.
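A long-context retrieval evaluation of the kind described above can be sketched as a simple harness: plant a unique fact at a random depth inside a long filler document, ask the model to retrieve it, and score the answer. The harness below is an illustrative assumption, not Google's evaluation code, and `query_model` is a hypothetical stand-in for a real model call.

```python
import random

def make_haystack(needle, n_filler=10_000, seed=42):
    """Build a long document of filler sentences with one 'needle'
    fact inserted at a random position; also return its relative depth."""
    rng = random.Random(seed)
    filler = [f"Sentence {i}: nothing notable happens here." for i in range(n_filler)]
    pos = rng.randrange(n_filler)
    filler.insert(pos, needle)
    return " ".join(filler), pos / n_filler

def check_retrieval(answer, secret):
    """Score 1 if the model's answer contains the planted secret, else 0."""
    return int(secret in answer)

secret = "MAGIC-7319"
doc, depth = make_haystack(f"The secret code is {secret}.")

# In a real run, `doc` plus a question such as "What is the secret code?"
# would be sent to the model under test; `query_model` is hypothetical:
#   score = check_retrieval(query_model(doc, "What is the secret code?"), secret)
print(len(doc.split()), round(depth, 3))
```

Sweeping the needle position across many depths and document lengths yields the accuracy-by-depth grids commonly reported for this style of test.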
Google emphasizes safety and ethics, conducting extensive testing to ensure responsible deployment of the model. Early access to Gemini 1.5 Pro is available for developers and enterprise customers, with a focus on scalability and improving user experience.
While the model is currently in a testing phase, Google plans to introduce pricing tiers based on the context window size. Early testers can experiment with the 1 million token context window at no cost during the testing period.
To learn more about Gemini 1.5, read the announcement blog post at: https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/#sundar-note