Google teams up with Mistral AI to integrate their cutting-edge AI model into Google Cloud

Technology
Chief Executive Officer Sundar Pichai | Google Cloud

Google and Mistral AI have joined forces to bring cutting-edge AI technology to Google Cloud, aiming to help smaller businesses incorporate AI into their own products and services. Mistral AI, one of Europe's leading AI companies, is bringing its models to Vertex AI, Google Cloud's machine learning platform, an integration designed to make it easier for businesses to adopt AI technology.

The collaboration between Google and Mistral AI will give users the benefit of Google Cloud's commitment to multi-cloud and hybrid cloud solutions, which prioritizes strong data security and privacy standards. Users will be able to comply with privacy regulations and customize and run their models in their preferred environment, whether on-premises, in Google Cloud, with another cloud provider, or across multiple geographic regions. Google Cloud's use of open source technologies will also give users flexibility in their choices.

Mistral AI's inaugural open source model, Mistral-7B, was integrated with Vertex AI Notebooks on Oct. 17. The integration lets Google Cloud customers run a complete workflow for experimenting, testing, and fine-tuning with Mistral-7B and Mistral-7B-Instruct on Vertex AI Notebooks. Data scientists can work collaboratively, connect to Google Cloud data services, analyze datasets, experiment with different modeling techniques, deploy trained models into production, and manage the entire model lifecycle through MLOps.
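As a rough illustration of that notebook workflow, the sketch below loads the publicly released Mistral-7B-Instruct checkpoint with the Hugging Face transformers library and runs a single prompt; the model ID and generation settings are examples rather than anything prescribed by Google Cloud or Mistral AI.

```python
# Illustrative sketch: experimenting with Mistral-7B-Instruct in a notebook.
# Assumes the `transformers` and `accelerate` packages are installed and a GPU
# with enough memory is attached; the checkpoint name is the public Hugging Face ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on one accelerator
    device_map="auto",          # let accelerate place layers on the available GPU(s)
)

prompt = "[INST] Summarize the benefits of multi-cloud deployments. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```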

The integration of Mistral AI's models within Vertex AI leverages vLLM, an optimized Large Language Model (LLM) serving framework that enhances serving throughput. Users can conveniently deploy a vLLM image on a Vertex AI endpoint for inference and choose from a variety of accelerators to optimize model inference performance.
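A minimal sketch of that deployment path with the google-cloud-aiplatform Python SDK might look like the following; the project ID, vLLM container image URI, environment variable names, and machine and accelerator choices are placeholders, not values published by Google Cloud.

```python
# Hypothetical sketch: serving a Mistral model with a vLLM container on a
# Vertex AI endpoint. The container image URI and environment variable names
# are placeholders; only the SDK calls themselves are real.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # example values

model = aiplatform.Model.upload(
    display_name="mistral-7b-vllm",
    serving_container_image_uri=(
        "us-docker.pkg.dev/my-project/serving/vllm:latest"  # placeholder image
    ),
    serving_container_ports=[8080],
    serving_container_environment_variables={
        "MODEL_ID": "mistralai/Mistral-7B-Instruct-v0.1",  # assumed variable name
    },
)

endpoint = model.deploy(
    machine_type="g2-standard-12",   # one of several accelerator-backed options
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
)
print(endpoint.resource_name)
```

The accelerator choice in the sketch is arbitrary; as noted above, users can pick from a variety of accelerators to balance cost against inference throughput.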

With Vertex AI model deployment, users can take full advantage of the Vertex AI Model Registry, a centralized repository for managing the lifecycle of Mistral AI models and their own fine-tuned models. The Model Registry gives users a comprehensive overview of their models, making it easier to organize them, track them, and train new versions. Users can assign a particular model version to an endpoint directly from the registry or use aliases to deploy models onto an endpoint.
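To make the versioning idea concrete, here is a hedged sketch of registering a fine-tuned model as a new version under an existing registry entry and deploying it by alias; the model IDs, image URI, and alias name are invented for illustration.

```python
# Hypothetical sketch: adding a fine-tuned model as a new version in the
# Vertex AI Model Registry and deploying that version by alias.
# All IDs, URIs, and alias names below are illustrative placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

new_version = aiplatform.Model.upload(
    parent_model="projects/my-project/locations/us-central1/models/1234567890",  # placeholder
    display_name="mistral-7b-finetuned",
    serving_container_image_uri="us-docker.pkg.dev/my-project/serving/vllm:latest",  # placeholder
    version_aliases=["candidate"],   # alias that can later be used to reference this version
    is_default_version=False,
)

# Reference the version through its alias and deploy it to an endpoint.
candidate = aiplatform.Model(model_name="1234567890@candidate")
endpoint = candidate.deploy(
    machine_type="g2-standard-12",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
)
```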

Mistral AI's open-source foundational model, Mistral-7B, gained significant attention upon its release last month. Despite having only 7 billion parameters, it outperformed Meta's 13 billion parameter Llama 2 model on important benchmarks and achieved performance similar to Meta's massive 70 billion parameter Llama 2 model. Mistral-7B's launch as an uncensored base model was a groundbreaking move, setting it apart from other models in the field.

The collaboration between Google and Mistral AI is set to revolutionize the integration of AI into various industries, enabling smaller businesses to leverage cutting-edge technology and enhance their products and services.
