Google launches Gemma: new open source AI tool for developers
Google launched another AI model only a week after it announced the release of its Gemini models. The new release, Gemma, is part of a family of “lightweight open-source models,” starting with Gemma 2B and Gemma 7B. The large language models are said to be “inspired by Gemini,” and are designed to be used by researchers and developers to innovate safely with AI.
In a new report, researchers at Google DeepMind wrote: “We believe the responsible release of LLMs is critical for improving the safety of frontier models and enabling the next wave of LLM innovations.”
Introducing Gemma: a family of lightweight, state-of-the-art open models for developers and researchers to build with AI.
We’re also releasing tools to support innovation and collaboration – as well as to guide responsible use.
Get started now. → https://t.co/nsoFqfHffY
— Google DeepMind (@GoogleDeepMind) February 21, 2024
The Gemma models are named for their parameter counts. Gemma 7B is a seven-billion-parameter model designed for efficient deployment and development on GPUs and TPUs, while Gemma 2B is a two-billion-parameter model suited to CPU and on-device applications. “Each size is designed to address different computational constraints, applications, and developer requirements,” the team said.
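For illustration, the snippet below is a minimal sketch of running the smaller checkpoint on a CPU through the Hugging Face transformers library (one of the integrations Google highlights); the “google/gemma-2b” model ID, prompt, and generation settings are illustrative assumptions rather than details taken from the announcement.

```python
# Minimal sketch: CPU inference with the 2B checkpoint via Hugging Face transformers.
# The model ID "google/gemma-2b" and access/license terms are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")  # loads to CPU by default

# Tokenize a prompt, generate a short continuation, and decode it back to text.
inputs = tokenizer("Explain what a lightweight open model is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```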
Alongside the new models, Google said it is introducing a Responsible Generative AI Toolkit that offers “guidance and essential tools for creating safer AI applications with Gemma,” as well as a model debugging tool.
Google also compared Gemma with other open models, claiming it outperformed similarly sized alternatives on 11 of 18 text-based tasks and demonstrated “strong performance across academic benchmarks for language understanding, reasoning, and safety.”
Google announced that Gemma integrates with JAX, PyTorch, and TensorFlow through native Keras 3.0. Developers get ready-to-use Colab and Kaggle notebooks, along with integrations with Hugging Face, MaxText, and Nvidia’s NeMo, and once pre-trained or fine-tuned, the models can be run across any of these platforms.
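As a rough sketch of that Keras 3.0 path, the example below selects a backend (JAX, TensorFlow, or PyTorch) before loading a Gemma preset through KerasNLP; the “gemma_2b_en” preset name and the generation call are assumptions based on KerasNLP’s published Gemma support, not details from Google’s announcement.

```python
# Sketch of the Keras 3 multi-backend workflow: choose the backend before
# importing Keras/KerasNLP, then load a Gemma preset. Preset name is an assumption.
import os
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow" / "torch"

import keras_nlp

# Download the pre-trained 2B English checkpoint and generate text from a prompt.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
print(gemma_lm.generate("What is the capital of France?", max_length=64))
```

Because the backend is chosen by an environment variable, the same script can run unchanged on JAX, TensorFlow, or PyTorch, which is the point of the Keras 3.0 integration the article describes.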
How Gemma differs from Gemini
In a blog post, Google Cloud AI Vice President Burak Gokturk wrote, “With Vertex AI, builders can reduce operational overhead and focus on creating bespoke versions of Gemma that are optimized for their use case.” Gemma was designed so that developers can test their products in a contained environment, though Google warned that “rigorous safety testing” is required before widespread deployment.
Gemma is said to be designed with Google’s AI Principles at the forefront. To help ensure the pre-trained models are reliable, Google said automated techniques were used to filter certain personal information and other sensitive data out of the training sets. In addition, extensive fine-tuning and reinforcement learning from human feedback (RLHF) were used to align the instruction-tuned models with responsible behaviors.
With this in mind, Google is extending free credits to developers and researchers interested in using Gemma. Access is free via Kaggle or Colab, first-time Google Cloud users are eligible for a $300 credit, and researchers may apply for grants of up to $500,000 for their projects.
Featured image: Google / Canva