Just months after Google DeepMind unveiled Gemini — its most capable AI model ever — the London-based lab has released its compact offspring: Gemma.
Named after the Latin word for “precious stone,” Gemma is a new family of open models for developers and researchers. Google designed them for cost-efficient app and software development.
“Demonstrating strong performance across benchmarks for language understanding and reasoning, Gemma is available worldwide starting today,” Sundar Pichai, the company’s CEO, said on Twitter.
Gemma comes in two sizes — 2 billion and 7 billion parameters. Each of them has been released with pre-trained and instruction-tuned variants.
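For developers curious what the sizes and variants look like in practice, the snippet below is a minimal sketch of loading the instruction-tuned 2-billion-parameter model and generating text with the Hugging Face Transformers library. The library choice and the “google/gemma-2b-it” model identifier are assumptions, since the article does not specify a distribution channel.

```python
# Minimal sketch: run the instruction-tuned 2B Gemma variant locally.
# Assumes the Hugging Face `transformers` library and a hosted checkpoint
# named "google/gemma-2b-it" (identifier assumed, not stated in the article).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # swap in "google/gemma-7b-it" for the larger size
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain what an open model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```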
The lightweight models are descendants of Gemini, inheriting technical and infrastructure components from their parent. As a result, Google says, they offer “best-in-class performance.”
As evidence, the tech titan published eye-catching benchmark comparisons with Llama 2, a family of large language models (LLMs) that Meta released a year ago.
Gemma is built to run on a developer’s laptop or desktop computer, a capability that has drawn particular attention.
“A model being able to run directly on a laptop, with equal capabilities to Llama2, is an impressive feat and removes a huge adoption barrier for AI that many organisations possess,” said Victor Botev, the CEO of Iris.ai, an Oslo-based startup.
As well as touting Gemma’s performance, Google is promoting the model’s ethical design. Alongside the model weights, the company has released a new Responsible Generative AI Toolkit. In addition, the models are available only for “responsible commercial usage and distribution,” Google said.
Developers and researchers who meet those standards are invited to start experimenting with Gemma now.
Source: thenextweb.com