We needed Google to enter the Open-Source AI Space, and it happened!

Google's AI model can help you build chatbots, and many other useful tools. A good addition to the open-source AI industry.

Google is on a roll! After recently open-sourcing Magika, their AI-powered file identification tool, they have now introduced Gemma, a state-of-the-art open AI model built on foundations similar to those used to create the Gemini family of models.

Let's take a look at it without any further ado.

Gemma: What to Expect?

a screenshot of the gemma website banner

Developed by Google DeepMind and other teams at Google, the name “Gemma” takes cues from Gemini, and means “precious stone” in Latin.

With this launch, Google aims to provide model weights and development tools to accelerate progress, foster collaboration, and guide the responsible use of Gemma models.

They have released Gemma in two sizes: Gemma 2B with two billion parameters and Gemma 7B with seven billion parameters. Both feature pre-trained and instruction-tuned variants, and Gemma's terms are said to permit commercial usage and distribution by organizations of all sizes.

You can take a look at the terms and judge for yourself. 🤓

Moving on, Google says that Gemma is cross-device compatible, running on mobile devices, laptops, desktops, IoT devices, and the cloud. To ensure this, they have made optimizations across many AI hardware platforms, including Google Cloud TPUs and NVIDIA GPUs.

For the latter, they have partnered with NVIDIA to optimize Gemma for NVIDIA GPUs, from run-of-the-mill RTX-equipped AI PCs all the way to the bleeding-edge hardware found in data centers.

It doesn't stop there. NVIDIA will also soon add Gemma support to its Chat with RTX tech demo, letting you run a personalized chatbot locally.

Google also shared some benchmarks comparing Gemma with Meta's Llama 2 model, where it was able to outperform Llama 2 by a considerable margin in various tasks such as math, reasoning, and coding.

They also shared a detailed report on how Gemma sizes up against other AI models.

a photo showing benchmarking scores of gemma compared to llama 2
Source: Google

Google also claims that Gemma follows their AI Principles. By using automated techniques to filter out personal information and other sensitive data from training sets, they can provide pre-trained models that are safer and more reliable.

Google also mentions that they have employed “extensive fine-tuning and reinforcement learning from human feedback (RLHF)” to keep Gemma's instruction-tuned models in line with “responsible behaviors”.

To complement Gemma, they also offer a Responsible Generative AI Toolkit that aims to help developers and researchers focus on building safe and responsible AI (RAI) applications.

For more details on how Google pulled off Gemma, you can refer to the announcement blog and the blog on how they are building open models responsibly.

Should OpenAI be worried?

Well, I would say yes. We needed a big player like Google to enter the open-source AI space, not because we want Google to dominate, but because a big player entering the market is likely to shake things up.

More importantly, the model has close ties with its Gemini AI, making things very intriguing.

And, competition never hurts, right? Companies like OpenAI, which pride themselves on their closed-source solutions, need to understand the importance of the situation we are in right now.

Closing access to something as important as AI is not the way to go; it only serves to alienate a large part of the global AI community who could have taken advantage of its capabilities.

I know AI can be misused, but that can be curtailed with the right safeguards in place to deter misuse.

After reading all that, do you want to try out Gemma?

Well, Google is offering free access to Gemma on Kaggle, a free tier on Colab notebooks, and $300 in credits for first-time Google Cloud users. Researchers can also apply for up to $500,000 in Google Cloud credits using this Google Form.

For other ways to implement Gemma, you can refer to the official website for more information.

Furthermore, Google has hosted a PyTorch implementation and a standalone C++ inference engine for Gemma on their official GitHub page.

The possibilities with Google opening up a model like this are endless! What do you have in mind with Google's Gemma? Share your thoughts in the comments!
