Google Unveils PaLM 2 to Tackle GPT-4 Effect

Yana Khare 12 May, 2023 • 3 min read


On Wednesday, Google introduced PaLM 2, a family of foundational language models comparable to OpenAI’s GPT-4. At its Google I/O event in Mountain View, California, Google revealed that the model already powers 25 of its products, including the Bard conversational AI assistant.

Also Read: Google Bard Goes Global: Chatbot Now Available in Over 180 Countries

Features of PaLM 2


According to Google, PaLM 2 supports over 100 languages and can perform “reasoning,” code generation, and multilingual translation. During his 2023 Google I/O keynote, Google CEO Sundar Pichai said it comes in four sizes: Gecko, Otter, Bison, and Unicorn. Gecko is the smallest and can reportedly run on a mobile device. Aside from Bard, PaLM 2 is also behind AI features in Docs, Sheets, and Slides.

PaLM 2 vs. GPT-4

All that is fine, but how does PaLM 2 stack up to GPT-4? In the PaLM 2 Technical Report, it appears to beat GPT-4 on some mathematical, translation, and reasoning tasks. But the reality might not match Google’s benchmarks: in a cursory evaluation of the PaLM 2 version of Bard, Ethan Mollick found that its performance appeared worse than GPT-4 and Bing on various informal language tests.

Also Read: ChatGPT v/s Google Bard: A Head-to-Head Comparison

PaLM 2 Parameters

The first PaLM was notable for its massive size: 540 billion parameters. Parameters are numerical variables that serve as the learned “knowledge” of the model, enabling it to make predictions and generate text based on the input it receives. More parameters roughly mean more complexity, but there is no guarantee they are used efficiently. By comparison, OpenAI’s GPT-3 (from 2020) has 175 billion parameters. OpenAI has never disclosed the number of parameters in GPT-4.
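To see where figures like “175 billion” come from, here is a minimal sketch using a common rule of thumb for dense transformer models (an illustrative estimate only, not Google’s or OpenAI’s accounting; it ignores embedding tables and biases):

```python
# Rough parameter-count estimate for a dense transformer.
# Rule of thumb: ~12 * n_layers * d_model^2, since each layer holds
# roughly 4*d_model^2 attention weights plus 8*d_model^2 MLP weights.
def approx_params(n_layers: int, d_model: int) -> int:
    return 12 * n_layers * d_model * d_model

# GPT-3's published configuration: 96 layers, hidden size 12288.
gpt3 = approx_params(96, 12288)
print(f"{gpt3 / 1e9:.0f}B")  # ≈ 174B, close to the published 175B figure
```

The small gap from the official 175B comes from the embedding matrix and other terms the rule of thumb drops; without a disclosed layer count and hidden size, no such estimate is possible for PaLM 2.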

Lack of Transparency

So that leads to the big question: Just how “large” is PaLM 2 in terms of parameter count? Google doesn’t say. This has frustrated some industry experts who often push for transparency in what makes AI models tick. And that’s not the only aspect of the model Google has been quiet about. The company says PaLM 2 has been trained on “a diverse set of sources: web documents, books, code, mathematics, and conversational data,” but does not go into detail about what exactly that data is.

Concerns About Training Data


The dataset likely includes a wide variety of copyrighted material used without permission and potentially harmful material scraped from the Internet.

Future Developments

And as far as LLMs go, PaLM 2 is far from the end of the story. In the I/O keynote, Pichai mentioned that a newer multimodal AI model called “Gemini” was currently in training.

Learn More: An Introduction to Large Language Models (LLMs)

Our Say

In conclusion, while PaLM 2 may fall short in some areas, it represents an important milestone in the development of natural language processing technology. As we move closer to the next generation of language models, it will be fascinating to see how PaLM 2 evolves and matures, and whether it can take on OpenAI’s GPT-4.



