Starling-7B: LLM with Reinforcement Learning from AI Feedback

NISHANT TIWARI 06 Dec, 2023

The research team at UC Berkeley has introduced Starling-7B, an open-source large language model (LLM) trained with Reinforcement Learning from AI Feedback (RLAIF). Built on Nectar, a GPT-4-labeled ranking dataset, together with a reward-training and policy-tuning pipeline, Starling-7B-alpha sets a new standard for open language models, outscoring every model on MT-Bench except OpenAI’s GPT-4 and GPT-4 Turbo.


The Potential of Reinforcement Learning

While supervised fine-tuning has demonstrated efficacy in chatbot system development, the potential of Reinforcement Learning from Human Feedback (RLHF) or AI feedback (RLAIF) in enhancing models at scale has been a subject of limited exploration. Earlier models like Zephyr-7B and Neural-Chat-7B have not fully showcased RLHF’s potential in comparison to leading Supervised Fine-Tuning (SFT) models.

To address this gap, the research team introduces Nectar, a meticulously crafted high-quality ranking dataset specifically tailored for chat, consisting of 183K prompts and 3.8M pairwise comparisons. This dataset aims to facilitate more thorough research into RLHF, offering a diverse range of prompts sourced from various models.
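For readers who want to look at Nectar directly, the sketch below loads it with the HuggingFace datasets library and prints one record. The dataset id and field names are assumptions based on the release, not a documented schema; check the dataset card before relying on them.

```python
# Minimal sketch: inspecting a ranking dataset with HuggingFace `datasets`.
# The dataset id and field names are assumptions; verify them on the dataset card.
from datasets import load_dataset

nectar = load_dataset("berkeley-nest/Nectar", split="train")  # assumed id

example = nectar[0]
print(example.keys())          # inspect the real schema first
print(example.get("prompt"))   # the chat prompt (assumed field name)
print(example.get("answers"))  # GPT-4-ranked candidate responses (assumed field name)
# Pairwise comparisons are derived from these per-prompt rankings.
```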

The release of the reward model, Starling-RM-7B-alpha, and the fine-tuned LLM, Starling-LM-7B-alpha, on HuggingFace marks a significant advancement in open-source AI research. Notably, the model’s MT-Bench score rose from 7.81 to 8.09, while its AlpacaEval score, which measures a chatbot’s helpfulness, improved from 88.51% to 91.99%.
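As a quick start, the fine-tuned chat model can be loaded with the transformers library roughly as sketched below. The model id and the OpenChat-style prompt format are taken from the public release, but treat both as assumptions and confirm them against the model card.

```python
# Minimal sketch: generating one reply from the released chat model.
# Model id and prompt template are assumptions; see the HuggingFace model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "berkeley-nest/Starling-LM-7B-alpha"  # assumed HuggingFace id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumed OpenChat-style single-turn template.
prompt = "GPT4 Correct User: What is RLAIF?<|end_of_turn|>GPT4 Correct Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```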


Evaluation of the Model

Evaluating Starling-7B presents unique challenges. The LLM exhibits enhanced helpfulness and safety post-RLHF, as evidenced by improvements in MT-Bench and AlpacaEval scores. However, its basic capabilities in knowledge-based QA, math, and coding have remained steady or regressed slightly.


The model’s incorporation into the LMSYS Chatbot Arena for direct chat and anonymous comparisons provides a platform for testing human preferences. The evaluation also highlights the limitations of the OpenLLM Leaderboard as a benchmark for chat models, emphasizing the more nuanced assessments offered by AlpacaEval and MT-Bench.


Goodhart’s Law for Synthetic Preference Data

A crucial aspect to consider is Goodhart’s Law for synthetic preference data. While a higher MT-Bench score indicates improved model performance according to GPT-4, it doesn’t necessarily correlate with human preference. RLHF primarily enhances response style, particularly in aspects of helpfulness and safety, showcasing the potential of scaling online RL methods with extensive preference data.
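To make the Goodhart concern concrete: reward models of this kind are typically trained with a pairwise (Bradley-Terry) loss on ranked comparisons, and the policy is then tuned to maximize that learned score. The snippet below is a generic illustration of this loss, not the Starling team’s training code; the reward values are placeholders.

```python
# Generic pairwise (Bradley-Terry) reward loss: -log(sigmoid(r_chosen - r_rejected)).
# Illustrative only; rewards are placeholder scalars, not model outputs.
import torch
import torch.nn.functional as F

def pairwise_reward_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Push the preferred response's reward above the rejected one's.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

r_chosen = torch.tensor([1.2, 0.4, 0.9])     # rewards for GPT-4-preferred responses
r_rejected = torch.tensor([0.3, 0.5, -0.1])  # rewards for the rejected alternatives
print(pairwise_reward_loss(r_chosen, r_rejected).item())
```

Because the policy optimizes this learned proxy of GPT-4’s preferences rather than human judgment itself, gains on GPT-4-judged benchmarks such as MT-Bench need not translate one-for-one into human preference.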

Limitations

Despite its remarkable performance, Starling-7B has its limitations, struggling with tasks involving reasoning or mathematics. Additionally, susceptibility to jailbreaking prompts and occasional verbosity in outputs are acknowledged. The research team is committed to continuous improvement, inviting collaboration from the community to enhance the open dataset, reward models, and language models with RLHF.

Our Say

Starling-7B, with its RLAIF approach and meticulous dataset creation, is a testament to the potential of reinforcement learning in language models. While challenges and limitations persist, the commitment to improvement and collaboration with the broader community positions Starling-7B as a beacon in the evolving landscape of AI research. Stay tuned for more updates as the team delves deeper into refining RLHF mechanisms and contributing to the forefront of AI safety research.
