
fix: skip_reference_policy_logprobs_calculation=true crashes training #2174

Open
ShriyaRishab wants to merge 1 commit into NVIDIA-NeMo:main from ShriyaRishab:fix/issue-1968

Conversation

@ShriyaRishab

What does this PR do ?

Summary

Setting skip_reference_policy_logprobs_calculation=true in GRPO config crashes because:

  1. reference_policy_logprobs is never assigned to train_data when skipped
  2. use_reference_model() context manager crashes when no reference state dict exists

Fixes #1968

Root Cause

Three code paths needed fixes:

  1. grpo.py sync path: the train_data["reference_policy_logprobs"] assignment is missing
  2. grpo.py async path: the same assignment is missing
  3. Policy workers: use_reference_model() tries to swap non-existent state dicts
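To make path 3 concrete, here is a minimal, dependency-free sketch of the pre-fix failure; the worker class is hypothetical and only the method and attribute names follow this PR's description:

```python
from contextlib import contextmanager

class BrokenPolicyWorker:
    """Hypothetical stand-in for a policy worker before this fix."""
    def __init__(self, init_reference_model: bool):
        if init_reference_model:
            # Only set when a reference model is actually initialized.
            self.reference_state_dict = {"weights": "ref"}

    @contextmanager
    def use_reference_model(self):
        # Pre-fix: unconditionally swap in the reference weights.
        # Raises AttributeError when the reference model was skipped.
        saved = self.reference_state_dict
        try:
            yield
        finally:
            self.reference_state_dict = saved

worker = BrokenPolicyWorker(init_reference_model=False)
try:
    with worker.use_reference_model():
        pass
except AttributeError as exc:
    print(f"crash: {exc}")
```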

Fix

  1. When skip is enabled, assign torch.zeros_like(prev_logprobs) to reference_policy_logprobs
  2. Added _has_reference_model() base method
  3. In get_reference_policy_logprobs(): return zeros if no reference model
  4. In all three worker use_reference_model() context managers: yield without swapping if no reference state dict
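The four steps above can be sketched in one place; the class, the `_has_reference_model()` guard, and the list-based `zeros_like` here are illustrative stand-ins (the real code uses `torch.zeros_like`), not the actual NeMo RL implementation:

```python
from contextlib import contextmanager

def zeros_like(xs):
    """Dependency-free stand-in for torch.zeros_like in this sketch."""
    return [0.0] * len(xs)

class PolicyWorker:
    """Illustrative worker; names mirror the PR description."""
    def __init__(self, reference_state_dict=None):
        self.reference_state_dict = reference_state_dict

    def _has_reference_model(self) -> bool:
        # Step 2: new base-class helper.
        return self.reference_state_dict is not None

    @contextmanager
    def use_reference_model(self):
        if not self._has_reference_model():
            yield          # step 4: no state-dict swap when there is nothing to swap
            return
        saved = self.reference_state_dict
        try:
            yield
        finally:
            self.reference_state_dict = saved

    def get_reference_policy_logprobs(self, prev_logprobs):
        if not self._has_reference_model():
            return zeros_like(prev_logprobs)   # step 3: zeros fallback
        with self.use_reference_model():
            return list(prev_logprobs)         # placeholder for the real forward pass

# Step 1 (grpo.py): when the skip flag is set, still populate the key.
prev_logprobs = [0.3, -1.2, 0.7]
train_data = {"prev_logprobs": prev_logprobs,
              "reference_policy_logprobs": zeros_like(prev_logprobs)}
print(train_data["reference_policy_logprobs"])  # [0.0, 0.0, 0.0]
```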

Issues

Closes #1968

Usage

  • Set skip_reference_policy_logprobs_calculation=true in the GRPO config (together with reference_policy_kl_penalty=0)
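A hedged sketch of the relevant knobs, shown as a plain dict; the exact nesting and key paths in the real YAML config are assumptions:

```python
# Illustrative GRPO settings from this PR; exact config paths may differ.
grpo_config = {
    "skip_reference_policy_logprobs_calculation": True,  # skip the reference forward pass
    "reference_policy_kl_penalty": 0.0,                  # skipping assumes no KL penalty
}

# Sanity check: skipping the reference logprobs only makes sense when the
# KL penalty is zero, since the penalty would otherwise need real logprobs.
if grpo_config["skip_reference_policy_logprobs_calculation"]:
    assert grpo_config["reference_policy_kl_penalty"] == 0.0
print("config ok")
```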

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Fixes NVIDIA-NeMo#1968: Setting skip_reference_policy_logprobs_calculation=true
with reference_policy_kl_penalty=0 crashed training in three ways:

Bug 1: use_reference_model() context manager crash when reference model
was never initialized (AttributeError on reference_state_dict).
Fix: Added early-return guard in use_reference_model() for all three
worker types (megatron, dtensor v1, dtensor v2) - yields without
swapping when reference model is None/missing.

Bug 2: Async GRPO path unconditionally called
get_reference_policy_logprobs() without checking the skip flag.
Fix: Added the same skip guard as the sync path, setting zeros_like
for reference_policy_logprobs when skipping.

Bug 3: Missing reference_policy_logprobs key in train_data causing
shape mismatches downstream in loss computation.
Fix: Both sync and async paths now explicitly set
train_data['reference_policy_logprobs'] = zeros_like(prev_logprobs)
when skipping. Also added a _has_reference_model() helper and
zeros fallback in base_policy_worker.get_reference_policy_logprobs()
as defense-in-depth.
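The sync/async guard from Bugs 2 and 3 can be sketched as follows; the function name and the list-based `zeros_like` stand-in are illustrative, not the actual NeMo RL code (which uses torch.zeros_like):

```python
def zeros_like(xs):
    """Dependency-free stand-in for torch.zeros_like."""
    return [0.0] * len(xs)

def attach_reference_logprobs(train_data, policy, skip):
    """Guard shared by the sync and async GRPO paths (illustrative)."""
    prev = train_data["prev_logprobs"]
    if skip:
        # Bugs 2/3: keep the key populated with zeros of the right shape
        # so downstream loss code never sees a missing tensor.
        train_data["reference_policy_logprobs"] = zeros_like(prev)
    else:
        train_data["reference_policy_logprobs"] = policy.get_reference_policy_logprobs(prev)
    return train_data

data = attach_reference_logprobs({"prev_logprobs": [0.5, -0.5]}, policy=None, skip=True)
print(data["reference_policy_logprobs"])  # [0.0, 0.0]
```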
@ShriyaRishab ShriyaRishab requested review from a team as code owners March 30, 2026 20:40
@copy-pr-bot

copy-pr-bot bot commented Mar 30, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.



Development

Successfully merging this pull request may close these issues.

skip_reference_policy_logprobs_calculation=true crashes training with RuntimeError / NameError
