
fix: skip loading reference model when KL penalty is zero#2178

Open
yfw wants to merge 1 commit into main from yifu/skip-ref-model-when-no-kl

Conversation

Contributor

@yfw yfw commented Mar 31, 2026

When reference_policy_kl_penalty is 0, the reference model is unused during GRPO training. Pass init_reference_model=False to avoid allocating memory for the reference model weights.

Closes #1957
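The change described above can be sketched as follows. This is a minimal illustration, not the actual nemo_rl code: `setup_policy` is a hypothetical stand-in for the real setup path, while the config key `reference_policy_kl_penalty` and the `init_reference_model` keyword come from this PR.

```python
def setup_policy(master_config, **overrides):
    """Hypothetical stand-in: returns the kwargs the real setup would receive."""
    kl_penalty = master_config["loss_fn"]["reference_policy_kl_penalty"]
    # Only allocate reference model weights when the KL penalty actually uses them.
    kwargs = {"init_reference_model": kl_penalty > 0}
    kwargs.update(overrides)
    return kwargs

# With a zero penalty, the reference model is never initialized:
assert setup_policy({"loss_fn": {"reference_policy_kl_penalty": 0}}) == {
    "init_reference_model": False
}
```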

What does this PR do?

When reference_policy_kl_penalty is 0, the reference model is unused during GRPO training, so this PR passes init_reference_model=False to avoid allocating memory for the reference model weights.

Issues

Closes #1957
Usage

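A hedged usage sketch (the config key comes from this PR; the surrounding file layout is illustrative): setting the KL penalty to 0 in the loss config is all that is needed, and the reference model is then skipped automatically.

```yaml
# Illustrative GRPO config fragment: with a zero KL penalty,
# the reference model weights are no longer loaded.
loss_fn:
  reference_policy_kl_penalty: 0
```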

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.


When reference_policy_kl_penalty is 0, the reference model is unused
during GRPO training. Pass init_reference_model=False to avoid
allocating memory for the reference model weights.

Closes #1957

Co-Authored-By: Jiaqi Zeng <jiaqiz@nvidia.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
@yfw yfw requested a review from a team as a code owner March 31, 2026 00:01

copy-pr-bot bot commented Mar 31, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

```python
policy_config["megatron_cfg"]["train_iters"] = total_train_iters

# Define initialization functions that will be used in all paths
init_reference_model = master_config["loss_fn"]["reference_policy_kl_penalty"] > 0
```
@yuki-97 (Contributor)


Shall we set skip_reference_policy_logprobs_calculation to True in this situation? Otherwise I think we will get an error when calling get_reference_policy_logprobs.

I also think it's better to add a functional test (or modify an existing one) for reference_policy_kl_penalty == 0.

```python
policy_config["megatron_cfg"]["train_iters"] = total_train_iters

# Define initialization functions that will be used in all paths
init_reference_model = master_config["loss_fn"]["reference_policy_kl_penalty"] > 0
```
Collaborator


BUG: Setting init_reference_model=False here prevents reference model weights from being loaded, but the sync training loop (line 1754) still calls policy.get_reference_policy_logprobs() unless grpo.skip_reference_policy_logprobs_calculation is explicitly True.

When reference_policy_kl_penalty=0 and the skip flag is unset, use_reference_model() accesses self.reference_model_state_dict which was never initialized → AttributeError.

Multiple existing configs are affected:

  • examples/nemo_gym/grpo_nanov3.yaml
  • examples/configs/recipes/llm/dapo-qwen2.5-7b.yaml
  • examples/configs/recipes/llm/grpo-deepscaler-1.5b-8K.yaml
  • examples/configs/recipes/llm/grpo-gspo-deepscaler-1.5b-8K.yaml

All have reference_policy_kl_penalty: 0 without setting skip_reference_policy_logprobs_calculation: true.

Suggested fix — auto-derive the skip flag:

```diff
 init_reference_model = master_config["loss_fn"]["reference_policy_kl_penalty"] > 0
+# Auto-skip reference logprob calculation when reference model is not loaded
+if not init_reference_model:
+    master_config["grpo"]["skip_reference_policy_logprobs_calculation"] = True
```

@terrykong
Collaborator

Bug: async GRPO path missing reference logprob skip guard

The async GRPO path at nemo_rl/algorithms/grpo.py:2795 unconditionally calls policy.get_reference_policy_logprobs() — it doesn't check skip_reference_policy_logprobs_calculation. Even if the sync path is fixed, async GRPO with reference_policy_kl_penalty=0 will crash with AttributeError because reference_model_state_dict / reference_state_dict was never initialized.

This needs the same guard as the sync path (line 1754).


Re: @yuki-97's comment

Great catch — both points are valid:

  1. skip_reference_policy_logprobs_calculation: Without this flag set to true, the sync path (line 1754) will still call get_reference_policy_logprobs(), which invokes use_reference_model() and accesses self.reference_model_state_dict that was never initialized → AttributeError. Multiple existing configs with reference_policy_kl_penalty: 0 don't set the skip flag (e.g., grpo_nanov3.yaml, dapo-qwen2.5-7b.yaml, grpo-deepscaler-1.5b-8K.yaml).

  2. Functional test: Agreed — a test for reference_policy_kl_penalty == 0 would catch this. Note the async path needs the same fix.
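The auto-derivation plus a unit-style check could be sketched like this. `derive_init_flags` is a hypothetical helper name; the config keys match the suggested change in the review thread.

```python
def derive_init_flags(master_config):
    """Derive init_reference_model from the KL penalty and auto-set the skip flag."""
    init_reference_model = master_config["loss_fn"]["reference_policy_kl_penalty"] > 0
    if not init_reference_model:
        # Reference model is never loaded, so reference logprobs cannot be computed.
        master_config["grpo"]["skip_reference_policy_logprobs_calculation"] = True
    return init_reference_model


# kl_penalty == 0: no reference model, skip flag forced on
cfg = {"loss_fn": {"reference_policy_kl_penalty": 0}, "grpo": {}}
assert derive_init_flags(cfg) is False
assert cfg["grpo"]["skip_reference_policy_logprobs_calculation"] is True

# kl_penalty > 0: reference model loaded, grpo config untouched
cfg = {"loss_fn": {"reference_policy_kl_penalty": 0.01}, "grpo": {}}
assert derive_init_flags(cfg) is True
assert "skip_reference_policy_logprobs_calculation" not in cfg["grpo"]
```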

Generated by Claude Code



Development

Successfully merging this pull request may close these issues.

[super-pr] skip loading ref model when kl>0

3 participants