
Add Support for Packed Sequence Format in GPT Training #1696


Draft · wants to merge 3 commits into main

Conversation

sbhavani
Collaborator

Overview

Adds support for the packed sequence format ('thd') in GPT training when using Transformer Engine's DotProductAttention.
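
For context, here is a minimal sketch of what a packed ('thd') call into Transformer Engine's `DotProductAttention` looks like. The shapes, sequence lengths, and dtypes below are illustrative, not taken from this PR:

```python
# Illustrative only: two sequences (lengths 3 and 5) packed into a single
# token dimension of 8, with cumulative offsets marking sequence boundaries.
import torch
from transformer_engine.pytorch import DotProductAttention

num_heads, head_dim = 8, 64
attn = DotProductAttention(num_heads, head_dim, attn_mask_type="padding_causal")

# In 'thd' format tensors are [total_tokens, num_heads, head_dim]; there is
# no batch dimension, so no padding tokens are computed or stored.
cu_seqlens = torch.tensor([0, 3, 8], dtype=torch.int32, device="cuda")
q = torch.randn(8, num_heads, head_dim, dtype=torch.bfloat16, device="cuda")
k, v = torch.randn_like(q), torch.randn_like(q)

out = attn(
    q, k, v,
    qkv_format="thd",
    cu_seqlens_q=cu_seqlens,
    cu_seqlens_kv=cu_seqlens,
    max_seqlen_q=5,   # longest sequence in the pack
    max_seqlen_kv=5,
)
```

The offsets replace an explicit attention mask: each sequence attends only within its own `[cu_seqlens[i], cu_seqlens[i+1])` slice.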

Key Changes

  • Added --gpt-use-thd-qkv-format flag to enable packed sequence format
  • Added utility function get_cu_seqlens() to compute cumulative sequence lengths for the packed format (see the sketch after this list)
  • Modified forward_step to support packed sequence parameters when enabled
  • Optimized attention mask generation in the dataloader
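
The PR description does not show get_cu_seqlens() itself; as a hypothetical sketch, a utility in that role could build the offsets that Transformer Engine expects from per-sequence lengths:

```python
import torch

def get_cu_seqlens(seq_lengths: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: turn per-sequence lengths into the cumulative
    offsets ('cu_seqlens') used for 'thd' attention inputs.

    E.g. lengths [3, 5, 4] -> [0, 3, 8, 12]; entries i and i+1 bound
    sequence i inside the packed token dimension.
    """
    zero = torch.zeros(1, dtype=torch.int32, device=seq_lengths.device)
    return torch.cat([zero, torch.cumsum(seq_lengths, 0, dtype=torch.int32)])
```

The result is kept as int32 on the same device as the attention inputs, since that is the form the fused attention kernels consume.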
