
[QUESTION] why is optimizer state all in CPU memory in my runs? #1730

@husam-e

Description


Context:

I'm running Llama 7B pre-training via NeMo on two A3 Mega machines on GCP, each with 8 GPUs that have 80 GB of GPU RAM apiece.
I have disabled model.cpu_offloading (set it to false) in my NeMo config.

When I inspect the model state dict handed to my distributed checkpointing strategy during test runs, I notice that essentially all of the optimizer* keys' values in the dict being checkpoint-saved are on CPU, and that overall there is 6x as much tensor data on CPU as on GPU.
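For reference, here is how I'm tallying tensor bytes per device. This is a simplified sketch that assumes the state dict is a nested dict of plain torch tensors; NeMo's distributed checkpointing may wrap tensors in sharded objects, in which case the underlying tensor would need to be unwrapped first:

```python
from collections import defaultdict

import torch


def tensor_bytes_by_device(state_dict):
    """Recursively walk a (possibly nested) state dict and total tensor bytes per device."""
    totals = defaultdict(int)

    def visit(obj):
        if torch.is_tensor(obj):
            # Key by device string, e.g. "cpu" or "cuda:0"
            totals[str(obj.device)] += obj.numel() * obj.element_size()
        elif isinstance(obj, dict):
            for value in obj.values():
                visit(value)
        elif isinstance(obj, (list, tuple)):
            for value in obj:
                visit(value)

    visit(state_dict)
    return dict(totals)
```

Running this over the checkpoint state dict is what shows the ~6x CPU-to-GPU ratio described above.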

Why is that? Is it expected, and is it configurable? I would have expected most, if not all, tensor state to be on GPU, especially with model CPU offloading disabled and 80 GB of GPU RAM per GPU for a 7B model.

Thanks in advance!
