Description
Context:
Running Llama 7B pre-training via NeMo on two A3 Mega machines on GCP, each with 8 GPUs and 80 GB of GPU RAM per GPU.
I have disabled model.cpu_offloading (set it to false) in my NeMo config.
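For reference, this is roughly how I'm toggling the flag (a minimal sketch using an OmegaConf-style config; the exact key path may differ between NeMo versions):

```python
# Minimal sketch of the relevant config override, assuming a Hydra/OmegaConf-style
# NeMo config; key names here are illustrative and may differ in your NeMo version.
from omegaconf import OmegaConf

cfg = OmegaConf.create({"model": {"cpu_offloading": True}})
cfg.model.cpu_offloading = False  # disable CPU offloading of model state
print(OmegaConf.to_yaml(cfg))
```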
When I run some test runs and inspect the model state dict provided to my distributed checkpointing strategy, I notice that essentially all of the optimizer* keys' values in the dict being checkpoint-saved are on CPU, and that overall there is about 6x as much tensor data on CPU as on GPU.
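Here's roughly how I'm tallying tensor bytes per device (a minimal sketch; the real state dict handed to NeMo's distributed checkpointing may contain sharded tensor objects rather than plain tensors, so treat this as an approximation):

```python
# Rough sketch: sum tensor sizes per device in a (possibly nested) state dict.
# `state_dict` here stands in for the dict passed to the checkpointing strategy.
from collections import defaultdict
import torch

def bytes_per_device(state_dict):
    totals = defaultdict(int)
    for value in state_dict.values():
        if torch.is_tensor(value):
            totals[str(value.device)] += value.numel() * value.element_size()
        elif isinstance(value, dict):
            for device, size in bytes_per_device(value).items():
                totals[device] += size
    return dict(totals)

# Example with two small tensors, one on CPU and one on GPU if available.
sd = {
    "optimizer.state.exp_avg": torch.zeros(1024),
    "model.weight": torch.zeros(1024, device="cuda" if torch.cuda.is_available() else "cpu"),
}
print(bytes_per_device(sd))  # e.g. {'cpu': 4096, 'cuda:0': 4096}
```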
Curious why that is, whether it's expected, and whether it is configurable. I would have expected most if not all tensor state to be on GPU, especially with model CPU offloading disabled and 80 GB of GPU RAM per GPU for a 7B model.
Thanks in advance!