Activation memory estimation (in resource utilization) ignores layers where activation quantization is disabled #1293

Open
@elad-c

Description

Issue Type

Bug

Source

source

MCT Version

nightly

OS Platform and Distribution

No response

Python version

No response

Describe the issue

Currently, the sizes of unquantized activation tensors are ignored in MaxTensor and MaxCut. They need to be handled according to the quantization-preserving flag.
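
For illustration, a minimal sketch (plain PyTorch with hypothetical per-layer flags, not the MCT API) of how skipping layers whose activation quantization is disabled underestimates the peak activation memory:

```python
import torch
import torch.nn as nn

# Hypothetical flags: True = activation quantization enabled for that layer.
# In MCT these would come from each node's quantization config
# (e.g. the quantization-preserving / disabled-activation-quantization setting).
ACT_QUANT_ENABLED = {"conv": True, "relu": False, "linear": True}

def activation_bytes(t: torch.Tensor, bits: int = 8) -> int:
    # Size of one activation tensor at the given bit-width.
    return t.numel() * bits // 8

model = nn.Sequential(
    nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten(), nn.Linear(16 * 30 * 30, 10)
)
x = torch.randn(1, 3, 32, 32)

sizes, out = {}, x
for name, layer in zip(["conv", "relu", "flatten", "linear"], model):
    out = layer(out)
    sizes[name] = activation_bytes(out)

# Buggy estimate: unquantized activations are dropped entirely.
buggy_max = max(s for n, s in sizes.items() if ACT_QUANT_ENABLED.get(n, True))

# Correct estimate: an unquantized activation still occupies memory
# (at float width), so MaxTensor/MaxCut must count it.
correct_max = max(
    s if ACT_QUANT_ENABLED.get(n, True) else s * 4  # float32 vs. int8
    for n, s in sizes.items()
)
print(buggy_max, correct_max)  # the buggy estimate is 4x too small here
```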

Expected behaviour

No response

Code to reproduce the issue

The torch.expand node might cause an issue (its increased output tensor size is not accounted for).
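
As a hypothetical illustration of the torch.expand concern (not a confirmed repro): expand returns a view whose shape, and hence the activation size a memory estimator should attribute to the node, is larger than its input's:

```python
import torch

x = torch.randn(1, 512)        # input activation: 512 elements
y = x.expand(1024, 512)        # view with 1024 * 512 = 524288 elements

# expand itself allocates no new memory (it is a stride-0 view), but any
# downstream op that materializes it (e.g. contiguous(), elementwise add)
# consumes the full expanded size, so an estimator that uses the input
# size for this node underestimates the cut.
print(x.numel())               # 512
print(y.numel())               # 524288
print(y.contiguous().numel())  # 524288, now backed by real memory
```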

Log output

No response
