            Model ID on huggingface.co or path on disk to the model repository to export. Example:
            `model_name_or_path="meta-llama/Llama-3.2-1B"` or `model_name_or_path="/path/to/model_folder"`.
        export (`bool`, *optional*, defaults to `True`):
            If `True`, the model will be exported from eager to ExecuTorch after being fetched from huggingface.co.
            `model_name_or_path` must be a valid model ID on huggingface.co.
            If `False`, the previously exported ExecuTorch model will be loaded from a local path.
            `model_name_or_path` must be a valid local directory where a `model.pte` is stored.
        recipe (`str`, defaults to `""`):
            The recipe to use to do the export, e.g. "xnnpack". It is required to specify a recipe when `export` is `True`.
        config (`PretrainedConfig`, *optional*):
            Configuration of the pre-trained model.
        subfolder (`str`, defaults to `""`):
            In case the relevant files are located inside a subfolder of the model repo either locally or on
            huggingface.co, you can specify the folder name here.
        revision (`str`, defaults to `"main"`):
            Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id.
        cache_dir (`Optional[str]`, defaults to `None`):
            Path indicating where to store the cache. The default Hugging Face cache path will be used by default.
        force_download (`bool`, defaults to `False`):
            Whether or not to force the (re-)download of the model weights and configuration files, overriding the
            cached versions if they exist.
        local_files_only (`Optional[bool]`, defaults to `False`):
            Whether or not to only look at local files (i.e., do not try to download the model).
        use_auth_token (`Optional[Union[bool, str]]`, defaults to `None`):
            Deprecated. Please use the `token` argument instead.
        token (`Optional[Union[bool, str]]`, defaults to `None`):
            The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
            when running `huggingface-cli login` (stored in `huggingface_hub.constants.HF_TOKEN_PATH`).
        **kwargs:
            Additional configuration options passed to tasks and recipes.

        Returns:
            `ExecuTorchModelForCausalLM`: An instance of the ExecuTorch model for the text generation task.
        """
        if use_auth_token is not None:
            warnings.warn(
                "The `use_auth_token` argument is deprecated and will be removed soon. Please use the `token` argument instead.",
                FutureWarning,
            )
            if token is not None:
                raise ValueError("You cannot use both `use_auth_token` and `token` arguments at the same time.")
            token = use_auth_token

        if export:
            # Fetch the model from huggingface.co and export it to ExecuTorch
            if recipe == "":
                raise ValueError("Please specify a recipe to export the model for.")
            return cls._export(
                model_id=model_name_or_path,
                recipe=recipe,
                config=config,
                **kwargs,
            )
        else:
            # Load the ExecuTorch model from a local path
            return cls._from_pretrained(
                model_dir_path=model_name_or_path,
                config=config,
            )

    @classmethod
    def _from_pretrained(
        cls,
@@ -269,7 +191,7 @@ def _export(
     Args:
         model_id (`str`):
-            Model ID on huggingface.co, for example: `model_name_or_path="meta-llama/Llama-3.2-1B"`.
+            Model ID on huggingface.co, for example: `model_id="meta-llama/Llama-3.2-1B"`.
     recipe (`str`):
         The recipe to use to do the export, e.g. "xnnpack".