
[Feedback & Tutorial] YouTube video: local AnimeGamer deployment (WSL RAM tuning, community fixes) with a first look at the results #4

@softicelee2

Description


Hello AnimeGamer team and community!

I noticed that AnimeGamer, a very interesting project, was just open-sourced, and I made a YouTube video documenting the full process of deploying and running it locally (WSL Ubuntu on Windows 10). I hope it can serve as a reference for others who are interested, and I'd also like to share some issues I ran into during deployment and use, along with initial feedback.

The video is here: https://youtu.be/jb0eO2FJT6A

The video covers:

(00:10) Project introduction: AnimeGamer's goal (placing anime characters into scenes via prompts).

(01:01) Walking through the GitHub and Hugging Face pages: project status and the models provided (currently two characters).

(02:40) Model requirements: you need to download the character models, the multimodal model, and the video generation model. The files are large, so download them in advance.

(03:49) Key environment setup - WSL RAM adjustment:

My machine has 64 GB of RAM; the default WSL allocation of 32 GB is not enough for inference (the first inference step gets killed).

The video shows how to raise the WSL memory allocation to 48 GB by editing the .wslconfig file in the user's home directory; only then did inference run to completion.
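For reference, the .wslconfig change described above might look like the following (48GB is the value that worked on my 64 GB machine; adjust it to your hardware, and note WSL2 otherwise caps memory at a fraction of physical RAM):

```ini
; %UserProfile%\.wslconfig - takes effect after running `wsl --shutdown`
[wsl2]
memory=48GB
```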

(04:56) Important deployment note - use the community-fixed version:

The project had only been open-source for 4 days at recording time; the official install guide and code have some problems, and following the official steps as written does not produce a working setup.

Users in the GitHub issues had already identified the problems and provided fixes.

The video deploys from a community member's fixed fork ([clone URL mentioned at 05:49 in the video]). (Hoping the fixes get merged upstream.)

(06:27) Detailed install steps (based on the fixed fork):

Install the system build tools (build-essential).

Manually install the CUDA Toolkit (download, install, configure environment variables).

Clone the fixed project code.

Create and activate a Conda environment.

Install dependencies from the fixed requirements.txt.
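The environment-variable part of the CUDA Toolkit step above might look like this (the /usr/local/cuda prefix is an assumption; point it at whichever version you actually installed):

```shell
# Assumed CUDA install prefix; adjust to your installed version,
# e.g. /usr/local/cuda-12.4
CUDA_HOME=/usr/local/cuda
# Put the CUDA compiler and libraries on the search paths
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"
# Append the export lines to ~/.bashrc to make them persistent across shells
echo "$PATH"
```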

(10:01) Copy the pre-downloaded model files into the project directory.

(10:56) Prepare the prompts.txt file for inference (the video cuts the number of generations down to 4 to keep the demo short).

(11:56) Run inference: execute the two main inference commands. Memory usage is again very high (close to 48 GB).

(13:36) Results and analysis:

Play back the 4 generated videos and compare them against prompts.txt.

Observations:

Some videos produce the character and background, but the motion/pose can look "odd".

Some videos match the prompt's background, but the camera movement/viewpoint is "not quite right".

Some videos fail to generate the character specified in the prompt at all, leaving only the background.

Conclusion: at this stage there is a clear gap between the generated output and the prompt, and results are not yet stable.

Summary of feedback:

Early documentation and code issues: the initial official install guide and code are broken, and a community fix is needed to get a working run (hoping for an official update).

High memory requirements: inference needs well over 32 GB of RAM. WSL users have to raise the allocation manually in .wslconfig, so it would help to state the memory requirement explicitly in the docs.

Output quality: inference results are still unstable, with inaccurate motion/viewpoints and sometimes missing elements (e.g. the character).

Potential: despite the early stage, the concept is novel and the project is worth following.

I hope this detailed deployment walkthrough and first-look feedback are useful to the project and to other users. This is a very promising project; thanks to the team for open-sourcing it!

Activity

donahowe (Collaborator) commented on Apr 6, 2025

Thank you very much for your interest and for sharing! This is our first attempt at promoting research on anime games. The results are not perfect yet, but we will keep exploring and improving.

NOFOX commented on Apr 6, 2025

Nice video. WSL isn't required, by the way. The demo results are indeed mediocre, and single-character only. The models are really large, almost 40 GB per character. Given the potential of this direction, I'm waiting for future training and optimization. One more thing: is there any recommendation or format requirement for the prompts to put in the demo? For example, for multi-character cases, can they be written like the following? (When Character lists several names, are they joined with "and"? For the Motion and Background fields, should the description be brief or as detailed as possible, or is it enough to supply the important keywords?)
Character: Pazu and Qiqi; Motion: peacefully stand in class room; Background: day, desk and blackboard report.
Character: Qiqi; Motion: smoothly fly on broomstick; Background: day, windows and tree.
Character: Pazu and Qiqi; Motion: Pazu stood up and pulled Qiqi; Background: day, windows and tree.
Character: Qiqi; Motion: Suddenly fainting; Background: day, ground and desk.
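For what it's worth, the field layout in the examples above can be checked mechanically. Here is a minimal parser sketch; the Character/Motion/Background field names come from the examples, and everything else (semicolon separators, "and"-joined names) is an assumption about the format, not a documented requirement:

```python
def parse_prompt(line: str) -> dict:
    """Split a 'Character: ...; Motion: ...; Background: ...' line into fields."""
    fields = {}
    for part in line.strip().rstrip(".").split(";"):
        key, _, value = part.partition(":")
        fields[key.strip()] = value.strip()
    return fields

prompt = ("Character: Pazu and Qiqi; Motion: peacefully stand in class room; "
          "Background: day, desk and blackboard report.")
parsed = parse_prompt(prompt)
# If multi-character prompts really are "and"-joined, they split back out like so:
characters = [c.strip() for c in parsed["Character"].split(" and ")]
```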

XuJianzhi commented on Apr 7, 2025

Is this the right way to lay out the model paths? The README only says to move the downloads over, but not at which level. (Sorry, I can't open the video to check.)

```
(animegamer) root@job-3932-1733411462-mc6rb:/data/try/AnimeGamer# ls checkpoints/
AnimeGamer  Mistral-7B-Instruct-v0.1  placeholder.md  vae  vae.zip
(animegamer) root@job-3932-1733411462-mc6rb:/data/try/AnimeGamer# ls checkpoints/AnimeGamer/
LICENSE  MLLM-Qiqi  MLLM-Sosuke  README.md  VDM_Decoder-Qiqi  VDM_Decoder-Sosuke
```

softicelee2 (Author) commented on Apr 7, 2025

> Is this the right way to lay out the model paths? The README only says to move the downloads over, but not at which level. (Sorry, I can't open the video to check.)

You can follow the install guide below; I verified in the video that it works: https://github.com/softicelee2/aishare/blob/main/73/AnimeGamer%EF%BC%9A%E8%85%BE%E8%AE%AF%E5%BC%80%E6%BA%90AI%EF%BC%8C%E4%B8%80%E5%8F%A5%E8%AF%9D%E7%94%9F%E6%88%90%E5%8A%A8%E6%BC%AB%E6%B8%B8%E6%88%8F%E5%9C%BA%E6%99%AF%EF%BC%81.md

XuJianzhi commented on Apr 7, 2025

> > Is this the right way to lay out the model paths? The README only says to move the downloads over, but not at which level. (Sorry, I can't open the video to check.)
>
> You can follow the install guide below; I verified in the video that it works: https://github.com/softicelee2/aishare/blob/main/73/AnimeGamer%EF%BC%9A%E8%85%BE%E8%AE%AF%E5%BC%80%E6%BA%90AI%EF%BC%8C%E4%B8%80%E5%8F%A5%E8%AF%9D%E7%94%9F%E6%88%90%E5%8A%A8%E6%BC%AB%E6%B8%B8%E6%88%8F%E5%9C%BA%E6%99%AF%EF%BC%81.md

Hi, I ran into the following error. Have you seen it?

```
# python inference_MLLM.py
[2025-04-07 16:37:39,817] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Loading checkpoint shards: 100%|████████████████████| 2/2 [00:02<00:00, 1.46s/it]
old vocab size: 32000, new vocab size: 32284
Length of tokenizer and resize embedding: 32284
peft config: LoraConfig(task_type='CAUSAL_LM', peft_type=<PeftType.LORA: 'LORA'>, auto_mapping=None, base_model_name_or_path=None, revision=None, inference_mode=False, r=32, target_modules={'v_proj', 'down_proj', 'k_proj', 'gate_proj', 'o_proj', 'q_proj', 'up_proj'}, exclude_modules=None, lora_alpha=32, lora_dropout=0.05, fan_in_fan_out=False, bias='none', use_rslora=False, modules_to_save=['input_layernorm', 'post_attention_layernorm', 'norm'], init_lora_weights=True, layers_to_transform=None, layers_pattern=None, rank_pattern={}, alpha_pattern={}, megatron_config=None, megatron_core='megatron.core', trainable_token_indices=None, loftq_config={}, eva_config=None, corda_config=None, use_dora=False, layer_replication=None, runtime_config=LoraRuntimeConfig(ephemeral_gpu_offload=False), lora_bias=False)
trainable params: 348,622,848 || all params: 7,328,210,944 || trainable%: 4.7573
Init llm done.
/opt/conda/envs/animegamer/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {name} is deprecated, please import via timm.layers", FutureWarning)
Traceback (most recent call last):
  File "/opt/conda/envs/animegamer/lib/python3.10/site-packages/hydra/_internal/utils.py", line 644, in _locate
    obj = getattr(obj, part)
AttributeError: module 'MLLM.src.models' has no attribute 'discrete_models'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/envs/animegamer/lib/python3.10/site-packages/hydra/_internal/utils.py", line 650, in _locate
    obj = import_module(mod)
  File "/opt/conda/envs/animegamer/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/data/try/AnimeGamer/MLLM/src/models/discrete_models.py", line 9, in <module>
    pyrootutils.setup_root(__file__, indicator='.project-root', pythonpath=True)
  File "/opt/conda/envs/animegamer/lib/python3.10/site-packages/pyrootutils/pyrootutils.py", line 151, in setup_root
    path = find_root(search_from, indicator)
  File "/opt/conda/envs/animegamer/lib/python3.10/site-packages/pyrootutils/pyrootutils.py", line 73, in find_root
    raise FileNotFoundError(f"Project root directory not found. Indicators: {indicator}")
FileNotFoundError: Project root directory not found. Indicators: ['.project-root']

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/envs/animegamer/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 134, in _resolve_target
    target = _locate(target)
  File "/opt/conda/envs/animegamer/lib/python3.10/site-packages/hydra/_internal/utils.py", line 658, in _locate
    raise ImportError(
ImportError: Error loading 'MLLM.src.models.discrete_models.ProjectionLayer':
FileNotFoundError("Project root directory not found. Indicators: ['.project-root']")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/data/try/AnimeGamer/inference_MLLM.py", line 84, in <module>
    agent_model = hydra.utils.instantiate(agent_model_cfg, llm=llm)
  File "/opt/conda/envs/animegamer/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 226, in instantiate
    return instantiate_node(
  File "/opt/conda/envs/animegamer/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 342, in instantiate_node
    value = instantiate_node(
  File "/opt/conda/envs/animegamer/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 333, in instantiate_node
    target = _resolve_target(node.get(_Keys.TARGET), full_key)
  File "/opt/conda/envs/animegamer/lib/python3.10/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 139, in _resolve_target
    raise InstantiationException(msg) from e
hydra.errors.InstantiationException: Error locating target 'MLLM.src.models.discrete_models.ProjectionLayer', set env var HYDRA_FULL_ERROR=1 to see chained exception.
full_key: input_resampler
```
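The root-cause FileNotFoundError above comes from pyrootutils walking upward from the current file looking for a `.project-root` marker file. The search logic is roughly the following sketch (a simplified re-implementation for illustration, not pyrootutils' actual code); it suggests checking that a `.project-root` file exists at the top of your clone and that the script runs from inside the repo:

```python
from pathlib import Path
import tempfile

def find_root(search_from: Path, indicator: str = ".project-root") -> Path:
    """Walk up from search_from until a directory containing `indicator` is found."""
    for candidate in [search_from, *search_from.parents]:
        if (candidate / indicator).exists():
            return candidate
    raise FileNotFoundError(f"Project root directory not found. Indicators: {[indicator]}")

# Demonstrate the lookup with a throwaway directory tree
root = Path(tempfile.mkdtemp())
(root / ".project-root").touch()          # marker at the "repo" root
nested = root / "MLLM" / "src" / "models" # mirrors where discrete_models.py lives
nested.mkdir(parents=True)
found = find_root(nested)                 # resolves to `root`
```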

NOFOX commented on Apr 10, 2025

> Nice video. WSL isn't required, by the way. The demo results are indeed mediocre, and single-character only. The models are really large, almost 40 GB per character. One more thing: is there any recommendation or format requirement for the prompts? For example, for multi-character cases, can they be written like: Character: Pazu and Qiqi; Motion: peacefully stand in class room; Background: day, desk and blackboard report. Character: Qiqi; Motion: smoothly fly on broomstick; Background: day, windows and tree. Character: Pazu and Qiqi; Motion: Pazu stood up and pulled Qiqi; Background: day, windows and tree. Character: Qiqi; Motion: Suddenly fainting; Background: day, ground and desk.

Tested it: the results are far from ideal. Single-character prompts are barely in the ballpark, and multi-character videos are basically unusable. If you want to play with it, better wait for future updates.



[Feedback & Tutorial] YouTube video: local AnimeGamer deployment (WSL RAM tuning, community fixes) with a first look at the results · Issue #4 · TencentARC/AnimeGamer