[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2025-01-20 01:23:11.846572
** Platform: Linux
** Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) [GCC 13.3.0]
** Python executable:
** ComfyUI Path:
** Log path:
Prestartup times for custom nodes:
0.0 seconds: comfy/custom_nodes/rgthree-comfy
0.3 seconds: comfy/custom_nodes/ComfyUI-Manager
Checkpoint files will always be loaded safely.
Total VRAM 24564 MB, total RAM 80430 MB
pytorch version: 2.5.1
xformers version: 0.0.30+536363e.d20250120
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
Using xformers attention
[Prompt Server] web root: comfy/web
### Loading: ComfyUI-Impact-Subpack (V1.2.6)
[Impact Subpack] ultralytics_bbox: comfy/models/ultralytics/bbox
[Impact Subpack] ultralytics_segm: comfy/models/ultralytics/segm
[comfyui_controlnet_aux] | INFO -> Using ckpts path: comfy/custom_nodes/comfyui_controlnet_aux/ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
DWPose: Onnxruntime with acceleration providers detected
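Since OpenPose/DWPose is the one preprocessor failing later in this report and it runs through onnxruntime, a quick sanity check is to compare the provider preference list logged above against what the installed onnxruntime build actually exposes. A minimal sketch, assuming the standard `onnxruntime` package is the one in use:

```python
# Compare the providers this onnxruntime build actually offers against the
# preference list [comfyui_controlnet_aux] printed above.
import onnxruntime as ort

available = ort.get_available_providers()
print(available)  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'] on a plain CUDA wheel
print("CUDA provider available for DWPose:", "CUDAExecutionProvider" in available)
```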
[rgthree] Loaded 42 exciting nodes.
[rgthree] NOTE: Will NOT use rgthree's optimized recursive execution as ComfyUI has changed.
### Loading: ComfyUI-Manager (V2.50.3)
### ComfyUI Revision: 3063 [a00e1489] | Released on '2025-01-19'
### Loading: ComfyUI-Impact-Pack (V8.3)
[Impact Pack] Wildcards loading done.
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `comfy/custom_nodes/was-node-suite-comfyui/was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
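That warning is harmless if a system ffmpeg exists, but it can be silenced by setting `ffmpeg_bin_path` in the config file the message points at. A rough sketch (the key name and config path are taken from the warning itself; the helper script is hypothetical):

```python
# Hypothetical one-off helper: write the system ffmpeg path into the WAS Node
# Suite config so it no longer has to fall back to guessing at runtime.
import json
import shutil

config_path = "comfy/custom_nodes/was-node-suite-comfyui/was_suite_config.json"

with open(config_path) as f:
    cfg = json.load(f)

# Use whatever ffmpeg is on PATH; adjust to an explicit path if needed.
cfg["ffmpeg_bin_path"] = shutil.which("ffmpeg") or "/usr/bin/ffmpeg"

with open(config_path, "w") as f:
    json.dump(cfg, f, indent=4)
```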
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
WAS Node Suite: Finished. Loaded 218 nodes successfully.
"Art is the universal language that transcends boundaries and speaks to all." - Unknown
### Loading: ComfyUI-Inspire-Pack (V1.10)
Import times for custom nodes:
0.0 seconds: comfy/custom_nodes/websocket_image_save.py
0.0 seconds: comfy/custom_nodes/ComfyUI-Custom-Scripts
0.0 seconds: comfy/custom_nodes/rgthree-comfy
0.0 seconds: comfy/custom_nodes/comfyui-dynamicprompts
0.0 seconds: comfy/custom_nodes/comfyui_controlnet_aux
0.0 seconds: comfy/custom_nodes/ComfyUI-Inspire-Pack
0.0 seconds: comfy/custom_nodes/ComfyUI-Manager
0.0 seconds: comfy/custom_nodes/ComfyUI-Impact-Pack
0.0 seconds: comfy/custom_nodes/ComfyUI-Model-Manager
0.2 seconds: comfy/custom_nodes/ComfyUI-Impact-Subpack
0.3 seconds: comfy/custom_nodes/ComfyUI_smZNodes
0.4 seconds: comfy/custom_nodes/was-node-suite-comfyui
Starting server
To see the GUI go to: http://0.0.0.0:7860
To see the GUI go to: http://[::]:7860
FETCH DATA from: comfy/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
[Inspire Pack] IPAdapterPlus is not installed.
got prompt
WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
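That PlaySound warning is the usual Python symptom of an instance method being invoked on the class itself, so `self` never gets bound. A minimal reproduction of the error shape (illustrative only, not the actual PlaySound node):

```python
# Illustrative only: an instance method called on the class reproduces the
# "missing 1 required positional argument: 'self'" message seen above.
class PlaySoundLike:
    def IS_CHANGED(self):
        return float("NaN")

try:
    PlaySoundLike.IS_CHANGED()        # unbound call -> TypeError
except TypeError as e:
    print("WARNING:", e)              # ... missing 1 required positional argument: 'self'

print(PlaySoundLike().IS_CHANGED())   # bound call works (prints nan)
```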
model weight dtype torch.float16, manual cast: None
model_type V_PREDICTION
Using xformers attention in VAE
Using xformers attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 1560.802734375 True
Requested to load SDXL
loaded completely 9.5367431640625e+25 4897.0483474731445 True
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 7.64it/s]
Requested to load AutoencoderKL
loaded completely 9.5367431640625e+25 159.55708122253418 True
model_path is comfy/custom_nodes/comfyui_controlnet_aux/ckpts/lllyasviel/Annotators/body_pose_model.pth
model_path is comfy/custom_nodes/comfyui_controlnet_aux/ckpts/lllyasviel/Annotators/hand_pose_model.pth
model_path is comfy/custom_nodes/comfyui_controlnet_aux/ckpts/lllyasviel/Annotators/facenet.pth
comfy/custom_nodes/ComfyUI-Impact-Subpack/modules/subcore.py:141: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
return orig_torch_load(*args, **kwargs) # NOTE: This code simply delegates the call to torch.load, and any errors that occur here are not the responsibility of Subpack.
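The FutureWarning above is informational: Impact-Subpack still delegates to `torch.load` with the current default `weights_only=False`. The pattern the warning itself recommends looks roughly like this (the checkpoint path is a placeholder, not a path from this log):

```python
# Safer loading pattern recommended by the FutureWarning: restrict unpickling
# to plain tensors/containers, and explicitly allowlist anything extra.
import torch

ckpt_path = "some_checkpoint.pth"  # placeholder

state = torch.load(ckpt_path, map_location="cpu", weights_only=True)

# If the file genuinely contains custom classes, allowlist them instead of
# falling back to weights_only=False:
# torch.serialization.add_safe_globals([MyCustomClass])
```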
Prompt executed in 7.97 seconds
All preprocessors are working fine except OpenPose.
Also, this is my WSL2 conda env: