Exploring Instant Personalization: A Guide to ComfyUI Z-Image I2L
In the rapidly evolving world of generative AI, the ability to personalize image generation is highly sought after. Traditionally, creating a Low-Rank Adaptation (LoRA) required a dedicated training process, dataset preparation, and significant time.
Today, I would like to introduce a fascinating custom node suite for ComfyUI: ComfyUI Z-Image I2L (Image to LoRA). Developed by the team at HM-RunningHub and leveraging the powerful pipelines from DiffSynth-Studio, this tool allows users to generate personalized LoRA weights directly from reference images—without the traditional training phase.
In this post, we will walk through the features, installation process, model management, and usage of this impressive tool.
✨ Key Features
The primary goal of ComfyUI Z-Image I2L is to streamline the personalization workflow. Here is what makes it stand out:
- Image-to-LoRA Generation: The system extracts style and character features directly from your reference images to create LoRA weights on the fly.
- No Training Required: Unlike standard LoRA creation, which involves epochs and learning rates, this utilizes the Z-Image pipeline to generate weights instantly. This is an "inference-time" generation rather than a training process.
- Seamless ComfyUI Integration: It is designed to fit right into your existing workflows. Once the LoRA is generated, it can be passed to standard model loaders just like any other file on your disk.
⚠️ System Requirements
Before diving into installation, it is important to note the hardware requirements to ensure a smooth experience.
- VRAM: The underlying models are large, so 24GB of VRAM or more is recommended. The developers have tested the suite successfully on an NVIDIA RTX 4090.
- Python: Version 3.10 or higher.
- ComfyUI: Please ensure you are running the latest version of ComfyUI.
🛠️ Installation Guide
Installing the nodes is straightforward for those familiar with ComfyUI's custom node architecture.
- Clone the Repository: Navigate to your ComfyUI `custom_nodes` directory via your terminal or command prompt and clone the repository:

```bash
cd ComfyUI/custom_nodes
git clone https://github.com/HM-RunningHub/ComfyUI_RH_ZImageI2L.git
```
- Install Dependencies: After cloning, enter the newly created directory and install the required Python packages:

```bash
cd ComfyUI_RH_ZImageI2L
pip install -r requirements.txt
```
📦 Model Downloads and Management
One of the convenient aspects of this plugin is that it handles model downloads automatically via ModelScope upon the first run. However, understanding what is being downloaded and where it goes is helpful for troubleshooting and storage management.
Required Models
The system relies on several models to function, including the base transformer from Tongyi-MAI and encoders from DiffSynth-Studio:
| Model Source | Component | Files |
|---|---|---|
| Tongyi-MAI/Z-Image | Base Transformer | transformer/*.safetensors |
| Tongyi-MAI/Z-Image-Turbo | Text Encoder, VAE, Tokenizer | text_encoder/, vae/, tokenizer/ |
| DiffSynth-Studio/General-Image-Encoders | Image Encoders | SigLIP2-G384/, DINOv3-7B/ |
| DiffSynth-Studio/Z-Image-i2L | Image-to-LoRA Model | model.safetensors |
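If you would rather fetch these models ahead of time (for example, on a machine with a faster connection), ModelScope's `snapshot_download` can pull the same repositories manually. Here is a minimal sketch, assuming the IDs in the table map one-to-one to ModelScope repos and using a placeholder `cache_dir`:

```python
# Hypothetical pre-download script. The model IDs come from the table above;
# whether each maps one-to-one to a ModelScope repo is an assumption, so
# verify against the node's first-run logs. cache_dir is a placeholder.
from modelscope import snapshot_download

MODEL_IDS = [
    "Tongyi-MAI/Z-Image",
    "Tongyi-MAI/Z-Image-Turbo",
    "DiffSynth-Studio/General-Image-Encoders",
    "DiffSynth-Studio/Z-Image-i2L",
]

for model_id in MODEL_IDS:
    # Omit cache_dir to use the default ModelScope cache path.
    local_path = snapshot_download(model_id, cache_dir="/data/modelscope")
    print(f"{model_id} -> {local_path}")
```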
Default Model Cache Path
By default, these models are cached in the ModelScope hub directory:
- Linux/macOS: `~/.cache/modelscope/hub/`
- Windows: `C:\Users\<username>\.cache\modelscope\hub\`
Customizing the Cache Directory
If you have limited space on your primary drive (a common scenario for Windows users), you can redirect where these heavy models are stored by setting the `MODELSCOPE_CACHE` environment variable before running ComfyUI.
Linux/macOS:

```bash
export MODELSCOPE_CACHE=/path/to/your/cache
```

Windows (PowerShell):

```powershell
$env:MODELSCOPE_CACHE = "D:\models\modelscope"
```

Windows (CMD):

```cmd
set MODELSCOPE_CACHE=D:\models\modelscope
```
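If you start ComfyUI from a wrapper script rather than a shell, the same redirection can be done in Python, as long as it runs before anything imports ModelScope. A minimal sketch with a placeholder path:

```python
import os

# Must run before any modelscope import reads the variable,
# so place this at the very top of your launcher script.
os.environ["MODELSCOPE_CACHE"] = "/data/modelscope"
```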
🚀 Usage and Workflow
The toolkit introduces three specific nodes designed to work in sequence.
The Nodes
- ZImageI2L Loader: This initializes the pipeline and loads the necessary base models (Transformer, VAE, etc.) into memory.
- ZImageI2L LoRA Generator: The core processing unit. It takes the pipeline and your reference images as input to calculate the LoRA weights.
- ZImageI2L Saver: Saves the generated weights to a specific output folder so they can be reused or loaded immediately by other nodes.
Basic Workflow Setup
To get started, you can construct a linear workflow (a rough API-format sketch of the same chain follows this list):
- Add the ZImageI2L Loader to your workspace.
- Connect the output of the Loader to the ZImageI2L LoRA Generator.
- Load your reference images (e.g., using a standard "Load Image" node or batch loader) and connect them to the Generator's `training_images` input.
- Connect the Generator's output to the ZImageI2L Saver.
- Finally, use a standard LoRA Loader node to apply your newly created LoRA to a diffusion model for image generation.
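For reference, here is a rough sketch of that chain in ComfyUI's API (prompt) JSON format. The `class_type` values and input names below are assumptions based on the node titles and parameters described in this post; check the bundled example workflow for the exact identifiers:

```json
{
  "1": { "class_type": "LoadImage", "inputs": { "image": "reference_01.png" } },
  "2": { "class_type": "ZImageI2L Loader", "inputs": {} },
  "3": {
    "class_type": "ZImageI2L LoRA Generator",
    "inputs": { "pipeline": ["2", 0], "training_images": ["1", 0], "seed": 42 }
  },
  "4": {
    "class_type": "ZImageI2L Saver",
    "inputs": { "lora": ["3", 0], "filename_prefix": "zimage_i2l" }
  }
}
```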
Example Workflow
The repository includes a ready-to-use API example located at `workflows/zimage_i2l_example_api.json`. This workflow demonstrates:
- Loading 4 reference images.
- Generating the LoRA.
- Applying it to the Z-Image model.
- Generating new, personalized images.
📝 Parameters Explained
When using the ZImageI2L LoRA Generator node, there are a few key parameters to be aware of:
- pipeline: Accepts the `RH_ZImageI2LPipeline` object provided by the Loader node.
- training_images: Accepts `IMAGE` input. These are the reference pictures the model will "learn" from. High-quality, clear images usually yield better results.
- seed: An `INT` (Integer) value. This controls the randomness of the process. Keeping the seed fixed ensures that if you run the process again with the same images, you will get the exact same LoRA weights.
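Once the Saver has written the weights, you can sanity-check the output like any other LoRA file. Here is a minimal sketch using the `safetensors` library, assuming the Saver writes a standard `.safetensors` file (the path below is a placeholder):

```python
# Inspect a generated LoRA; the path is a placeholder for the Saver's output.
from safetensors.torch import load_file

state_dict = load_file("ComfyUI/output/zimage_i2l_lora.safetensors")
print(f"{len(state_dict)} tensors in the file")

# Print a few tensor names and shapes to confirm the weights look sane.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```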
🙏 Acknowledgments
This project is built upon the open-source contributions of several remarkable teams. Special thanks go to:
- DiffSynth-Studio for the core Z-Image pipeline architecture.
- Tongyi-MAI for providing the underlying Z-Image models.
For more details, licensing information (Apache 2.0), or to contribute to the project, please visit the official GitHub Repository.
Disclaimer: This article is an independent technical overview based on the documentation provided by HM-RunningHub. I am not affiliated with the developers. Always check the repository for the latest updates and license terms.
