Instructions to use Runware/Qwen-Image-Edit with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Runware/Qwen-Image-Edit with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Runware/Qwen-Image-Edit", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Turn this cat into a dog"
input_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
)
image = pipe(image=input_image, prompt=prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
Is Qwen Image Edit + ControlNet OpenPose possible?
#2
by asdasdasdwd
Basically, I want my character, which I have uploaded to Qwen Image Edit, to replicate specific poses that I could supply through an OpenPose ControlNet. Is that possible?
I've attached some links showing that each piece can be used on its own; I've tried to make them work together but haven't had any luck.
https://www.modelscope.cn/models/DiffSynth-Studio/Qwen-Image-In-Context-Control-Union
https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files
https://huggingface.co/Comfy-Org/Qwen-Image-DiffSynth-ControlNets/tree/main
https://drive.google.com/drive/folders/1vhipvq2LXId5Ed7QkDpozEsu8cShPGYe?usp=sharing
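One workaround worth trying is a two-step pipeline: render a pose skeleton from a reference photo with `controlnet_aux`'s OpenPose detector, then hand both the character image and the skeleton to a multi-image edit checkpoint and ask for a repose in the prompt. This is a hedged sketch, not a confirmed recipe: the checkpoint name `Qwen/Qwen-Image-Edit-2509`, the file names `reference_pose.png` and `my_character.png`, and the assumption that the edit model will honor a rendered skeleton as an in-context control image are all illustrative, and `controlnet_aux` must be installed separately (`pip install controlnet-aux`).

```python
def build_pose_prompt(character_desc: str) -> str:
    """Compose an edit instruction asking the model to repose the
    character from the first image using the skeleton in the second.
    The exact wording is a guess; tune it for your character."""
    return (
        f"Redraw {character_desc} from the first image "
        "in the exact pose shown by the skeleton in the second image."
    )


if __name__ == "__main__":
    # Heavy imports kept under the main guard: they pull large weights.
    import torch
    from controlnet_aux import OpenposeDetector
    from diffusers import DiffusionPipeline
    from diffusers.utils import load_image

    # Step 1: render an OpenPose skeleton from any reference photo.
    detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
    pose_image = detector(load_image("reference_pose.png"))

    # Step 2: multi-image edit. The 2509 checkpoint is assumed here
    # because it accepts several input images; the base Qwen-Image-Edit
    # pipeline takes only one.
    pipe = DiffusionPipeline.from_pretrained(
        "Qwen/Qwen-Image-Edit-2509",
        torch_dtype=torch.bfloat16,
        device_map="cuda",  # switch to "mps" for Apple devices
    )
    character = load_image("my_character.png")
    result = pipe(
        image=[character, pose_image],
        prompt=build_pose_prompt("my character"),
    ).images[0]
    result.save("reposed.png")
```

Whether the edit model actually follows the skeleton depends on the checkpoint; the DiffSynth In-Context Control Union model linked above was trained for exactly this kind of conditioning, so it may be the more reliable route inside ComfyUI.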