On Architectural Compression of Text-to-Image Diffusion Models (paper: arXiv:2305.15798)
This pipeline was distilled from SG161222/Realistic_Vision_V4.0 on a subset of the recastai/LAION-art-EN-improved-captions dataset. Below are some example images generated with the finetuned pipeline using the prompt 'Portrait of a pretty girl':
This pipeline is based on the paper above. The training code can be found here.
You can use the pipeline like so:
from diffusers import DiffusionPipeline
import torch

# Load the distilled base pipeline in half precision and move it to the GPU
pipeline = DiffusionPipeline.from_pretrained("segmind/tiny-sd", torch_dtype=torch.float16)
pipeline.to("cuda")

# Load the LoRA finetune on top of the distilled pipeline
pipeline.load_lora_weights("segmind/tiny_lora_mxtun3_style", weight_name="sd15_tiny_mxtun3_style_lora.safetensors")

prompt = "Portrait of a pretty girl"
image = pipeline(prompt).images[0]
image.save("my_image.png")
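The call above uses default sampling settings. If you want tighter control, the standard Stable Diffusion arguments apply; the values below are illustrative assumptions rather than the settings used for the example images, and the optional fuse_lora() call (available in recent diffusers releases) bakes the LoRA into the base weights:

# Optionally fuse the LoRA into the base weights (skips per-call LoRA overhead)
pipeline.fuse_lora()

# Seeded generation with explicit sampling settings (example values, not the card's settings)
generator = torch.Generator("cuda").manual_seed(42)
image = pipeline(
    prompt,
    negative_prompt="blurry, low quality",  # hypothetical negative prompt
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=generator,
).images[0]
image.save("my_image_seeded.png")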
These are the key hyperparameters used during training:
Base model: SG161222/Realistic_Vision_V4.0_noVAE
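Since the distillation compresses the U-Net architecture, one quick sanity check is to compare U-Net parameter counts between the base model and the tiny pipeline. This is an illustrative sketch, not part of the training recipe; it assumes both repositories expose a diffusers-format "unet" subfolder:

import torch
from diffusers import UNet2DConditionModel

def unet_param_count(model_id):
    # Load only the U-Net sub-module to keep the download and memory footprint small
    unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet", torch_dtype=torch.float16)
    return sum(p.numel() for p in unet.parameters())

# Compare the distilled U-Net against the base model's U-Net
print("base U-Net params:", unet_param_count("SG161222/Realistic_Vision_V4.0_noVAE"))
print("tiny U-Net params:", unet_param_count("segmind/tiny-sd"))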