nickpai/sdxl_light_caption_output

Diffusers · Safetensors

Instructions for using nickpai/sdxl_light_caption_output with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • Diffusers

    How to use nickpai/sdxl_light_caption_output with Diffusers:

    # shell: install the required libraries first
    pip install -U diffusers transformers accelerate

    import torch
    from diffusers import DiffusionPipeline

    # Load the pipeline; switch device_map to "mps" on Apple devices
    pipe = DiffusionPipeline.from_pretrained(
        "nickpai/sdxl_light_caption_output",
        dtype=torch.bfloat16,
        device_map="cuda",
    )

    prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
    image = pipe(prompt).images[0]
    image.save("astronaut.png")  # write the result to disk
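The comment in the snippet above suggests switching to "mps" on Apple devices. A minimal sketch of that device choice, where pick_device is a hypothetical helper (not part of diffusers) and the availability flags would come from torch in a real script:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    # Prefer CUDA, then Apple's Metal backend (mps), then fall back to CPU.
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# In practice you would call it with torch's own availability checks:
# device = pick_device(torch.cuda.is_available(), torch.backends.mps.is_available())
# pipe = DiffusionPipeline.from_pretrained(..., device_map=device)
```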

SDXL-Light-Caption-Output

This model was trained on the main branch of the coco2017-colorization dataset. It was first released as part of the text-guided-image-colorization project.


Spaces using nickpai/sdxl_light_caption_output (12)

  • nickpai/text-guided-image-colorization
  • fffiloni/text-guided-image-colorization
  • Potre1qw/text-guided-image-colorization
  • Drjkedwards/Stable-dalle-colorrize-lense
  • openfree/ColorRevive
  • ChiKyi/Colorization
  • Etrwy/text-guided-image-colorization
  • addsw11/text-guided-image-colorization