
ayushtues/blipdiffusion
Diffusers · Safetensors · English · BlipDiffusionPipeline

Instructions to use ayushtues/blipdiffusion with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • Diffusers

    How to use ayushtues/blipdiffusion with Diffusers:

    pip install -U diffusers transformers accelerate

    import torch
    from diffusers import DiffusionPipeline
    from diffusers.utils import load_image

    # switch device_map to "mps" for Apple devices
    pipe = DiffusionPipeline.from_pretrained("ayushtues/blipdiffusion", dtype=torch.bfloat16, device_map="cuda")

    # BLIP-Diffusion is subject-driven: besides the text prompt it needs a
    # reference image of the subject and the source/target subject names.
    # Example dog image from the diffusers BLIP-Diffusion docs:
    reference_image = load_image("https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg")
    prompt = "swimming underwater"
    image = pipe(prompt, reference_image, "dog", "dog").images[0]
  • Notebooks
  • Google Colab
  • Kaggle
blipdiffusion · 8.85 GB
  • 2 contributors
History: 15 commits
ayushtues
Update README.md
40745ff over 2 years ago
  • controlnet
    Add canny controlnet over 2 years ago
  • image_processor
    Upload 2 files over 2 years ago
  • qformer
    Update over 2 years ago
  • scheduler
    Update over 2 years ago
  • text_encoder
    Upload config.json over 2 years ago
  • tokenizer
    Add initial components almost 3 years ago
  • unet
    Update over 2 years ago
  • vae
    Update over 2 years ago
  • vision_encoder
    Add initial components almost 3 years ago
  • .gitattributes
    1.52 kB
    initial commit almost 3 years ago
  • README.md
    9.72 kB
    Update README.md over 2 years ago
  • model_index.json
    682 Bytes
    Upload model_index.json over 2 years ago