(Dataset preview omitted: each row contains a single "latent" column holding a 4x32x32 array of floats.)
Dataset Card for "latent_celebA_256px"
Each image is cropped to a 256px square and encoded to a 4x32x32 latent representation using the same VAE as the one employed by Stable Diffusion.
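Working in this latent space is much cheaper than working on raw pixels. A quick back-of-the-envelope check of the size reduction (illustrative arithmetic only, not part of the dataset code):

```python
# Elements per image in pixel space vs. latent space
pixel_elems = 3 * 256 * 256   # RGB image at 256x256
latent_elems = 4 * 32 * 32    # 4x32x32 VAE latent
ratio = pixel_elems / latent_elems
print(ratio)  # 48.0
```

So each latent holds 48x fewer values than the corresponding RGB image.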
Decoding
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch

# Load the dataset
dataset = load_dataset('tglcourse/latent_celebA_256px')

# Load the VAE (requires access - see the model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")

latent = torch.tensor([dataset['train'][0]['latent']])  # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent  # Scale to match the SD implementation
with torch.no_grad():
    image = vae.decode(latent).sample[0]  # Decode
image = (image / 2 + 0.5).clamp(0, 1)  # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy()  # To numpy, channels last
image = (image * 255).round().astype("uint8")  # To (0, 255) and type uint8
image = Image.fromarray(image)  # To PIL
image  # The resulting PIL image
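The post-processing after `vae.decode` is independent of the VAE itself. A minimal sketch of just those steps, using a random dummy tensor in place of the decoder output (the `decoded` tensor below is a stand-in, not real model output):

```python
import torch

# Stand-in for vae.decode(latent).sample[0]: a decoded image tensor
# in roughly [-1, 1], shaped (channels, height, width)
decoded = torch.tanh(torch.randn(3, 256, 256))

# Same post-processing steps as the decoding snippet above
image = (decoded / 2 + 0.5).clamp(0, 1)        # [-1, 1] -> [0, 1]
image = image.permute(1, 2, 0).numpy()         # channels last: (H, W, C)
image = (image * 255).round().astype("uint8")  # [0, 255] uint8, ready for PIL

print(image.shape, image.dtype)  # (256, 256, 3) uint8
```

Passing the resulting array to `PIL.Image.fromarray` yields a viewable RGB image, exactly as in the full snippet.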