Instructions to use Lightricks/LTX-Video with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Lightricks/LTX-Video with Diffusers:
pip install -U diffusers transformers accelerate
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16, device_map="cuda"
)

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
- Inference
- Notebooks
- Google Colab
- Kaggle
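The snippet above keeps the whole pipeline on the GPU. On cards with limited VRAM, Diffusers' model CPU offload can help. The sketch below is a hedged variant, not an official recipe: `enable_model_cpu_offload` is a real Diffusers API, but the `round_to_valid_num_frames` helper and its 8·k + 1 frame-count rule are assumptions based on LTX-Video's documented frame-count convention (defaults such as 97 and 161 fit this form) — verify against your installed version.

```python
def round_to_valid_num_frames(n: int) -> int:
    """Round n to the nearest frame count of the form 8*k + 1
    (e.g. 97, 161), which LTX-Video expects; never below 9.
    This helper is an illustration, not part of Diffusers."""
    k = max(1, round((n - 1) / 8))
    return 8 * k + 1


def generate_low_vram(prompt: str, image_url: str, out_path: str = "output.mp4") -> str:
    """Low-VRAM variant of the snippet above (a sketch)."""
    # Imported lazily so this module can be loaded without torch/diffusers.
    import torch
    from diffusers import DiffusionPipeline
    from diffusers.utils import load_image, export_to_video

    pipe = DiffusionPipeline.from_pretrained(
        "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
    )
    # Move submodules to the GPU only while they run, instead of
    # pinning the full pipeline in GPU memory.
    pipe.enable_model_cpu_offload()

    image = load_image(image_url)
    frames = pipe(
        image=image,
        prompt=prompt,
        num_frames=round_to_valid_num_frames(100),  # snaps to a valid 8*k + 1 count
    ).frames[0]
    export_to_video(frames, out_path)
    return out_path
```

Offloading trades generation speed for a much smaller peak VRAM footprint, which is usually the right trade on consumer GPUs.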
End frame
#37
by Hanter9901 - opened
Hi, would it be possible to add support for both first and last frame like in KLING AI?
Hello there, does the latest version 0.9.5 support start- and end-frame generation?
yes
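For readers landing on this thread: recent Diffusers releases expose a condition pipeline for LTX-Video 0.9.5+ that accepts multiple conditioning frames, which is one way to pin both the first and last frame as asked above. The sketch below is a best-effort illustration, not a confirmed recipe: the `LTXConditionPipeline` / `LTXVideoCondition` names, the `Lightricks/LTX-Video-0.9.5` repo id, and the exact call signature are assumptions to check against your installed Diffusers version; `endpoint_frame_indices` is a hypothetical helper added here for clarity.

```python
def endpoint_frame_indices(num_frames: int) -> tuple:
    """Frame indices to condition on: the first frame (0) and the
    last frame (num_frames - 1). Helper for illustration only."""
    return (0, num_frames - 1)


def generate_between_frames(first_image, last_image, prompt: str, num_frames: int = 97):
    """Sketch of first+last-frame conditioning (API names assumed,
    see the note above)."""
    # Imported lazily so this module can be loaded without torch/diffusers.
    import torch
    from diffusers import LTXConditionPipeline
    from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
    from diffusers.utils import export_to_video

    pipe = LTXConditionPipeline.from_pretrained(
        "Lightricks/LTX-Video-0.9.5", torch_dtype=torch.bfloat16  # repo id assumed
    )
    pipe.to("cuda")

    start_idx, end_idx = endpoint_frame_indices(num_frames)
    conditions = [
        LTXVideoCondition(image=first_image, frame_index=start_idx),
        LTXVideoCondition(image=last_image, frame_index=end_idx),
    ]
    frames = pipe(conditions=conditions, prompt=prompt, num_frames=num_frames).frames[0]
    export_to_video(frames, "output.mp4")
    return frames
```

The idea mirrors KLING-style keyframing: the model interpolates motion between the two pinned endpoints rather than free-running from a single start image.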