multimodalart posted an update about 2 months ago
Want to iterate on a Hugging Face Space with an LLM?

Now you can easily convert any entire HF repo (Model, Dataset, or Space) to a text file and feed it to a language model!

multimodalart/repo2txt
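If you'd rather do the same thing in a script, here's a rough sketch with huggingface_hub - the file filter and formatting are my own assumptions, not necessarily what the Space does:

# pip install huggingface_hub
from pathlib import Path
from huggingface_hub import snapshot_download

# Download any repo (repo_type can be "model", "dataset", or "space")
repo_dir = Path(snapshot_download("multimodalart/repo2txt", repo_type="space"))

chunks = []
for path in sorted(repo_dir.rglob("*")):
    # Illustrative filter: keep only common text/code files
    if path.is_file() and path.suffix in {".py", ".md", ".txt", ".json", ".yaml"}:
        chunks.append(f"===== {path.relative_to(repo_dir)} =====\n" + path.read_text(errors="ignore"))

repo_as_text = "\n\n".join(chunks)  # paste this into your LLM's context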
multimodalart posted an update 6 months ago
Self-Forcing - a real-time video model distilled from Wan 2.1, by @adobe - is out, and they open-sourced it 🐐

I've built a live real-time demo on Spaces 📹💨

multimodalart/self-forcing
linoyts posted an update 7 months ago
FramePack is hands down one of the best open-source releases in video generation 🙇🏻‍♀️🤯
✅ fully open-sourced + amazing quality + reduced memory + improved speed
but even more - it's going to facilitate *soooo* many downstream applications
like this version adapted for landscape rotation 👇 https://huggingface.co/spaces/tori29umai/FramePack_rotate_landscape
radames posted an update over 1 year ago
Thanks to @OzzyGT for pushing the new Anyline preprocessor to https://github.com/huggingface/controlnet_aux. Now you can use the TheMistoAI/MistoLine ControlNet entirely within Diffusers.

Here's a demo for you: radames/MistoLine-ControlNet-demo
Super resolution version: radames/Enhance-This-HiDiffusion-SDXL

from PIL import Image
from controlnet_aux import AnylineDetector

# Load the Anyline edge-detection weights from the MistoLine repo
anyline = AnylineDetector.from_pretrained(
    "TheMistoAI/MistoLine", filename="MTEED.pth", subfolder="Anyline"
).to("cuda")

source = Image.open("source.png")
result = anyline(source, detect_resolution=1280)  # returns the detected line map
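From there, a minimal sketch of feeding the Anyline map into the MistoLine ControlNet with an SDXL pipeline (the prompt and base checkpoint here are illustrative):

import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# MistoLine ships an fp16 variant of its ControlNet weights
controlnet = ControlNetModel.from_pretrained(
    "TheMistoAI/MistoLine", torch_dtype=torch.float16, variant="fp16"
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# "result" is the Anyline line map from the snippet above
image = pipe("a cozy cabin in the woods", image=result).images[0]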
radames posted an update over 1 year ago
At Google I/O 2024, we're collaborating with the Google Visual Blocks team (https://visualblocks.withgoogle.com) to release custom Hugging Face nodes. Visual Blocks for ML is a browser-based tool that lets users create machine learning pipelines through a visual interface. We're launching nodes powered by Transformers.js that run models in the browser, as well as server-side nodes that run Transformers pipeline tasks and LLMs via our hosted inference. With @Xenova @JasonMayes

You can learn more about it here https://huggingface.co/blog/radames/hugging-face-google-visual-blocks

Source code for the custom nodes:
https://github.com/huggingface/visual-blocks-custom-components
radames posted an update over 1 year ago
AI-town now runs on Hugging Face Spaces with our API for LLMs and embeddings, including the open-source Convex backend, all in one container. Easy to duplicate and configure on your own.

Demo: radames/ai-town
Instructions: https://github.com/radames/ai-town-huggingface
multimodalart posted an update over 1 year ago
The first open Stable Diffusion 3-like architecture model is JUST out 💣 - but it is not SD3! 🤔

It is Tencent-Hunyuan/HunyuanDiT by Tencent, a 1.5B parameter DiT (diffusion transformer) text-to-image model 🖼️✨, trained with multilingual CLIP + multilingual T5 text encoders for English 🤝 Chinese understanding

Try it out yourself here ▶️ https://huggingface.co/spaces/multimodalart/HunyuanDiT
(a bit slow, as the model is chunky and the research code isn't super optimized for inference speed yet)

In the paper they claim to be the SOTA open-source model based on human preference evaluation!
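For reference, Diffusers later shipped a HunyuanDiTPipeline, so today you can skip the research code - a minimal sketch (the prompt is illustrative):

import torch
from diffusers import HunyuanDiTPipeline

pipe = HunyuanDiTPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-Diffusers", torch_dtype=torch.float16
).to("cuda")

# The dual multilingual text encoders accept English and Chinese prompts alike
image = pipe("一只可爱的猫 (a cute cat)").images[0]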
radames posted an update over 1 year ago
HiDiffusion SDXL now supports image-to-image, so I've created an "Enhance This" version using MistoLine, the latest ControlNet line-art model. It's faster than DemoFusion.

Demo: radames/Enhance-This-HiDiffusion-SDXL

Older version based on DemoFusion radames/Enhance-This-DemoFusion-SDXL

New SDXL ControlNet that controls every line: TheMistoAI/MistoLine

HiDiffusion is compatible with Diffusers and supports many SD models - https://github.com/megvii-research/HiDiffusion
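Per the HiDiffusion README, enabling it on a Diffusers pipeline is essentially a one-liner - a minimal sketch (prompt and resolution are illustrative):

import torch
from diffusers import StableDiffusionXLPipeline
from hidiffusion import apply_hidiffusion

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

apply_hidiffusion(pipe)  # patches the UNet for higher-resolution generation

image = pipe("a detailed portrait of an astronaut", height=2048, width=2048).images[0]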
radames posted an update over 1 year ago
I've built a custom component that integrates the Rerun web viewer with Gradio, making it easier to share your demos as Gradio apps.

Basic snippet:
# pip install gradio_rerun gradio
import gradio as gr
from gradio_rerun import Rerun

# Upload one or more .rrd recordings; the viewer renders whatever paths fn returns
gr.Interface(
    inputs=gr.File(file_count="multiple", type="filepath"),
    outputs=Rerun(height=900),
    fn=lambda file_paths: file_paths,
).launch()

More details here https://huggingface.co/spaces/radames/gradio_rerun
Source https://github.com/radames/gradio-rerun-viewer

Follow Rerun on the Hub: rerun
radames posted an update over 1 year ago
Following up on @vikhyatk's Moondream2 update and @santiagomed's implementation on Candle, I quickly put together the WASM module so that you can try running the ~1.5GB quantized model in the browser. Perhaps the next step is to rewrite it using https://github.com/huggingface/ratchet and run it even faster with WebGPU, @FL33TW00D-HF.

radames/Candle-Moondream-2

ps: I have a collection of all Candle WASM demos here radames/candle-wasm-examples-650898dee13ff96230ce3e1f
radames posted an update over 1 year ago
Testing the new pix2pix-Turbo in real time - a very interesting GAN architecture that leverages the SD-Turbo model. Here I'm using the edge2image LoRA with single-step inference 🤯

It's very interesting how the quality is comparable to ControlNet Canny, but in a single step. Looking forward to when they release the code: https://github.com/GaParmar/img2img-turbo/issues/1

I've been keeping a list of fast diffusion model pipelines together with this real-time websocket app. Have a look if you want to test it locally, or check out the demo here on Spaces.

radames/real-time-pix2pix-turbo

GitHub app:
https://github.com/radames/Real-Time-Latent-Consistency-Model/

You can also check the authors' img2img sketch model here:

gparmar/img2img-turbo-sketch

Refs:
One-Step Image Translation with Text-to-Image Models (2403.12036)

cc @gparmar @junyanz
multimodalart posted an update almost 2 years ago
The Stable Diffusion 3 research paper broken down, including some overlooked details! 📝

Model
πŸ“ 2 base model variants mentioned: 2B and 8B sizes

πŸ“ New architecture in all abstraction levels:
- πŸ”½ UNet; ⬆️ Multimodal Diffusion Transformer, bye cross attention πŸ‘‹
- πŸ†• Rectified flows for the diffusion process
- 🧩 Still a Latent Diffusion Model

πŸ“„ 3 text-encoders: 2 CLIPs, one T5-XXL; plug-and-play: removing the larger one maintains competitiveness

πŸ—ƒοΈ Dataset was deduplicated with SSCD which helped with memorization (no more details about the dataset tho)

Variants
πŸ” A DPO fine-tuned model showed great improvement in prompt understanding and aesthetics
✏️ An Instruct Edit 2B model was trained, and learned how to do text-replacement

Results
✅ State of the art in automated evals for composition and prompt understanding
✅ Best win rate in human preference evaluation for prompt understanding, aesthetics and typography (missing some details on how many participants and the design of the experiment)

Paper: https://stabilityai-public-packages.s3.us-west-2.amazonaws.com/Stable+Diffusion+3+Paper.pdf
multimodalart posted an update almost 2 years ago
It seems February started with a fully open-source AI renaissance 🌟

Models released with fully open dataset, training code, and weights ✅

LLM - allenai/olmo-suite-65aeaae8fe5b6b2122b46778 🧠
Embedding - nomic-ai/nomic-embed-text-v1 📚 (SOTA! see the usage sketch below)

And it's literally February 1st - can't wait to see what else the community will bring 👀
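As a quick taste of the embedding model, a minimal sketch following the model card (note the required task prefixes and trust_remote_code; the example texts are illustrative):

# pip install sentence-transformers einops
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)

# nomic-embed expects a task prefix on every input
docs = ["search_document: Rectified flows straighten the diffusion path."]
query = ["search_query: what are rectified flows?"]

doc_emb = model.encode(docs, normalize_embeddings=True)
query_emb = model.encode(query, normalize_embeddings=True)
print(query_emb @ doc_emb.T)  # cosine similarity of normalized embeddings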