# LAION-Tunes
908,174 AI-generated music tracks from 5 platforms, annotated with captions, transcriptions, embeddings, aesthetics scores, and NSFW safety labels. Includes a full-text and vector search engine with a web UI.
## Quick Stats
| Metric | Value |
|---|---|
| Total tracks | 908,174 |
| Subsets | Mureka (383K), Suno (308K), Udio (115K), Riffusion (99K), Sonauto (3K) |
| Has caption (Music-Whisper) | 832,944 (91.7%) |
| Has transcription (Parakeet ASR) | 514,203 (56.6%) |
| Instrumental | 394,246 (43.4%) |
| NSFW flagged (very likely + likely) | 12,860 (1.4%) |
## Dataset Description
LAION-Tunes is a curated metadata and annotation dataset derived from ai-music/ai-music-deduplicated, a collection of publicly available AI-generated music from Suno, Udio, Mureka, Riffusion, and Sonauto.
This dataset does NOT contain audio files. It contains metadata, annotations, embeddings, and search indices. Audio URLs pointing to the original hosting platforms are included for reference.
## What's Included
For each track:
- Metadata: title, tags, genre, mood, duration, play count, upvote count
- Music-Whisper Caption: AI-generated music description using laion/music-whisper (fine-tuned OpenAI Whisper Small)
- Parakeet ASR Transcription: Speech-to-text using nvidia/parakeet-tdt-0.6b-v3 with word-level timestamps
- Sentence Embeddings: 768-dim embeddings via google/embeddinggemma-300m for tags, captions, transcriptions, lyrics, and moods
- Whisper Audio Embeddings: 768-dim mean-pooled encoder embeddings from Music-Whisper for audio similarity search
- Aesthetics Scores: coherence, musicality, memorability, clarity, naturalness (computed from Music-Whisper)
- NSFW Safety Labels: three-tier classification (very_likely_nsfw / likely_nsfw / likely_sfw) across gore, sexual, and hate speech dimensions
- Pre-built Search Indices: FAISS vector indices and BM25 text indices ready to serve
## Annotation Pipeline

The annotation pipeline processes the original TAR files from ai-music-deduplicated:

- Music-Whisper (`laion/music-whisper`): generates music captions describing instruments, genre, mood, tempo, etc.
- Parakeet TDT 0.6B (`nvidia/parakeet-tdt-0.6b-v3`): ASR transcription with word-level timestamps for vocal content
- EmbeddingGemma 300M (`google/embeddinggemma-300m`): computes 768-dim sentence embeddings for captions, transcriptions, tags, lyrics, and moods
- Whisper Encoder Embeddings: mean-pooled encoder hidden states from Music-Whisper for audio fingerprinting/similarity
- NSFW Classification: cosine similarity of transcription embeddings against reference prompts for gore/violence, sexual content, and hate speech
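As a sketch of the classification step, the snippet below scores a transcription embedding against per-dimension reference-prompt embeddings. The vectors here are random stand-ins for EmbeddingGemma outputs, and taking the max over several prompts per dimension is an assumption for illustration, not a documented detail of the pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def unit(v: np.ndarray) -> np.ndarray:
    """L2-normalize so that a dot product equals cosine similarity."""
    return v / np.linalg.norm(v)

# Hypothetical stand-ins: one transcription embedding and a few
# reference-prompt embeddings per safety dimension.
transcription = unit(rng.standard_normal(768).astype(np.float32))
references = {
    "gore":   [unit(rng.standard_normal(768).astype(np.float32)) for _ in range(3)],
    "sexual": [unit(rng.standard_normal(768).astype(np.float32)) for _ in range(3)],
}

# Per-dimension score: highest cosine similarity against that dimension's prompts.
scores = {
    dim: max(float(transcription @ ref) for ref in refs)
    for dim, refs in references.items()
}
```

The resulting per-dimension similarities are what the thresholds in the NSFW Safety Labels section are applied to.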
## Repository Structure

```
laion-tunes/
├── README.md                     # This file
├── server.py                     # FastAPI search server (main application)
├── index.html                    # Web UI (dark-mode, single-page app)
├── build_search_index.py         # Index builder script
├── update_indices.py             # Incremental index updater
├── migrate_add_language.py       # Language detection migration
├── nsfw_safety_report.html       # Interactive NSFW analysis report
├── nsfw_analysis_data.json       # Raw NSFW analysis data
│
├── public/                       # Annotated metadata parquets (8.3 GB)
│   ├── mureka_000000.tar.parquet # One parquet per source TAR file
│   ├── mureka_000001.tar.parquet
│   ├── ...
│   └── udio_000015.tar.parquet   # 159 parquet files total
│
├── search_index/                 # Pre-built search indices (16 GB)
│   ├── metadata.db               # SQLite database (908K tracks, 2.7 GB)
│   ├── faiss_tag.index           # FAISS IndexFlatIP - tag embeddings (2.6 GB)
│   ├── faiss_whisper.index       # FAISS IndexFlatIP - audio embeddings (2.6 GB)
│   ├── faiss_caption.index       # FAISS IndexFlatIP - caption embeddings (2.3 GB)
│   ├── faiss_transcription.index # FAISS IndexFlatIP - transcription embeddings (1.5 GB)
│   ├── faiss_lyric.index         # FAISS IndexFlatIP - lyric embeddings (1.4 GB)
│   ├── faiss_mood.index          # FAISS IndexFlatIP - mood embeddings (1.1 GB)
│   ├── idmap_*.npy               # Row ID mappings for each FAISS index
│   ├── bm25_tags.pkl             # BM25 text index for tags (114 MB)
│   ├── bm25_caption.pkl          # BM25 text index for captions (609 MB)
│   └── bm25_transcription.pkl    # BM25 text index for transcriptions (392 MB)
│
└── whisper_embeddings/           # Raw Whisper encoder embeddings (1.6 GB)
    ├── mureka_000000.tar.npz     # One NPZ per source TAR file
    ├── ...
    └── udio_000015.tar.npz       # 159 NPZ files total
```
## Data Format

### Parquet Files (public/)
Each parquet file corresponds to one TAR file from the source dataset and contains these columns:
| Column | Type | Description |
|---|---|---|
| `filename` | str | Filename within the source TAR |
| `tar_file` | str | Source TAR filename |
| `audio_url` | str | Original audio URL (mp3/m4a/ogg) |
| `subset` | str | Source platform (suno/udio/mureka/riffusion/sonauto) |
| `title` | str | Track title |
| `tags` | str | Comma-separated genre/style tags |
| `mood` | str | Mood tags |
| `lyrics` | str | Lyrics (if available, from source metadata) |
| `duration_seconds` | float | Track duration |
| `play_count` | int | Play count on source platform |
| `upvote_count` | int | Like/upvote count |
| `music_whisper_caption` | str | Music-Whisper generated caption |
| `parakeet_transcription` | str | Parakeet ASR transcription (plain text) |
| `parakeet_transcription_with_timestamps` | str | ASR with word-level timestamps |
| `tag_embedding` | list[float] | 768-dim EmbeddingGemma embedding of tags |
| `caption_embedding` | list[float] | 768-dim EmbeddingGemma embedding of caption |
| `transcription_embedding` | list[float] | 768-dim EmbeddingGemma embedding of transcription |
| `lyric_embedding` | list[float] | 768-dim EmbeddingGemma embedding of lyrics |
| `mood_embedding` | list[float] | 768-dim EmbeddingGemma embedding of mood |
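A shard can be loaded with pandas and its columns used directly. The sketch below substitutes a tiny in-memory frame for a real file (the `read_parquet` call is shown commented out, with a hypothetical shard name):

```python
import numpy as np
import pandas as pd

# Real files live under public/ and all follow the schema above:
# df = pd.read_parquet("public/mureka_000000.tar.parquet")

# Tiny in-memory stand-in with a subset of the columns, for illustration only.
df = pd.DataFrame({
    "title": ["Track A", "Track B"],
    "music_whisper_caption": ["Upbeat electronic track with synth leads.", None],
    "tag_embedding": [
        np.random.randn(768).tolist(),
        np.random.randn(768).tolist(),
    ],
})

# Coverage check: which tracks have a Music-Whisper caption?
captioned = df[df["music_whisper_caption"].notna()]

# Cosine similarity between two tag embeddings (normalizing here rather than
# assuming the stored vectors are unit-length).
a = np.asarray(df["tag_embedding"].iloc[0], dtype=np.float32)
b = np.asarray(df["tag_embedding"].iloc[1], dtype=np.float32)
cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```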
### Whisper Embeddings (whisper_embeddings/)

NPZ files containing mean-pooled Whisper encoder hidden states:

- `embeddings`: float32 array of shape `(N, 768)`, L2-normalized
- `filenames`: string array of filenames matching the parquet entries
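The documented keys can be exercised with a small in-memory round-trip; a real file from whisper_embeddings/ would be loaded the same way with `np.load`:

```python
import io
import numpy as np

# Build a tiny NPZ in memory with the documented keys and properties.
emb = np.random.randn(3, 768).astype(np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # files are L2-normalized
names = np.array(["a.mp3", "b.mp3", "c.mp3"])

buf = io.BytesIO()
np.savez(buf, embeddings=emb, filenames=names)
buf.seek(0)

npz = np.load(buf)
# Each row has unit norm, so row-vs-row inner products are cosine similarities.
norms = np.linalg.norm(npz["embeddings"], axis=1)
```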
### SQLite Database (search_index/metadata.db)

The `tracks` table contains all 908,174 tracks with 34 columns including metadata, aesthetics scores, annotation flags, and NSFW safety labels. The `row_id` column is the primary key used by all FAISS indices.
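Querying the database is plain `sqlite3`. The sketch below uses an in-memory stand-in with only three of the 34 columns (`row_id`, `title`, `subset`); inspect the real schema with `PRAGMA table_info(tracks)`:

```python
import sqlite3

# In-memory stand-in for search_index/metadata.db with a minimal schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tracks (row_id INTEGER PRIMARY KEY, title TEXT, subset TEXT)")
con.executemany(
    "INSERT INTO tracks VALUES (?, ?, ?)",
    [(1, "Track A", "suno"), (2, "Track B", "udio"), (3, "Track C", "suno")],
)

# Count tracks from one platform, using a parameterized query.
suno_count = con.execute(
    "SELECT COUNT(*) FROM tracks WHERE subset = ?", ("suno",)
).fetchone()[0]
```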
### FAISS Indices (search_index/faiss_*.index)

All indices are `IndexFlatIP` (inner product, i.e. cosine similarity for L2-normalized vectors) with 768 dimensions. Each index has a corresponding `idmap_*.npy` that maps FAISS internal indices to SQLite `row_id` values.
| Index | Vectors | Description |
|---|---|---|
| `faiss_tag` | 908,241 | Tag text embeddings |
| `faiss_whisper` | 908,174 | Audio encoder embeddings (music similarity) |
| `faiss_caption` | 798,858 | Music-Whisper caption embeddings |
| `faiss_transcription` | 511,610 | ASR transcription embeddings |
| `faiss_lyric` | 479,313 | Lyrics embeddings |
| `faiss_mood` | 383,616 | Mood text embeddings |
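The idmap indirection can be illustrated without FAISS at all, since `IndexFlatIP` over unit vectors is just a dot-product search. The vectors and row_ids below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a tiny index: 5 L2-normalized vectors plus their idmap_*.npy,
# which maps FAISS positions 0..N-1 back to SQLite row_id values.
vectors = rng.standard_normal((5, 768)).astype(np.float32)
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
idmap = np.array([10, 42, 7, 99, 3])  # hypothetical row_ids

# A query close to vector 1, normalized like the index vectors.
query = vectors[1] + 0.01 * rng.standard_normal(768).astype(np.float32)
query /= np.linalg.norm(query)

# IndexFlatIP scores are inner products; for unit vectors, cosine similarity.
scores = vectors @ query
top = np.argsort(-scores)[:3]  # FAISS-internal indices of the top hits
row_ids = idmap[top]           # translate to SQLite row_ids for lookup
```

With a real index the search call would come from faiss-cpu (`index.search(...)`), but the idmap translation step is the same.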
## NSFW Safety Labels
Each track has NSFW safety scores and labels across three dimensions:
| Dimension | Strict Threshold | Moderate Threshold | Very Likely NSFW | Likely NSFW |
|---|---|---|---|---|
| Gore/Violence | >= 0.3779 | >= 0.3540 | 2,437 (0.27%) | 2,293 (0.25%) |
| Sexual Content | >= 0.3584 | >= 0.3234 | 3,367 (0.37%) | 2,689 (0.30%) |
| Hate Speech | >= 0.3633 | >= 0.3382 | 2,786 (0.31%) | 2,316 (0.26%) |
| Overall (conservative) | - | - | 6,762 (0.74%) | 6,098 (0.67%) |
- `very_likely_nsfw`: cosine similarity above the strict threshold
- `likely_nsfw`: between the strict and moderate thresholds
- `likely_sfw`: below the moderate threshold
- `nsfw_overall_label`: conservative (worst label across all three dimensions wins)

The raw cosine similarity scores (`nsfw_gore_sim`, `nsfw_sexual_sim`, `nsfw_hate_sim`) are stored so you can apply your own thresholds.
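Re-thresholding could look like the following sketch. The DataFrame is synthetic, and `label`, `RANK`, and `thresholds` are illustrative helpers (seeded with the published threshold values), not part of the dataset tooling:

```python
import pandas as pd

# Hypothetical raw similarity scores, as stored in the dataset columns.
df = pd.DataFrame({
    "nsfw_gore_sim":   [0.40, 0.20, 0.36],
    "nsfw_sexual_sim": [0.10, 0.33, 0.10],
    "nsfw_hate_sim":   [0.05, 0.10, 0.05],
})

# Per-dimension (strict, moderate) thresholds; swap in your own values here.
thresholds = {
    "nsfw_gore_sim":   (0.3779, 0.3540),
    "nsfw_sexual_sim": (0.3584, 0.3234),
    "nsfw_hate_sim":   (0.3633, 0.3382),
}

def label(sim: float, strict: float, moderate: float) -> str:
    if sim >= strict:
        return "very_likely_nsfw"
    if sim >= moderate:
        return "likely_nsfw"
    return "likely_sfw"

# Severity order used to pick the conservative overall label.
RANK = {"likely_sfw": 0, "likely_nsfw": 1, "very_likely_nsfw": 2}

# Conservative overall label: the worst per-dimension label wins.
df["overall"] = df.apply(
    lambda r: max((label(r[c], *t) for c, t in thresholds.items()), key=RANK.get),
    axis=1,
)
```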
## Running the Search Server

### Prerequisites

```bash
pip install fastapi uvicorn faiss-cpu numpy pandas sentence-transformers torch scipy tqdm
```
### Option 1: With HF Text Embeddings Inference (Recommended)

TEI provides fast CPU-based embedding serving (~25 ms per query vs ~430 ms with direct Python inference):

```bash
# Start TEI (requires Docker)
docker run -d --name tei-embeddings \
  -p 8090:80 \
  -e HF_TOKEN=your_token \
  ghcr.io/huggingface/text-embeddings-inference:cpu-latest \
  --model-id google/embeddinggemma-300m \
  --max-batch-requests 4

# Start the server with the TEI backend
python server.py --port 7860 --gpu 0 --tei-url http://localhost:8090
```
### Option 2: Direct Python Inference

```bash
python server.py --port 7860 --gpu 0
```

This loads EmbeddingGemma 300M and the Music-Whisper encoder directly into the Python process.
### Server Flags
| Flag | Default | Description |
|---|---|---|
| `--port` | 7860 | HTTP port |
| `--host` | 0.0.0.0 | Bind address |
| `--gpu` | 0 | GPU ID for Whisper encoder |
| `--tei-url` | None | TEI server URL for text embeddings |
| `--no-whisper` | False | Skip loading the Whisper encoder (disables audio similarity search) |
### What Loads at Startup
- 6 FAISS indices (tag, whisper, caption, transcription, lyric, mood)
- 3 BM25 indices (tags, caption, transcription)
- SQLite database (908K tracks)
- Text embedder: TEI backend or Python SentenceTransformer
- Music-Whisper encoder (on GPU): for audio upload similarity search
Total memory: ~20 GB RAM + ~200 MB GPU VRAM
## Search API

### POST /api/search
Main search endpoint combining vector search, BM25, and metadata filtering.
```json
{
  "query": "upbeat electronic dance music",
  "search_fields": ["tag", "caption"],
  "bm25_fields": ["tags", "caption"],
  "top_k": 50,
  "bm25_weight": 0.3,
  "min_score": 4.0,
  "max_score": 10.0,
  "subsets": ["suno", "udio"],
  "nsfw_filter": "sfw_only",
  "stage2_enabled": true,
  "stage2_field": "caption",
  "stage2_top_k": 200
}
```
Search fields (FAISS vector search):

- `tag`: search by music tags/genres
- `caption`: search by Music-Whisper captions
- `transcription`: search by ASR transcriptions
- `lyric`: search by lyrics content
- `mood`: search by mood descriptors
- `whisper`: search by audio embedding similarity

BM25 fields (text search): `tags`, `caption`, `transcription`

Filters:

- `min_score` / `max_score`: aesthetics score range (0-10)
- `subsets`: list of source platforms
- `nsfw_filter`: `"sfw_only"`, `"nsfw_only"`, or `null` for all

Two-stage search: first retrieves `stage2_top_k` candidates, then re-ranks by a second field.
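Besides curl, the endpoint can be called from Python with only the standard library. The payload keys come from the request body shown above; `search` is a hypothetical helper, and since the response schema is not documented here, the sketch just returns the parsed JSON:

```python
import json
import urllib.request

# A minimal request body; any of the documented keys can be added.
payload = {
    "query": "upbeat electronic dance music",
    "search_fields": ["tag", "caption"],
    "top_k": 10,
    "nsfw_filter": "sfw_only",
}

def search(base_url: str = "http://localhost:7860") -> dict:
    """POST the payload to /api/search on a running server and parse the JSON reply."""
    req = urllib.request.Request(
        base_url + "/api/search",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Call `search()` while `server.py` is running to get the ranked results.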
### POST /api/search_by_audio

Upload an audio file to find similar tracks by audio fingerprint.

```bash
curl -X POST http://localhost:7860/api/search_by_audio \
  -F "file=@song.mp3" \
  -F "top_k=20" \
  -F "nsfw_filter=sfw_only"
```
### POST /api/search_similar

Find tracks similar to an existing track by `row_id`.

```json
{
  "row_id": 12345,
  "field": "whisper",
  "top_k": 20
}
```
### GET /api/stats
Returns dataset statistics (total tracks, index sizes, NSFW counts).
## Building the Index from Scratch

If you want to rebuild the search indices from the parquet files:

```bash
python build_search_index.py --force
```

This reads all parquets from `public/` (and optionally `private/` for additional embeddings), then builds the SQLite database, FAISS indices, and BM25 indices. It takes ~30 minutes on a modern machine.
## Source Data
The original audio data comes from ai-music/ai-music-deduplicated, organized by platform:
| Platform | Tracks | Audio Format | Notable Fields |
|---|---|---|---|
| Mureka | 383,549 | MP3 | genres, moods, model version |
| Suno | 307,539 | MP3 | tags (in metadata), prompt, model_name, explicit flag |
| Udio | 115,140 | MP3 | tags (array), lyrics, prompt, likes, plays |
| Riffusion | 99,228 | M4A | sound (style description), lyrics_timestamped, conditions |
| Sonauto | 2,718 | OGG | tags (array), description, keyword |
## Models Used
| Model | Purpose | Output |
|---|---|---|
| laion/music-whisper | Music captioning + audio embeddings | Text caption + 768-dim encoder embedding |
| nvidia/parakeet-tdt-0.6b-v3 | ASR transcription | Text + word-level timestamps |
| google/embeddinggemma-300m | Text sentence embeddings | 768-dim L2-normalized vectors |
## License
Apache 2.0
## Citation

```bibtex
@misc{laion-tunes-2025,
  title={LAION-Tunes: Annotated AI Music Search Dataset},
  author={LAION},
  year={2025},
  url={https://huggingface.co/datasets/laion/laion-tunes}
}
```