AIT-86M-GGUF

Quantized GGUF releases for augmem/AIT-86M.

Historical Local Gate Baseline

An earlier revision of this card dropped the historical local gate summary previously kept in the TE-86M release directory; that baseline is restored here for continuity.

Attached JSON:

  • teacher_dual_mn20whisper_exact_gate_baseline_20260424T155324Z.json

Seeded split-excluded baseline at 1280d:

Slice            Metric     R@1
Speech holdout   A->T       0.5652
Speech holdout   T->A       0.5992
Speech holdout   avg        0.5822
WavCaps FSD      A->T       0.1078
WavCaps FSD      T->A       0.1030
WavCaps FSD      avg        0.1054
SALT             A->I       0.1692
SALT             I->A       0.1261
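The "avg" rows above appear to be the unweighted arithmetic mean of the two retrieval directions; a minimal sketch verifying that against the table values (assuming a simple mean, which is the common convention for bidirectional retrieval):

```python
# Check that each "avg" R@1 row equals the mean of the two retrieval
# directions reported in the table (assumes an unweighted mean).
speech = {"A->T": 0.5652, "T->A": 0.5992}
wavcaps = {"A->T": 0.1078, "T->A": 0.1030}

def avg_r1(directions):
    """Unweighted mean of the directional R@1 scores, rounded to 4 places."""
    return round(sum(directions.values()) / len(directions), 4)

print(avg_r1(speech))   # 0.5822, matching the table
print(avg_r1(wavcaps))  # 0.1054, matching the table
```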

These figures are reference metrics for the base checkpoint family. The GGUF files here are quantized distribution artifacts and are not separately rebenchmarked in this repo.

Files

File                                                                  Notes
AIT-86M-q8_0.gguf                                                     Q8_0 quantization
AIT-86M-q5_0.gguf                                                     Q5_0 quantization
teacher_dual_mn20whisper_exact_gate_baseline_20260424T155324Z.json    Restored canonical local gate baseline summary

These are compact quantized exports for edge deployment and custom runtimes.

They are distribution artifacts for the AIT-86M model family, not a separate model line. Downstream runtime validation remains application-specific.
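For planning edge deployments, rough on-disk sizes can be estimated from the GGML block layouts: Q8_0 packs 32 weights plus an fp16 scale into 34 bytes (8.5 bits/weight), and Q5_0 into 22 bytes (5.5 bits/weight). A minimal sketch using the 86.5M parameter count from this card; actual file sizes also include GGUF metadata and any tensors kept at higher precision, so treat these as ballpark figures:

```python
# Ballpark GGUF size estimate from GGML block formats. Ignores header
# metadata and unquantized tensors, so real files will differ somewhat.
PARAMS = 86.5e6  # parameter count from this model card

BITS_PER_WEIGHT = {
    "Q8_0": 34 * 8 / 32,  # 32 int8 weights + fp16 scale per block -> 8.5 bpw
    "Q5_0": 22 * 8 / 32,  # 32 5-bit weights + fp16 scale per block -> 5.5 bpw
}

def est_mib(quant):
    """Estimated weight storage in MiB for the given quantization type."""
    return PARAMS * BITS_PER_WEIGHT[quant] / 8 / 2**20

for q in ("Q8_0", "Q5_0"):
    print(f"{q}: ~{est_mib(q):.0f} MiB")
```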

Full Model

Use the full safetensors checkpoint for training, conversion, or maximum compatibility:

augmem/AIT-86M

License

Apache 2.0

Model Details

Format: GGUF
Model size: 86.5M params
Architecture: omniembed

