Parallax_Vision-AmongUs
Parallax_Vision-AmongUs is a high-precision game-state classifier designed to find order in the chaos. Using a multi-stage synthesis pipeline, it detects specific game signatures with a confidence that belies its tiny file size.
1. Technical Specs
- Total Parameters: 431,940
- Total Model Size: 1.65 MB
- Input Resolution: 100x100 RGB
- Architecture: 4-Part Neural Pipeline (CAE + Relational Grid)
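As a sanity check (plain arithmetic, not an official breakdown), the parameter counts of the autoencoder and classifier defined in the usage script below can be tallied by hand; the assumption here is that the gap to the listed 431,940 total belongs to the remaining pipeline stages, which the script does not include.

```python
# Back-of-envelope parameter count for the two modules in the usage script.

def conv2d_params(c_in, c_out, k):
    """Parameters of a (transposed) conv layer: weights + bias."""
    return c_in * c_out * k * k + c_out

def linear_params(n_in, n_out):
    """Parameters of a fully connected layer: weights + bias."""
    return n_in * n_out + n_out

cae_params = (
    conv2d_params(3, 16, 3)            # encoder conv 1
    + conv2d_params(16, 32, 3)         # encoder conv 2
    + linear_params(32 * 25 * 25, 10)  # encoder projection to the latent
    + linear_params(10, 32 * 25 * 25)  # decoder expansion
    + conv2d_params(32, 16, 3)         # decoder transpose conv 1
    + conv2d_params(16, 3, 3)          # decoder transpose conv 2
)
classifier_params = linear_params(10, 4)

print(cae_params + classifier_params)  # -> 430201 for these two modules
```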
2. The Facts
This model doesn't just "look" at pixels; it analyzes the relationships between them. That lets it stay accurate even when images are blurry, compressed, or distorted.
| Class | Tested Confidence |
|---|---|
| IsItAmongUs | 100.0% |
| IsItGamePlay | 99.0%+ |
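The classifier head in the usage script emits four independent sigmoid scores, of which only the first two are documented above. A minimal sketch of turning those scores into yes/no labels; the 0.5 cutoff and the names of the two undocumented slots are assumptions, not part of the release:

```python
THRESHOLD = 0.5  # assumed cutoff; not specified by the release
CLASS_NAMES = ["IsItAmongUs", "IsItGamePlay", "slot_2", "slot_3"]  # last two hypothetical

def to_labels(scores, threshold=THRESHOLD):
    """Map per-class sigmoid scores to boolean labels."""
    return {name: score >= threshold for name, score in zip(CLASS_NAMES, scores)}

print(to_labels([0.999, 0.991, 0.12, 0.03]))
```

Because each output is an independent sigmoid rather than a softmax, several classes can be "on" at once, e.g. an image can be both Among Us and gameplay.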
3. The Story
Imagine you’re looking through a foggy window. You can’t see the fine details of the person outside, but you can see the shape of their movement and the way they block the light. You know who it is not because you see their face, but because you recognize the "vibe" of their silhouette.
That is Parallax_Vision. It compresses each 100x100 image into a tiny "semantic DNA": a single 10-dimensional latent vector. Even if the resulting reconstruction looks like a blob to a human, the model sees the hidden geometry. It’s a specialized brain that knows exactly what a "Bean" looks like, even in the dark.
It’s just cool.
4. Usage (Inference Script)
This script handles the download, unzips the weights, and runs a prediction on your test image.
```python
import os
import zipfile

import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

# --- SETTINGS ---
MODEL_REPO = "Parallax-labs-1/parallax_VISION_amongus"
INPUT_IMAGE = "default.png"  # Your image filename here
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# --- 1. DOWNLOAD & PREPARE ---
if not os.path.exists("model.zip"):
    print("Downloading weights...")
    os.system(f"wget https://huggingface.co/{MODEL_REPO}/resolve/main/model.zip")
if not os.path.exists("weights"):
    with zipfile.ZipFile("model.zip", 'r') as zip_ref:
        zip_ref.extractall("weights")

# --- 2. ARCHITECTURE ---
class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),   # 100x100 -> 50x50
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),  # 50x50 -> 25x25
            nn.Flatten(),
            nn.Linear(32 * 25 * 25, 10)             # 10-dim latent
        )
        self.dec = nn.Sequential(
            nn.Linear(10, 32 * 25 * 25),
            nn.Unflatten(1, (32, 25, 25)),
            nn.ConvTranspose2d(32, 16, 3, 2, 1, 1), nn.ReLU(),   # 25x25 -> 50x50
            nn.ConvTranspose2d(16, 3, 3, 2, 1, 1), nn.Sigmoid()  # 50x50 -> 100x100
        )

    def forward(self, x):
        v = self.enc(x)
        return self.dec(v), v

class Final(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(10, 4), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

# --- 3. LOAD ---
cae = CAE().to(DEVICE)
classifier = Final().to(DEVICE)
cae.load_state_dict(torch.load('weights/cae.pth', map_location=DEVICE))
classifier.load_state_dict(torch.load('weights/classifier.pth', map_location=DEVICE))
cae.eval()
classifier.eval()

# --- 4. PREDICT ---
def predict(img_path):
    if not os.path.exists(img_path):
        print(f"Error: {img_path} not found.")
        return
    img = Image.open(img_path).convert('RGB')
    transform = transforms.Compose([transforms.Resize((100, 100)), transforms.ToTensor()])
    img_t = transform(img).unsqueeze(0).to(DEVICE)
    with torch.no_grad():
        recon, vec = cae(img_t)
        p = classifier(vec)[0]
    print(f"\nResults for {img_path}:")
    print(f"Among Us Signature: {p[0].item()*100:.2f}%")
    print(f"Gameplay Signature: {p[1].item()*100:.2f}%")

    # Visualization: input next to its latent-space reconstruction
    plt.figure(figsize=(8, 4))
    plt.subplot(1, 2, 1)
    plt.imshow(img_t[0].cpu().permute(1, 2, 0))
    plt.title("Input")
    plt.axis('off')
    plt.subplot(1, 2, 2)
    plt.imshow(recon[0].cpu().permute(1, 2, 0))
    plt.title("Reconstructed")
    plt.axis('off')
    plt.show()

if __name__ == "__main__":
    predict(INPUT_IMAGE)
```
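The 10-dimensional vector `vec` returned by `cae` can also serve as a compact image fingerprint. As an illustration outside the released pipeline, two images could be compared by the cosine similarity of their latents; the helper below is pure Python to keep the sketch dependency-free:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# With two latents from the script above this would be e.g.:
#   cosine_similarity(vec_a[0].tolist(), vec_b[0].tolist())
print(cosine_similarity([1.0, 0.0, 0.0], [1.0, 0.0, 0.0]))  # -> 1.0
```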
5. License
Licensed under Apache 2.0.