# LLM4AirTrack: LLM-Driven Multi-Feature Fusion for Aircraft Trajectory Prediction
## Overview

LLM4AirTrack adapts the LLM4STP (Large Language Model for Ship Trajectory Prediction) framework from maritime AIS data to the aviation ADS-B domain. The core insight is that pre-trained LLMs encode sequential pattern recognition that transfers to spatiotemporal trajectory data through lightweight reprogramming, without full fine-tuning.

The framework uses a frozen GPT-2 backbone with trainable adapter modules (~2.4% of total parameters) to predict future aircraft positions and classify flight routes/procedures.
## Architecture

```
                    LLM4AirTrack Framework

ADS-B Features (9-dim: xyz + direction + polar)
         │
         ▼
┌──────────────────┐
│ RevIN Normalizer │  Instance normalization per feature
└──────────────────┘
         │
         ▼
┌──────────────────┐
│ Patch Tokenizer  │  Overlapping temporal patches (8×9=72)
└──────────────────┘
         │
         ▼
┌──────────────────┐      ┌──────────────────────┐
│ Patch Embedder   │      │ Text Prototype Bank  │
│   (72 → 768)     │      │ (256 learned protos) │
└──────────────────┘      └──────────────────────┘
         │                            │
         ▼                            ▼
┌──────────────────────────────────────┐
│     Cross-Attention Reprogrammer     │
│  Q=patches, K=V=prototypes (8-head)  │
│   Maps trajectory → LLM text space   │
└──────────────────────────────────────┘
         │
         ▼
┌──────────────────┐
│ Prompt-as-Prefix │  Aviation context prompt prepended
└──────────────────┘
         │
         ▼
┌──────────────────┐
│   Frozen GPT-2   │  124M params frozen, language knowledge
└──────────────────┘
         │
         ├──────────────────┐
         ▼                  ▼
   ┌───────────┐   ┌──────────────────┐
   │ Traj Head │   │ Classification   │
   │   (xyz)   │   │ Head (route/rwy) │
   └───────────┘   └──────────────────┘
```
## Key Components
**9-Dimensional Kinematic Features** (from ATSCC):
- Position: (x, y, z) in East-North-Up coordinates
- Directional unit vectors: (ux, uy, uz) — velocity direction
- Polar components: (r, sin θ, cos θ) — angular position
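For reference, this feature construction can be sketched in a few lines of NumPy. The `kinematic_features` helper below is illustrative, not the package's own implementation; the exact differencing and edge handling in `llm4airtrack.data` may differ.

```python
import numpy as np

def kinematic_features(xyz: np.ndarray) -> np.ndarray:
    """Build [x, y, z, ux, uy, uz, r, sin θ, cos θ] from a (T, 3) ENU track.
    Illustrative sketch of the ATSCC-style 9-dim features."""
    # Directional unit vectors from finite differences of position.
    vel = np.gradient(xyz, axis=0)                       # (T, 3) velocity estimate
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    u = vel / np.clip(speed, 1e-8, None)                 # (ux, uy, uz)

    # Polar components in the horizontal (East-North) plane.
    r = np.linalg.norm(xyz[:, :2], axis=1, keepdims=True)
    theta = np.arctan2(xyz[:, 1:2], xyz[:, 0:1])
    return np.concatenate([xyz, u, r, np.sin(theta), np.cos(theta)], axis=1)  # (T, 9)
```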
**Patch Tokenization:** Overlapping temporal windows (patch_len=8, stride=4) → 14 patches from a 60-step context
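In PyTorch terms the patching step is a single `unfold` call. This is a sketch of the arithmetic, not the package's tokenizer: (60 − 8) / 4 + 1 = 14 patches, each flattened to an 8×9 = 72-dim token.

```python
import torch

# Overlapping patches over the time axis of a (batch, time, features) tensor.
x = torch.randn(1, 60, 9)
patches = x.unfold(dimension=1, size=8, step=4)  # (1, 14, 9, 8): window dim moved last
patches = patches.permute(0, 1, 3, 2)            # (1, 14, 8, 9): (batch, patch, time, feature)
tokens = patches.reshape(1, 14, 8 * 9)           # (1, 14, 72): one 72-dim token per patch
```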
**Cross-Attention Reprogramming** (from Time-LLM): 256 learned text prototypes serve as a "translation dictionary" between the trajectory and language domains
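A minimal sketch of the reprogramming layer: patch embeddings act as queries over a prototype bank, so every output token is a mixture of prototype vectors already living in the LLM's input space. The `Reprogrammer` class here is illustrative; in Time-LLM proper, the prototypes are derived from the LLM's vocabulary embeddings rather than learned from scratch.

```python
import torch
import torch.nn as nn

class Reprogrammer(nn.Module):
    """Illustrative cross-attention reprogramming: Q=patches, K=V=prototypes."""
    def __init__(self, d_model=768, n_prototypes=256, n_heads=8):
        super().__init__()
        # Learned text prototypes (assumption: learned free parameters here).
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, patch_emb):                    # (B, n_patches, d_model)
        proto = self.prototypes.unsqueeze(0).expand(patch_emb.size(0), -1, -1)
        out, _ = self.attn(query=patch_emb, key=proto, value=proto)
        return out                                   # (B, n_patches, d_model)
```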
**Frozen GPT-2 Backbone:** 124M frozen parameters preserve pre-trained language understanding while keeping training efficient

**Dual Output Heads:**
- Trajectory Prediction: future (x, y, z) positions via Smooth L1 loss
- Route Classification: STAR/IAF/Runway procedure via Cross-Entropy loss
## Parameter Efficiency
| Component | Parameters | Trainable |
|---|---|---|
| GPT-2 Backbone | 124,439,808 | 0 (frozen) |
| Patch Embedder | 57,600 | 57,600 |
| Cross-Attention Reprogrammer | 2,560,512 | 2,560,512 |
| Trajectory Head | 329,946 | 329,946 |
| Classification Head | 150,543 | 150,543 |
| Total | 127,543,059 | 3,103,251 (2.43%) |
## Training

### Dataset
- Source: ATFMTraj — RKSIa (Incheon International Airport arrivals)
- Origin: OpenSky ADS-B recordings, 2018-2023
- Preprocessing: Raw lat/lon/alt → ENU coordinates → normalized to [-1,1] by r_max=120km
- Trajectories: 8,091 training + 8,092 test (16,183 total flights)
- Windows: 282,191 training + 20,000 evaluation sliding windows
- Context: 60 timesteps (1-second intervals = 1 minute of flight)
- Prediction: 30 timesteps ahead (30 seconds)
- Classes: 39 route labels (STAR × IAF × Runway combinations)
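The lat/lon/alt → ENU → [-1, 1] step above can be approximated as follows. The `to_enu_normalized` helper is hypothetical and uses a simple equirectangular tangent-plane projection around the airport reference point; the dataset's actual preprocessing may use an exact geodetic (WGS-84) conversion.

```python
import numpy as np

R_EARTH = 6371000.0  # mean Earth radius, metres

def to_enu_normalized(lat, lon, alt, lat0, lon0, alt0, r_max=120_000.0):
    """Approximate lat/lon/alt (degrees, metres) → local ENU around the
    reference (lat0, lon0, alt0), scaled to [-1, 1] by r_max. Sketch only."""
    lat_r, lon_r = np.radians(lat), np.radians(lon)
    lat0_r, lon0_r = np.radians(lat0), np.radians(lon0)
    e = R_EARTH * np.cos(lat0_r) * (lon_r - lon0_r)  # East offset
    n = R_EARTH * (lat_r - lat0_r)                   # North offset
    u = np.asarray(alt, dtype=float) - alt0          # Up offset
    return np.stack([e, n, u], axis=-1) / r_max
```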
### Hyperparameters
| Parameter | Value |
|---|---|
| LLM Backbone | openai-community/gpt2 (768 hidden, 12 layers) |
| Optimizer | AdamW (β₁=0.9, β₂=0.999) |
| Learning Rate | 5×10⁻⁴ with cosine annealing warm restarts |
| Weight Decay | 1×10⁻⁵ |
| Batch Size | 128 |
| Epochs | 5 |
| Gradient Clipping | max_norm=1.0 |
| Multi-task Weight | λ_traj=1.0, λ_cls=0.1 |
| Loss (trajectory) | Smooth L1 (Huber) |
| Loss (classification) | Cross-Entropy |
| Hardware | NVIDIA T4 (16GB VRAM, used ~1.4GB) |
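Put together, one optimisation step with these settings looks roughly like the sketch below. `train_step` is illustrative and assumes `model(context, task="both")` returns a dict with `pred_trajectory` (B, 30, 3) and `pred_class` (B, 39), as in the Quick Inference example.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, context, target_xyz, route_label,
               lam_traj=1.0, lam_cls=0.1):
    """One multi-task step: Smooth L1 on positions + 0.1x CE on route class."""
    out = model(context, task="both")
    loss = (lam_traj * F.smooth_l1_loss(out["pred_trajectory"], target_xyz)
            + lam_cls * F.cross_entropy(out["pred_class"], route_label))
    optimizer.zero_grad()
    loss.backward()
    # Gradient clipping at max_norm=1.0, per the hyperparameter table.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.item()

# Optimizer per the table; the warm-restart scheduler would be stepped alongside:
# optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=1e-5)
# scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=1)
```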
## Results
| Epoch | Train Loss | ADE | FDE | Route Accuracy |
|---|---|---|---|---|
| 1 | 0.2335 | 0.01500 | 0.02047 | 34.7% |
| 2 | 0.2110 | 0.01200 | 0.01635 | 36.1% |
| 3 | 0.2033 | 0.01026 | 0.01426 | 36.3% |
| 4 | 0.2037 | 0.01345 | 0.01858 | 34.9% |
| 5 | 0.2003 | 0.01518 | 0.02043 | 36.5% |
**Best model (epoch 3):**
- ADE: 0.01026 (normalized ENU scale; with r_max=120km → ~1.23km average displacement)
- FDE: 0.01426 (~1.71km final displacement error at 30s horizon)
- Route Classification: 36.3% accuracy over 39 classes (14× above random baseline of 2.6%)
- RMSE: x=0.00957, y=0.00942, z=0.00072 (altitude prediction is very accurate)
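For clarity, the ADE/FDE numbers above follow the usual definitions: mean per-step Euclidean displacement, and displacement at the final predicted step. The `ade_fde` helper below is illustrative; multiplying by r_max = 120 km converts the normalized values to kilometres as done above.

```python
import numpy as np

def ade_fde(pred, gt, r_max_km=120.0):
    """ADE/FDE on normalized ENU trajectories of shape (N, horizon, 3),
    returned both on the normalized scale and rescaled to km."""
    d = np.linalg.norm(pred - gt, axis=-1)  # (N, horizon) per-step displacement
    ade = d.mean()                          # average over all steps and samples
    fde = d[:, -1].mean()                   # displacement at the final step only
    return ade, fde, ade * r_max_km, fde * r_max_km
```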
## Usage

### Quick Inference

```python
import torch
import json
from huggingface_hub import hf_hub_download

# Download model files
config_path = hf_hub_download("Jdice27/LLM4AirTrack", "config.json")
weights_path = hf_hub_download("Jdice27/LLM4AirTrack", "adapter_weights.pt")

# You can use the self-contained train_full.py or the modular llm4airtrack package
from llm4airtrack.model import LLM4AirTrack

with open(config_path) as f:
    cfg = json.load(f)

model = LLM4AirTrack(
    llm_name=cfg["llm_name"],
    context_len=cfg["context_len"],
    pred_len=cfg["pred_len"],
    n_classes=cfg["n_classes"],
    n_prototypes=cfg["n_prototypes"],
    patch_len=cfg["patch_len"],
    patch_stride=cfg["patch_stride"],
)
state = torch.load(weights_path, map_location="cpu")
model.load_state_dict(state, strict=False)
model.eval()

# Input: 60 timesteps × 9 kinematic features
# Features: [x, y, z, ux, uy, uz, r, sin_θ, cos_θ] in ENU coordinates
context = torch.randn(1, 60, 9)  # Replace with real data
outputs = model(context, task="both")
future_xyz = outputs["pred_trajectory"]          # (1, 30, 3) — future ENU positions
route_probs = outputs["pred_class"].softmax(-1)  # (1, 39) — route probabilities
```
### Data Pipeline

```python
import numpy as np

from llm4airtrack.data import download_atfm_dataset, load_atfm_raw, compute_kinematic_features

# Download and load ATFMTraj
download_atfm_dataset("RKSIa", cache_dir="./data")
data, labels = load_atfm_raw("RKSIa", "TEST", "./data")

# Get kinematic features for a single trajectory
traj = data[0]                 # (T_max, 3) ENU coordinates
valid = ~np.isnan(traj[:, 0])  # mask padded NaN rows
features = compute_kinematic_features(traj[valid])  # (T, 9)
```
### Downstream Tasks
The model produces rich trajectory representations suitable for:
| Task | How to Use |
|---|---|
| Track Activity Classification | Use pred_class output — identifies STAR/IAF/runway procedure |
| Trajectory Prediction | Use pred_trajectory — 30-second position forecast |
| Anomaly Detection | Compare pred_trajectory vs actual — large deviations flag anomalies |
| Conflict Detection | Run on multiple aircraft, check predicted trajectory intersections |
| ETA Prediction | Extract LLM hidden states as features for regression head |
| Transfer to New Airports | Fine-tune adapter weights on new airport data (ESSA, LSZH included in ATFMTraj) |
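As a concrete example of the anomaly-detection row, a window can be flagged by comparing the model's forecast to the positions actually flown. The `flag_anomaly` helper and its 0.02 threshold (≈2.4 km at r_max = 120 km) are illustrative assumptions; an operational threshold would be tuned per procedure.

```python
import numpy as np

def flag_anomaly(pred_xyz, actual_xyz, threshold=0.02):
    """Flag a window as anomalous when the mean displacement between the
    30-step forecast and the flown positions exceeds `threshold`.
    pred_xyz, actual_xyz: (30, 3) arrays in normalized ENU coordinates."""
    deviation = np.linalg.norm(pred_xyz - actual_xyz, axis=-1).mean()
    return bool(deviation > threshold), deviation
```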
## Design Decisions & Adaptation from Maritime (LLM4STP) to Aviation (ADS-B)
| Aspect | LLM4STP (Maritime AIS) | LLM4AirTrack (Aviation ADS-B) |
|---|---|---|
| Dimensionality | 2D (lat, lon) | 3D (lat, lon, altitude → ENU xyz) |
| Features | SOG, COG, ROT | Ground speed → directional vectors; vertical rate → uz |
| Update Rate | ~10s intervals | 1s intervals (higher resolution) |
| Route Structure | Free navigation | Defined STARs/SIDs (structured procedures) |
| Context | Port/strait proximity | Airport/procedure context (encoded in prompt) |
| Phase Segmentation | Anchoring/transiting | Climb/cruise/descent/approach |
| Classification | Vessel type | Route procedure (39 STAR×IAF×RWY classes) |
| Spatial Encoding | Lat/lon directly | ENU Cartesian + polar components |
## References

### Foundational Work
- LLM4STP: GitHub — Original maritime trajectory prediction framework
- Time-LLM: arXiv 2310.01728 — LLM reprogramming for time series (ICLR 2024)
### Aviation Domain
- ATSCC: arXiv 2407.20028 — Self-supervised trajectory representation, 9-dim feature engineering
- LLM4Delay: arXiv 2510.23636 — Cross-modality LLM for aviation delay prediction
- ATFMTraj: HuggingFace Dataset — Aircraft trajectory classification data
### Related Approaches
- Flight2Vec: arXiv 2412.16581 — Behavior-adaptive patching for flight trajectories
- H3+CLM: arXiv 2405.09596 — Spatial tokenization for trajectory prediction
- SKETCH: arXiv 2601.18537 — Semantic key-point conditioning
## Citation

```bibtex
@misc{llm4airtrack2026,
  title={LLM4AirTrack: LLM-Driven Multi-Feature Fusion for Aircraft Trajectory Prediction},
  author={Jdice27},
  year={2026},
  url={https://huggingface.co/Jdice27/LLM4AirTrack},
  note={Adapted from LLM4STP for aviation ADS-B domain}
}
```