---
license: mit
datasets:
  - ILSVRC/imagenet-1k
  - uoft-cs/cifar10
  - uoft-cs/cifar100
language:
  - en
metrics:
  - accuracy
base_model:
  - MS-ResNet
---

# I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks

[![Paper](https://img.shields.io/badge/Arxiv-2511.08065-B31B1B.svg)](https://arxiv.org/abs/2511.08065) [![AAAI 2026](https://img.shields.io/badge/AAAI%202026-Oral-4b44ce.svg)](https://aaai.org/) [![Google Scholar](https://img.shields.io/badge/Google%20Scholar-Paper-4285F4?style=flat-square&logo=google-scholar&logoColor=white)](https://scholar.google.com/scholar?cluster=1814482600796011970) [![GitHub](https://img.shields.io/badge/GitHub-Repository-black?logo=github)](https://github.com/Ruichen0424/I2E) [![Hugging Face](https://img.shields.io/badge/Hugging%20Face-Paper-FFD21E?style=flat-square&logo=huggingface&logoColor=black)](https://huggingface.co/papers/2511.08065) [![Hugging Face](https://img.shields.io/badge/Hugging%20Face-Datasets-FFD21E?style=flat-square&logo=huggingface&logoColor=black)](https://huggingface.co/datasets/UESTC-BICS/I2E)
## 🚀 Introduction

This repository contains the **pre-trained weights** for the paper **"I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks"**, which has been accepted for **Oral Presentation at AAAI 2026**.

**I2E** is a pioneering framework that bridges the data-scarcity gap in neuromorphic computing. By simulating microsaccadic eye movements via highly parallelized convolution, I2E converts static images into high-fidelity event streams in real time (>300× faster than prior methods).

### ✨ Key Highlights

* **SOTA Performance**: Achieves **60.50%** top-1 accuracy on event-based ImageNet.
* **Sim-to-Real Transfer**: Pre-training on I2E data enables **92.5%** accuracy on real-world CIFAR10-DVS, setting a new benchmark.
* **Real-Time Conversion**: Enables on-the-fly data augmentation for deep SNN training.

## 🏆 Model Zoo & Results

We provide pre-trained models for **I2E-CIFAR** and **I2E-ImageNet**. You can download the `.pth` files directly from the [**Files and versions**](https://huggingface.co/Ruichen0424/I2E/tree/main) tab in this repository; a minimal loading sketch is shown below, followed by the full results table.
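For a quick start, here is a minimal loading sketch using `huggingface_hub` and `torch`. The checkpoint filename and state-dict layout below are assumptions (check the **Files and versions** tab for the actual names), and the MS-ResNet model definition itself lives in the GitHub repository, so treat this as an illustrative sketch rather than official loading code.

```python
# Unofficial sketch: fetch a released checkpoint and load its weights.
# The filename below is hypothetical; see "Files and versions" for real names.
import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="Ruichen0424/I2E",
    filename="msresnet18_i2e_cifar10.pth",  # hypothetical filename
)

# Checkpoints may store a raw state_dict or wrap it in a dict; handle both.
state = torch.load(ckpt_path, map_location="cpu")
state_dict = state.get("state_dict", state)

# model = MSResNet18(num_classes=10)   # definition lives in the GitHub repo
# model.load_state_dict(state_dict)    # then evaluate or fine-tune
print(f"Loaded {len(state_dict)} tensors from {ckpt_path}")
```

The table below summarizes the released checkpoints and their top-1 accuracies.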
| Target Dataset | Architecture | Method | Top-1 Acc. |
|---|---|---|---|
| CIFAR10-DVS (Real) | MS-ResNet18 | Baseline | 65.6% |
| CIFAR10-DVS (Real) | MS-ResNet18 | Transfer-I | 83.1% |
| CIFAR10-DVS (Real) | MS-ResNet18 | Transfer-II (Sim-to-Real) | 92.5% |
| I2E-CIFAR10 | MS-ResNet18 | Baseline-I | 85.07% |
| I2E-CIFAR10 | MS-ResNet18 | Baseline-II | 89.23% |
| I2E-CIFAR10 | MS-ResNet18 | Transfer-I | 90.86% |
| I2E-CIFAR100 | MS-ResNet18 | Baseline-I | 51.32% |
| I2E-CIFAR100 | MS-ResNet18 | Baseline-II | 60.68% |
| I2E-CIFAR100 | MS-ResNet18 | Transfer-I | 64.53% |
| I2E-ImageNet | MS-ResNet18 | Baseline-I | 48.30% |
| I2E-ImageNet | MS-ResNet18 | Baseline-II | 57.97% |
| I2E-ImageNet | MS-ResNet18 | Transfer-I | 59.28% |
| I2E-ImageNet | MS-ResNet34 | Baseline-II | 60.50% |
> **Method Legend:**
> * **Baseline-I**: Training from scratch with minimal augmentation.
> * **Baseline-II**: Training from scratch with full augmentation.
> * **Transfer-I**: Fine-tuning from static ImageNet (or I2E-ImageNet for CIFAR targets).
> * **Transfer-II**: Fine-tuning from I2E-CIFAR10.

## 👁️ Visualization

Below is a visualization of the I2E conversion process, illustrating the high-fidelity conversion from static RGB images to dynamic event streams. More than 200 additional visualization comparisons can be found in [Visualization.md](./Visualization.md).
*(Image grid: static originals 1–4, each paired with its I2E-converted event-stream visualization, converted 1–4.)*
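To make the conversion idea concrete, here is a toy sketch of microsaccade-style image-to-event conversion. This is **not** the I2E algorithm (see the GitHub repository for the real implementation); it only illustrates the general principle described above: small simulated eye movements turn a static image into a frame sequence whose brightness changes are thresholded into ON/OFF events, as a DVS sensor would report them.

```python
# Toy illustration of microsaccade-style image-to-event conversion.
# NOT the I2E algorithm; a simplified sketch of the underlying idea only.
import numpy as np

def toy_image_to_events(img: np.ndarray, steps: int = 8, thresh: float = 0.05):
    """img: (H, W) grayscale array in [0, 1]. Returns (steps-1, 2, H, W) events."""
    rng = np.random.default_rng(0)
    frames = []
    for _ in range(steps):
        dy, dx = rng.integers(-2, 3, size=2)      # random micro-shift in pixels
        frames.append(np.roll(img, (dy, dx), axis=(0, 1)))
    frames = np.stack(frames)                     # (steps, H, W)
    diff = np.diff(frames, axis=0)                # brightness change per step
    on = (diff > thresh).astype(np.float32)       # ON polarity channel
    off = (diff < -thresh).astype(np.float32)     # OFF polarity channel
    return np.stack([on, off], axis=1)            # (steps-1, 2, H, W)

events = toy_image_to_events(np.random.rand(32, 32))
print(events.shape, events.mean())                # sparse binary event tensor
```

The actual I2E framework replaces this naive loop with highly parallelized convolutions, which is what makes real-time conversion possible.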
## 💻 Usage

This repository hosts the **model weights only**. For the **I2E dataset generation code**, **training scripts**, and detailed usage instructions, please refer to our official GitHub repository. To generate the datasets (I2E-CIFAR10, I2E-CIFAR100, I2E-ImageNet) yourself with the I2E algorithm, follow the instructions in the GitHub README.

[![GitHub](https://img.shields.io/badge/GitHub-Repository-black?logo=github)](https://github.com/Ruichen0424/I2E)

The datasets generated by the I2E algorithm can be downloaded here:

[![Hugging Face](https://img.shields.io/badge/Hugging%20Face-Datasets-FFD21E?style=flat-square&logo=huggingface&logoColor=black)](https://huggingface.co/datasets/UESTC-BICS/I2E)

## 📜 Citation

If you find this work or the models useful, please cite our AAAI 2026 paper:

```bibtex
@article{ma2025i2e,
  title={I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks},
  author={Ma, Ruichen and Meng, Liwei and Qiao, Guanchao and Ning, Ning and Liu, Yang and Hu, Shaogang},
  journal={arXiv preprint arXiv:2511.08065},
  year={2025}
}
```
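As an unofficial convenience, the snippet below sketches one way to mirror the generated I2E datasets locally with `huggingface_hub.snapshot_download`. The internal file layout of `UESTC-BICS/I2E` is not documented here, so inspect the dataset page before relying on any particular structure.

```python
# Unofficial sketch: mirror the I2E dataset repository locally.
# The internal layout of UESTC-BICS/I2E is an assumption; check the
# dataset page for the actual file structure before use.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="UESTC-BICS/I2E",
    repo_type="dataset",  # dataset repo, not a model repo
)
print("Datasets downloaded to:", local_dir)
```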