---
license: mit
datasets:
- ILSVRC/imagenet-1k
- uoft-cs/cifar10
- uoft-cs/cifar100
language:
- en
metrics:
- accuracy
base_model:
- MS-ResNet
---
<div align="center">
<h1>I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks</h1>
[Paper](https://arxiv.org/abs/2511.08065)
[AAAI](https://aaai.org/)
[Google Scholar](https://scholar.google.com/scholar?cluster=1814482600796011970)
[GitHub](https://github.com/Ruichen0424/I2E)
[HF Paper Page](https://huggingface.co/papers/2511.08065)
[HF Datasets](https://huggingface.co/datasets/UESTC-BICS/I2E)
</div>
## 📖 Introduction
This repository contains the **pre-trained weights** for the paper **"I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks"**, which has been accepted for **Oral Presentation at AAAI 2026**.
**I2E** is a pioneering framework that bridges the data-scarcity gap in neuromorphic computing. By simulating microsaccadic eye movements with highly parallelized convolutions, I2E converts static images into high-fidelity event streams in real time, more than 300× faster than prior conversion methods.
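To build intuition for what such a conversion produces, here is a minimal conceptual sketch, **not** the actual I2E implementation: it shifts a grayscale image along a few simulated microsaccade directions and thresholds log-intensity differences into ON/OFF event frames. All names and parameter values are illustrative.

```python
import torch

def image_to_events(img: torch.Tensor, shifts, threshold: float = 0.1) -> torch.Tensor:
    """Toy image-to-event sketch (illustrative only, not the I2E algorithm).

    img: grayscale image of shape (H, W) with values in [0, 1].
    shifts: list of (dx, dy) pixel displacements mimicking microsaccades.
    Returns event frames of shape (T, 2, H, W) with ON/OFF polarity channels.
    """
    log_i = torch.log(img.clamp(min=1e-4))  # event cameras respond to log intensity
    prev, frames = log_i, []
    for dx, dy in shifts:
        cur = torch.roll(log_i, shifts=(dy, dx), dims=(0, 1))  # simulated eye shift
        diff = cur - prev
        on = (diff > threshold).float()    # brightness increase -> ON events
        off = (diff < -threshold).float()  # brightness decrease -> OFF events
        frames.append(torch.stack([on, off]))
        prev = cur
    return torch.stack(frames)

events = image_to_events(torch.rand(32, 32), shifts=[(1, 0), (0, 1), (-1, 0)])
print(events.shape)  # torch.Size([3, 2, 32, 32])
```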
### ✨ Key Highlights
* **SOTA Performance**: Achieves **60.50%** top-1 accuracy on event-based ImageNet (I2E-ImageNet, MS-ResNet34).
* **Sim-to-Real Transfer**: Pre-training on I2E data enables **92.5%** accuracy on real-world CIFAR10-DVS, setting a new benchmark.
* **Real-Time Conversion**: Enables on-the-fly data augmentation for deep SNN training.
## 🏆 Model Zoo & Results
We provide pre-trained models for **I2E-CIFAR** and **I2E-ImageNet**. You can download the `.pth` files directly from the [**Files and versions**](https://huggingface.co/Ruichen0424/I2E/tree/main) tab in this repository.
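For programmatic access, here is a minimal sketch using `huggingface_hub` (the checkpoint filename below is hypothetical; check the **Files and versions** tab for the real names):

```python
import torch
from huggingface_hub import hf_hub_download

# Fetch a checkpoint from this repo; the filename is a placeholder,
# see the "Files and versions" tab for the actual checkpoint names.
ckpt_path = hf_hub_download(
    repo_id="Ruichen0424/I2E",
    filename="MS-ResNet18_CIFAR10-DVS_Transfer-II.pth",  # hypothetical
)
state_dict = torch.load(ckpt_path, map_location="cpu")
print(sorted(state_dict)[:5])  # peek at the first few parameter names
```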
<table border="1">
<tr>
<th>Target Dataset</th>
<th align="center">Architecture</th>
<th align="center">Method</th>
<th align="center">Top-1 Acc</th>
</tr>
<!-- CIFAR10-DVS -->
<tr>
<td rowspan="3" align="center" style="vertical-align: middle;"><strong>CIFAR10-DVS</strong><br>(Real)</td>
<td align="center" style="vertical-align: middle;">MS-ResNet18</td>
<td align="center" style="vertical-align: middle;">Baseline</td>
<td align="center" style="vertical-align: middle;">65.6%</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle;">MS-ResNet18</td>
<td align="center" style="vertical-align: middle;">Transfer-I</td>
<td align="center" style="vertical-align: middle;">83.1%</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle;">MS-ResNet18</td>
<td align="center" style="vertical-align: middle;">Transfer-II (Sim-to-Real)</td>
<td align="center" style="vertical-align: middle;"><strong>92.5%</strong></td>
</tr>
<!-- I2E-CIFAR10 -->
<tr>
<td rowspan="3" align="center" style="vertical-align: middle;"><strong>I2E-CIFAR10</strong></td>
<td align="center" style="vertical-align: middle;">MS-ResNet18</td>
<td align="center" style="vertical-align: middle;">Baseline-I</td>
<td align="center" style="vertical-align: middle;">85.07%</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle;">MS-ResNet18</td>
<td align="center" style="vertical-align: middle;">Baseline-II</td>
<td align="center" style="vertical-align: middle;">89.23%</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle;">MS-ResNet18</td>
<td align="center" style="vertical-align: middle;">Transfer-I</td>
<td align="center" style="vertical-align: middle;"><strong>90.86%</strong></td>
</tr>
<!-- I2E-CIFAR100 -->
<tr>
<td rowspan="3" align="center" style="vertical-align: middle;"><strong>I2E-CIFAR100</strong></td>
<td align="center" style="vertical-align: middle;">MS-ResNet18</td>
<td align="center" style="vertical-align: middle;">Baseline-I</td>
<td align="center" style="vertical-align: middle;">51.32%</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle;">MS-ResNet18</td>
<td align="center" style="vertical-align: middle;">Baseline-II</td>
<td align="center" style="vertical-align: middle;">60.68%</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle;">MS-ResNet18</td>
<td align="center" style="vertical-align: middle;">Transfer-I</td>
<td align="center" style="vertical-align: middle;"><strong>64.53%</strong></td>
</tr>
<!-- I2E-ImageNet -->
<tr>
<td rowspan="4" align="center" style="vertical-align: middle;"><strong>I2E-ImageNet</strong></td>
<td align="center" style="vertical-align: middle;">MS-ResNet18</td>
<td align="center" style="vertical-align: middle;">Baseline-I</td>
<td align="center" style="vertical-align: middle;">48.30%</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle;">MS-ResNet18</td>
<td align="center" style="vertical-align: middle;">Baseline-II</td>
<td align="center" style="vertical-align: middle;">57.97%</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle;">MS-ResNet18</td>
<td align="center" style="vertical-align: middle;">Transfer-I</td>
<td align="center" style="vertical-align: middle;">59.28%</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle;">MS-ResNet34</td>
<td align="center" style="vertical-align: middle;">Baseline-II</td>
<td align="center" style="vertical-align: middle;"><strong>60.50%</strong></td>
</tr>
</table>
> **Method Legend:**
> * **Baseline-I**: Training from scratch with minimal augmentation.
> * **Baseline-II**: Training from scratch with full augmentation.
> * **Transfer-I**: Fine-tuning from Static ImageNet (or I2E-ImageNet for CIFAR targets).
> * **Transfer-II**: Fine-tuning from I2E-CIFAR10 (a minimal fine-tuning sketch follows below).
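As a rough illustration of the Transfer recipes above, the sketch below loads a pre-trained checkpoint and fine-tunes on the target data. It is a hedged sketch: the checkpoint filename is hypothetical, and `torchvision`'s `resnet18` merely stands in for MS-ResNet18, whose actual (spiking) definition lives in the GitHub repository.

```python
import torch
from torchvision.models import resnet18  # stand-in for MS-ResNet18 (see GitHub repo)

# Transfer-II sketch: initialize from an I2E-CIFAR10 checkpoint, then
# fine-tune on CIFAR10-DVS. strict=False tolerates head-shape mismatches.
model = resnet18(num_classes=10)
state = torch.load("MS-ResNet18_I2E-CIFAR10.pth", map_location="cpu")  # hypothetical file
model.load_state_dict(state, strict=False)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
# ...then run a standard supervised training loop on the target event frames.
```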
## 👁️ Visualization
Below we visualize the I2E conversion process, illustrating the high-fidelity mapping from static RGB images to dynamic event streams.
More than 200 additional visualization comparisons can be found in [Visualization.md](./Visualization.md).
<table border="0" style="width: 100%">
<tr>
<td width="25%" align="center"><img src="./assets/original_1.jpg" alt="Original 1" style="width:100%"></td>
<td width="25%" align="center"><img src="./assets/converted_1.gif" alt="Converted 1" style="width:100%"></td>
<td width="25%" align="center"><img src="./assets/original_2.jpg" alt="Original 2" style="width:100%"></td>
<td width="25%" align="center"><img src="./assets/converted_2.gif" alt="Converted 2" style="width:100%"></td>
</tr>
<tr>
<td width="25%" align="center"><img src="./assets/original_3.jpg" alt="Original 3" style="width:100%"></td>
<td width="25%" align="center"><img src="./assets/converted_3.gif" alt="Converted 3" style="width:100%"></td>
<td width="25%" align="center"><img src="./assets/original_4.jpg" alt="Original 4" style="width:100%"></td>
<td width="25%" align="center"><img src="./assets/converted_4.gif" alt="Converted 4" style="width:100%"></td>
</tr>
</table>
## 💻 Usage
This repository hosts the **model weights only**.
For the **I2E dataset generation code**, **training scripts**, and detailed usage instructions, please refer to our official GitHub repository.
To generate the datasets (I2E-CIFAR10, I2E-CIFAR100, I2E-ImageNet) yourself using the I2E algorithm, please follow the instructions in the GitHub README.
[GitHub: Ruichen0424/I2E](https://github.com/Ruichen0424/I2E)
Alternatively, the datasets already generated by the I2E algorithm are available for direct download:
[](https://huggingface.co/datasets/UESTC-BICS/I2E)
## 📝 Citation
If you find this work or the models useful, please cite our AAAI 2026 paper:
```bibtex
@article{ma2025i2e,
  title={I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks},
  author={Ma, Ruichen and Meng, Liwei and Qiao, Guanchao and Ning, Ning and Liu, Yang and Hu, Shaogang},
  journal={arXiv preprint arXiv:2511.08065},
  year={2025}
}
``` |