---
language: en
license: apache-2.0
tags:
- text-classification
- int8
- PostTrainingDynamic
datasets:
- mrpc
metrics:
- f1
---

# INT8 BERT base uncased finetuned MRPC

### Post-training dynamic quantization

This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).

The original FP32 model comes from the fine-tuned model [Intel/bert-base-uncased-mrpc](https://huggingface.co/Intel/bert-base-uncased-mrpc).
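
The card does not include the quantization recipe itself. As a rough illustration of what post-training dynamic quantization means (weights converted to INT8 once, activations quantized on the fly at inference, no calibration dataset required), here is a sketch using PyTorch's built-in `torch.quantization.quantize_dynamic`. Note this is not the Intel® Neural Compressor flow that actually produced this model:

```python
# Illustration only: PyTorch's built-in dynamic quantization, which applies
# the same idea as this card's model. The model hosted here was produced
# with Intel® Neural Compressor, not with this exact flow.
import torch
from transformers import AutoModelForSequenceClassification

# Start from the FP32 fine-tuned checkpoint referenced above.
fp32_model = AutoModelForSequenceClassification.from_pretrained(
    'Intel/bert-base-uncased-mrpc'
)

# Linear-layer weights are converted to INT8 once; activations are
# quantized dynamically per batch, so no calibration data is needed.
quantized = torch.quantization.quantize_dynamic(
    fp32_model, {torch.nn.Linear}, dtype=torch.qint8
)
```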

### Test result

- Batch size = 8
- [Amazon Web Services](https://aws.amazon.com/) c6i.xlarge instance (Intel Ice Lake: 4 vCPUs, 8 GB memory)

|   | INT8 | FP32 |
|---|:---:|:---:|
| **Throughput (samples/sec)** | 24.707 | 11.202 |
| **Accuracy (eval-f1)** | 0.8997 | 0.9042 |
| **Model size (MB)** | 174 | 418 |

### Load with Intel® Neural Compressor (requires building from source):

```python
from neural_compressor.utils.load_huggingface import OptimizedModel

# Downloads the INT8 checkpoint from the Hugging Face Hub and restores it
# as a quantized PyTorch model.
int8_model = OptimizedModel.from_pretrained(
    'Intel/bert-base-uncased-mrpc-int8-dynamic',
)
```
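
Once loaded, the model can be used like a regular transformers sequence-classification model. A minimal inference sketch, assuming the object returned by `OptimizedModel.from_pretrained` exposes the standard `logits` output and that the tokenizer of the original FP32 checkpoint applies:

```python
# Minimal inference sketch. Assumes the loaded INT8 model behaves like a
# standard transformers sequence-classification model; the tokenizer is
# taken from the original FP32 checkpoint referenced above.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Intel/bert-base-uncased-mrpc')

# MRPC is a sentence-pair paraphrase task.
inputs = tokenizer(
    'The company reported strong quarterly earnings.',
    'Quarterly earnings at the company were strong.',
    return_tensors='pt',
)
with torch.no_grad():
    logits = int8_model(**inputs).logits

# By GLUE/MRPC convention, class 1 = paraphrase, 0 = not (assumed here).
print('paraphrase' if int(logits.argmax(dim=-1)) == 1 else 'not paraphrase')
```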

Notes:
- The INT8 model outperforms the FP32 model only when the CPU is fully utilized. Under light load, benchmarks can give the misleading impression that INT8 is slower than FP32.
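
To observe this, throughput should be measured with a sustained load. A rough sketch (not the benchmark harness behind the table above), reusing `tokenizer` and `int8_model` from the snippets above:

```python
# Rough throughput sketch (not the harness behind the table above).
# PyTorch uses all available cores by default, so a sustained loop of
# batched inference keeps the CPU busy.
import time
import torch

batch = tokenizer(
    ['This is a sentence.'] * 8,
    ['This is another sentence.'] * 8,
    padding=True, return_tensors='pt',
)

def samples_per_sec(model, n_iters=50, batch_size=8):
    with torch.no_grad():
        model(**batch)  # warm-up
        start = time.perf_counter()
        for _ in range(n_iters):
            model(**batch)
    return n_iters * batch_size / (time.perf_counter() - start)

print(f'INT8 throughput: {samples_per_sec(int8_model):.1f} samples/sec')
```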