# OpenMed-PII-Dutch-ClinicalBGE-Large-335M-v1

**Dutch PII Detection Model | 335M Parameters | Open Source**

## Model Description
OpenMed-PII-Dutch-ClinicalBGE-Large-335M-v1 is a transformer-based token classification model fine-tuned for Personally Identifiable Information (PII) detection in Dutch text. This model identifies and classifies 54 types of sensitive information including names, addresses, social security numbers, medical record numbers, and more.
### Key Features
- Dutch-Optimized: Specifically trained on Dutch text for optimal performance
- High Accuracy: Achieves strong F1 scores across diverse PII categories
- Comprehensive Coverage: Detects 54 entity types spanning personal, financial, medical, and contact information
- Privacy-Focused: Designed for de-identification and compliance with GDPR and other privacy regulations
- Production-Ready: Optimized for real-world text processing pipelines
## Performance

Evaluated on the Dutch subset of the AI4Privacy dataset:
| Metric | Score |
|-------------|--------|
| Micro F1 | 0.8902 |
| Precision | 0.8798 |
| Recall | 0.9008 |
| Macro F1 | 0.8841 |
| Weighted F1 | 0.8905 |
| Accuracy | 0.9901 |
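As a consistency check, the reported micro F1 follows directly from the reported precision and recall:

$$
F_1 = \frac{2 \cdot P \cdot R}{P + R} = \frac{2 \cdot 0.8798 \cdot 0.9008}{0.8798 + 0.9008} \approx 0.8902
$$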
*Figure: Top 10 Dutch PII models.*
## Supported Entity Types
This model detects 54 PII entity types organized into categories:
### Identifiers (22 types)

| Entity | Description |
|--------|-------------|
| ACCOUNTNAME | Account name |
| BANKACCOUNT | Bank account number |
| BIC | Bank Identifier Code (BIC/SWIFT) |
| BITCOINADDRESS | Bitcoin address |
| CREDITCARD | Credit card number |
| CREDITCARDISSUER | Credit card issuer |
| CVV | Card verification value (CVV) |
| ETHEREUMADDRESS | Ethereum address |
| IBAN | International Bank Account Number (IBAN) |
| IMEI | Mobile device identifier (IMEI) |
| ... | and 12 more |
### Personal Info (11 types)

| Entity | Description |
|--------|-------------|
| AGE | Age |
| DATEOFBIRTH | Date of birth |
| EYECOLOR | Eye color |
| FIRSTNAME | First name |
| GENDER | Gender |
| HEIGHT | Height |
| LASTNAME | Last name |
| MIDDLENAME | Middle name |
| OCCUPATION | Occupation |
| PREFIX | Name prefix |
| ... | and 1 more |
### Contact Info (2 types)

| Entity | Description |
|--------|-------------|
| EMAIL | Email address |
| PHONE | Phone number |
### Location (9 types)

| Entity | Description |
|--------|-------------|
| BUILDINGNUMBER | Building number |
| CITY | City |
| COUNTY | County |
| GPSCOORDINATES | GPS coordinates |
| ORDINALDIRECTION | Ordinal direction |
| SECONDARYADDRESS | Secondary address |
| STATE | State / province |
| STREET | Street name |
| ZIPCODE | Postal code |
### Organization (3 types)

| Entity | Description |
|--------|-------------|
| JOBDEPARTMENT | Job department |
| JOBTITLE | Job title |
| ORGANIZATION | Organization name |
### Financial (5 types)

| Entity | Description |
|--------|-------------|
| AMOUNT | Monetary amount |
| CURRENCY | Currency |
| CURRENCYCODE | Currency code |
| CURRENCYNAME | Currency name |
| CURRENCYSYMBOL | Currency symbol |
### Temporal (2 types)

| Entity | Description |
|--------|-------------|
| DATE | Date |
| TIME | Time |
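Downstream pipelines rarely need all 54 types at once. Here is a minimal sketch of filtering detections to a chosen category, assuming the `entity_group` field produced by the pipeline in the Quick Start example below; the `FINANCIAL_TYPES` set is an illustrative subset, not an exhaustive list:

```python
# Illustrative subset of financial entity types; extend as needed.
FINANCIAL_TYPES = {"IBAN", "BIC", "CREDITCARD", "CVV", "AMOUNT", "CURRENCYCODE"}

def filter_by_type(entities, allowed_types=FINANCIAL_TYPES):
    """Keep only detections whose entity type is in the allowed set."""
    return [e for e in entities if e["entity_group"] in allowed_types]
```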
## Usage

### Quick Start
```python
from transformers import pipeline

# "simple" aggregation merges subword tokens into whole-entity spans.
ner = pipeline(
    "ner",
    model="OpenMed/OpenMed-PII-Dutch-ClinicalBGE-Large-335M-v1",
    aggregation_strategy="simple",
)

text = """
Patiënt Jan Jansen (geboren 15-03-1985, BSN: 987654321) is vandaag gezien.
Contact: jan.jansen@email.nl, Telefoon: +31 6 12345678.
Adres: Herengracht 42, 1015 BN Amsterdam.
"""

entities = ner(text)
for entity in entities:
    print(f"{entity['entity_group']}: {entity['word']} (score: {entity['score']:.3f})")
```
### De-identification Example

```python
def redact_pii(text, entities):
    """Replace detected PII spans with their entity-type labels."""
    # Process spans right-to-left so earlier character offsets stay valid.
    sorted_entities = sorted(entities, key=lambda x: x['start'], reverse=True)
    redacted = text
    for ent in sorted_entities:
        redacted = redacted[:ent['start']] + f"[{ent['entity_group']}]" + redacted[ent['end']:]
    return redacted

redacted_text = redact_pii(text, entities)
print(redacted_text)
```
### Batch Processing

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

model_name = "OpenMed/OpenMed-PII-Dutch-ClinicalBGE-Large-335M-v1"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

texts = [
    "Patiënt Jan Jansen (geboren 15-03-1985, BSN: 987654321) is vandaag gezien.",
    "Contact: jan.jansen@email.nl, Telefoon: +31 6 12345678.",
]

# Pad the batch so all sequences share one tensor shape.
inputs = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
# One predicted label id per subword token in each sequence.
predictions = torch.argmax(outputs.logits, dim=-1)
```
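The raw `predictions` are label ids per subword token. Continuing from the batch example above, a minimal sketch of decoding them into BIO tag strings via the model's `id2label` mapping, skipping padding positions with the attention mask:

```python
for i in range(len(texts)):
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][i])
    tags = [model.config.id2label[p.item()] for p in predictions[i]]
    keep = inputs["attention_mask"][i].bool().tolist()
    # Print (token, BIO tag) pairs for real (non-padding) positions only.
    print([(tok, tag) for tok, tag, k in zip(tokens, tags, keep) if k])
```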
## Training Details

### Dataset

- Source: AI4Privacy PII Masking 400k (Dutch subset)
- Format: BIO-tagged token classification (see the illustration below)
- Labels: 76 total (54 B-tags + 21 I-tags + O)
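For intuition, here is a hedged sketch of how one Dutch sentence maps to BIO tags under this scheme; the tokenization and tags are illustrative, not drawn from the actual training data:

```python
# Hypothetical token/tag alignment for illustration only.
tokens = ["Patiënt", "Jan",         "Jansen",     "is", "vandaag", "gezien"]
labels = ["O",       "B-FIRSTNAME", "B-LASTNAME", "O",  "O",       "O"]
```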
### Training Configuration

- Max Sequence Length: 512 tokens
- Epochs: 3
- Framework: Hugging Face Transformers + Trainer API
## Intended Use & Limitations

### Intended Use

- De-identification: Automated redaction of PII in Dutch clinical notes, medical records, and documents
- Compliance: Supporting compliance with the GDPR and other privacy regulations
- Data Preprocessing: Preparing datasets for research by removing sensitive information
- Audit Support: Identifying PII in document collections
### Limitations

**Important**: This model is intended as an assistive tool, not a replacement for human review.

- False Negatives: Some PII may not be detected; always verify output in critical applications
- Context Sensitivity: Performance may vary with domain-specific terminology
- Language: Optimized for Dutch text; may not perform well on other languages
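One practical way to keep a human in the loop is to route low-confidence detections to manual review. A minimal sketch, assuming the pipeline output format from the Quick Start example; the 0.85 threshold is an arbitrary illustration, not a tuned value:

```python
def triage(entities, threshold=0.85):
    """Split detections into auto-redactable spans and spans needing review."""
    confident = [e for e in entities if e["score"] >= threshold]
    review = [e for e in entities if e["score"] < threshold]
    return confident, review

confident, review = triage(entities)
print(f"{len(confident)} auto-redactable, {len(review)} flagged for human review")
```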
## Citation

```bibtex
@misc{openmed-pii-2026,
  title     = {OpenMed-PII-Dutch-ClinicalBGE-Large-335M-v1: Dutch PII Detection Model},
  author    = {OpenMed Science},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/OpenMed/OpenMed-PII-Dutch-ClinicalBGE-Large-335M-v1}
}
```