🧠 Beans-Image-Classification-AI-Model

A fine-tuned image classification model trained on the Beans dataset, which contains three classes: angular_leaf_spot, bean_rust, and healthy. The model is built on the ViT (Vision Transformer) architecture with Hugging Face Transformers and is suitable for educational use, plant disease classification tasks, and image classification experiments.


✨ Model Highlights

  • 📌 Base Model: google/vit-base-patch16-224-in21k
  • 📚 Fine-tuned on: the Beans dataset
  • 🌿 Classes: angular_leaf_spot, bean_rust, healthy
  • 🔧 Framework: Hugging Face Transformers + PyTorch
  • 📦 Preprocessing: AutoImageProcessor from Transformers

🧠 Intended Uses

  • ✅ Educational tools for training and evaluation in agriculture and plant disease detection
  • ✅ Benchmarking vision transformer models on small datasets
  • ✅ Demonstration of fine-tuning workflows with Hugging Face (a minimal sketch follows the Training Details table below)

🚫 Limitations

  • ❌ Not suitable for real-world diagnosis in agriculture without further domain validation
  • ❌ Not robust to significant background noise or occlusion in images
  • ❌ Trained on a small dataset; it may not generalize beyond bean leaf diseases

πŸ“ Input & Output

  • Input: RGB image of a bean leaf (resized to 224 × 224 by the image processor)
  • Output: Predicted class label — angular_leaf_spot, bean_rust, or healthy (a quick check is sketched below)
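
As a quick sanity check of this contract (a minimal sketch; `leaf.jpg` is a placeholder path), the processor should emit a `(1, 3, 224, 224)` tensor and the model config should carry the three labels:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

model_name = "AventIQ-AI/Beans-Image-Classification-AI-Model"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

image = Image.open("leaf.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")

print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
print(model.config.id2label)         # the three class labels listed above
```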

πŸ‹οΈβ€β™‚οΈ Training Details

| Attribute | Value |
|---|---|
| Base Model | `google/vit-base-patch16-224-in21k` |
| Dataset | Beans (train/validation/test splits) |
| Task Type | Image Classification |
| Image Size | 224 × 224 |
| Epochs | 3 |
| Batch Size | 16 |
| Optimizer | AdamW |
| Loss Function | CrossEntropyLoss |
| Framework | PyTorch + Transformers |
| Hardware | CUDA-enabled GPU |
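
The training script itself is not part of this card. The sketch below reproduces the setup in the table using the standard Hugging Face image-classification recipe; the column names (`image`, `labels`) come from the public `beans` dataset, and `Trainer`'s default AdamW optimizer and cross-entropy loss match the table, but the authors' exact script may differ:

```python
import torch
from datasets import load_dataset
from transformers import (AutoImageProcessor, AutoModelForImageClassification,
                          Trainer, TrainingArguments)

ds = load_dataset("beans")  # train / validation / test splits
labels = ds["train"].features["labels"].names

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

def transform(batch):
    # Convert PIL images to normalized 224x224 pixel tensors
    inputs = processor(images=[img.convert("RGB") for img in batch["image"]],
                       return_tensors="pt")
    inputs["labels"] = batch["labels"]
    return inputs

ds = ds.with_transform(transform)

model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)

def collate(examples):
    # Stack per-example tensors into a batch
    return {"pixel_values": torch.stack([e["pixel_values"] for e in examples]),
            "labels": torch.tensor([e["labels"] for e in examples])}

args = TrainingArguments(
    output_dir="beans-vit-finetuned",
    num_train_epochs=3,           # per the table above
    per_device_train_batch_size=16,
    remove_unused_columns=False,  # keep the "image" column for the transform
)

trainer = Trainer(model=model, args=args, data_collator=collate,
                  train_dataset=ds["train"], eval_dataset=ds["validation"])
trainer.train()
```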

📊 Evaluation Metrics

| Metric | Score |
|---|---|
| Accuracy | 0.98 |
| F1-Score | 0.99 |
| Precision | 0.98 |
| Recall | 0.99 |
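
The card does not spell out the evaluation protocol. A straightforward way to compute comparable metrics on the test split of the public `beans` dataset is sketched below; the macro averaging and the use of `scikit-learn` are assumptions, not the authors' documented setup:

```python
import torch
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_name = "AventIQ-AI/Beans-Image-Classification-AI-Model"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)
model.eval()

ds = load_dataset("beans", split="test")

preds, refs = [], []
for example in ds:
    inputs = processor(images=example["image"].convert("RGB"), return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    preds.append(logits.argmax(-1).item())
    refs.append(example["labels"])

precision, recall, f1, _ = precision_recall_fscore_support(refs, preds, average="macro")
print(f"accuracy={accuracy_score(refs, preds):.2f}  precision={precision:.2f}  "
      f"recall={recall:.2f}  f1={f1:.2f}")
```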


🚀 Usage

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

model_name = "AventIQ-AI/Beans-Image-Classification-AI-Model"

# Load the image processor and the fine-tuned model
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)
model.eval()

def predict(image_path):
    # Resize and normalize the image, then run a forward pass
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model(**inputs)
    # Pick the class with the highest logit and map it to its label
    preds = torch.argmax(outputs.logits, dim=1)
    return model.config.id2label[preds.item()]

# Example
print(predict("example_leaf.jpg"))
```
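
To inspect the model's confidence rather than only the top label, you can apply a softmax over the logits. This is a small extension of the same pattern, reusing the `processor` and `model` loaded above (`example_leaf.jpg` remains a placeholder path):

```python
import torch.nn.functional as F

def predict_with_scores(image_path):
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = F.softmax(logits, dim=-1)[0]
    # Map every class label to its predicted probability
    return {model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)}

print(predict_with_scores("example_leaf.jpg"))
```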


🧩 Quantization

Post-training static quantization was applied with PyTorch to reduce model size and accelerate inference on edge devices.
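
The exact quantization recipe is not included in this card. As an illustration only, the sketch below applies PyTorch's *dynamic* int8 quantization to the model's `Linear` layers; static quantization, as mentioned above, additionally requires observer insertion and calibration data, which are not reproduced here. It reuses the `model` loaded in the Usage section:

```python
import os
import torch

# Illustrative stand-in, not the authors' recipe: dynamic int8
# quantization of the Linear layers (weights quantized ahead of time,
# activations quantized on the fly at inference).
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Compare checkpoint sizes on disk
torch.save(quantized_model.state_dict(), "beans_vit_int8.pt")
print(f"int8 checkpoint: {os.path.getsize('beans_vit_int8.pt') / 1e6:.1f} MB")
```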

🗂 Repository Structure

```
beans-vit-finetuned/
├── config.json               ✅ Model architecture & config
├── pytorch_model.bin         ✅ Model weights
├── preprocessor_config.json  ✅ Image processor config
├── special_tokens_map.json   ✅ (Auto-generated, not critical for ViT)
├── training_args.bin         ✅ Training metadata
└── README.md                 ✅ Model card
```

🤝 Contributing

Open to improvements and feedback! Feel free to submit a pull request or open an issue if you find any bugs or want to enhance the model.
