feat: use HF inference
README.md CHANGED

@@ -10,64 +10,191 @@ pinned: false

license: apache-2.0
---

Before:

-# MedGemma Symptom Analyzer
-## Features
-- **Symptom Analysis**:
-2. **Adjust Settings**: Use the temperature slider to control response creativity
-3. **Analyze**: Click "Analyze Symptoms" to get medical insights
-4. **Review Results**: Read the AI-generated analysis and recommendations
-- Seek immediate medical attention for severe or emergency symptoms
-- The AI may not always provide accurate medical information
-- This is not a substitute for proper medical diagnosis
-- Automatic device mapping for optimal performance
-- Temperature-controlled generation for balanced responses
-- **Hardware**: GPU-accelerated inference when available
```bash
-python app.py
```
-- Hugging Face for the Transformers library
-- Gradio team for the interface framework

After:

# MedGemma Symptom Analyzer

A modern medical AI application using Google's MedGemma model via the HuggingFace Inference API for symptom analysis and medical consultation.

## 🏥 Features

- **AI-Powered Symptom Analysis**: Uses Google's MedGemma-4B model for medical insights
- **Comprehensive Medical Reports**: Provides differential diagnoses, next steps, and red flags
- **Interactive Web Interface**: Built with Gradio for easy use
- **Demo Mode**: Fallback functionality when the API is unavailable
- **Medical Safety**: Includes appropriate disclaimers and safety guidance

## 🚀 Quick Start

### 1. Installation

```bash
# Clone the repository
git clone <your-repo-url>
cd medgemma-symptomps

# Install dependencies
pip install -r requirements.txt
```

### 2. HuggingFace Access Setup

The app uses Google's MedGemma model, which requires special access:

1. **Get a HuggingFace Token**:
   - Visit [HuggingFace Settings](https://huggingface.co/settings/tokens)
   - Create a new token with `read` permissions

2. **Request MedGemma Access**:
   - Visit [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it)
   - Click "Request access to this model"
   - Wait for approval from Google (this may take some time)

3. **Set the Environment Variable**:
   ```bash
   export HF_TOKEN="your_huggingface_token_here"
   ```
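To confirm the token is visible to Python before launching the app, a quick check with `huggingface_hub` can help (a sketch using the `whoami` helper from the same library the app already depends on):

```python
import os
from huggingface_hub import whoami

token = os.getenv("HF_TOKEN")
if token:
    # whoami raises if the token is invalid; otherwise it returns account details.
    print("Token belongs to:", whoami(token=token)["name"])
else:
    print("HF_TOKEN is not set; the app will fall back to demo mode.")
```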
### 3. Run the Application

```bash
python3 app.py
```

The app will start on `http://localhost:7860` (or the next available port).

## 🔧 Configuration

### Environment Variables

- `HF_TOKEN`: Your HuggingFace API token (required for model access)
- `FORCE_CPU`: Set to `true` to force CPU usage (not needed for the API version)
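For reference, a minimal sketch of how these variables are read (the `FORCE_CPU` line mirrors the pre-API code and is now effectively ignored):

```python
import os

# Real API access requires HF_TOKEN; without it the app stays in demo mode.
hf_token = os.getenv("HF_TOKEN")

# Holdover from the local-loading version; the API version does no local inference.
force_cpu = os.getenv("FORCE_CPU", "false").lower() == "true"
```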
### Model Access Status

The app handles different access scenarios:

- ✅ **Full Access**: MedGemma model available via API
- ⚠️ **Pending Access**: Waiting for model approval (uses demo mode)
- ❌ **No Access**: Falls back to demo responses

## 🧪 Testing

Test the API connection:

```bash
python3 test_api.py
```

This will verify:

- HuggingFace API connectivity
- Token validity
- Model access permissions
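The full contents of `test_api.py` are not shown in this commit; a minimal sketch of the three checks might look like this:

```python
import os
from huggingface_hub import InferenceClient

token = os.getenv("HF_TOKEN")
assert token, "HF_TOKEN is not set"  # token validity starts with presence

client = InferenceClient(token=token)
try:
    # A tiny generation doubles as a connectivity, token, and gated-access check.
    out = client.text_generation(
        prompt="Hello",
        model="google/medgemma-4b-it",
        max_new_tokens=5,
    )
    print("API reachable and model accessible:", out)
except Exception as e:
    print("API check failed (invalid token or pending model access?):", e)
```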
+
## 📋 Usage
|
| 93 |
+
|
| 94 |
+
### Web Interface
|
| 95 |
+
|
| 96 |
+
1. Open the app in your browser
|
| 97 |
+
2. Enter patient symptoms in the text area
|
| 98 |
+
3. Adjust creativity slider if desired
|
| 99 |
+
4. Click "Analyze Symptoms"
|
| 100 |
+
5. Review the comprehensive medical analysis
|
| 101 |
+
|
| 102 |
+
### Example Symptoms
|
| 103 |
+
|
| 104 |
+
Try these example symptom descriptions:
|
| 105 |
+
|
| 106 |
+
- **Flu-like**: "Fever, headache, body aches, and fatigue for 3 days"
|
| 107 |
+
- **Chest pain**: "Sharp chest pain worsening with breathing, shortness of breath"
|
| 108 |
+
- **Digestive**: "Abdominal pain, nausea, and diarrhea after eating"
|
| 109 |
+
|
| 110 |
+
## 🔒 Medical Disclaimer
|
| 111 |
+
|
| 112 |
+
**⚠️ IMPORTANT**: This tool is for educational purposes only. It should never replace professional medical advice, diagnosis, or treatment. Always consult qualified healthcare professionals for medical concerns.
|
| 113 |
+
|
| 114 |
+
## 🏗️ Architecture
|
| 115 |
+
|
| 116 |
+
### API-Based Design
|
| 117 |
+
|
| 118 |
+
The app now uses HuggingFace Inference API instead of local model loading:
|
| 119 |
|
| 120 |
+
- **Advantages**:
|
| 121 |
+
- No local GPU/CPU requirements
|
| 122 |
+
- Faster startup time
|
| 123 |
+
- Always up-to-date model
|
| 124 |
+
- Reduced memory usage
|
| 125 |
|
| 126 |
+
- **Requirements**:
|
| 127 |
+
- Internet connection
|
| 128 |
+
- Valid HuggingFace token
|
| 129 |
+
- Model access approval
|
| 130 |
+
|
| 131 |
+
### File Structure
|
| 132 |
+
|
| 133 |
+
```
|
| 134 |
+
medgemma-symptomps/
|
| 135 |
+
├── app.py # Main Gradio application
|
| 136 |
+
├── test_api.py # API connection test script
|
| 137 |
+
├── requirements.txt # Python dependencies
|
| 138 |
+
├── README.md # This file
|
| 139 |
+
└── medgemma_app.log # Application logs
|
| 140 |
+
```
|
| 141 |
+
|
| 142 |
+
## 🛠️ Development
|
| 143 |
+
|
| 144 |
+
### Key Components
|
| 145 |
+
|
| 146 |
+
1. **MedGemmaSymptomAnalyzer**: Main class handling API connections
|
| 147 |
+
2. **Gradio Interface**: Web UI with symptom input and analysis display
|
| 148 |
+
3. **Demo Responses**: Fallback functionality for offline use
|
| 149 |
+
|
| 150 |
+
### API Integration
|
| 151 |
+
|
| 152 |
+
```python
|
| 153 |
+
from huggingface_hub import InferenceClient
|
| 154 |
+
|
| 155 |
+
client = InferenceClient(token=hf_token)
|
| 156 |
+
response = client.text_generation(
|
| 157 |
+
prompt=medical_prompt,
|
| 158 |
+
model="google/medgemma-4b-it",
|
| 159 |
+
max_new_tokens=400,
|
| 160 |
+
temperature=0.7
|
| 161 |
+
)
|
| 162 |
+
```
|
| 163 |
+
|
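`text_generation` returns the generated string directly and raises on HTTP errors (for example, a 404/403 while MedGemma access is still pending), which is what the app's demo-mode fallback catches. Recent `huggingface_hub` releases also offer a chat-style call; a sketch, assuming a version that exposes `chat_completion`:

```python
# Alternative chat-style request; reuses the client and prompt from above.
response = client.chat_completion(
    messages=[{"role": "user", "content": medical_prompt}],
    model="google/medgemma-4b-it",
    max_tokens=400,
    temperature=0.7,
)
print(response.choices[0].message.content)
```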
## 🔍 Troubleshooting

### Common Issues

1. **404 Model Not Found**:
   - Ensure you have requested access to MedGemma
   - Wait for Google's approval
   - Verify your HuggingFace token is valid

2. **Demo Mode Only**:
   - Check your internet connection
   - Verify the HF_TOKEN environment variable
   - Confirm model access approval status

3. **Slow Responses**:
   - API responses may take 10-30 seconds
   - Consider adjusting the max_tokens parameter
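To tell an invalid token apart from pending gated-model access, a quick check with `huggingface_hub.model_info` can help (a sketch; the call raises when access has not been granted):

```python
import os
from huggingface_hub import model_info

try:
    info = model_info("google/medgemma-4b-it", token=os.getenv("HF_TOKEN"))
    print("Access granted:", info.id)
except Exception as e:
    print("No access yet, or the token is invalid:", e)
```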
### Getting Help

- Check the application logs: `tail -f medgemma_app.log`
- Test the API connection: `python3 test_api.py`
- Verify model access: visit the HuggingFace model page

## 📚 Resources

- [MedGemma Model Card](https://huggingface.co/google/medgemma-4b-it)
- [HuggingFace Inference API](https://huggingface.co/docs/api-inference/index)
- [Gradio Documentation](https://gradio.app/docs/)

## 📄 License

This project uses the MedGemma model, which has its own licensing terms. Please review the [model license](https://huggingface.co/google/medgemma-4b-it) before use.

---

**Remember**: Always prioritize patient safety and consult healthcare professionals for medical decisions.
app.py CHANGED

@@ -1,11 +1,9 @@

Before:

import gradio as gr
-import torch
-from transformers import AutoProcessor, AutoModelForImageTextToText
-from PIL import Image
import requests
import re
import logging
-import os

# Configure logging
logging.basicConfig(

@@ -20,221 +18,109 @@ logger = logging.getLogger(__name__)

class MedGemmaSymptomAnalyzer:
    def __init__(self):
-        self.model = None
-        self.processor = None
-        self.model_loaded = False
-        logger.info("Initializing MedGemma Symptom Analyzer...")

-    def
-        """
-        if self.model_loaded:
            return True

-        logger.info(f"Loading model: {model_name}")
-
-        # Check if CPU-only mode is forced via environment variable
-        force_cpu = os.getenv("FORCE_CPU", "false").lower() == "true"
-
-        # Detect available device and log system info
-        if force_cpu:
-            device = "cpu"
-            logger.info("Forcing CPU usage via FORCE_CPU environment variable")
-        else:
-            device = "cuda" if torch.cuda.is_available() else "cpu"
-        logger.info(f"Device detected: {device}")
-
-        if device == "cpu":
-            logger.info(f"CPU threads available: {torch.get_num_threads()}")
-        else:
-            logger.info(f"CUDA device: {torch.cuda.get_device_name()}")

        try:
-            # Get HF token from environment
            hf_token = os.getenv("HF_TOKEN")
-            if hf_token:
-                logger.info("Using HF_TOKEN for authentication")
-            else:
-                logger.warning("HF_TOKEN not found in environment variables")

-            if device == "cpu":
-                logger.info("Configuring for CPU-optimized loading...")
-                torch_dtype = torch.float32  # Use float32 for better CPU compatibility
-                device_map = "cpu"  # Explicit CPU device mapping
-                # Set optimal number of threads for CPU inference
-                torch.set_num_threads(4)  # Use 4 threads for better performance
-
-                loading_kwargs = {
-                    "torch_dtype": torch_dtype,
-                    "device_map": device_map,
-                    "low_cpu_mem_usage": True,  # Optimize memory usage on CPU
-                }
            else:
-                device_map = "auto"
-                loading_kwargs = {
-                    "torch_dtype": torch_dtype,
-                    "device_map": device_map,
-                }
-
-            logger.info("Loading processor...")
-            self.processor = AutoProcessor.from_pretrained(
-                model_name,
-                token=hf_token
-            )
-
-            logger.info(f"Loading model with dtype={torch_dtype}, device_map={device_map}...")
-            # Force garbage collection before loading
-            import gc
-            gc.collect()
-
-            self.model = AutoModelForImageTextToText.from_pretrained(
-                model_name,
-                token=hf_token,
-                trust_remote_code=False,  # Security best practice
-                **loading_kwargs
-            )

-                self.model = self.model.to('cpu')
-                logger.info("Model confirmed on CPU")
-            # Force garbage collection after loading
-            import gc
-            gc.collect()
-
-            self.model_loaded = True
-            logger.info(f"Model loaded successfully on {device}!")
            return True

-        except torch.cuda.OutOfMemoryError as e:
-            logger.error(f"GPU out of memory: {str(e)}")
-            logger.info("Attempting CPU fallback due to GPU memory constraints...")
-            try:
-                # Force CPU loading if GPU fails - use correct model class
-                self.model = AutoModelForImageTextToText.from_pretrained(
-                    model_name,
-                    token=hf_token,
-                    trust_remote_code=False,
-                    torch_dtype=torch.float32,
-                    device_map="cpu",
-                    low_cpu_mem_usage=True
-                )
-                self.model = self.model.to('cpu')
-                self.model_loaded = True
-                logger.info("Model loaded successfully on CPU after GPU failure!")
-                return True
-            except Exception as fallback_e:
-                logger.error(f"CPU fallback also failed: {str(fallback_e)}")
-                self.model = None
-                self.processor = None  # Fixed: was self.tokenizer
-                self.model_loaded = False
-                return False
-        except ImportError as e:
-            logger.error(f"Missing dependency for model loading: {str(e)}")
-            logger.info("Please ensure all required packages are installed: pip install -r requirements.txt")
-            self.model = None
-            self.processor = None
-            self.model_loaded = False
-            return False
-        except OSError as e:
-            if "disk quota exceeded" in str(e).lower() or "no space left" in str(e).lower():
-                logger.error("Insufficient disk space for model loading")
-                logger.info("Please free up disk space and try again")
-            elif "connection" in str(e).lower() or "timeout" in str(e).lower():
-                logger.error("Network connection issue during model download")
-                logger.info("Please check your internet connection and try again")
-            else:
-                logger.error(f"OS error during model loading: {str(e)}")
-            self.model = None
-            self.processor = None
-            self.model_loaded = False
-            return False
        except Exception as e:
-            logger.error(f"Failed to
-            logger.warning("Falling back to demo mode due to
-
-            if device == "cpu":
-                logger.info("CPU loading troubleshooting tips:")
-                logger.info("- Ensure sufficient RAM (minimum 8GB recommended)")
-                logger.info("- Check that PyTorch CPU version is installed")
-                logger.info("- Verify HuggingFace token is valid")
-
-            self.model = None
-            self.processor = None
-            self.model_loaded = False
            return False

    def analyze_symptoms(self, symptoms_text, max_length=512, temperature=0.7):
-        """Analyze symptoms
-        # Try to
-        if not self.
-            if not self.
-                # Fallback to demo response if
                return self._get_demo_response(symptoms_text)

-        if not self.
            return self._get_demo_response(symptoms_text)

-        # Format
-        messages = [
-            {
-                "role": "system",
-                "content": [{"type": "text", "text": "You are an expert medical AI assistant."}]
-            },
-            {
-                "role": "user",
-                "content": [{
-                    "type": "text",
-                    "text": f"""Patient presents with the following symptoms: {symptoms_text}

-Based on these symptoms, provide a medical analysis including:
-1. Possible
-2. Recommended
-3. When to

Medical Analysis:"""
-                }]
-            }
-        ]

        try:
-            inputs = self.processor.apply_chat_template(
-                messages,
-                add_generation_prompt=True,
-                tokenize=True,
-                return_dict=True,
-                return_tensors="pt"
-            )
-
-            # Move inputs to model device
-            inputs = {k: v.to(self.model.device) for k, v in inputs.items()}
-
-            input_len = inputs["input_ids"].shape[-1]

-            generation = generation[0][input_len:]
-
-            # Decode response
-            generated_text = self.processor.decode(generation, skip_special_tokens=True)

        except Exception as e:

    def _get_demo_response(self, symptoms_text):
        """Provide a demo response when model is not available"""

@@ -242,7 +128,7 @@ Medical Analysis:"""

        # Simple keyword-based demo responses
        if any(word in symptoms_lower for word in ['fever', 'headache', 'fatigue', 'body aches']):
-            return """**DEMO MODE -

Based on the symptoms described (fever, headache, fatigue), here's a general analysis:

@@ -265,10 +151,10 @@ Based on the symptoms described (fever, headache, fatigue), here's a general analysis:

- Persistent vomiting
- Symptoms worsen rapidly

-*Note: This is a demo response. For actual medical analysis, the

        elif any(word in symptoms_lower for word in ['chest pain', 'breathing', 'shortness']):
-            return """**DEMO MODE -

Based on chest-related symptoms, here's a general analysis:

@@ -291,10 +177,10 @@ Based on chest-related symptoms, here's a general analysis:

- Dizziness or fainting
- These symptoms require immediate medical care

-*Note: This is a demo response. For actual medical analysis, the

        else:
-            return f"""**DEMO MODE -

Thank you for describing your symptoms. In demo mode, I can provide general guidance:

@@ -310,9 +196,9 @@ Thank you for describing your symptoms. In demo mode, I can provide general guidance:

- You have underlying health conditions
- You're unsure about the severity

-For a proper AI-powered analysis of your specific symptoms: "{symptoms_text[:100]}...", the

-*Note: This is a demo response. For actual medical analysis, the

# Initialize the analyzer
analyzer = MedGemmaSymptomAnalyzer()
After:

import gradio as gr
import os
import requests
import re
import logging
from huggingface_hub import InferenceClient

# Configure logging
logging.basicConfig(
...

class MedGemmaSymptomAnalyzer:
    def __init__(self):
        self.client = None
        self.model_name = "google/medgemma-4b-it"
        self.api_connected = False
        logger.info("Initializing MedGemma Symptom Analyzer with HuggingFace Inference API...")

    def connect_to_api(self):
        """Connect to HuggingFace Inference API"""
        if self.api_connected:
            return True

        logger.info("Connecting to HuggingFace Inference API...")

        try:
            # Get HF token from environment or use provided token
            hf_token = os.getenv("HF_TOKEN")

            if hf_token:
                logger.info("Using HuggingFace token for API authentication")
            else:
                logger.warning("No HuggingFace token found")
                return False

            # Initialize the InferenceClient
            self.client = InferenceClient(token=hf_token)

            self.api_connected = True
            logger.info("✅ Connected to HuggingFace Inference API successfully!")
            return True

        except Exception as e:
            logger.error(f"Failed to connect to HuggingFace API: {str(e)}")
            logger.warning("Falling back to demo mode due to API connection failure")
            self.client = None
            self.api_connected = False
            return False

    def analyze_symptoms(self, symptoms_text, max_length=512, temperature=0.7):
        """Analyze symptoms using HuggingFace Inference API"""
        # Try to connect to API if not already connected
        if not self.api_connected:
            if not self.connect_to_api():
                # Fallback to demo response if API connection fails
                return self._get_demo_response(symptoms_text)

        if not self.client:
            return self._get_demo_response(symptoms_text)

        # Format prompt for text generation
        prompt = f"""You are an expert medical AI assistant trained to analyze symptoms and provide comprehensive medical insights.

Patient presents with the following symptoms: {symptoms_text}

Based on these symptoms, provide a comprehensive medical analysis including:
1. **Possible Differential Diagnoses**: List the most likely conditions based on the symptoms
2. **Recommended Next Steps**: Suggest appropriate diagnostic tests or evaluations
3. **When to Seek Immediate Medical Attention**: Identify red flags requiring urgent care
4. **General Care Recommendations**: Provide supportive care and lifestyle advice

Medical Analysis:"""

        try:
            logger.info("Sending request to HuggingFace Inference API...")

            # Make API call using text generation
            response_text = self.client.text_generation(
                prompt=prompt,
                model=self.model_name,
                max_new_tokens=max_length,
                temperature=temperature,
                return_full_text=False
            )

            # Check if we got a response
            if response_text:
                logger.info("✅ Successfully received response from API")
                return response_text
            else:
                logger.warning("No response received from API")
                return self._get_demo_response(symptoms_text)

        except Exception as e:
            error_msg = str(e)
            logger.error(f"Error during API analysis: {error_msg}")

            # Provide specific error messages for common issues
            if "404" in error_msg and "medgemma" in error_msg.lower():
                logger.warning("MedGemma model may require special access approval (gated model)")
                return f"""**API ACCESS REQUIRED**

The MedGemma model appears to require special access approval from Google/HuggingFace.

To use the actual MedGemma model:
1. Visit: https://huggingface.co/google/medgemma-4b-it
2. Request access to the gated model
3. Wait for approval from Google
4. Ensure your HuggingFace token has the necessary permissions

**Current Status**: Using demo mode while waiting for model access.

{self._get_demo_response(symptoms_text)}"""
            else:
                # Fallback to demo response on API error
                return self._get_demo_response(symptoms_text)

    def _get_demo_response(self, symptoms_text):
        """Provide a demo response when model is not available"""
        ...
        # Simple keyword-based demo responses
        if any(word in symptoms_lower for word in ['fever', 'headache', 'fatigue', 'body aches']):
            return """**DEMO MODE - API not connected**

Based on the symptoms described (fever, headache, fatigue), here's a general analysis:

...

- Persistent vomiting
- Symptoms worsen rapidly

*Note: This is a demo response. For actual medical analysis, the HuggingFace Inference API needs to be connected.*"""

        elif any(word in symptoms_lower for word in ['chest pain', 'breathing', 'shortness']):
            return """**DEMO MODE - API not connected**

Based on chest-related symptoms, here's a general analysis:

...

- Dizziness or fainting
- These symptoms require immediate medical care

*Note: This is a demo response. For actual medical analysis, the HuggingFace Inference API needs to be connected.*"""

        else:
            return f"""**DEMO MODE - API not connected**

Thank you for describing your symptoms. In demo mode, I can provide general guidance:

...

- You have underlying health conditions
- You're unsure about the severity

For a proper AI-powered analysis of your specific symptoms: "{symptoms_text[:100]}...", the HuggingFace Inference API would need to be successfully connected.

*Note: This is a demo response. For actual medical analysis, the HuggingFace Inference API needs to be connected.*"""

# Initialize the analyzer
analyzer = MedGemmaSymptomAnalyzer()
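For a quick check without the web UI, the analyzer can be exercised directly (a sketch; `analyzer` is the instance created at the bottom of app.py, and HF_TOKEN must be exported to get a non-demo answer):

```python
# Hypothetical smoke test: call the analyzer without going through Gradio.
result = analyzer.analyze_symptoms(
    "Fever, headache, and fatigue for 3 days",
    max_length=256,
    temperature=0.7,
)
print(result)
```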