---
title: MedGemma Symptom Analyzer
emoji: 🏥
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.35.0
app_file: app.py
pinned: false
license: apache-2.0
---
# MedGemma Symptom Analyzer

A modern medical AI application that uses Google's MedGemma model via the HuggingFace Inference API for symptom analysis and medical consultation.

## 🏥 Features

- **AI-Powered Symptom Analysis**: Uses Google's MedGemma-4B model for medical insights
- **Comprehensive Medical Reports**: Provides differential diagnoses, next steps, and red flags
- **Interactive Web Interface**: Built with Gradio for easy use
- **Demo Mode**: Fallback functionality when the API is unavailable
- **Medical Safety**: Includes appropriate disclaimers and safety guidance
## 🚀 Quick Start

### 1. Installation

```bash
# Clone the repository
git clone <your-repo-url>
cd medgemma-symptomps

# Install dependencies
pip install -r requirements.txt
```
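The exact dependency list ships with the repository; based on the SDK declared in the Space metadata and the libraries used later in this README, a minimal `requirements.txt` might look roughly like this (entries and versions are assumptions, not the actual file):

```text
gradio>=5.35.0     # matches the sdk_version in the Space metadata
huggingface_hub    # provides InferenceClient used for the API calls
```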
### 2. HuggingFace Access Setup

The app uses Google's MedGemma model, which requires special access:

1. **Get HuggingFace Token**:
   - Visit [HuggingFace Settings](https://huggingface.co/settings/tokens)
   - Create a new token with `read` permissions

2. **Request MedGemma Access**:
   - Visit [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it)
   - Click "Request access to this model"
   - Wait for approval from Google (may take some time)

3. **Set Environment Variable**:

   ```bash
   export HF_TOKEN="your_huggingface_token_here"
   ```
### 3. Run the Application

```bash
python3 app.py
```

The app will start on `http://localhost:7860` (or the next available port).
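The port fallback comes from Gradio itself: `launch()` starts at 7860 and tries the next free port if that one is taken. The snippet below is a stripped-down sketch of the kind of interface `app.py` builds, not the actual application code; the function body and labels are placeholders:

```python
import gradio as gr

def analyze_symptoms(symptoms: str, temperature: float) -> str:
    # Placeholder: the real app.py calls MedGemma via the Inference API here.
    return f"Demo analysis for: {symptoms} (temperature={temperature})"

demo = gr.Interface(
    fn=analyze_symptoms,
    inputs=[
        gr.Textbox(label="Patient Symptoms", lines=5),
        gr.Slider(0.1, 1.0, value=0.7, label="Creativity"),
    ],
    outputs=gr.Markdown(label="Analysis"),
    title="MedGemma Symptom Analyzer",
)

if __name__ == "__main__":
    # With no server_port argument, Gradio starts at 7860 and
    # automatically falls back to the next free port.
    demo.launch()
```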
## 🔧 Configuration

### Environment Variables

- `HF_TOKEN`: Your HuggingFace API token (required for model access)
- `FORCE_CPU`: Set to `true` to force CPU usage (not needed for the API version)

A short sketch of reading these variables appears below.
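How `app.py` actually consumes these variables is defined in the repository; the following is only a minimal, hypothetical sketch of reading them at startup:

```python
import os

# Read configuration from the environment (variable names match the list above).
HF_TOKEN = os.environ.get("HF_TOKEN")                               # required for model access
FORCE_CPU = os.environ.get("FORCE_CPU", "false").lower() == "true"  # optional CPU override

if HF_TOKEN is None:
    print("HF_TOKEN is not set; the app will fall back to demo mode.")
```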
### Model Access Status

The app handles different access scenarios:

- ✅ **Full Access**: MedGemma model available via API
- ⚠️ **Pending Access**: Waiting for model approval (uses demo mode)
- ❌ **No Access**: Falls back to demo responses
## 🧪 Testing

Test the API connection:

```bash
python3 test_api.py
```

This will verify:

- HuggingFace API connectivity
- Token validity
- Model access permissions

A rough sketch of this kind of check appears below.
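The authoritative checks live in `test_api.py`; a minimal, hypothetical version of the same three checks using `huggingface_hub` could look like this:

```python
import os

from huggingface_hub import model_info, whoami
from huggingface_hub.utils import GatedRepoError

MODEL_ID = "google/medgemma-4b-it"
token = os.environ.get("HF_TOKEN")

try:
    user = whoami(token=token)         # verifies connectivity and that the token is valid
    print(f"Token OK, authenticated as: {user['name']}")
except Exception as err:
    print(f"Token or connectivity problem: {err}")

try:
    model_info(MODEL_ID, token=token)  # succeeds only once gated access has been granted
    print("MedGemma access granted.")
except GatedRepoError:
    print("Access not granted yet; request it on the model page.")
except Exception as err:
    print(f"Could not check model access: {err}")
```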
## 📋 Usage

### Web Interface

1. Open the app in your browser
2. Enter patient symptoms in the text area
3. Adjust the creativity slider if desired
4. Click "Analyze Symptoms"
5. Review the comprehensive medical analysis

The running app can also be called programmatically; see the sketch after these steps.
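Gradio apps also expose a programmatic client. The sketch below uses `gradio_client`; the endpoint name `/predict` and the argument order are assumptions, so check the running app's "Use via API" page for the real signature:

```python
from gradio_client import Client

# Connect to the locally running app (adjust the URL/port if needed).
client = Client("http://localhost:7860")

# api_name and the argument order are illustrative assumptions.
result = client.predict(
    "Fever, headache, body aches, and fatigue for 3 days",  # symptoms
    0.7,                                                     # creativity / temperature
    api_name="/predict",
)
print(result)
```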
### Example Symptoms

Try these example symptom descriptions:

- **Flu-like**: "Fever, headache, body aches, and fatigue for 3 days"
- **Chest pain**: "Sharp chest pain worsening with breathing, shortness of breath"
- **Digestive**: "Abdominal pain, nausea, and diarrhea after eating"
## 🚨 Medical Disclaimer

**⚠️ IMPORTANT**: This tool is for educational purposes only. It should never replace professional medical advice, diagnosis, or treatment. Always consult qualified healthcare professionals for medical concerns.
## 🏗️ Architecture

### API-Based Design

The app now uses the HuggingFace Inference API instead of loading the model locally:

- **Advantages**:
  - No local GPU/CPU requirements
  - Faster startup time
  - Always up-to-date model
  - Reduced memory usage
- **Requirements**:
  - Internet connection
  - Valid HuggingFace token
  - Model access approval
### File Structure

```
medgemma-symptomps/
├── app.py              # Main Gradio application
├── test_api.py         # API connection test script
├── requirements.txt    # Python dependencies
├── README.md           # This file
└── medgemma_app.log    # Application logs
```
## 🛠️ Development

### Key Components

1. **MedGemmaSymptomAnalyzer**: Main class handling API connections
2. **Gradio Interface**: Web UI with symptom input and analysis display
3. **Demo Responses**: Fallback functionality for offline use
### API Integration

```python
from huggingface_hub import InferenceClient

# Authenticate against the HuggingFace Inference API
client = InferenceClient(token=hf_token)

# Send the assembled medical prompt to the hosted MedGemma model
response = client.text_generation(
    prompt=medical_prompt,
    model="google/medgemma-4b-it",
    max_new_tokens=400,
    temperature=0.7,
)
```
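The demo-mode fallback mentioned in the Features and Key Components sections is not spelled out in this README; one plausible way to structure it is to wrap the API call and return a canned response on failure. The names below are illustrative, not the actual `app.py` code:

```python
from huggingface_hub import InferenceClient

DEMO_RESPONSE = (
    "Demo mode: the MedGemma API is currently unavailable. "
    "This is a canned educational response, not a real analysis."
)

def analyze(medical_prompt: str, hf_token: str | None) -> str:
    """Call MedGemma via the Inference API, falling back to a demo response."""
    if not hf_token:
        return DEMO_RESPONSE
    try:
        client = InferenceClient(token=hf_token)
        return client.text_generation(
            prompt=medical_prompt,
            model="google/medgemma-4b-it",
            max_new_tokens=400,
            temperature=0.7,
        )
    except Exception:  # no access yet, network error, etc.
        return DEMO_RESPONSE
```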
## 🔍 Troubleshooting

### Common Issues

1. **404 Model Not Found**:
   - Ensure you have requested access to MedGemma
   - Wait for Google's approval
   - Verify that your HuggingFace token is valid

2. **Demo Mode Only**:
   - Check your internet connection
   - Verify that the `HF_TOKEN` environment variable is set
   - Confirm model access approval status

3. **Slow Responses**:
   - API responses may take 10-30 seconds
   - Consider adjusting the `max_new_tokens` parameter
### Getting Help

- Check the application logs: `tail -f medgemma_app.log`
- Test the API connection: `python3 test_api.py`
- Verify model access: Visit the HuggingFace model page
## 📚 Resources

- [MedGemma Model Card](https://huggingface.co/google/medgemma-4b-it)
- [HuggingFace Inference API](https://huggingface.co/docs/api-inference/index)
- [Gradio Documentation](https://gradio.app/docs/)
## 📄 License

This project uses the MedGemma model, which has its own licensing terms. Please review the [model license](https://huggingface.co/google/medgemma-4b-it) before use.
---

**Remember**: Always prioritize patient safety and consult healthcare professionals for medical decisions.