MezayaAI - Qwen2.5-3B-Instruct (Fine-tuned)

This repository hosts a fine-tuned version of the Qwen/Qwen2.5-3B-Instruct model, specifically adapted for MezayaAI, a role-aware AI assistant designed for project/product/agile delivery management.

MezayaAI aims to assist in various project delivery contexts, such as PI Planning, Backlog Refinement, and Governance meetings, by extracting key information, answering questions, and generating role-specific outputs based on a Single Source of Truth (SSOT).

Model Details

  • Base Model: Qwen/Qwen2.5-3B-Instruct
  • Fine-tuning: This model has been fine-tuned to incorporate deterministic routing logic, refined SSOT handling for partial decisions and versioning, and strict hallucination guardrails.
  • Languages: English
  • Model size: 3B parameters (Safetensors, F16)

Usage

This model is designed to be integrated into applications that manage the project delivery lifecycle. It performs best when provided with context (e.g., meeting transcripts) and clear instructions for extracting structured data or generating role-specific reports.
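As a minimal sketch, the model can be queried directly with the transformers library using the standard chat-template flow. The repo id, system prompt, and `build_messages` helper below are illustrative assumptions, not an official interface shipped with the model:

```python
# Minimal sketch of querying the model with transformers.
# The system prompt and helper below are illustrative, not the model's
# official contract.

def build_messages(transcript: str, instruction: str) -> list[dict]:
    """Assemble a chat-format request: context first, then the task."""
    return [
        {"role": "system", "content": "You are MezayaAI, a role-aware delivery assistant."},
        {"role": "user", "content": f"Context:\n{transcript}\n\nTask: {instruction}"},
    ]

if __name__ == "__main__":
    # Heavy imports are deferred so the helper above can be read without
    # transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Mezaya/MezayaAI"  # assumed repo id for this card
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    messages = build_messages(
        "Alice: we agreed to ship v2 on Friday.", "Extract all decisions."
    )
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```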

Core Functionality:

  • INGEST: Extracts decisions, actions, risks, dependencies, open questions, and story candidates from unstructured text (e.g., meeting notes).
  • QNA: Answers questions based only on the ingested Single Source of Truth (SSOT), stating when information is missing.
  • GENERATE: Produces role-specific outputs (e.g., executive summaries, action plans, user stories) based on the SSOT, adhering to strict output contracts and hallucination guardrails.
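The Model Details above mention deterministic routing across these three modes. The sketch below shows how a host application might route requests and enforce the QNA guardrail; the function names, keyword rules, and SSOT lookup are assumptions for illustration, not the model's actual fine-tuned logic:

```python
# Hypothetical sketch of deterministic routing into the three modes and of
# the QNA guardrail (answer only from the SSOT). Keyword rules and the SSOT
# shape are illustrative assumptions, not the fine-tuned model's internals.

MODES = ("INGEST", "QNA", "GENERATE")

def route(request: str) -> str:
    """Map a user request to one of the three modes, deterministically."""
    text = request.lower()
    if any(k in text for k in ("transcript", "meeting notes", "ingest")):
        return "INGEST"
    if text.rstrip().endswith("?") or text.startswith(("who", "what", "when", "where", "why", "how")):
        return "QNA"
    return "GENERATE"  # default: produce a role-specific output

def answer_from_ssot(question: str, ssot: dict[str, str]) -> str:
    """QNA guardrail sketch: answer only from the SSOT, never invent facts."""
    key = question.lower().rstrip("?").strip()
    return ssot.get(key, "This information is missing from the SSOT.")
```

For example, `route("Please ingest these meeting notes")` returns `"INGEST"`, while `answer_from_ssot("budget?", {})` returns the missing-information message instead of a guess.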

Example Gradio Demo (app.py)

The accompanying app.py provides a basic Gradio interface for interacting with the model. It demonstrates how to send user messages and receive responses via the hf_chat function, which wraps the loaded model.
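A minimal sketch in the spirit of that app.py is shown below. The hf_chat body here is a placeholder (the shipped app.py wires it to the fine-tuned model); only the Gradio ChatInterface wiring is the point:

```python
# Sketch of a minimal app.py. hf_chat is a placeholder here; in the real
# app it runs generation on the loaded model with the chat history.

def hf_chat(message: str, history: list) -> str:
    """Placeholder chat function standing in for the model-backed one."""
    return f"[MezayaAI would respond to: {message!r}]"

if __name__ == "__main__":
    import gradio as gr  # deferred so the placeholder is importable without gradio

    # ChatInterface passes (message, history) to fn and renders a chat UI.
    gr.ChatInterface(fn=hf_chat, title="MezayaAI").launch()
```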

To run the Gradio demo locally:

  1. Ensure you have gradio, torch, transformers, accelerate, and sentencepiece installed (pip install -r requirements.txt).
  2. Run python app.py in your terminal.

requirements.txt

The requirements.txt file lists the Python dependencies required to run this model and the Gradio demo:

torch
transformers
accelerate
sentencepiece
huggingface_hub
gradio