Akshar Nano V2 (Llama 3.2 1B) 🇮🇳

Akshar Nano V2 is a sovereign, highly efficient 1-billion-parameter AI model developed by Shreyas Chavan at Vyom AI. It is natively trilingual, trained to understand and reason in English, Hindi, and Marathi.

Model Details

  • Developer: Shreyas Chavan (Vyom AI)
  • Architecture: 1 Billion Parameters (Llama 3.2 base)
  • Training: Fine-tuned on a custom 25,000+ row trilingual dataset using Unsloth.
  • Languages: English, Hindi, Marathi
  • Format: 16-bit Safetensors (Ready for GGUF conversion)

Capabilities

Akshar Nano V2 has been meticulously fine-tuned to excel in several key areas:

  1. Indic Mathematics & Logical Reasoning: Capable of solving math problems and logical puzzles in Hindi and Marathi.
  2. Fluent Conversational AI: Understands cultural nuances and can hold natural, flowing conversations in native Indic languages without sounding robotic.
  3. Identity & Sovereignty: Deeply understands its identity as Akshar Nano, built by Vyom AI, ensuring a consistent and proud persona.
  4. General English Knowledge: Retains the vast general knowledge of the base Llama 3.2 model while significantly boosting its Indic language capabilities.

Usage

This model is exported in the standard Hugging Face 16-bit (BF16) safetensors format. It ships with the proper GenerationConfig, tokenizer, and special_tokens_map.json, so generation stops correctly and the chat template is applied as intended.
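Since the repo ships standard safetensors plus a chat template, it can be loaded directly with the transformers library. The sketch below assumes the repo id from this card; the prompt and generation settings are illustrative, not the model's shipped GenerationConfig.

```python
# Minimal sketch: load Akshar Nano V2 with transformers and run one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "shreyaschavan11/Akshar-Nano-V2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

# Llama-3-style chat messages; the bundled chat template formats them.
messages = [{"role": "user", "content": "दो और तीन का योग क्या है?"}]  # "What is two plus three?" (Hindi)
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

On machines without BF16 support, `torch_dtype=torch.float16` or `torch.float32` can be substituted in `from_pretrained`.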

To use this model in Ollama or llama.cpp, convert this repository to GGUF, for example with the Hugging Face "GGUF My Repo" Space.
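The conversion can also be done locally with llama.cpp's converter script. This is a sketch, assuming llama.cpp is cloned with its Python requirements installed; the output filenames and the Q4_K_M quantization choice are illustrative.

```shell
REPO=shreyaschavan11/Akshar-Nano-V2
MODEL_DIR=${REPO##*/}   # local directory name: Akshar-Nano-V2

# 1. Download the 16-bit safetensors weights from the Hub
huggingface-cli download "$REPO" --local-dir "$MODEL_DIR"

# 2. Convert the HF repo to a 16-bit GGUF file
python llama.cpp/convert_hf_to_gguf.py "$MODEL_DIR" \
  --outfile akshar-nano-v2-f16.gguf --outtype f16

# 3. (Optional) quantize for a smaller memory footprint
llama.cpp/build/bin/llama-quantize \
  akshar-nano-v2-f16.gguf akshar-nano-v2-Q4_K_M.gguf Q4_K_M
```

The resulting `.gguf` file can be pointed at directly from llama.cpp, or wrapped in an Ollama Modelfile with a `FROM` line referencing it.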
