---
base_model: MiniMaxAI/MiniMax-M2.1
base_model_relation: quantized
language:
- en
- zh
library_name: gguf
license: other
license_name: modified-mit
license_link: https://github.com/MiniMax-AI/MiniMax-M2.1/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- text-generation-inference
- minimax
- agent
- code
- gguf
---

# MiniMax-M2.1-GGUF
> [!IMPORTANT]
> **I am currently looking for open positions!** 🤗
> If you find this model useful or are looking for a talented AI/LLM Engineer, please reach out to me on LinkedIn: **[Aaryan Kapoor](https://www.linkedin.com/in/theaaryankapoor/)**.

## Description

This repository contains **GGUF** format model files for [MiniMaxAI's MiniMax-M2.1](https://huggingface.co/MiniMaxAI/MiniMax-M2.1).

**MiniMax-M2.1** is a state-of-the-art agentic model optimized for coding, tool use, and long-horizon planning. It demonstrates exceptional performance on benchmarks such as SWE-bench Verified and VIBE, outperforming or matching models like Claude Sonnet 4.5 on multilingual coding tasks.

### About GGUF

GGUF is a file format introduced by the llama.cpp team as a replacement for GGML, which llama.cpp no longer supports.

## How to Run (llama.cpp)

**Recommended Parameters:**

The original developers recommend the following settings for best performance:

* **Temperature:** `1.0`
* **Top-P:** `0.95`
* **Top-K:** `40`

### CLI Example

![image](https://cdn-uploads.huggingface.co/production/uploads/64e1a459ff3fd4fd8eedb456/F58GJZi50d46WdUFC7LCd.png)

```bash
./llama-cli -m MiniMax-M2.1.Q4_K_M.gguf \
  -c 8192 \
  --temp 1.0 \
  --top-p 0.95 \
  --top-k 40 \
  -p "You are a helpful assistant. Your name is MiniMax-M2.1 and you are built by MiniMax.\n\nUser: Write a Python script to analyze a CSV file.\nAssistant:" \
  -cnv
```
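
### Server Example

If you would rather interact with the model over HTTP, `llama-server` (which ships alongside `llama-cli` in llama.cpp) can expose it through an OpenAI-compatible API. The sketch below reuses the recommended sampling settings; the filename and port are assumptions, so substitute the quantization you actually downloaded.

```bash
# Start an OpenAI-compatible server with the recommended sampling settings.
# Filename and port are assumptions; adjust to your setup.
./llama-server -m MiniMax-M2.1.Q4_K_M.gguf \
  -c 8192 \
  --temp 1.0 \
  --top-p 0.95 \
  --top-k 40 \
  --port 8080
```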
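
Once the server is up, any OpenAI-compatible client can talk to it; a plain `curl` call works too. The endpoint path is the standard one `llama-server` exposes, and the message contents below are illustrative only.

```bash
# Send a chat completion request to the running llama-server instance.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant. Your name is MiniMax-M2.1 and you are built by MiniMax."},
      {"role": "user", "content": "Write a Python script to analyze a CSV file."}
    ],
    "temperature": 1.0,
    "top_p": 0.95
  }'
```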