
Granite-4.1-8B-Base

Model Summary: Granite-4.1-8B-Base is a decoder-only language model with long-context capabilities, designed to support a broad range of text-to-text generation tasks. In addition to standard generation, it supports Fill-in-the-Middle (FIM) code completion through specialized prefix and suffix tokens. The model is trained from scratch on approximately 15 trillion tokens using a five-phase training strategy: 10 trillion tokens in phase one, 2 trillion tokens each in phases two and three, 0.5 trillion tokens in phase four, and roughly 0.4 trillion tokens in phase five. In this final phase, long-context extension is applied to expand the model's context window to 512K tokens.
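
Because the card does not spell out the FIM prompt format, here is a minimal sketch of FIM-style completion. It assumes StarCoder-style sentinel tokens (<fim_prefix>, <fim_suffix>, <fim_middle>); the exact token names for Granite 4.1 should be confirmed against the tokenizer's special-tokens map before use.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-4.1-8b-base"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda")

# Hypothetical FIM sentinels; verify against tokenizer.special_tokens_map
prefix = "def add(a, b):\n    result = "
suffix = "\n    return result"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(fim_prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
# the newly generated span is the "middle" that joins prefix and suffix
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))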

Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may fine-tune Granite 4.1 models for languages beyond these twelve.

Intended Use: Prominent use cases of LLMs in text-to-text generation include summarization, text classification, extraction, question answering, code completion (including FIM), and long-context generation tasks. All Granite Base models can handle these tasks, as they were trained on a large amount of data from various domains. Moreover, they can serve as baselines for creating specialized models for specific application scenarios.

Generation: This is a simple example of how to use the Granite-4.1-8B-Base model.

Install the following libraries:

pip install torch torchvision torchaudio
pip install accelerate
pip install transformers

Then, copy the code snippet below to run the example.

from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"

model_path = "ibm-granite/granite-4.1-8b-base"

tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "The capital of France is"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens, max_length=10)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output[0])

Expected output:

The capital of France is Paris.

Evaluation Results:

Benchmarks Metric 3B Dense 8B Dense 30B Dense
General Tasks
MMLU 5-shot 66.47 73.60 78.44
MMLU-Pro 5-shot,CoT 37.16 44.58 49.51
BBH 3-shot, CoT 63.84 73.83 80.66
AGI EVAL 3-shot 54.32 61.68 69.20
DROP 5-shot 66.04 72.36 78.57
SimpleQA no-judge-short-form 6.85 7.92 10.54
Math Tasks
GSM8K 8-shot 72.93 73.54 83.78
Minerva Math 4-shot 38.00 43.42 45.66
Code Tasks
HumanEval pass@1 [StarCoder Prompt] 76.19 79.24 81.52
HumanEval pass@1 59.76 68.29 67.68
HumanEval+ pass@1 54.27 62.80 62.20
MBPP pass@1 81.48 63.76 83.60
MBPP+ pass@1 68.25 53.97 69.58
Eval+ Avg 65.94 62.21 70.76
Multilingual Tasks
MMMLU 5-shot 56.59 64.73 73.36
INCLUDE 5-shot 51.77 57.60 67.07
MGSM 8-shot 58.48 63.68 74.40

Multilingual Benchmarks and the included languages:
Benchmarks # Langs Languages
MMMLU 11 ar, de, en, es, fr, ja, ko, pt, zh, bn, hi
INCLUDE 14 hi, bn, ta, te, ar, de, es, fr, it, ja, ko, nl, pt, zh
MGSM 5 en, es, fr, ja, zh
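
For readers who want to reproduce numbers of this kind, the snippet below shows an illustrative run of EleutherAI's lm-evaluation-harness for 5-shot MMLU. The card does not state which harness or settings produced the tables above, so the task name, few-shot count, dtype, and batch size here are assumptions to adapt as needed.

# illustrative only; install with: pip install lm-eval
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ibm-granite/granite-4.1-8b-base,dtype=bfloat16",
    tasks=["mmlu"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"]["mmlu"])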

Model Architecture:

Granite-4.1-8B-Base is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA, RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings.

Model 3B Dense 8B Dense 30B Dense
Embedding size 2560 4096 4096
Number of layers 40 40 64
Attention head size 64 128 128
Number of attention heads 40 32 32
Number of KV heads 8 8 8
MLP / Shared expert hidden size 8192 12800 32768
MLP activation SwiGLU SwiGLU SwiGLU
Sequence length 131072 131072 131072
Position embedding RoPE RoPE RoPE
# Parameters 3B 8B 30B
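
To make the table concrete, here is a toy PyTorch sketch of the 8B configuration's SwiGLU MLP block, with a comment on how GQA shares KV heads. It is an illustration derived only from the dimensions above, not the actual Granite implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

# 8B dense dimensions from the table above (illustrative sketch only)
D_MODEL, D_MLP = 4096, 12800          # embedding size, MLP hidden size
N_Q_HEADS, N_KV_HEADS, HEAD_DIM = 32, 8, 128
# GQA: 32 query heads share 8 KV heads (4 query heads per KV head),
# shrinking the KV cache 4x relative to full multi-head attention.

class SwiGLUMLP(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.gate = nn.Linear(d_model, d_hidden, bias=False)
        self.up = nn.Linear(d_model, d_hidden, bias=False)
        self.down = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: silu(x @ W_gate) elementwise-multiplied by x @ W_up, projected back down
        return self.down(F.silu(self.gate(x)) * self.up(x))

mlp = SwiGLUMLP(D_MODEL, D_MLP)
print(mlp(torch.randn(1, 4, D_MODEL)).shape)  # torch.Size([1, 4, 4096])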

Training Data: This model is trained on a mix of open-source and proprietary data following a five-phase training strategy. We refer to phases one and two as pre-training, and to phases three, four, and five as mid-training.

Stage Characteristics 3B Dense 8B Dense 30B Dense
I General mixture of training data, warmup, and power scheduler for learning rate. 10T 10T 10T
II General mixture of training data with higher percentages of code and math with power scheduler for learning rate. 2T 2T 2T
III High quality training data, exponential decay of learning rate. 2T 2T 2T
IV High quality training data, linear decay to zero for learning rate. 500B 500B 500B
V Long Context Extension with exponential learning rate schedule. 396B 396B 396B
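
The stage descriptions above name three learning-rate regimes: a power scheduler, exponential decay, and linear decay to zero. The toy sketch below shows plausible shapes for each regime over a single phase; the actual scheduler hyperparameters are not given in the card, so every constant here is invented for illustration.

import math

LR_MAX = 3e-4  # made-up peak learning rate, for illustration only

def power_schedule(t, warmup=0.01, alpha=0.5):
    # warmup followed by power-law decay, lr ~ t^-alpha (stages I-II)
    if t < warmup:
        return LR_MAX * t / warmup
    return LR_MAX * (warmup / t) ** alpha

def exponential_decay(t, k=3.0):
    # exponential decay of the learning rate (stages III and V)
    return LR_MAX * math.exp(-k * t)

def linear_to_zero(t):
    # linear decay to zero (stage IV)
    return LR_MAX * (1.0 - t)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):  # t = fraction of the phase completed
    print(f"t={t:.2f}  power={power_schedule(max(t, 1e-6)):.2e}  "
          f"exp={exponential_decay(t):.2e}  linear={linear_to_zero(t):.2e}")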

Infrastructure: We trained the Granite 4.1 language models on an NVIDIA GB200 NVL72 cluster hosted by CoreWeave. Intra-rack communication occurs via the 72-GPU NVLink domain, while a non-blocking, full Fat-Tree NDR 400 Gb/s InfiniBand network provides inter-rack communication. This cluster provides a scalable and efficient infrastructure for training our models across thousands of GPUs.

Ethical Considerations and Limitations: The use of Large Language Models involves risks and ethical considerations that people must be aware of, including but not limited to bias and fairness, misinformation, and autonomous decision-making. The Granite-4.1-8B-Base model is no exception in this regard. Although the model is suited to multiple generative AI tasks, it has not undergone any safety alignment, so it may produce problematic outputs. Additionally, it remains uncertain whether smaller models are more susceptible to hallucination by copying text verbatim from the training dataset, owing to their reduced size and memorization capacity. This is an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigation of such risks. Regarding ethics, a latent risk associated with all Large Language Models is their malicious use. We urge the community to use the Granite-4.1-8B-Base model ethically and responsibly. To enhance safety in enterprise deployments, we recommend using Granite 4.1 language models alongside Granite Guardian, a model designed to detect and flag risks in inputs and outputs across the key dimensions outlined in the IBM AI Risk Atlas.
