Eland Sentiment GGUF - Chinese Multi-Domain Sentiment Analysis
GGUF quantized versions of the Eland Sentiment model for Ollama and llama.cpp deployment.
Available Files
| File | Size | Quantization | Use Case |
|---|---|---|---|
| eland-sentiment-zh-q8_0.gguf | 4.0 GB | Q8_0 | Recommended: best balance of size and quality |
| eland-sentiment-zh-f16.gguf | 7.5 GB | F16 | Full precision; larger but highest quality |
Performance
Financial Domain (92.03%)
| Metric | Score |
|---|---|
| Overall Sentiment | 95.00% |
| Entity Sentiment | 93.10% |
| Opinion Sentiment | 85.00% |
| Agrees with Text | 95.00% |
Multi-Domain (86.85%)
| Metric | Score |
|---|---|
| Overall Sentiment | 71.00% |
| Entity Sentiment | 78.57% |
| Opinion Sentiment | 97.83% |
| Agrees with Text | 100.00% |
✅ Ollama: No System Prompt Required

When using Ollama, you do not need to supply a system prompt yourself; it is already built into the included Modelfile. Just pass your text directly:

```bash
ollama run eland-sentiment-zh "台積電股價大漲"
# Output: 正面 (positive)
```

⚠️ If you use vLLM or Transformers instead, refer to the system prompt instructions in the vLLM variant of this model.
Usage with Ollama
Quick Start
```bash
# Download the GGUF weights and Modelfile
wget https://huggingface.co/p988744/eland-sentiment-zh-gguf/resolve/main/eland-sentiment-zh-q8_0.gguf
wget https://huggingface.co/p988744/eland-sentiment-zh-gguf/resolve/main/Modelfile

# Create the Ollama model
ollama create eland-sentiment-zh -f Modelfile

# Run inference
ollama run eland-sentiment-zh "台積電今日股價大漲，市場看好AI需求持續成長。"
```
Custom Modelfile
The included Modelfile is configured for sentiment analysis:

```
FROM ./eland-sentiment-zh-q8_0.gguf

TEMPLATE """{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
PARAMETER temperature 0.1
PARAMETER top_p 0.9

SYSTEM """你是一個專業的金融文本情感分析助手。請分析以下文本的情感，回答「正面」、「負面」或「中立」。"""

LICENSE """Apache 2.0"""
```
API Usage
```bash
# Start the Ollama server (if it is not already running)
ollama serve

# Query via the REST API
curl http://localhost:11434/api/generate -d '{
  "model": "eland-sentiment-zh",
  "prompt": "台積電股價大漲",
  "stream": false
}'
```
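The same endpoint can be called from Python using only the standard library. A minimal sketch; the `build_payload` and `classify` helper names are my own, not part of this repo:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(text: str) -> dict:
    # stream=False makes Ollama return a single JSON object
    # with the full answer in its "response" field.
    return {"model": "eland-sentiment-zh", "prompt": text, "stream": False}

def classify(text: str, url: str = OLLAMA_URL) -> str:
    """POST one text to the local Ollama server and return the raw label."""
    data = json.dumps(build_payload(text)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())["response"].strip()

# classify("台積電股價大漲") requires a running Ollama server.
```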
Python with Ollama
```python
import ollama  # pip install ollama

response = ollama.generate(
    model='eland-sentiment-zh',
    prompt='台積電今日股價大漲，市場看好AI需求持續成長。'
)
print(response['response'])  # Expected: 正面
```
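Because the model answers with a short Chinese label, downstream code often wants an English value. A small normalization helper (hypothetical; `LABELS` and `normalize_label` are not part of this repo) might look like:

```python
# Hypothetical post-processing helper: maps the model's Chinese labels
# (正面 / 負面 / 中立) to English strings.
LABELS = {"正面": "positive", "負面": "negative", "中立": "neutral"}

def normalize_label(raw: str) -> str:
    # Trim whitespace and common trailing punctuation before the lookup;
    # fall back to "unknown" for anything off-script.
    cleaned = raw.strip().strip("。．.!！ 「」")
    return LABELS.get(cleaned, "unknown")
```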
Usage with llama.cpp
CLI
```bash
# Download the GGUF weights
wget https://huggingface.co/p988744/eland-sentiment-zh-gguf/resolve/main/eland-sentiment-zh-q8_0.gguf

# Run inference
./llama-cli -m eland-sentiment-zh-q8_0.gguf \
  -p "<|im_start|>system
你是一個專業的金融文本情感分析助手。請分析以下文本的情感，回答「正面」、「負面」或「中立」。<|im_end|>
<|im_start|>user
台積電今日股價大漲<|im_end|>
<|im_start|>assistant
" \
  -n 10 \
  --temp 0.1
```
llama-cpp-python
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="eland-sentiment-zh-q8_0.gguf",
    n_ctx=2048,
    n_threads=4
)

prompt = """<|im_start|>system
你是一個專業的金融文本情感分析助手。請分析以下文本的情感，回答「正面」、「負面」或「中立」。<|im_end|>
<|im_start|>user
台積電今日股價大漲，市場看好AI需求持續成長。<|im_end|>
<|im_start|>assistant
"""

output = llm(prompt, max_tokens=10, temperature=0.1)
print(output['choices'][0]['text'])  # Expected: 正面
```
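Instead of hand-writing the ChatML tags, llama-cpp-python can apply the chat template itself via `create_chat_completion`. A sketch assuming the same system prompt as above; `build_messages` is my own helper name:

```python
def build_messages(text: str) -> list:
    """Chat-style messages (ChatML roles) for the sentiment model."""
    system = ("你是一個專業的金融文本情感分析助手。"
              "請分析以下文本的情感，回答「正面」、「負面」或「中立」。")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": text},
    ]

# With `llm` loaded as in the previous block:
# out = llm.create_chat_completion(
#     messages=build_messages("台積電股價大漲"),
#     max_tokens=10,
#     temperature=0.1,
# )
# print(out["choices"][0]["message"]["content"])
```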
Task Prompts
Overall Sentiment:
- System: 你是一個專業的金融文本情感分析助手。請分析以下文本的整體情感，回答「正面」、「負面」或「中立」。
  (You are a professional financial-text sentiment analysis assistant. Analyze the overall sentiment of the following text and answer 「正面」 (positive), 「負面」 (negative), or 「中立」 (neutral).)
- User: [your text]

Entity Sentiment:
- System: 你是一個專業的金融文本情感分析助手。請分析以下文本中對「{entity}」的情感，回答「正面」、「負面」或「中立」。
  (Analyze the sentiment toward 「{entity}」 in the following text.)
- User: [your text]

Opinion Sentiment:
- System: 你是一個專業的金融文本情感分析助手。請判斷以下觀點的情感傾向，回答「正面」、「負面」或「中立」。
  (Judge the sentiment polarity of the following opinion.)
- User: 文本：[text]
  觀點：[opinion]
  (Text: [text] / Opinion: [opinion])
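The three task prompts differ only in one clause, so they can be generated programmatically. A sketch following the templates listed above; the function names are my own:

```python
_PREFIX = "你是一個專業的金融文本情感分析助手。"
_SUFFIX = "，回答「正面」、「負面」或「中立」。"

def overall_prompt() -> str:
    # Overall sentiment of the whole text.
    return _PREFIX + "請分析以下文本的整體情感" + _SUFFIX

def entity_prompt(entity: str) -> str:
    # Sentiment toward one named entity mentioned in the text.
    return _PREFIX + f"請分析以下文本中對「{entity}」的情感" + _SUFFIX

def opinion_prompt() -> str:
    # Polarity of a stated opinion.
    return _PREFIX + "請判斷以下觀點的情感傾向" + _SUFFIX

def opinion_user(text: str, opinion: str) -> str:
    # The opinion task packs both the text and the opinion into the user turn.
    return f"文本：{text}\n觀點：{opinion}"
```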
Model Variants
| Version | Repository | Use Case |
|---|---|---|
| LoRA Adapter | p988744/eland-sentiment-zh | HuggingFace + PEFT |
| GGUF | p988744/eland-sentiment-zh-gguf | Ollama / llama.cpp (this repo) |
| Full Merged | p988744/eland-sentiment-zh-vllm | vLLM |
Model Details
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3-4B |
| Parameters | 4.05B |
| Tensors | 398 |
| Context Length | 2048 |
Dataset
Trained on p988744/eland-sentiment-zh-data:
- Financial: 1,887 training samples (Taiwan stock market)
- Multi-domain: 600 training samples (product, brand, organization, social)
- Total: 2,487 training samples
License
Apache 2.0