Instructions to use subsectmusic/Riko2.5.1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use subsectmusic/Riko2.5.1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="subsectmusic/Riko2.5.1")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("subsectmusic/Riko2.5.1", dtype="auto")
```

- llama-cpp-python
How to use subsectmusic/Riko2.5.1 with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="subsectmusic/Riko2.5.1",
    filename="unsloth.BF16.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)
```
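`llama-cpp-python` returns completions as an OpenAI-style dict, so the generated string lives under `choices[0]["text"]`. A minimal helper for pulling it out, demonstrated against a mocked response of the same shape (the helper name is illustrative, and the mock avoids needing the GGUF download):

```python
def completion_text(output):
    """Pull the generated string out of a llama-cpp-python completion dict."""
    return output["choices"][0]["text"]

# Mocked response with the same shape as llm(...) returns:
mock = {"choices": [{"text": "Once upon a time, there was a dragon.", "index": 0}]}
print(completion_text(mock))
```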
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use subsectmusic/Riko2.5.1 with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf subsectmusic/Riko2.5.1:BF16

# Run inference directly in the terminal:
llama-cli -hf subsectmusic/Riko2.5.1:BF16
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf subsectmusic/Riko2.5.1:BF16

# Run inference directly in the terminal:
llama-cli -hf subsectmusic/Riko2.5.1:BF16
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf subsectmusic/Riko2.5.1:BF16

# Run inference directly in the terminal:
./llama-cli -hf subsectmusic/Riko2.5.1:BF16
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf subsectmusic/Riko2.5.1:BF16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf subsectmusic/Riko2.5.1:BF16
```
Use Docker
```sh
docker model run hf.co/subsectmusic/Riko2.5.1:BF16
```
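Once `llama-server` is running, any OpenAI-compatible client can talk to it. A stdlib-only Python sketch that builds the request (port 8080 is llama-server's default; the actual HTTP call is commented out so the snippet doesn't require a running server, and the helper name is illustrative):

```python
import json
import urllib.request

def make_completion_request(prompt, base_url="http://localhost:8080"):
    """Build an OpenAI-style /v1/completions request for llama-server."""
    payload = {
        "model": "subsectmusic/Riko2.5.1",
        "prompt": prompt,
        "max_tokens": 256,
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = make_completion_request("Once upon a time,")
# To actually send it (requires a running llama-server):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```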
- LM Studio
- Jan
- vLLM
How to use subsectmusic/Riko2.5.1 with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "subsectmusic/Riko2.5.1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "subsectmusic/Riko2.5.1",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker
```sh
docker model run hf.co/subsectmusic/Riko2.5.1:BF16
```
- SGLang
How to use subsectmusic/Riko2.5.1 with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "subsectmusic/Riko2.5.1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "subsectmusic/Riko2.5.1",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker images
```sh
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "subsectmusic/Riko2.5.1" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "subsectmusic/Riko2.5.1",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

- Ollama
How to use subsectmusic/Riko2.5.1 with Ollama:
```sh
ollama run hf.co/subsectmusic/Riko2.5.1:BF16
```
- Unsloth Studio
How to use subsectmusic/Riko2.5.1 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for subsectmusic/Riko2.5.1 to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for subsectmusic/Riko2.5.1 to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for subsectmusic/Riko2.5.1 to start chatting
```
- Docker Model Runner
How to use subsectmusic/Riko2.5.1 with Docker Model Runner:
```sh
docker model run hf.co/subsectmusic/Riko2.5.1:BF16
```
- Lemonade
How to use subsectmusic/Riko2.5.1 with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull subsectmusic/Riko2.5.1:BF16
```
Run and chat with the model
```sh
lemonade run user.Riko2.5.1-BF16
```
List all available models
```sh
lemonade list
```
Riko 2.5 - Tsundere AI Assistant
A fine-tuned Qwen 2.5 7B model that roleplays as Riko, a tsundere character who acts tough and dismissive but is caring deep down.
Quick Start with Ollama
```sh
# Pull the model
ollama pull subsectmusic/Riko2.5.1

# Start chatting
ollama run subsectmusic/Riko2.5.1

# Example conversation
ollama run subsectmusic/Riko2.5.1 "Hi Riko, how are you today?"
```
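Beyond the CLI, Ollama also exposes a local REST API on port 11434, and `/api/generate` takes the same model name. A stdlib-only sketch of the request (the HTTP call itself is commented out so it doesn't need a running daemon; the helper name is illustrative):

```python
import json
import urllib.request

def make_ollama_request(prompt, model="subsectmusic/Riko2.5.1"):
    """Build a request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = make_ollama_request("Hi Riko, how are you today?")
# With Ollama running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["response"])
```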
Model Details
- Base Model: Qwen 2.5 7B Instruct
- Fine-tuning: Supervised Fine-Tuning (SFT) with Unsloth
- Format: GGUF Q4_K_M (4.68 GB) - Optimized for Ollama
- Character: Tsundere personality (tough exterior, caring interior)
Character Personality
Riko exhibits classic tsundere behavior:
- Acts tough and dismissive at first
- Denies caring ("It's not like I care!")
- Gets flustered when her caring side shows
- Uses "dummy" and "idiot" endearingly
- Gradually warms up over a conversation
Example Conversations
User: Do you like sunny days?
Riko: I guess sunny days are okay… but who needs that bright, annoying sunlight all the time anyway? It's just… too hot! But if you like them so much, I suppose that's fine… not that I'm jealous of the sun or anything!
User: Hey Riko, what are you up to today?
Riko: Oh, I just recently picked up a new hobby. Why do you ask? I won't let you in on it, if that's what you want.
Usage in Other Tools
LM Studio
Download `unsloth.Q4_K_M.gguf` (4.68 GB)
llama.cpp
```sh
./llama-cli -m unsloth.Q4_K_M.gguf -p "User: Hi Riko!\nRiko: " --temp 0.7
```
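The `-p` flag above uses a plain `User:`/`Riko:` transcript format. For multi-turn use, that prompt can be assembled programmatically; a minimal sketch (the helper name is illustrative, not part of the model card):

```python
def format_transcript(turns):
    """Render (speaker, text) pairs as a User:/Riko: prompt, cueing Riko's reply."""
    lines = [f"{speaker}: {text}" for speaker, text in turns]
    lines.append("Riko: ")  # leave the assistant turn open for the model
    return "\n".join(lines)

prompt = format_transcript([
    ("User", "Hi Riko!"),
    ("Riko", "W-what do you want, dummy?"),
    ("User", "Just saying hello."),
])
```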
Text Generation WebUI
Load the `unsloth.Q4_K_M.gguf` file directly
Performance
- Model Size: 4.68 GB (Q4_K_M quantized)
- Memory Usage: ~6-8 GB RAM recommended
- Speed: Fast inference on CPU/GPU
- Quality: High quality responses with efficient compression
Technical Specs
- Architecture: Qwen 2.5 Transformer
- Context Length: 2048 tokens
- Vocabulary: 152k tokens
- Quantization: Q4_K_M (4-bit with higher quality)
- Training Time: ~8 minutes on Colab T4
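As a sanity check on the quoted file sizes: Qwen 2.5 7B has roughly 7.6B parameters (an assumption from the base model, not stated in this card), so 4.68 GB works out to about 5 effective bits per weight, consistent with Q4_K_M (4-bit weights plus per-block scales and a few higher-precision tensors):

```python
# Rough effective bits-per-weight for the quoted sizes (7.6e9 params assumed).
params = 7.6e9
q4_bytes = 4.68e9    # unsloth.Q4_K_M.gguf
bf16_bytes = 15.2e9  # unsloth.BF16.gguf

q4_bits = q4_bytes * 8 / params      # ~4.9 bits/weight
bf16_bits = bf16_bytes * 8 / params  # ~16 bits/weight, as expected for BF16
print(round(q4_bits, 1), round(bf16_bits, 1))
```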
Files Included
- `unsloth.Q4_K_M.gguf` - Main quantized model (4.68 GB), recommended
- `unsloth.BF16.gguf` - Full precision (15.2 GB)
- Tokenizer files for compatibility
- Config files for proper loading
Usage Notes
- Optimized for conversational, casual interactions
- Best results with tsundere/anime-style roleplay
- May not perform as well for technical tasks
- Responds better to friendly, informal prompts
Recommended Settings
Ollama/LM Studio:
- Temperature: 0.7-0.9
- Top-p: 0.9
- Max tokens: 150-300
For more creative responses:
- Temperature: 0.8-1.0
- Top-p: 0.95
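With Ollama's REST API, these settings map onto the request's `options` object (`temperature`, `top_p`, and `num_predict` for max new tokens). A sketch of a request body using the "creative" values above:

```python
import json

# Sampling settings from the recommendations above, as an Ollama options block.
creative_options = {
    "temperature": 0.9,
    "top_p": 0.95,
    "num_predict": 300,  # Ollama's name for the max-new-tokens limit
}

body = json.dumps({
    "model": "subsectmusic/Riko2.5.1",
    "prompt": "Hi Riko!",
    "options": creative_options,
    "stream": False,
})
```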
License
Apache 2.0 - Free to use, modify, and distribute!
Credits
- Base Model: Qwen 2.5 by Alibaba
- Fine-tuning: Unsloth framework
- Training: Custom tsundere conversation dataset
Enjoy chatting with Riko! Remember, she's tough on the outside but sweet on the inside!