## Use this model

Instructions for using qingy2024/Ling-Mini-2.0-Identity with libraries, local apps, and inference servers.

### Transformers

How to use qingy2024/Ling-Mini-2.0-Identity with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="qingy2024/Ling-Mini-2.0-Identity", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly (AutoModelForCausalLM, since this is a text-generation model)
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "qingy2024/Ling-Mini-2.0-Identity", trust_remote_code=True, dtype="auto"
)
```
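If you want explicit control over tokenization and decoding, the snippet below is a minimal sketch of chat-style generation with `AutoTokenizer` and `generate`. It assumes the repository ships a chat template; the prompt and `max_new_tokens=128` are illustrative choices, not settings from this model card:

```python
# Minimal sketch: explicit chat-template generation (illustrative settings)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qingy2024/Ling-Mini-2.0-Identity"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Who are you?"}]
# Render the conversation with the model's chat template and append the generation prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```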
### vLLM

How to use qingy2024/Ling-Mini-2.0-Identity with vLLM:

Install from pip and serve the model:

```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server (--trust-remote-code because the repo ships custom model code):
vllm serve "qingy2024/Ling-Mini-2.0-Identity" --trust-remote-code

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "qingy2024/Ling-Mini-2.0-Identity",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
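Besides the HTTP server, vLLM also has an offline Python API. The following is a minimal sketch using `vllm.LLM.chat`; the sampling settings are illustrative assumptions:

```python
# Minimal sketch: offline inference with vLLM's Python API (illustrative settings)
from vllm import LLM, SamplingParams

llm = LLM(model="qingy2024/Ling-Mini-2.0-Identity", trust_remote_code=True)
params = SamplingParams(temperature=0.7, max_tokens=128)

# chat() applies the model's chat template before generation
outputs = llm.chat(
    [{"role": "user", "content": "What is the capital of France?"}],
    params,
)
print(outputs[0].outputs[0].text)
```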
### SGLang

How to use qingy2024/Ling-Mini-2.0-Identity with SGLang:

Install from pip and serve the model:

```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server (--trust-remote-code because the repo ships custom model code):
python3 -m sglang.launch_server \
  --model-path "qingy2024/Ling-Mini-2.0-Identity" \
  --trust-remote-code \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "qingy2024/Ling-Mini-2.0-Identity",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Or use the official Docker image:

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "qingy2024/Ling-Mini-2.0-Identity" \
    --trust-remote-code \
    --host 0.0.0.0 \
    --port 30000
```
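Since both servers expose an OpenAI-compatible API, you can also query them from Python with the `openai` client instead of curl. A minimal sketch against the SGLang port (point `base_url` at port 8000 for the vLLM server instead):

```python
# Minimal sketch: query an OpenAI-compatible endpoint with the openai client
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="qingy2024/Ling-Mini-2.0-Identity",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```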
### Docker Model Runner

How to use qingy2024/Ling-Mini-2.0-Identity with Docker Model Runner:

```bash
docker model run hf.co/qingy2024/Ling-Mini-2.0-Identity
```
# Ling Mini 2.0 Identity
This model is a fine-tuned version of inclusionAI/Ling-mini-2.0 on the identity dataset (from LLaMA-Factory).
## Training procedure
Full fine-tuning with DeepSpeed ZeRO-3 offloading on 4 × A100 80 GB GPUs. For a faster setup, you can use the `qingy1337/llamafactory-cu128:latest` Docker image.
## Training hyperparameters
The following hyperparameters (LLaMA-Factory configuration) were used during training:

```yaml
### model
model_name_or_path: inclusionAI/Ling-mini-2.0
trust_remote_code: true

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json

### dataset
dataset: identity
template: bailing_v2
cutoff_len: 8192
max_samples: 10000000000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: ./outputs/
logging_steps: 1
save_steps: 10000000000
save_only_model: true
plot_loss: true
overwrite_output_dir: true
report_to: wandb
run_name: Test-FT

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 1
learning_rate: 1.0e-6
num_train_epochs: 10.0
lr_scheduler_type: cosine
warmup_ratio: 0.2
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
```
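For reproduction, a config like this is typically launched with LLaMA-Factory's CLI, e.g. `llamafactory-cli train <config>.yaml` (with `FORCE_TORCHRUN=1` set for multi-GPU DeepSpeed runs). Note that the effective global batch size here works out to per_device_train_batch_size × gradient_accumulation_steps × num_gpus = 2 × 1 × 4 = 8 sequences per optimizer step.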
## Framework versions
- Transformers 4.56.1
- PyTorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.1