Instructions for using Tiiny/SmallThinker-3B-Preview with libraries and local apps.
## Libraries

### Transformers

How to use Tiiny/SmallThinker-3B-Preview with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Tiiny/SmallThinker-3B-Preview")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Tiiny/SmallThinker-3B-Preview")
model = AutoModelForCausalLM.from_pretrained("Tiiny/SmallThinker-3B-Preview")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
## Local Apps

### vLLM
How to use Tiiny/SmallThinker-3B-Preview with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Tiiny/SmallThinker-3B-Preview"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Tiiny/SmallThinker-3B-Preview",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
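Since the endpoint is OpenAI-compatible, it can also be called from Python with the `openai` client instead of curl. A minimal sketch, assuming `pip install openai` and the server above running on port 8000:

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (no real API key needed).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Tiiny/SmallThinker-3B-Preview",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```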
Use Docker:

```shell
docker model run hf.co/Tiiny/SmallThinker-3B-Preview
```
### SGLang
How to use Tiiny/SmallThinker-3B-Preview with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Tiiny/SmallThinker-3B-Preview" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Tiiny/SmallThinker-3B-Preview",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
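The same OpenAI-compatible endpoint can be exercised from Python with `requests`. A minimal sketch, assuming the SGLang server above is listening on port 30000:

```python
import requests

# POST a chat completion request to the local SGLang server.
payload = {
    "model": "Tiiny/SmallThinker-3B-Preview",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```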
Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Tiiny/SmallThinker-3B-Preview" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Tiiny/SmallThinker-3B-Preview",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

### Docker Model Runner
How to use Tiiny/SmallThinker-3B-Preview with Docker Model Runner:
```shell
docker model run hf.co/Tiiny/SmallThinker-3B-Preview
```
# SmallThinker-3B-preview
We introduce SmallThinker-3B-preview, a new model fine-tuned from Qwen2.5-3B-Instruct.
You can now deploy SmallThinker directly on your phone with PowerServe.
## Benchmark Performance
| Model | AIME24 | AMC23 | GAOKAO2024_I | GAOKAO2024_II | MMLU_STEM | AMPS_Hard | math_comp |
|---|---|---|---|---|---|---|---|
| Qwen2.5-3B-Instruct | 6.67 | 45 | 50 | 35.8 | 59.8 | - | - |
| SmallThinker | 16.667 | 57.5 | 64.2 | 57.1 | 68.2 | 70 | 46.8 |
| GPT-4o | 9.3 | - | - | - | 64.2 | 57 | 50 |
Limitation: Because of SmallThinker's current weakness in instruction following, for math_comp we adopt a more lenient evaluation: only the correct answer is required, and responses are not constrained to follow the specified AAAAA format.
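To make the distinction concrete, here is a hypothetical sketch of format-constrained versus lenient scoring. These helper functions and regexes are illustrative only, not the actual grading harness used for the numbers above:

```python
import re

def strict_match(response: str, answer: str) -> bool:
    """Format-constrained scoring: require the answer letter repeated
    five times on its own line (the 'AAAAA' format)."""
    m = re.search(r"^([A-E])\1{4}$", response.strip(), re.MULTILINE)
    return m is not None and m.group(1) == answer

def lenient_match(response: str, answer: str) -> bool:
    """Lenient scoring: accept the response if the last standalone
    choice letter it mentions is the correct one."""
    letters = re.findall(r"\b([A-E])\b", response)
    return bool(letters) and letters[-1] == answer

print(strict_match("The answer is B.\nBBBBB", "B"))  # True
print(strict_match("The answer is B.", "B"))         # False: no BBBBB line
print(lenient_match("The answer is B.", "B"))        # True
```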
Colab Link: Colab
## Intended Use Cases
SmallThinker is designed for the following use cases:
- Edge Deployment: Its small size makes it ideal for deployment on resource-constrained devices.
- Draft Model for QwQ-32B-Preview: SmallThinker can serve as a fast and efficient draft model for the larger QwQ-32B-Preview model. In my llama.cpp tests this gives roughly a 75% speedup (from 40 tokens/s to 70 tokens/s); a Transformers sketch of the same idea follows below.
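The speedup above was measured in llama.cpp; for reference, the same draft-model idea can be sketched with Transformers' assisted generation. This is only a sketch: loading QwQ-32B-Preview requires tens of GB of GPU memory, and speedups will differ from the llama.cpp figures.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Large target model and small draft model from the same tokenizer family.
tokenizer = AutoTokenizer.from_pretrained("Qwen/QwQ-32B-Preview")
target = AutoModelForCausalLM.from_pretrained("Qwen/QwQ-32B-Preview", device_map="auto")
draft = AutoModelForCausalLM.from_pretrained("Tiiny/SmallThinker-3B-Preview", device_map="auto")

inputs = tokenizer("Explain speculative decoding in one sentence.", return_tensors="pt").to(target.device)

# assistant_model enables assisted (speculative) generation: the draft model
# proposes tokens and the target model verifies them in parallel.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```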
## Training Details
The model was trained using 8 H100 GPUs with a global batch size of 16. The specific configuration is as follows:
The SFT (Supervised Fine-Tuning) process was conducted in two phases:
- First Phase:
  - Used only the PowerInfer/QWQ-LONGCOT-500K dataset (the o1-v2 entry in the config below)
  - Trained for 1.5 epochs
```yaml
### model
model_name_or_path: /home/syx/Qwen2.5-3B-Instruct

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json

### dataset
dataset: o1-v2
template: qwen
neat_packing: true
cutoff_len: 16384
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/qwen2-01-qat/full/sft
logging_steps: 1
save_steps: 1000
plot_loss: true
overwrite_output_dir: true
```
- Second Phase:
  - Combined training with the PowerInfer/QWQ-LONGCOT-500K and PowerInfer/LONGCOT-Refine datasets (o1-v2 and o1-v3 in the config below)
  - Continued training for 2 additional epochs
```yaml
### model
model_name_or_path: saves/qwen2-01-qat/full/sft/checkpoint-24000

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json

### dataset
dataset: o1-v2, o1-v3
template: qwen
neat_packing: true
cutoff_len: 16384
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/qwen2-01-qat/full/sft
logging_steps: 1
save_steps: 1000
plot_loss: true
overwrite_output_dir: true
```
## Limitations & Disclaimer
Please be aware of the following limitations:
- Language Limitation: The model has only been trained on English-language datasets, so its capabilities in other languages are still limited.
- Limited Knowledge: Due to limited SFT data and the model's relatively small scale, its reasoning capabilities are constrained by its knowledge base.
- Unpredictable Outputs: The model may produce unexpected outputs due to its size and probabilistic generation paradigm. Users should exercise caution and validate the model's responses.
- Repetition Issue: The model tends to repeat itself when answering high-difficulty questions. Please increase the `repetition_penalty` to mitigate this issue, as sketched after this list.
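For example, with the pipeline snippet from the Transformers section, the penalty can be passed straight through to generation. The value 1.2 below is an illustrative starting point, not a tuned recommendation:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Tiiny/SmallThinker-3B-Preview")
messages = [{"role": "user", "content": "Prove that there are infinitely many primes."}]

# repetition_penalty > 1.0 down-weights already-generated tokens,
# which damps the looping behavior described above.
out = pipe(messages, max_new_tokens=512, repetition_penalty=1.2)
print(out[0]["generated_text"])
```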