---
language:
- en
license: apache-2.0
tags:
- text-generation
- nlp
datasets:
- DeepMath103K
metrics:
- avg@1
- pass@k
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
pipeline_tag: text-generation
library_name: transformers
arxiv: 2604.10688
---
# SCOPE-Deepseek-R1-Distill-Qwen-1.5B
This model was introduced in the paper [SCOPE: Signal-Calibrated On-Policy Distillation Enhancement with Dual-Path Adaptive Weighting](https://arxiv.org/abs/2604.10688) and was developed by the **Longcat Interaction Team**.
## Model Details
### Model Description
- **Developed by:** Longcat Interaction Team
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** DeepSeek-R1-Distill-Qwen-1.5B
- **Paper:** [arxiv.org/abs/2604.10688](https://arxiv.org/abs/2604.10688)
### Model Sources
- **Repository:** [https://github.com/machine981/SCOPE](https://github.com/machine981/SCOPE)
- **Paper:** [https://arxiv.org/abs/2604.10688](https://arxiv.org/abs/2604.10688)
## Uses
### Direct Use
This model can be used directly for text generation, such as mathematical reasoning tasks, without any additional fine-tuning.
## How to Get Started with the Model
Use the code below to get started with the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Machine981/SCOPE-Deepseek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt and generate; raise max_new_tokens for long reasoning chains
inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
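Because the base model is an R1-style distill, it is typically queried through the tokenizer's chat template rather than a raw string. Here is a minimal sketch reusing the `tokenizer` and `model` loaded above, assuming the SCOPE checkpoint keeps the base model's chat template (the example prompt is illustrative):

```python
# Minimal sketch, assuming the checkpoint retains the DeepSeek-R1-style chat template.
messages = [{"role": "user", "content": "What is 12 * 34? Show your reasoning."}]
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(prompt_ids, max_new_tokens=1024)
# Decode only the newly generated tokens (the model's reasoning and final answer)
print(tokenizer.decode(outputs[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
```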
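For context on the metrics listed in the metadata: avg@1 is mean single-sample accuracy, and pass@k is conventionally computed with the unbiased estimator of Chen et al. (2021). Below is a small sketch of that standard formula; the function name is illustrative, and the paper's exact evaluation protocol may differ:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: probability that at least one of k samples
    drawn from n total samples (of which c are correct) solves the problem."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# avg@1 is the k = 1 special case, averaged over problems: c / n per problem
print(pass_at_k(n=16, c=4, k=8))
```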