---
language:
  - en
license: apache-2.0
tags:
  - text-generation
  - nlp
datasets:
  - DeepMath-103K
metrics:
  - avg@1
  - pass@k
base_model:
  - deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
pipeline_tag: text-generation
library_name: transformers
arxiv: 2604.10688
---

# SCOPE-Deepseek-R1-Distill-Qwen-1.5B

This model was introduced in the paper [*SCOPE: Signal-Calibrated On-Policy Distillation Enhancement with Dual-Path Adaptive Weighting*](https://arxiv.org/abs/2604.10688) and was developed by the Longcat Interaction Team.

## Model Details

### Model Description

- **Developed by:** Longcat Interaction Team
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** DeepSeek-R1-Distill-Qwen-1.5B

### Model Sources

- **Paper:** https://arxiv.org/abs/2604.10688
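
For reference, the metadata above lists avg@1 and pass@k as evaluation metrics. The sketch below is a minimal implementation of the standard unbiased pass@k estimator (Chen et al., 2021); it reflects the conventional definition of the metric, not code from the paper.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): the probability that
    at least one of k completions drawn from n samples is correct, given
    that c of the n samples are correct."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# avg@1 is simply the mean per-problem accuracy with one sample per problem.
```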

## Uses

### Direct Use

This model can be used directly for text generation tasks, such as mathematical reasoning, without any additional fine-tuning.

## How to Get Started with the Model

Use the code below to get started with the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Machine981/SCOPE-Deepseek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt and generate a completion. Without max_new_tokens,
# generate() falls back to a very short default generation length.
inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
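
Since the base model is a reasoning-tuned chat model, prompting through the tokenizer's chat template is typically more effective for math problems. The snippet below is a minimal sketch assuming this checkpoint inherits the DeepSeek-R1-Distill chat template; the prompt and the sampling settings (max_new_tokens, temperature) are illustrative assumptions, not values from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Machine981/SCOPE-Deepseek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Build the prompt with the chat template (assumed to be inherited from
# the DeepSeek-R1-Distill base model).
messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Sampling settings here are illustrative, not reported values.
outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)

# Decode only the newly generated tokens (the model's reasoning and answer).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```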