MathGPT (GPT-OSS-120B-Olympiad)

MathGPT is a high-performance reasoning model fine-tuned from GPT-OSS 120B. It is engineered specifically for proving complex mathematical theorems, solving competition-level problems (AIME/IMO), and advanced scientific reasoning.

  • Developed by: Azmainadeeb
  • Model Type: Causal Language Model (Fine-tuned for Mathematical Reasoning)
  • Base Model: unsloth/gpt-oss-120b-unsloth-bnb-4bit
  • Training Framework: Unsloth + TRL

🧩 Model Architecture

MathGPT leverages the Mixture-of-Experts (MoE) architecture of the GPT-OSS family, utilizing 117B total parameters with 5.1B active parameters per token. This allows the model to maintain state-of-the-art reasoning depth while remaining computationally efficient during inference.
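
To make the "total vs. active parameters" distinction concrete, here is a toy top-k MoE layer in PyTorch: the router sends each token to only top_k of n_experts feed-forward experts, so most expert weights sit idle for any given token. All sizes below (d_model, expert count, top_k) are illustrative placeholders, not the real GPT-OSS configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTopKMoE(nn.Module):
    """Toy Mixture-of-Experts layer: each token activates top_k of n_experts."""
    def __init__(self, d_model = 64, d_ff = 256, n_experts = 8, top_k = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):                            # x: (tokens, d_model)
        logits = self.router(x)                      # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim = -1)
        weights = F.softmax(weights, dim = -1)       # normalize over chosen experts only
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle,
        # which is why active parameters per token << total parameters.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

layer = ToyTopKMoE()
y = layer(torch.randn(10, 64))  # each of the 10 tokens touches only 2 of 8 experts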

📚 Training Data

The model was trained on a curated mixture of reasoning-dense datasets to reinforce chain-of-thought consistency:

Primary Thinking Dataset

Olympiad & Competition Sets

  • OlympiadBench & MathOlympiadBench: High-difficulty benchmark problems.
  • IMO Math Boxed: Problems curated from the International Mathematical Olympiad.
  • AoPS (Art of Problem Solving): Diverse competition-style math problems.
  • AIMO External Data: Specific sets designed for the AI Mathematical Olympiad.
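
For context on how reasoning-dense data like this is typically consumed, below is a minimal fine-tuning sketch with Unsloth and TRL's SFTTrainer. The dataset name (your-org/olympiad-cot), its problem/solution columns, and all hyperparameters are illustrative assumptions, not the exact recipe used to train MathGPT.

from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gpt-oss-120b-unsloth-bnb-4bit",
    max_seq_length = 4096,
    load_in_4bit = True,
)
# Attach LoRA adapters so only a small set of weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical dataset with "problem" and "solution" (chain-of-thought) columns
dataset = load_dataset("your-org/olympiad-cot", split = "train")

def to_text(example):
    messages = [
        {"role": "user", "content": example["problem"]},
        {"role": "assistant", "content": example["solution"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize = False)}

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset.map(to_text),
    args = SFTConfig(
        output_dir = "outputs",
        dataset_text_field = "text",
        per_device_train_batch_size = 1,
        gradient_accumulation_steps = 8,
        learning_rate = 2e-4,
        max_steps = 100,
    ),
)
trainer.train()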

🚀 Quickstart Usage

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Azmainadeeb/MathGPT",
    max_seq_length = 4096,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's optimized inference mode

messages = [
    {"role": "user", "content": "Find all real numbers x such that 8^x + 2^x = 130."}
]

# Apply the template with reasoning_effort to trigger the "Thinking" mode
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,
    reasoning_effort = "medium", # Options: low, medium, high
    return_tensors = "pt"
).to("cuda")

outputs = model.generate(inputs, max_new_tokens = 1024)
# Decode only the newly generated tokens instead of echoing the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:]))
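
For interactive use you may prefer streaming output so the chain of thought appears as it is generated. A minimal sketch using transformers' TextStreamer, reusing the model, tokenizer, and inputs objects from the quickstart above:

from transformers import TextStreamer

# Print tokens to stdout as they are produced, skipping the prompt echo
streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(inputs, max_new_tokens = 1024, streamer = streamer)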