
NTX-350m is a compact assistant model built to bring a familiar GRaPE-style feel into an edge-sized model.

In simple terms: this is an attempt to capture the characteristic GRaPE-style tone and interaction pattern in a lightweight 350M-class model that is easier to run in constrained environments.

The model was fine-tuned on SL-AI/GRaPE-Base-Mix, a dataset of approximately 3.03M rows. Training emphasized assistant behavior, conversational tone, instruction following, and a consistent identity in a small-footprint setting.

NTX-350m is intended for:

  • edge deployment
  • low-memory experimentation
  • lightweight assistant tasks
  • fast local inference
  • compact multilingual interaction
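For the edge and low-memory cases above, a minimal loading sketch using the standard transformers Auto* API is shown below. The dtype policy (BF16 on GPU, FP32 on CPU) is an assumption based on the BF16 tensor type listed on this card, not an official recommendation.

```python
# Low-memory loading sketch; the dtype fallback logic is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "DedeProGames/NTX-350m-Preview"

def pick_dtype(cuda_available: bool) -> torch.dtype:
    # BF16 on GPU (the tensor type listed on this card); FP32 fallback on CPU,
    # where BF16 support varies by hardware.
    return torch.bfloat16 if cuda_available else torch.float32

def load_low_memory(model_id: str = MODEL_ID):
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=pick_dtype(torch.cuda.is_available()),
        low_cpu_mem_usage=True,  # avoid materializing a full FP32 copy while loading
    )
    return tokenizer, model
```

At roughly 0.4B parameters, BF16 weights need on the order of 0.8 GB of memory, which is what makes local CPU inference practical.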

Quick start

from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# device="cuda" requires a GPU; use device="cpu" (or omit the argument) otherwise.
generator = pipeline("text-generation", model="DedeProGames/NTX-350m-Preview", device="cuda")
# Chat-style input is a list of {"role", "content"} messages;
# return_full_text=False keeps only the newly generated reply.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])

Training procedure

This model was trained with supervised fine-tuning (SFT).
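A rough sketch of how such an SFT run might look with TRL's SFTTrainer (TRL 0.24.0 per the versions below). The dataset and model identifiers come from this card; every hyperparameter in the sketch is an illustrative assumption, not the actual training recipe, which the card does not disclose.

```python
# Hedged SFT sketch using TRL's SFTTrainer; hyperparameters are assumptions.

def sft_hyperparams() -> dict:
    # Illustrative values only; the card does not state the real ones.
    return {
        "per_device_train_batch_size": 8,
        "gradient_accumulation_steps": 4,
        "learning_rate": 2e-5,
        "num_train_epochs": 1,
        "bf16": True,  # matches the BF16 tensor type listed on this card
    }

def run_sft():
    # Heavy imports kept local so the sketch reads without the libraries installed.
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    dataset = load_dataset("SL-AI/GRaPE-Base-Mix", split="train")  # ~3.03M rows
    config = SFTConfig(output_dir="ntx-350m-sft", **sft_hyperparams())
    trainer = SFTTrainer(
        model="DedeProGames/NTX-350m-Preview",  # in practice, the pre-SFT base checkpoint
        train_dataset=dataset,
        args=config,
    )
    trainer.train()
```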

Framework versions

  • TRL: 0.24.0
  • Transformers: 5.5.3
  • PyTorch: 2.11.0+cu128
  • Datasets: 4.3.0
  • Tokenizers: 0.22.2
Model details

  • Format: Safetensors
  • Model size: 0.4B params
  • Tensor type: BF16
