Instructions to use tngtech/DeepSeek-R1T-Chimera with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use tngtech/DeepSeek-R1T-Chimera with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="tngtech/DeepSeek-R1T-Chimera", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tngtech/DeepSeek-R1T-Chimera", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("tngtech/DeepSeek-R1T-Chimera", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use tngtech/DeepSeek-R1T-Chimera with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "tngtech/DeepSeek-R1T-Chimera"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "tngtech/DeepSeek-R1T-Chimera",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
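The curl call above can also be made from Python using only the standard library. This is a minimal sketch, assuming the vLLM server from the previous step is running on localhost:8000; `build_chat_request` and `chat` are illustrative helper names, not part of vLLM's API:

```python
# Minimal Python client for the OpenAI-compatible vLLM server above.
# Assumes the server is already running on localhost:8000.
import json
import urllib.request

MODEL = "tngtech/DeepSeek-R1T-Chimera"
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(prompt: str) -> dict:
    """Build the JSON body expected by the /v1/chat/completions endpoint."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str) -> str:
    """POST the request and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The OpenAI-compatible response carries the reply under choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

Because the endpoint is OpenAI-compatible, the official `openai` Python client can also be pointed at `http://localhost:8000/v1` instead of hand-rolling requests.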
```shell
docker model run hf.co/tngtech/DeepSeek-R1T-Chimera
```
- SGLang
How to use tngtech/DeepSeek-R1T-Chimera with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "tngtech/DeepSeek-R1T-Chimera" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "tngtech/DeepSeek-R1T-Chimera",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "tngtech/DeepSeek-R1T-Chimera" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "tngtech/DeepSeek-R1T-Chimera",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use tngtech/DeepSeek-R1T-Chimera with Docker Model Runner:
```shell
docker model run hf.co/tngtech/DeepSeek-R1T-Chimera
```
Can't find this model anywhere.
I'm not sure where else to put this, but I absolutely fell in love with this model and now there's no one that hosts it. It was pulled off OR and then Chutes completely. I know a lot of people that feel the same and miss it dearly. Are there any plans in trying to get this model hosted elsewhere in the future?
The successor https://huggingface.co/tngtech/DeepSeek-TNG-R1T2-Chimera
is available on Chutes via OpenRouter. They had to pull the free version due to abuse or "artificially high demand", but the paid version is cheap.
https://openrouter.ai/tngtech/deepseek-r1t2-chimera
Hi, thanks for the response! I've tried R1T2 extensively and I still prefer R1T.
Hello Lion,
thanks for your kind words. Can you send an email to info@tngtech.com (with a subject "for Henrik") please and I can respond to you?
Cheers,
HK
Hi HK!
Of course! Email has been sent. 😄
Same, I can't find it anywhere :( The most obedient model of all that I've used. Extremely upset(
Did you try our experimental 1.3T model "The Artist"?
(PS: You can also send an email to info@tngtech.com and we can give you a test access)
I'm having the exact same issue. I've been trying services other than OR and Chutes to get access to R1T; I'm on VoidAI right now, but no cigar. It was my favorite hands down, and I don't like any other model as much as I liked R1T.
Hi! I sent an email to this address and haven't heard back yet.
I'm in the same boat! I prefer R1T and T2 even over the new V4 Pro, and probably in that order. They had probably my favorite personality of all the models I've used. Losing inference providers has been a real bummer. Chutes seems to be all that's left, and I worry that they'll kill all the older Deepseek models eventually, because their official Reddit account said those models are 'money losing garbage'. Here's a link to the comment: https://www.reddit.com/r/chutesAI/comments/1t0k6hv/comment/ojdz18o/
I'm not posting that to slam them. They might not be referring to Chimera, and I'll give them the benefit of the doubt and say they aren't, but still. If they're having trouble with money, then things get stressful. And when models get low or no utilization but they're still having to spend money on renting GPUs, it only makes that stress all the worse. I get their frustration. And I think they should get some credit for hosting TEE models and being the last man standing for Chimera.

I'm just worried we'll lose them someday and not have anyone to take their place. Chimera still gets good utilization on their page, so it won't be soon, but it's an anxious thought that sits in the back of my mind whenever I use Chimera. I won't pester you guys to find another inference provider for Chimera like NanoGPT, Venice, etc. That would just be annoying. But I would like to ask, would The Artist be a model that someone like me would enjoy? Knowing we've got more cool models on the way would be a nice light at the end of the tunnel.