Instructions to use TencentARC/LLaMA-Pro-8B-Instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use TencentARC/LLaMA-Pro-8B-Instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="TencentARC/LLaMA-Pro-8B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TencentARC/LLaMA-Pro-8B-Instruct")
model = AutoModelForCausalLM.from_pretrained("TencentARC/LLaMA-Pro-8B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use TencentARC/LLaMA-Pro-8B-Instruct with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "TencentARC/LLaMA-Pro-8B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TencentARC/LLaMA-Pro-8B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/TencentARC/LLaMA-Pro-8B-Instruct
```
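Because the vLLM server exposes an OpenAI-compatible API, you can also call it from Python instead of curl. A minimal sketch using only the standard library (the `build_payload` and `chat` helpers are illustrative names, not part of vLLM; it assumes the server from above is running on `localhost:8000`):

```python
import json
import urllib.request

MODEL = "TencentARC/LLaMA-Pro-8B-Instruct"

def build_payload(prompt):
    """Build an OpenAI-compatible chat completion request body."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt, base_url="http://localhost:8000/v1"):
    """POST the request to the server and return the assistant's reply."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# With the server from above running:
# print(chat("What is the capital of France?"))
```

The same snippet works against the SGLang server below by changing `base_url` to port 30000.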
- SGLang
How to use TencentARC/LLaMA-Pro-8B-Instruct with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "TencentARC/LLaMA-Pro-8B-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TencentARC/LLaMA-Pro-8B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "TencentARC/LLaMA-Pro-8B-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TencentARC/LLaMA-Pro-8B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use TencentARC/LLaMA-Pro-8B-Instruct with Docker Model Runner:
```shell
docker model run hf.co/TencentARC/LLaMA-Pro-8B-Instruct
```
Add chat_template from allenai/tulu-2-dpo-70b to tokenizer_config.json
This change adds a chat_template to tokenizer_config.json. For more information, please see Templates for Chat Models.
To demonstrate the outcome of this change, compare the output before and after:
Before
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("TencentARC/LLaMA-Pro-8B-Instruct", legacy=False)
chat = [
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
{"role": "assistant", "content": "Great, please let me know if I can help."},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
output:
$ python3 main.py
No chat template is defined for this tokenizer - using the default template for the LlamaTokenizerFast class. If the default is not appropriate for your model, please set `tokenizer.chat_template` to an appropriate template. See https://huggingface.co/docs/transformers/main/chat_templating for more information.
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
Hello, how are you? [/INST] I'm doing great. How can I help you today? </s><s>[INST] I'd like to show off how chat templating works! [/INST] Great, please let me know if I can help. </s>
After
If we modify the tokenizer to use a chat_template, we can see the difference:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("TencentARC/LLaMA-Pro-8B-Instruct", legacy=False)
+ tokenizer.chat_template = "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"
chat = [
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
{"role": "assistant", "content": "Great, please let me know if I can help."},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
Which outputs:
$ python3 main.py
<|user|>
Hello, how are you?
<|assistant|>
I'm doing great. How can I help you today?</s>
<|user|>
I'd like to show off how chat templating works!
<|assistant|>
Great, please let me know if I can help.</s>
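To make the template's behavior concrete, the formatting it performs can be sketched in plain Python. This is a hypothetical re-implementation for illustration only, not part of the tokenizer; the `render_tulu` name is an assumption, and the `</s>` default stands in for the model's actual eos token:

```python
def render_tulu(messages, eos_token="</s>", add_generation_prompt=False):
    """Mimic the tulu-2 chat_template: tag each turn with <|user|> or
    <|assistant|>, close assistant turns with the eos token, and
    optionally append a bare <|assistant|> tag to prompt generation."""
    parts = []
    for message in messages:
        if message["role"] == "user":
            parts.append("<|user|>\n" + message["content"])
        elif message["role"] == "assistant":
            parts.append("<|assistant|>\n" + message["content"] + eos_token)
    if add_generation_prompt:
        parts.append("<|assistant|>")
    return "\n".join(parts)

chat = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
]
print(render_tulu(chat))
```

Note the role of `add_generation_prompt`: when rendering a prompt for inference (rather than a training example), it leaves a trailing `<|assistant|>` tag so the model continues with an assistant turn.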
Please see TencentARC/LLaMA-Pro-8B-Instruct/discussions/3.
Thank you for opening this, beat me to it!
Super nitpicky: the keys in tokenizer_config.json are otherwise in alphabetical order.
Thanks for this PR! I really appreciate it!