Model Overview

Description:

The NVIDIA Llama-3.3-70B Eagle3 model is the Eagle3 draft head for Meta's Llama-3.3-70B model, an auto-regressive language model that uses an optimized transformer architecture with 70 billion parameters. For more information, please check here. The NVIDIA Llama-3.3-70B Eagle3 model incorporates Eagle speculative decoding with TensorRT Model Optimizer.

This model is ready for commercial/non-commercial use.

License/Terms of Use:

Use of this model is governed by the NVIDIA Open Model License. ADDITIONAL INFORMATION: Llama 3.3 Community Model License. Built with Llama.

Deployment Geography:

Global

Use Case:

Developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications. Also suitable for typical instruction-following tasks.

Release Date:

Hugging Face: 12/16/2025 via https://huggingface.co/nvidia/Llama-3.3-70B-Instruct-Eagle3

Model Architecture:

Architecture Type: Transformers
Network Architecture: Llama-3.3-70B
This model was developed based on https://huggingface.co/nvidia/Llama-3.3-70B-Instruct-NVFP4
Number of model parameters: 3.2 × 10^9

Input:

Input Type(s): Text
Input Format(s): String
Input Parameters: One-Dimensional (1D): Sequences
Other Properties Related to Input: 128k max context

Output:

Output Type(s): Text
Output Format: String
Output Parameters: One-Dimensional (1D): Sequences
Other Properties Related to Output: 128k max output

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Supported Runtime Engine(s):

  • TensorRT-LLM

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Blackwell

Preferred Operating System(s):

  • Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Model Version(s):

  • v1.0-BF16: December 16th, 2025

Training and Evaluation Datasets:

The total size (in number of data points): 503.3K
Total number of datasets: 2

Dataset partition: Training 100%

Training Dataset:

Link: ultrachat_200k and Magpie-Llama-3.1-Pro-300K-Filtered. Only the prompts from these datasets were used for data synthesis (the original GPT responses were not used); the synthesized data was then used to train the Eagle modules, as sketched at the end of this section. Click the links above for more information regarding the datasets.

Data Modality: Text

Data Collection Method by dataset:

  • Hybrid: Synthetic, Human, Automated

Labeling Method by dataset:

  • Hybrid: Synthetic, Human, Automated
    Properties: 500K samples, majority synthetic, others sourced from commercially-friendly datasets.
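
As a rough illustration of that prompt-only synthesis step, here is a minimal sketch; the target_model.generate interface is a hypothetical stand-in, not the actual TensorRT Model Optimizer tooling:

# Minimal sketch of prompt-only data synthesis: keep the prompts,
# discard the datasets' original responses, and regenerate responses
# with the target model. target_model is a hypothetical handle with
# a generate(prompt) -> str method.
def build_eagle_training_data(prompts, target_model):
    return [
        {"prompt": p, "response": target_model.generate(p)}
        for p in prompts
    ]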

Evaluation Dataset:

Link: MT-Bench; for more details, see here

Data Collection Method by dataset:

  • Hybrid: Human, Synthetic

Labeling Method by dataset:

  • Hybrid: Human, Synthetic

Properties: 3,300 multi-turn dialogue sequences, each annotated with expert preference votes.

Inference:

Engine: TensorRT-LLM 1.2.0rc0
Test Hardware: B200

Eagle Speculative Decoding

Synthesized data was obtained from Meta's Llama-3.3-70B model and then used to fine-tune the Eagle modules. This model is ready for inference with TensorRT-LLM in Eagle speculative decoding mode. Eagle modules are used to predict candidate tokens beyond the next token: in each generation step, every forward pass of the Eagle module produces a distribution over tokens one position beyond the previous one. A tree-based attention mechanism then samples candidate sequences for the original model to validate, and the longest accepted candidate sequence is selected, so more than one token can be returned per generation step. The average number of tokens generated per step is called the acceptance rate.
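
The draft-and-verify loop can be illustrated with a minimal sketch. This shows the general pattern under simplifying assumptions (a single greedy chain, hypothetical draft_model/target_model handles); the actual TensorRT-LLM implementation verifies a tree of candidate branches:

# One Eagle-style speculative decoding step (greedy, single chain).
# draft_model / target_model are hypothetical stand-ins, and tokens
# is a list of token ids.
def speculative_step(target_model, draft_model, tokens, max_draft_len=3):
    # 1) Draft: cheaply propose max_draft_len candidate tokens.
    proposed, context = [], list(tokens)
    for _ in range(max_draft_len):
        nxt = draft_model.greedy_next(context)
        proposed.append(nxt)
        context.append(nxt)

    # 2) Verify: one target forward pass scores all drafted positions
    #    at once, returning max_draft_len + 1 predictions (one per
    #    drafted position, plus one past the last draft token).
    predictions = target_model.predict_next_tokens(tokens, proposed)

    # 3) Accept the longest prefix where draft and target agree.
    accepted = []
    for drafted, verified in zip(proposed, predictions):
        if drafted != verified:
            break
        accepted.append(drafted)
    # The target's own token at the first mismatch (or the bonus
    # position) is always kept, so every step emits >= 1 token.
    accepted.append(predictions[len(accepted)])
    return tokens + accepted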

Usage

To serve the checkpoint with TensorRT-LLM, use the sample command below together with the TensorRT-LLM GitHub repo:

trtllm-serve <Llama-3.3-70B checkpoint> --host 0.0.0.0 --port 8000 --backend pytorch --max_batch_size 32 --tp_size 8 --extra_llm_api_options extra-llm-api-config.yml

An example extra-llm-api-config.yml looks like this:

enable_attention_dp: false
enable_autotuner: false

cuda_graph_config:
    max_batch_size: 32
    enable_padding: true

speculative_config:
    decoding_type: Eagle
    max_draft_len: 3
    speculative_model_dir: <eagle3 checkpoint>
    eagle3_layers_to_capture: [-1]

kv_cache_config:
    enable_block_reuse: false

Note that the only target-model activation passed to the draft is the final hidden state, post-LayerNorm and pre-LM head.
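
Once the server is up, trtllm-serve exposes an OpenAI-compatible HTTP API, and speculative decoding is transparent to the client. A minimal client sketch; the served model name and port are assumptions that must match your trtllm-serve invocation:

# Query the trtllm-serve endpoint through the OpenAI-compatible API.
# Assumes the server above listens on localhost:8000; adjust the
# model name to whatever the server registered for the checkpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="nvidia/Llama-3.3-70B-Instruct",  # assumed served name
    messages=[{"role": "user",
               "content": "Explain speculative decoding in one paragraph."}],
    max_tokens=256,
)
print(response.choices[0].message.content)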

Evaluation

The Eagle acceptance rate benchmark results (MT-Bench) with draft length 3 are presented in the table below:

Category      MT-Bench Acceptance Rate
writing       2.10
roleplay      2.12
reasoning     2.40
math          3.25
coding        3.18
extraction    2.60
stem          2.53
humanities    2.30
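
The acceptance rate is total generated tokens divided by total generation steps; with draft length 3, each step yields between 1 and 4 tokens, so values fall in [1.0, 4.0]. A minimal sketch of the computation, with hypothetical per-request counters (not a TensorRT-LLM API):

# Acceptance rate = total generated tokens / total generation steps.
def acceptance_rate(requests):
    """requests: iterable of (generated_tokens, generation_steps) pairs."""
    total_tokens = sum(t for t, _ in requests)
    total_steps = sum(s for _, s in requests)
    return total_tokens / total_steps

# Three requests from a hypothetical benchmark run: ~2.94
print(acceptance_rate([(320, 110), (512, 160), (256, 100)]))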

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns here.

Subcards:

Explainability

Intended Task/Domain: Text generation, reasoning, summarization, and question answering.
Model Type: Text-to-text transformer
Intended Users: This model is intended for developers, researchers, and customers building/utilizing LLMs, while balancing accuracy and efficiency.
Output: Text String(s)
Describe how the model works: Generates text by predicting the next word or token based on the context provided in the input sequence, using multiple self-attention layers.
Technical Limitations & Mitigation: The model was trained on data that contains toxic language and societal biases originally crawled from the internet, so it may amplify those biases and return toxic responses, especially when given toxic prompts. Before deploying any application of this model, developers should perform safety testing and tuning tailored to their specific use of the model.
Verified to have met prescribed NVIDIA quality standards: Yes
Performance Metrics: Accuracy, Throughput, and user-side throughput
Potential Known Risks: The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text. It may also produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive.
Licensing: Use of this model is governed by the NVIDIA Open Model License. ADDITIONAL INFORMATION: Llama 3.3 Community Model License. Built with Llama.

Bias

Participation considerations from adversely impacted groups (protected classes) in model design and testing: None
Measures taken to mitigate against unwanted bias: None

Safety & Security

Model Application Field(s): Chat, Instruction Following, Chatbot Development, Code Generation, Reasoning
Describe the life-critical impact (if present): Not Applicable
Use Case Restrictions: Abide by the NVIDIA Open Model License. ADDITIONAL INFORMATION: Llama 3.3 Community Model License. Built with Llama.
Model and Dataset Restrictions: The principle of least privilege (PoLP) is applied, limiting access for dataset generation. Dataset access is restricted during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face and may become available on cloud providers' model catalogs.

Privacy

Generatable or reverse engineerable personal data? No
Was consent obtained for any personal data used? Not Applicable
Personal data used to create this model? None Known
How often is dataset reviewed? Before Release
Was data from user interactions with the AI model (e.g. user input and prompts) used to train the model? No
Is there provenance for all datasets used in training? Yes
Does data labeling (annotation, metadata) comply with privacy laws? Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? No, not possible with externally-sourced data.
Applicable NVIDIA Privacy Policy: https://www.nvidia.com/en-us/about-nvidia/privacy-policy/