AI & ML interests
Large language models
Papers
- Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers
- Falcon-H1R: Pushing the Reasoning Frontiers with a Hybrid Model for Efficient Test-Time Scaling
Vision encoders distilled from DINOv3 and SigLIP2 (MoE & dense), stemming from the CVPR 2026 AMoE paper.
7B models built on top of Falcon3-7B
Arabic benchmark datasets (https://arxiv.org/pdf/2507.15850)
This collection features the FalconMamba 7B base model, the instruction-tuned version, their 4-bit and GGUF variants, and the demo.
- Falcon Mamba Playground (Space 🐍): generate chat responses using the FalconMamba-7b model
- Falcon Mamba: The First Competitive Attention-free 7B Language Model (paper, arXiv 2410.05355)
- tiiuae/falcon-mamba-7b (Text Generation)
- tiiuae/falcon-mamba-7b-instruct (Text Generation, 7B)
Leveraging Contextual Web Data for Fine-tuning Vision Language Models (https://arxiv.org/abs/2502.10250)
A series of extremely small yet powerful language models redefining capabilities at small scale.
- Falcon-H1-Tiny (Space 📝): generate text using extremely small yet powerful language models
- Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers (paper, arXiv 2601.04890)
- tiiuae/Falcon-H1-Tiny-90M-Instruct (Text Generation, 91.1M)
- tiiuae/Falcon-H1-Tiny-90M-Instruct-GGUF (91.1M)
The Falcon-H1 family of hybrid-head (Transformer-SSM) language models, including 0.5B, 1.5B, 1.5B-Deep, 3B, 7B, and 34B variants (pretrained & instruction-tuned).
- Falcon H1 Playground (Space 🦅): chat with Falcon-H1 language models
- Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance (paper, arXiv 2507.22448)
- Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers (paper, arXiv 2601.04890)
- tiiuae/Falcon-H1-0.5B-Base (Text Generation, 0.5B)
A series of powerful, universal, and fine-tunable small language models.
- Falcon E Playground (Space 💻): chat with an advanced language model to get answers and engage in conversation
- tiiuae/Falcon-E-3B-Base (Text Generation)
- tiiuae/Falcon-E-3B-Instruct (Text Generation)
- tiiuae/Falcon-E-3B-Instruct-GGUF (3B)
The Falcon3 family of open foundation models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only (paper, arXiv 2306.01116)
- tiiuae/falcon-refinedweb (dataset, 968M rows)
- tiiuae/falcon-rw-1b (Text Generation)
- tiiuae/falcon-rw-7b (Text Generation)