An Efficient Training Framework for Diffusion Language Models
AI & ML interests
LLM
Papers
Beyond Real: Imaginary Extension of Rotary Position Embeddings for Long-Context LLMs
SRPO: Self-Referential Policy Optimization for Vision-Language-Action Models
True Speech-to-Speech Language Model
- OpenMOSS-Team/Embodied_R1-ScienceWorld
  8B • Updated • 9
- OpenMOSS-Team/Embodied_Planner-R1-Alfworld
  8B • Updated • 7
- Unleashing Embodied Task Planning Ability in LLMs via Reinforcement Learning
  Paper • 2506.23127 • Published • 1
- World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning
  Paper • 2503.10480 • Published • 55
The MHA2MLA model published in the paper "Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-Based LLMs"
- OpenMOSS-Team/SmolLM-135M-MLA-d_kv_8-refactor
  Text Generation • 0.1B • Updated • 21
- OpenMOSS-Team/SmolLM-135M-MLA-d_kv_32-refactor
  Text Generation • 0.1B • Updated • 11
- OpenMOSS-Team/SmolLM-135M-MLA-d_kv_16-refactor
  Text Generation • 0.1B • Updated • 12
- OpenMOSS-Team/SmolLM-360M-MLA-d_kv_8-refactor
  Text Generation • 0.3B • Updated • 11
- OpenMOSS-Team/moss-moon-003-sft-plugin
  Text Generation • Updated • 42 • 69
- OpenMOSS-Team/moss-moon-003-sft
  Text Generation • Updated • 84 • 127
- OpenMOSS-Team/moss-moon-003-base
  Text Generation • Updated • 254 • 131
- OpenMOSS-Team/moss-moon-003-sft-int4
  Text Generation • Updated • 48 • 40
Proactive Robot Manipulation in Omni-modal Context
Open source weights of Lorsa modules introduced in "Towards Understanding the Nature of Attention with Low-Rank Sparse Decomposition".
The MHA2MLA model published in the paper "Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-Based LLMs"
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs
  Paper • 2502.14837 • Published • 3
- OpenMOSS-Team/Llama-2-7B-MLA-d_kv_16
  Text Generation • 6B • Updated • 17
- OpenMOSS-Team/Llama-2-7B-MLA-d_kv_32
  Text Generation • 6B • Updated • 13
- OpenMOSS-Team/Llama-2-7B-MLA-d_kv_64
  Text Generation • 7B • Updated • 13