Dasheng Tokenizer
DashengTokenizer is a high-performance continuous audio tokenizer designed for audio understanding and generation tasks. Compared to previous work, our framework trains only a single linear layer to enable audio generation from semantically strong encoders.
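The idea can be sketched as follows: a frozen, semantically strong encoder produces embeddings, a single trained linear projection maps them into a decoder's latent space, and the decoder renders audio. The module names, dimensions, and shapes below are illustrative placeholders, not DashengTokenizer's actual internals.
import torch
import torch.nn as nn

class LinearBridge(nn.Module):
    # Hypothetical sketch: only the linear projection is trained; the encoder and
    # decoder are placeholder frozen modules supplied by the caller.
    def __init__(self, encoder, decoder, enc_dim=768, dec_dim=512):
        super().__init__()
        self.encoder = encoder.eval()
        self.decoder = decoder.eval()
        for p in list(self.encoder.parameters()) + list(self.decoder.parameters()):
            p.requires_grad_(False)              # keep encoder and decoder frozen
        self.proj = nn.Linear(enc_dim, dec_dim)  # the single trained layer

    def forward(self, audio):
        with torch.no_grad():
            emb = self.encoder(audio)    # assumed shape: (batch, time, enc_dim)
        latent = self.proj(emb)          # (batch, time, dec_dim)
        return self.decoder(latent)      # reconstructed waveform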
Installation:
uv pip install transformers torch torchaudio einops
import torch
import torchaudio
from transformers import AutoModel
# Load the model
model = AutoModel.from_pretrained("mispeech/dashengtokenizer", trust_remote_code=True)
model.eval()
# Load audio file (only 16kHz supported!)
audio, sr = torchaudio.load("path/to/audio.wav")
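# If the file is not already sampled at 16 kHz, resample it first (the model only
# supports 16 kHz input, as noted above; the resampling call is plain torchaudio).
if sr != 16000:
    audio = torchaudio.functional.resample(audio, orig_freq=sr, new_freq=16000)
    sr = 16000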
# Optional: create an attention mask for variable-length inputs
# attention_mask = torch.ones(audio.shape[0], audio.shape[1])  # all ones = use the full audio
# attention_mask[0, 8000:] = 0  # example: ignore everything after sample 8000 (0.5 s at 16 kHz) in the first item
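# Illustrative only: how a zero-padded batch and a matching mask could be built for
# two clips of different lengths. This assumes the model accepts a (batch, samples)
# float tensor plus a same-shaped 0/1 mask; the shapes are an assumption, not documented API.
long_clip = torch.randn(16000)              # 1.0 s of dummy audio at 16 kHz
short_clip = torch.randn(8000)              # 0.5 s of dummy audio at 16 kHz
example_batch = torch.zeros(2, 16000)
example_batch[0] = long_clip
example_batch[1, :8000] = short_clip
example_mask = torch.zeros(2, 16000)
example_mask[0, :] = 1                      # every sample of the first clip is valid
example_mask[1, :8000] = 1                  # only the first 8000 samples of the second clip are valid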
# Method 1: End-to-end processing (encode + decode)
with torch.no_grad(), torch.autocast(device_type='cuda'):
    outputs = model(audio)  # optionally pass attention_mask=attention_mask
    reconstructed_audio = outputs["audio"]
    embeddings = outputs["embeddings"]
# Method 2: Separate encoding and decoding
with torch.no_grad(), torch.autocast(device_type='cuda'):
    # Encode audio to embeddings
    embeddings = model.encode(audio)  # optionally pass attention_mask=attention_mask
    # Decode embeddings back to audio
    reconstructed_audio = model.decode(embeddings)
# Save reconstructed audio
torchaudio.save("reconstructed_audio.wav", reconstructed_audio, sr)
# Use case: audio reconstruction (round-trip through the tokenizer)
embeddings = model.encode(audio)
reconstructed = model.decode(embeddings)
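# Optional sanity check (illustrative, not part of the model card): compare the
# reconstruction against the original waveform with a simple signal-to-noise ratio.
# This assumes the output layout matches the input; both are truncated to the
# shorter length in case the model changes the duration slightly.
min_len = min(audio.shape[-1], reconstructed.shape[-1])
noise = audio[..., :min_len] - reconstructed[..., :min_len]
snr_db = 10 * torch.log10(audio[..., :min_len].pow(2).mean() / noise.pow(2).mean())
print(f"Reconstruction SNR: {snr_db.item():.2f} dB")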
# Extract rich audio features for downstream tasks
features = model.encode(audio)
# Use features for classification, clustering, etc.
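As a sketch of such downstream use, the extracted features can be mean-pooled over time and passed to a small classification head. The assumed feature layout (batch, time, dim) and the 10-class linear head below are illustrative placeholders, not part of the DashengTokenizer API.
import torch.nn as nn

embedding_dim = features.shape[-1]         # feature dimension of the extracted embeddings
classifier = nn.Linear(embedding_dim, 10)  # placeholder head for a hypothetical 10-class task
clip_embedding = features.mean(dim=-2)     # mean-pool over the assumed time axis -> (batch, dim)
logits = classifier(clip_embedding)        # class scores for each clip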
If you use DashengTokenizer in your research, please cite:
@misc{dinkel_dashengtokenizer_2026,
  title={DashengTokenizer: One layer is enough for unified audio understanding and generation},
  author={MiLM Plus, Xiaomi},
  year={2026},
  url={https://huggingface.co/mispeech/dashengtokenizer}
}
Apache 2.0 License