MLSys 2026 FlashInfer-Bench Challenge Dataset

This repository contains the FlashInfer-Bench dataset for the MLSys 2026 Kernel Generation Challenge.

The dataset is intended for use with the FlashInfer-Bench benchmark system and follows the FlashInfer Trace Schema. To use it in the competition, please refer to our starter kit.

Download

Use these commands to download the dataset:

git lfs install
git clone https://huggingface.co/datasets/flashinfer-ai/mlsys26-contest
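
Alternatively, if you prefer to stay in Python, huggingface_hub's snapshot_download can fetch the same repository. A minimal sketch (the local directory name is your choice):

from huggingface_hub import snapshot_download

# Download the dataset repository to a local directory of your choice.
local_dir = snapshot_download(
    repo_id="flashinfer-ai/mlsys26-contest",
    repo_type="dataset",           # this repo is a dataset, not a model
    local_dir="mlsys26-contest",   # any writable path works
)
print(local_dir)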

Then set the FIB_DATASET_PATH environment variable so that FlashInfer-Bench can find the dataset:

export FIB_DATASET_PATH=/path/to/mlsys26-contest
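
As a quick sanity check, you can confirm the path points at the expected layout (a minimal Python sketch; it assumes only the two top-level folders described under Dataset Structure below):

import os

root = os.environ["FIB_DATASET_PATH"]
# The clone should contain the two folders described under Dataset Structure.
print(sorted(os.listdir(root)))  # expect 'definitions' and 'workloads' among the entries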

Tasks

This dataset contains the definitions and workloads for these kernels:

  • Fused Mixture of Experts (MoE)
  • Gated Delta Network (GDN)
  • DeepSeek Sparse Attention (DSA)

Dataset Structure

The repository is organized as follows:

mlsys26-contest/
├── definitions/
└── workloads/

These components are provided in the dataset:

  • Definition: describes the inputs, outputs, and computation logic of a kernel task.
  • Workload: describes the inputs a definition receives during real inference; these are used to benchmark the Solution you provide.
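
For a concrete feel, each definition record carries fields such as name, description, op_type, tags, axes, constraints, inputs, and outputs, plus a reference implementation, per the FlashInfer Trace Schema. A minimal inspection sketch, assuming the definitions are stored as JSON files (adjust the glob pattern if the actual layout differs):

import glob
import json
import os

root = os.environ["FIB_DATASET_PATH"]

# Grab one definition file; the exact layout under definitions/ is assumed here.
paths = sorted(glob.glob(os.path.join(root, "definitions", "**", "*.json"), recursive=True))
with open(paths[0]) as f:
    definition = json.load(f)

print(definition["name"], "-", definition["op_type"])
for tensor, spec in definition["inputs"].items():
    # Each input declares a symbolic shape and a dtype.
    print(f"  {tensor}: shape={spec.get('shape')} dtype={spec.get('dtype')}")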

During benchmarking, the following components are provided or generated:

  • Solution: provided by participants; your implementation of a kernel task.
  • Trace: generated by FlashInfer-Bench; the performance and correctness results of your Solution on the workloads.
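
To see how much benchmark input ships with each task, you can count workload files per subfolder (again a sketch; it assumes workloads are grouped into per-task subdirectories, which may not match the actual layout):

import glob
import os
from collections import Counter

root = os.environ["FIB_DATASET_PATH"]

counts = Counter()
for path in glob.glob(os.path.join(root, "workloads", "**", "*"), recursive=True):
    if os.path.isfile(path):
        # Group by the immediate parent folder, assumed here to name the task.
        counts[os.path.basename(os.path.dirname(path))] += 1

for task, n in counts.most_common():
    print(f"{task}: {n} workload file(s)")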