---
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
tags:
- e-commerce
size_categories:
- 100K<n<1M
---
## Introduction
EcomMMMU is a large-scale multimodal multitask understanding dataset for e-commerce applications,
containing 406,190 samples and 8,989,510 product images across 34 product categories.
It is designed to systematically evaluate how multimodal large language models (MLLMs)
utilize visual information in real-world shopping scenarios.
Unlike prior datasets that treat all images equally,
EcomMMMU explicitly investigates when and how multiple product images contribute to understanding.
It includes a specialized vision-salient subset (VSS) designed to test scenarios
where textual information alone is insufficient and visual information is essential.
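To make the VSS distinction concrete, here is a minimal sketch of how the subset might be selected after loading. This is not the documented schema: the `train` split name and the `is_vision_salient` column are hypothetical placeholders, so verify the actual layout on the Hub or in the repository.
```python
from datasets import load_dataset

dataset = load_dataset("NingLab/EcomMMMU")

# Hypothetical: if VSS membership were exposed as a boolean column,
# the vision-salient subset could be selected with a simple filter.
# Both "train" and "is_vision_salient" are assumed names, not documented here.
vss = dataset["train"].filter(lambda sample: sample["is_vision_salient"])
print(f"{len(vss)} vision-salient samples")
```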
## Dataset Sources
- **Repository:** [GitHub](https://github.com/ninglab/EcomMMMU)
## Quick Start
Run the following code to load the data:
```python
from datasets import load_dataset

# Downloads and caches the dataset from the Hugging Face Hub.
dataset = load_dataset("NingLab/EcomMMMU")
```
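Once loaded, the splits, columns, and individual samples can be inspected with the standard `datasets` API; the snippet below assumes nothing about EcomMMMU-specific field names:
```python
# List the available splits and their sizes.
for split_name, split in dataset.items():
    print(split_name, len(split))

# Inspect the schema and the first example of one split.
first_split = next(iter(dataset.values()))
print(first_split.column_names)
print(first_split[0])
```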
## License
Please check the license of each source dataset used to curate EcomMMMU, listed below.

| Dataset | License Type |
| --- | --- |
| [Amazon Review](https://amazon-reviews-2023.github.io/) | Not listed |
| [AmazonQA](https://github.com/amazonqa/amazonqa) | Not listed |
| [Shopping Queries Dataset](https://github.com/amazon-science/esci-data) | Apache License 2.0 |
## Citation
```bibtex
@article{ling2025ecommmmu,
  title={EcomMMMU: Strategic Utilization of Visuals for Robust Multimodal E-Commerce Models},
  author={Ling, Xinyi and Du, Hanwen and Zhu, Zhihui and Ning, Xia},
  journal={arXiv preprint arXiv:2508.15721},
  year={2025}
}
```