# Pile Preshuffled Seeds
This dataset contains precomputed index maps for loading the preshuffled Pile dataset with different random seeds. These index maps are used by GPT-NeoX's `MMapIndexedDataset` to control the order in which training data is presented to the model, enabling reproducible training with different data orders without reprocessing the underlying data.
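For orientation, here is a rough sketch of how the three maps in each seed directory compose to produce one training sample. It loosely follows the lookup logic of the GPT-NeoX dataset class that consumes these maps; the `get(doc, offset=..., length=...)` accessor stands in for `MMapIndexedDataset`'s document reader, so treat the names and signatures as illustrative rather than a drop-in implementation:

```python
import numpy as np

def get_training_sample(i, doc_idx, sample_idx, shuffle_idx, indexed_dataset):
    """Sketch of the i-th training sample lookup, GPT-NeoX style."""
    # shuffle_idx permutes the flat sample ordering; this is the part that
    # differs between seed directories.
    j = shuffle_idx[i]
    # Consecutive sample_idx entries bracket the sample: each entry holds
    # (position in doc_idx, token offset within that document).
    doc_f, offset_f = sample_idx[j]
    doc_l, offset_l = sample_idx[j + 1]
    if doc_f == doc_l:
        # The whole fixed-length sample lies inside a single document.
        return indexed_dataset.get(doc_idx[doc_f], offset=offset_f,
                                   length=offset_l - offset_f + 1)
    # Otherwise stitch together the tail of the first document, any whole
    # documents in between, and the head of the last document.
    pieces = [indexed_dataset.get(doc_idx[doc_f], offset=offset_f)]
    pieces += [indexed_dataset.get(doc_idx[k]) for k in range(doc_f + 1, doc_l)]
    pieces.append(indexed_dataset.get(doc_idx[doc_l], length=offset_l + 1))
    return np.concatenate(pieces)
```

The point of the indirection is that only these (comparatively small) index arrays differ between seeds; the tokenized Pile itself is stored once and shared across all data orders.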
These index maps were used to train the PolyPythia model suite, a collection of Pythia-scale models trained with multiple random seeds to study the effect of data order on learning dynamics.
## Contents
### Root Files
| File | Size | Description |
|---|---|---|
| `pile_20B_tokenizer_text_document.idx` | 4.21 GB | The base index file for the tokenized Pile |
| `dataset.py` | 11.1 KB | Example code for loading the dataset with these index maps |
### Seed Directories
There are 10 seed directories (`seed0` through `seed9`), each containing three NumPy index map files:
| File | Size | Description |
|---|---|---|
| `*_doc_idx.npy` | 842 MB | Document index mapping |
| `*_sample_idx.npy` | 1.3 GB | Sample index mapping |
| `*_shuffle_idx.npy` | 649 MB | Shuffle order mapping |
The filenames encode the index map parameters: 147,164,160 samples, 2048 sequence length, seed 1234 (the base seed used to generate the maps).
Total size: 32.1 GB
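Once a seed directory is downloaded, the maps can be opened lazily with NumPy. A minimal sketch, assuming only the filename suffixes listed above (the `seed3` path is an example; point it at whichever seed directory you downloaded):

```python
import glob
import numpy as np

seed_dir = "seed3"  # example path

def load_map(suffix):
    # Exactly one file per suffix is expected in each seed directory.
    (path,) = glob.glob(f"{seed_dir}/*_{suffix}.npy")
    # mmap_mode="r" memory-maps the array instead of reading GBs into RAM.
    return np.load(path, mmap_mode="r")

doc_idx = load_map("doc_idx")
sample_idx = load_map("sample_idx")
shuffle_idx = load_map("shuffle_idx")
```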
## Usage
1. Download the preshuffled Pile data from either:
   - [EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled) (deduplicated)
   - [EleutherAI/pile-standard-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-standard-pythia-preshuffled) (standard)
2. Download the seed directory you want to use (a sketch for fetching only the files you need follows below).
3. Follow the instructions in the [Pythia repo README](https://github.com/EleutherAI/pythia) to configure training with the index maps.
4. See `dataset.py` in this repo for an example of how to load the dataset with these index maps.
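If you only need one seed, you can avoid fetching the full 32.1 GB repository. Below is a sketch using `huggingface_hub.snapshot_download` with `allow_patterns`; the `repo_id` is a placeholder for this repository's actual id, and each seed directory is roughly 2.8 GB per the table above:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="EleutherAI/<this-repo>",  # placeholder: substitute this dataset's actual repo id
    repo_type="dataset",
    allow_patterns=[
        "pile_20B_tokenizer_text_document.idx",  # base index for the tokenized Pile
        "dataset.py",                            # example loading code
        "seed0/*",                               # index maps for one seed (~2.8 GB)
    ],
    local_dir="pile_index_maps",
)
```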
## Related Datasets
- [EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled): the deduplicated Pile in MMap format
- [EleutherAI/pile-standard-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-standard-pythia-preshuffled): the standard Pile in MMap format
## Citation
```bibtex
@article{biderman2023pythia,
  title={Pythia: A suite for analyzing large language models across training and scaling},
  author={Biderman, Stella and Schoelkopf, Hailey and Anthony, Quentin Gregory and Bradley, Herbie and O'Brien, Kyle and Hallahan, Eric and Khan, Mohammad Aflah and Purohit, Shivanshu and Prashanth, USVSN Sai and Raff, Edward and others},
  journal={International Conference on Machine Learning},
  year={2023}
}
```