# Reactive AI / Beta Pre-Train Corpus

Pre-training corpus for RxT-Beta models, created from public & open datasets. Includes high-quality English and Polish web crawl data, mathematical and scientific subsets, and code in different programming languages.

`2k` subsets are filtered for 1024-2048 tokens, except the MegaMath Web Pro and GitHub Code subsets, which were filtered for 512-2048 tokens.
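The token-window filtering behind the `2k` subsets can be sketched as a simple length predicate. This is a minimal sketch, not the actual filtering pipeline: the whitespace tokenizer below is an illustrative stand-in, since the card does not name the tokenizer used.

```python
def in_token_window(text, tokenize, min_tokens=1024, max_tokens=2048):
    """True when the tokenized length of `text` falls within [min_tokens, max_tokens]."""
    return min_tokens <= len(tokenize(text)) <= max_tokens

# Illustrative stand-in tokenizer -- an assumption, the card does not say
# which tokenizer the token counts refer to.
tokenize = str.split

print(in_token_window("x " * 1500, tokenize))  # True: 1500 tokens is in-window
print(in_token_window("x " * 100, tokenize))   # False: too short, dropped
```

With the `datasets` library, such a predicate would typically be applied via `dataset.filter(lambda ex: in_token_window(ex["text"], tokenize))`; for the 512-2048 variants, pass `min_tokens=512`.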
## Subsets & original datasets
- [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
  - `fineweb-edu-s100` (51.3M examples) - 50% of the 'sample-100BT' subset
  - `fineweb-edu-2025-26` (14M examples) - CC-MAIN-2025-26 subset - latest crawl
  - `fineweb-edu-s100-sh3-2k` (4.69M examples) - third 25% of the 'sample-100BT' subset, filtered for 1024-2048 token examples
  - `fineweb-edu-s100-sh4-2k` (3.90M examples) - fourth 25% of the 'sample-100BT' subset, filtered for 1024-2048 token examples
  - `fineweb-edu-2025-08-2k` (3.77M examples) - CC-MAIN-2025-08 subset, filtered for 1024-2048 token examples
  - `fineweb-edu-2025-13-2k` (3.95M examples) - CC-MAIN-2025-13 subset, filtered for 1024-2048 token examples
  - `fineweb-edu-2025-18-2k` (4.13M examples) - CC-MAIN-2025-18 subset, filtered for 1024-2048 token examples
  - `fineweb-edu-2025-21-2k` (3.76M examples) - CC-MAIN-2025-21 subset, filtered for 1024-2048 token examples
- [FineWiki](https://huggingface.co/datasets/HuggingFaceFW/finewiki)
  - `finewiki-en` (6.6M examples) - English Wikipedia subset
  - `finewiki-pl` (1.5M examples) - Polish Wikipedia subset
- [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
  - `fineweb-s10-2k` (1.73M examples) - 'sample-10BT' subset, filtered for 1024-2048 token examples
- [FineWeb2-HQ](https://huggingface.co/datasets/epfml/FineWeb2-HQ)
  - `fineweb2-hq-pl` (13.3M examples) - high-quality filtered Polish web crawl
- [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2)
  - `fineweb2-pl-2k` (2.13M examples) - latest 30% of the Polish subset, filtered for 1024-2048 token examples
- [DCLM-Edu](https://huggingface.co/datasets/HuggingFaceTB/dclm-edu)
  - `dclm-edu-2k` (3.59M examples) - examples filtered for an educational score of 3 or higher and 1024-2048 tokens
- [FinePdfs-Edu](https://huggingface.co/datasets/HuggingFaceFW/finepdfs-edu)
  - `finepdfs-edu-en-2k` (4.38M examples) - English subset, filtered for 1024-2048 tokens
  - `finepfds-edu-pl-2k` (0.18M examples) - Polish subset, filtered for 1024-2048 tokens (not used, as it is duplicated in the base `finepfds-pl-2k`)
- [FinePdfs](https://huggingface.co/datasets/HuggingFaceFW/finepdfs)
  - `finepfds-pl-2k` (1.65M examples) - Polish subset, filtered for 1024-2048 tokens
- [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath)
  - `finemath-4plus` (6.7M examples) - full subset with the best math quality
  - `infiwebmath-4plus` (6.3M examples) - full subset with the best math quality
- [MegaMath](https://github.com/LLM360/MegaMath)
  - `megamath-web-pro-2k` (5.38M examples) - MegaMath Web Pro subset, filtered for 512-2048 tokens
- [ProofPile-2](https://huggingface.co/datasets/EleutherAI/proof-pile-2)
  - `pp2-arxiv` (4M examples) - arXiv research papers
  - `pp2-algebraic-stack` (3.37M examples) - mathematical code examples
- Notebooks:
  - `kaggle` (0.58M examples) - from [HuggingFaceTB/issues-kaggle-notebooks](https://huggingface.co/datasets/HuggingFaceTB/issues-kaggle-notebooks)
  - `github-jupyter` (0.05M examples) - from [codeparrot/github-jupyter-code-to-text](https://huggingface.co/datasets/codeparrot/github-jupyter-code-to-text)
  - `notebooks-all` (0.63M examples) - `kaggle` and `github-jupyter` combined
- Code:
  - `beta-code-short` (4.51M examples) - filtered and combined code from [codeparrot/codeparrot-clean](https://huggingface.co/datasets/codeparrot/codeparrot-clean) and [codeparrot/github-code-clean](https://huggingface.co/datasets/codeparrot/github-code-clean)
  - `github-code-clean-small-2k` (8.17M examples) - code from [loubnabnl/github-code-clean-small](https://huggingface.co/datasets/loubnabnl/github-code-clean-small), filtered for 512-2048 tokens