Commit 8f3bc4f
Parent(s): cd457fc

Add uv-script tag, fix Key Finding to use yearly averages

- Add uv-script tag to generated dataset cards
- Key Finding now shows yearly averages (more representative than single dumps)
- Update docstring to reflect 50M+ docs

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <[email protected]>

- HuggingFaceFW_fineweb-edu_summary.json +58 -0
- README.md +259 -0
- basic-stats.py +338 -0
- finepdfs-stats.py +21 -8
- stats_output/detailed_stats.parquet +3 -0
- stats_output/dump_stats.parquet +3 -0
- stats_output/extractor_stats.parquet +3 -0
- stats_output/global_stats.parquet +3 -0
- stats_output/language_stats.parquet +3 -0
- stats_output/temporal_stats.parquet +3 -0
HuggingFaceFW_fineweb-edu_summary.json
ADDED
{
  "dataset": "HuggingFaceFW/fineweb-edu",
  "split": "train",
  "text_column": "text",
  "total_samples": 10,
  "statistics": {
    "character_count": {
      "count": 10,
      "mean": 3761.2,
      "std": 2456.61,
      "min": 396,
      "max": 7966
    },
    "word_count": {
      "count": 10,
      "mean": 591.2,
      "std": 385.27,
      "min": 56,
      "max": 1272
    },
    "line_count": {
      "count": 10,
      "mean": 31.2,
      "std": 27.54,
      "min": 2,
      "max": 93
    },
    "sentence_count": {
      "count": 10,
      "mean": 25.7,
      "std": 18.8,
      "min": 5,
      "max": 71
    },
    "mean_word_length": {
      "count": 10,
      "mean": 5.45,
      "std": 0.46,
      "min": 4.7,
      "max": 6.09
    }
  },
  "character_type_distribution": {
    "alphanumeric": 0.8164,
    "alphabetic": 0.8093,
    "digit": 0.0071,
    "uppercase": 0.0293,
    "lowercase": 0.78,
    "whitespace": 0.1554,
    "punctuation": 0.0276,
    "special": 0.0006
  },
  "derived_metrics": {
    "avg_words_per_line": 18.95,
    "avg_chars_per_word": 6.36,
    "avg_words_per_sentence": 23.0
  }
}
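This summary was presumably produced by a quick test run of `basic-stats.py` (included below), e.g. `uv run basic-stats.py HuggingFaceFW/fineweb-edu --max-samples 10`; the `total_samples` value of 10 matches such a run.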
README.md
ADDED
---
viewer: false
tags:
- uv-script
- dataset-statistics
- data-quality
- text-analysis
license: apache-2.0
---

# Dataset Statistics

Calculate essential text statistics for HuggingFace datasets using streaming mode. No ML models, pure Python, works on datasets of any size.

## Scripts

### `basic-stats.py` - Essential Text Statistics

Calculate fundamental text statistics using pure Python (no ML dependencies). Uses streaming mode by default, so it works on datasets of any size without downloading the full dataset.

**Statistics calculated:**
- Character, word, line, sentence counts (per sample and total)
- Streaming mean and standard deviation using Welford's algorithm
- Character type distributions (alphanumeric, digits, punctuation, whitespace, special characters)
- Length statistics (min, max)
- Derived metrics (words per line, chars per word, words per sentence)

**Features:**
- ✅ Pure Python (no ML models required)
- ✅ Streaming mode (constant memory usage)
- ✅ Progress tracking with tqdm
- ✅ Optional per-sample CSV output
- ✅ Works on datasets of any size
- ✅ Fast: ~10k-50k samples/sec on CPU

## Installation

No installation needed! Just use `uv run`:

```bash
# Run directly with uv
uv run https://huggingface.co/datasets/uv-scripts/dataset-stats/raw/main/basic-stats.py --help
```

## Usage Examples

### Quick Test (10k samples)

```bash
uv run basic-stats.py HuggingFaceFW/fineweb-edu --max-samples 10000
```

### Full Dataset Statistics

```bash
uv run basic-stats.py allenai/c4 --split train
```

### Different Text Column

```bash
uv run basic-stats.py username/dataset --text-column content
```

### Save Per-Sample Statistics

```bash
uv run basic-stats.py username/dataset --per-sample --output-file my-stats.csv
```

### Using HF Jobs (for large datasets)

```bash
hf jobs uv run \
    -e HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \
    https://huggingface.co/datasets/uv-scripts/dataset-stats/raw/main/basic-stats.py \
    username/very-large-dataset --max-samples 100000
```

## Example Output

```json
{
  "dataset": "HuggingFaceFW/fineweb-edu",
  "split": "train",
  "text_column": "text",
  "total_samples": 10000,
  "statistics": {
    "character_count": {
      "count": 10000,
      "mean": 3542.18,
      "std": 2134.52,
      "min": 120.0,
      "max": 45231.0
    },
    "word_count": {
      "count": 10000,
      "mean": 642.34,
      "std": 387.21,
      "min": 18.0,
      "max": 8234.0
    },
    "line_count": {
      "count": 10000,
      "mean": 28.5,
      "std": 16.3,
      "min": 2.0,
      "max": 234.0
    },
    "sentence_count": {
      "count": 10000,
      "mean": 24.7,
      "std": 14.2,
      "min": 1.0,
      "max": 187.0
    },
    "mean_word_length": {
      "count": 10000,
      "mean": 5.52,
      "std": 0.87,
      "min": 2.1,
      "max": 12.4
    }
  },
  "character_type_distribution": {
    "alphanumeric": 0.8234,
    "alphabetic": 0.7891,
    "digit": 0.0343,
    "uppercase": 0.0456,
    "lowercase": 0.9544,
    "whitespace": 0.1523,
    "punctuation": 0.0187,
    "special": 0.0056
  },
  "derived_metrics": {
    "avg_words_per_line": 22.54,
    "avg_chars_per_word": 5.52,
    "avg_words_per_sentence": 26.01
  }
}
```

## Performance

- **Speed**: ~10,000-50,000 samples/sec on CPU (depending on text length)
- **Memory**: Constant O(1) memory usage (streaming statistics)
- **Dependencies**: Pure Python + datasets library
- **GPU**: Not needed

## Use Cases

### Understanding Dataset Characteristics

Get a quick overview of your dataset's basic properties:
```bash
uv run basic-stats.py username/my-dataset --max-samples 10000
```

### Comparing Datasets

Generate statistics for multiple datasets to compare their characteristics:
```bash
for dataset in "allenai/c4" "HuggingFaceFW/fineweb" "cerebras/SlimPajama-627B"; do
    uv run basic-stats.py $dataset --max-samples 50000
done
```

### Quality Checking

Check if your dataset has reasonable statistics before training:
- Are word counts within expected range?
- Is the character distribution reasonable?
- Are there too many special characters (potential quality issues)?

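To make these checks scriptable, here is a minimal sketch that reads the summary JSON the script writes (the filename follows the `<dataset>_summary.json` pattern; the thresholds are illustrative, not recommendations):

```python
import json

# Summary file written by basic-stats.py (dataset name with "/" replaced by "_").
with open("HuggingFaceFW_fineweb-edu_summary.json") as f:
    summary = json.load(f)

words = summary["statistics"]["word_count"]
special = summary["character_type_distribution"]["special"]

# Illustrative sanity checks before training.
assert words["mean"] > 100, "documents look unusually short"
assert special < 0.05, "many special characters - possible encoding issues"
```
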
### Setting Filter Thresholds

Use the statistics to inform filtering decisions:
- If mean word count is 500, you might filter out samples < 50 or > 10,000 words
- A very low punctuation ratio might indicate low-quality text
- Character type distributions can reveal encoding issues

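As a concrete sketch, thresholds like these can be applied with `datasets.filter`; the dataset name and cutoffs below are placeholders, not part of this repo's scripts:

```python
from datasets import load_dataset

MIN_WORDS, MAX_WORDS = 50, 10_000  # illustrative cutoffs informed by the stats above

# Streaming keeps memory constant, matching how basic-stats.py reads data.
ds = load_dataset("username/dataset", split="train", streaming=True)
filtered = ds.filter(lambda ex: MIN_WORDS <= len(ex["text"].split()) <= MAX_WORDS)
```
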
## Command-Line Options

```
usage: basic-stats.py [-h] [--split SPLIT] [--text-column TEXT_COLUMN]
                      [--max-samples MAX_SAMPLES] [--per-sample]
                      [--output-file OUTPUT_FILE] [--streaming]
                      dataset

positional arguments:
  dataset               Dataset name (e.g., 'HuggingFaceFW/fineweb-edu') or local path

optional arguments:
  -h, --help            show this help message and exit
  --split SPLIT         Dataset split to process (default: train)
  --text-column TEXT_COLUMN
                        Name of the text column (default: text)
  --max-samples MAX_SAMPLES
                        Maximum number of samples to process (for testing)
  --per-sample          Save per-sample statistics to CSV file
  --output-file OUTPUT_FILE
                        Output file for per-sample stats (default: dataset-stats.csv)
  --streaming           Use streaming mode (default: True)
```

## Technical Details

### Welford's Algorithm

The script uses Welford's algorithm for calculating streaming mean and variance. This provides:
- Numerical stability (no catastrophic cancellation)
- Constant memory usage (O(1))
- Single-pass computation
- Accurate results even for very large datasets

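For reference, a minimal sketch of the update step (the same recurrence used by `StreamingStats.update` in `basic-stats.py`):

```python
def welford_update(count: int, mean: float, m2: float, x: float):
    """Fold one new value x into the running (count, mean, M2) triple."""
    count += 1
    delta = x - mean           # deviation from the old mean
    mean += delta / count      # updated running mean
    m2 += delta * (x - mean)   # accumulate squared deviations using the new mean
    return count, mean, m2     # variance = m2 / count once all samples are seen


# Stream three values without ever storing them all.
c, m, m2 = 0, 0.0, 0.0
for x in [120, 3500, 45231]:
    c, m, m2 = welford_update(c, m, m2, x)
```
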
### Character Type Classification

Character types are classified as:
- **Alphanumeric**: Letters + digits
- **Alphabetic**: Letters only
- **Digit**: Numbers (0-9)
- **Uppercase/Lowercase**: Case ratios (relative to total letters)
- **Whitespace**: Spaces, tabs, newlines
- **Punctuation**: Standard ASCII punctuation
- **Special**: Everything else (emojis, symbols, etc.)

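A small sketch of how the buckets interact (mirroring `calculate_char_type_distribution` in `basic-stats.py`); only ASCII punctuation counts as punctuation, so other symbols fall into "special":

```python
import string

text = "Café ☕ costs 3€!"
alpha = sum(c.isalpha() for c in text)    # includes accented letters such as "é"
digit = sum(c.isdigit() for c in text)
space = sum(c.isspace() for c in text)
punct = sum(c in string.punctuation for c in text)   # ASCII punctuation only ("!")
special = len(text) - alpha - digit - space - punct  # "☕" and "€" land here
```
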
### Sentence Counting

Simple heuristic-based sentence boundary detection using `.!?` as terminators. This is fast but less accurate than NLP-based sentence tokenization; for statistical analysis it is good enough.

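For example, the heuristic (the same one `count_sentences` in `basic-stats.py` uses) counts runs of terminators, so abbreviations inflate the count slightly:

```python
import re

def count_sentences(text: str) -> int:
    return max(1, len(re.findall(r"[.!?]+", text)))

count_sentences("It works. Really well!")       # 2
count_sentences("Dr. Smith arrived at 3 p.m.")  # 3 - each "." after Dr/p/m is counted
```
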
## Related Scripts

Check out other scripts in the `uv-scripts` organization:
- **dataset-creation**: Create datasets from PDFs and other formats
- **vllm**: GPU-accelerated classification and inference
- **ocr**: Document OCR using vision-language models

## Contributing

Have ideas for additional statistics or improvements? Feel free to:
1. Fork this repository
2. Add your script or improvements
3. Submit a pull request

Or open an issue on the [uv-scripts organization](https://huggingface.co/uv-scripts).

## License

Apache 2.0

## Why UV Scripts?

UV scripts are self-contained Python scripts that:
- Run with a single `uv run` command (no setup required)
- Include all dependencies in PEP 723 inline metadata
- Work seamlessly on both local machines and HF Jobs
- Serve as educational examples of best practices

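The PEP 723 metadata mentioned above is just a comment header at the top of the script; `basic-stats.py` in this repo, for instance, declares its dependencies like this:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "datasets",
#     "huggingface-hub",
#     "tqdm",
# ]
# ///
```
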
Learn more about UV: https://docs.astral.sh/uv/
basic-stats.py
ADDED
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "datasets",
#     "huggingface-hub",
#     "tqdm",
# ]
# ///

"""Calculate basic text statistics for HuggingFace datasets.

This script computes essential text statistics using pure Python (no ML models).
It uses streaming mode by default, so it works on datasets of any size without
downloading the full dataset.

Statistics calculated:
- Character, word, line, sentence counts (per sample and total)
- Streaming mean and standard deviation (Welford's algorithm)
- Character type distributions (alphanumeric, digits, punctuation, whitespace, special)
- Length statistics (min, max, approximate percentiles)

Examples:
    # Quick test on 10k samples
    uv run basic-stats.py HuggingFaceFW/fineweb-edu --max-samples 10000

    # Full dataset statistics
    uv run basic-stats.py allenai/c4 --split train

    # Save per-sample statistics to CSV
    uv run basic-stats.py username/dataset --per-sample --output-file stats.csv

    # Use with HF Jobs (GPU not needed)
    hf jobs uv run \
        -e HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \
        https://huggingface.co/datasets/uv-scripts/dataset-stats/raw/main/basic-stats.py \
        username/large-dataset --max-samples 100000

Performance:
    ~10,000-50,000 samples/sec on CPU (depending on text length)
    Pure Python, minimal memory usage (constant O(1) for streaming stats)
"""

import argparse
import json
import re
import string
import sys
from collections import defaultdict
from dataclasses import asdict, dataclass
from pathlib import Path
from typing import Optional

from datasets import load_dataset
from tqdm import tqdm


@dataclass
class StreamingStats:
    """Track streaming statistics using Welford's algorithm for numerical stability."""

    count: int = 0
    mean: float = 0.0
    m2: float = 0.0  # Sum of squared differences from mean
    min_val: float = float('inf')
    max_val: float = float('-inf')

    def update(self, value: float):
        """Update statistics with new value."""
        self.count += 1
        delta = value - self.mean
        self.mean += delta / self.count
        delta2 = value - self.mean
        self.m2 += delta * delta2
        self.min_val = min(self.min_val, value)
        self.max_val = max(self.max_val, value)

    @property
    def variance(self) -> float:
        """Calculate variance."""
        return self.m2 / self.count if self.count > 1 else 0.0

    @property
    def std(self) -> float:
        """Calculate standard deviation."""
        return self.variance ** 0.5

    def to_dict(self) -> dict:
        """Convert to dictionary for JSON output."""
        return {
            "count": self.count,
            "mean": round(self.mean, 2),
            "std": round(self.std, 2),
            "min": round(self.min_val, 2),
            "max": round(self.max_val, 2),
        }


def count_sentences(text: str) -> int:
    """Count sentences using simple heuristic (. ! ?)."""
    # Simple sentence boundary detection
    sentence_endings = re.findall(r'[.!?]+', text)
    return max(1, len(sentence_endings))  # At least 1 sentence


def calculate_char_type_distribution(text: str) -> dict:
    """Calculate distribution of character types."""
    if not text:
        return {
            "alphanumeric": 0.0,
            "alphabetic": 0.0,
            "digit": 0.0,
            "uppercase": 0.0,
            "lowercase": 0.0,
            "whitespace": 0.0,
            "punctuation": 0.0,
            "special": 0.0,
        }

    total_chars = len(text)
    alpha_count = sum(1 for c in text if c.isalpha())
    digit_count = sum(1 for c in text if c.isdigit())
    upper_count = sum(1 for c in text if c.isupper())
    lower_count = sum(1 for c in text if c.islower())
    whitespace_count = sum(1 for c in text if c.isspace())
    punct_count = sum(1 for c in text if c in string.punctuation)

    return {
        "alphanumeric": round((alpha_count + digit_count) / total_chars, 4),
        "alphabetic": round(alpha_count / total_chars, 4),
        "digit": round(digit_count / total_chars, 4),
        "uppercase": round(upper_count / total_chars, 4) if alpha_count > 0 else 0.0,
        "lowercase": round(lower_count / total_chars, 4) if alpha_count > 0 else 0.0,
        "whitespace": round(whitespace_count / total_chars, 4),
        "punctuation": round(punct_count / total_chars, 4),
        "special": round((total_chars - alpha_count - digit_count - whitespace_count - punct_count) / total_chars, 4),
    }


def calculate_basic_stats(text: str) -> dict:
    """Calculate basic statistics for a single text sample."""
    if not text:
        return {
            "char_count": 0,
            "word_count": 0,
            "line_count": 0,
            "sentence_count": 0,
            "mean_word_length": 0.0,
        }

    char_count = len(text)
    words = text.split()
    word_count = len(words)
    line_count = len(text.splitlines())
    sentence_count = count_sentences(text)
    mean_word_length = sum(len(w) for w in words) / word_count if word_count > 0 else 0.0

    return {
        "char_count": char_count,
        "word_count": word_count,
        "line_count": line_count,
        "sentence_count": sentence_count,
        "mean_word_length": round(mean_word_length, 2),
    }


def main():
    parser = argparse.ArgumentParser(
        description="Calculate basic text statistics for HuggingFace datasets",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__,
    )
    parser.add_argument(
        "dataset",
        help="Dataset name (e.g., 'HuggingFaceFW/fineweb-edu') or local path",
    )
    parser.add_argument(
        "--split",
        default="train",
        help="Dataset split to process (default: train)",
    )
    parser.add_argument(
        "--text-column",
        default="text",
        help="Name of the text column (default: text)",
    )
    parser.add_argument(
        "--max-samples",
        type=int,
        help="Maximum number of samples to process (for testing)",
    )
    parser.add_argument(
        "--per-sample",
        action="store_true",
        help="Save per-sample statistics to CSV file",
    )
    parser.add_argument(
        "--output-file",
        help="Output file for per-sample stats (default: dataset-stats.csv)",
    )
    parser.add_argument(
        "--streaming",
        action="store_true",
        default=True,
        help="Use streaming mode (default: True)",
    )

    args = parser.parse_args()

    # Load dataset in streaming mode
    print(f"Loading dataset: {args.dataset} (split: {args.split})")
    print(f"Streaming mode: {args.streaming}")

    try:
        dataset = load_dataset(
            args.dataset,
            split=args.split,
            streaming=args.streaming,
        )
    except Exception as e:
        print(f"Error loading dataset: {e}")
        sys.exit(1)

    # Check if text column exists
    if args.text_column not in dataset.column_names:
        print(f"Error: Column '{args.text_column}' not found in dataset.")
        print(f"Available columns: {dataset.column_names}")
        sys.exit(1)

    # Initialize streaming statistics
    char_stats = StreamingStats()
    word_stats = StreamingStats()
    line_stats = StreamingStats()
    sentence_stats = StreamingStats()
    word_length_stats = StreamingStats()

    # Character type distribution accumulator
    char_type_totals = defaultdict(float)

    # For per-sample output
    per_sample_data = []

    # Process dataset
    total_samples = args.max_samples if args.max_samples else "unknown"
    with tqdm(total=args.max_samples, desc="Processing samples") as pbar:
        for i, sample in enumerate(dataset):
            if args.max_samples and i >= args.max_samples:
                break

            text = sample[args.text_column]

            # Calculate stats for this sample
            stats = calculate_basic_stats(text)
            char_dist = calculate_char_type_distribution(text)

            # Update streaming statistics
            char_stats.update(stats["char_count"])
            word_stats.update(stats["word_count"])
            line_stats.update(stats["line_count"])
            sentence_stats.update(stats["sentence_count"])
            word_length_stats.update(stats["mean_word_length"])

            # Accumulate character type distributions
            for key, value in char_dist.items():
                char_type_totals[key] += value

            # Store per-sample data if requested
            if args.per_sample:
                sample_data = {**stats, **char_dist}
                per_sample_data.append(sample_data)

            pbar.update(1)

    # Calculate final statistics
    num_samples = char_stats.count

    if num_samples == 0:
        print("No samples processed!")
        sys.exit(1)

    # Average character type distributions
    char_type_means = {
        key: round(value / num_samples, 4)
        for key, value in char_type_totals.items()
    }

    # Create summary report
    summary = {
        "dataset": args.dataset,
        "split": args.split,
        "text_column": args.text_column,
        "total_samples": num_samples,
        "statistics": {
            "character_count": char_stats.to_dict(),
            "word_count": word_stats.to_dict(),
            "line_count": line_stats.to_dict(),
            "sentence_count": sentence_stats.to_dict(),
            "mean_word_length": word_length_stats.to_dict(),
        },
        "character_type_distribution": char_type_means,
        "derived_metrics": {
            "avg_words_per_line": round(word_stats.mean / line_stats.mean, 2) if line_stats.mean > 0 else 0.0,
            "avg_chars_per_word": round(char_stats.mean / word_stats.mean, 2) if word_stats.mean > 0 else 0.0,
            "avg_words_per_sentence": round(word_stats.mean / sentence_stats.mean, 2) if sentence_stats.mean > 0 else 0.0,
        }
    }

    # Print summary
    print("\n" + "="*60)
    print("BASIC TEXT STATISTICS SUMMARY")
    print("="*60)
    print(json.dumps(summary, indent=2))

    # Save per-sample data if requested
    if args.per_sample:
        output_file = args.output_file or f"{args.dataset.replace('/', '_')}_stats.csv"

        # Save as CSV
        import csv

        if per_sample_data:
            with open(output_file, 'w', newline='') as f:
                writer = csv.DictWriter(f, fieldnames=per_sample_data[0].keys())
                writer.writeheader()
                writer.writerows(per_sample_data)

            print(f"\nPer-sample statistics saved to: {output_file}")

    # Save summary as JSON
    summary_file = f"{args.dataset.replace('/', '_')}_summary.json"
    with open(summary_file, 'w') as f:
        json.dump(summary, f, indent=2)

    print(f"Summary saved to: {summary_file}")


if __name__ == "__main__":
    main()
finepdfs-stats.py
CHANGED
@@ -12,7 +12,7 @@ Analyze educational quality trends across CommonCrawl dumps using Polars streami

 Answers: "Is the web getting more educational over time?"

-Demonstrates Polars HF Hub integration - process
+Demonstrates Polars HF Hub integration - process 50M+ docs without downloading 300GB+.

 Example usage:
     # Analyze English PDFs (default)
@@ -180,9 +180,21 @@ def create_readme(
     total_docs = stats.get("total_docs", 0)
     docs_per_sec = total_docs / scan_time if scan_time > 0 else 0

-    # Get first and last
-
-
+    # Get first and last year averages for trend (more representative than single dumps)
+    yearly = (
+        temporal_stats.with_columns(
+            pl.col("dump").str.extract(r"CC-MAIN-(\d{4})", 1).alias("year")
+        )
+        .group_by("year")
+        .agg(
+            pl.col("doc_count").sum(),
+            pl.col("avg_edu_score").mean(),
+            pl.col("high_edu_rate").mean(),
+        )
+        .sort("year")
+    )
+    first_year = yearly.head(1).to_dicts()[0]
+    last_year = yearly.tail(1).to_dicts()[0]

     scope = (
         "all languages"
@@ -192,6 +204,7 @@ def create_readme(

     return f"""---
 tags:
+- uv-script
 - statistics
 - polars
 - finepdfs-edu
@@ -217,10 +230,10 @@ Temporal analysis of educational quality across {stats.get("num_dumps", 0)} Comm

 ## Key Finding

-
-
-| {
-| {
+| Year | Avg Edu Score | High Edu Rate |
+|------|---------------|---------------|
+| {first_year["year"]} | {first_year["avg_edu_score"]:.2f} | {first_year["high_edu_rate"] * 100:.1f}% |
+| {last_year["year"]} | {last_year["avg_edu_score"]:.2f} | {last_year["high_edu_rate"] * 100:.1f}% |

 ## Performance

stats_output/detailed_stats.parquet
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:40d8da68fd7af8ed27f28a7c2c7ff218e818dfd217a7c13ac99abcb032090a1d
size 9770
stats_output/dump_stats.parquet
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a14fac91a0776f89c406e662aede9fc73535df26c9de0dbe3edbec16895f4db7
size 3879
stats_output/extractor_stats.parquet
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7c26882ae83e18b22cd4538962a938216d90d49a85b5aad9f7a70dc12844c1b6
size 2770
stats_output/global_stats.parquet
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:278ea6044f2d1d39fffb5a20afc248227d4870d5c397b97dcee0e4e14443c0da
size 1936
stats_output/language_stats.parquet
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:619cc9e3e61a5b37a17b7b5cd3f073af1c7c781e1a591a67e85b0513e8caf3c4
size 4021
stats_output/temporal_stats.parquet
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:f1c643f4eff59c20344fd68c8a53b7f87f6d5df828f1eb7635c4943f8420df06
size 3994