---
license: mit
task_categories:
- zero-shot-classification
size_categories:
- n<1K
---

# MMVP-VLM Benchmark Datacard

## Basic Information

**Title:** MMVP-VLM Benchmark

**Description:** The MMVP-VLM (Multimodal Visual Patterns - Visual Language Models) Benchmark is designed to systematically evaluate the performance of recent CLIP-based models in understanding and processing visual patterns. It distills a subset of questions from the original MMVP benchmark into simpler language descriptions, categorizing them into distinct visual patterns. Each visual pattern is represented by 15 text-image pairs. The benchmark assesses whether CLIP models can accurately match these image-text combinations, providing insights into the capabilities and limitations of these models.
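
To make the matching protocol concrete, the sketch below scores one candidate pairing with a CLIP checkpoint from the `transformers` library. This is only an illustration under stated assumptions: the checkpoint name, the file names, and the two-images/two-captions pass criterion are placeholders, not the official MMVP-VLM evaluation harness.

```python
# Minimal CLIP matching sketch. Assumes a benchmark item provides two images
# and two simplified descriptions; the official scoring rule may differ.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

checkpoint = "openai/clip-vit-base-patch32"  # placeholder CLIP checkpoint
model = CLIPModel.from_pretrained(checkpoint)
processor = CLIPProcessor.from_pretrained(checkpoint)

images = [Image.open("image_a.jpg"), Image.open("image_b.jpg")]  # hypothetical files
texts = ["a photo of ...", "a photo of ..."]  # simplified language descriptions

inputs = processor(text=texts, images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (n_images, n_texts)

# Count the item as correct only if each image prefers its own description.
matched = bool(logits[0, 0] > logits[0, 1]) and bool(logits[1, 1] > logits[1, 0])
print("pair matched:", matched)
```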

## Dataset Details

- **Content Types:** Text-image pairs
- **Volume:** A balanced number of questions for each visual pattern, with each pattern represented by 15 pairs
- **Source of Data:** A subset of the MMVP benchmark, supplemented with additional questions for balance
- **Data Collection Method:** Distillation and categorization of questions from the MMVP benchmark into simpler language

## Usage

### Intended Use

- Evaluation of CLIP models' ability to understand and process various visual patterns (a loading sketch follows this list).
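
As a usage example, individual pairs could be browsed with the `datasets` library. The repository id, split name, and record layout below are assumptions for illustration only; consult the dataset files on the Hub for the actual schema.

```python
from datasets import load_dataset

# Hypothetical repository id and split name; substitute the real dataset path.
ds = load_dataset("your-org/MMVP-VLM", split="train")

# Peek at a few records; field names depend on the actual schema.
for example in ds.select(range(3)):
    print(example)
```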