---
license: cc-by-4.0
task_categories:
  - question-answering
language:
  - en
---

# StreamGaze Dataset

StreamGaze is a streaming video benchmark for evaluating multimodal large language models (MLLMs) on gaze-based question-answering (QA) tasks across past, present, and future-oriented (proactive) contexts.

πŸ“ Dataset Structure

```
streamgaze/
├── metadata/
│   ├── egtea.csv              # EGTEA fixation metadata
│   ├── egoexolearn.csv        # EgoExoLearn fixation metadata
│   └── holoassist.csv         # HoloAssist fixation metadata
│
├── qa/
│   ├── past_gaze_sequence_matching.json
│   ├── past_non_fixated_object_identification.json
│   ├── past_object_transition_prediction.json
│   ├── past_scene_recall.json
│   ├── present_future_action_prediction.json
│   ├── present_object_attribute_recognition.json
│   ├── present_object_identification_easy.json
│   ├── present_object_identification_hard.json
│   ├── proactive_gaze_triggered_alert.json
│   └── proactive_object_appearance_alert.json
│
└── videos/
    ├── videos_egtea_original.tar.gz         # EGTEA original videos
    ├── videos_egtea_viz.tar.gz              # EGTEA with gaze visualization
    ├── videos_egoexolearn_original.tar.gz   # EgoExoLearn original videos
    ├── videos_egoexolearn_viz.tar.gz        # EgoExoLearn with gaze visualization
    ├── videos_holoassist_original.tar.gz    # HoloAssist original videos
    └── videos_holoassist_viz.tar.gz         # HoloAssist with gaze visualization
```
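
The files above can be fetched with the `huggingface_hub` client. The sketch below is one possible way to do this; the `repo_id` shown is an assumption and should be replaced with this dataset's actual repository id.

```python
# Minimal download sketch (hypothetical repo_id -- substitute the real one).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="danaleee/StreamGaze",   # assumption, not the confirmed repo id
    repo_type="dataset",
    allow_patterns=["metadata/*", "qa/*", "videos/*"],
)
print(local_dir)  # local path of the downloaded snapshot
```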

## 🎯 Task Categories

### Past (Historical Context)

- **Gaze Sequence Matching**: Match gaze patterns to action sequences
- **Non-Fixated Object Identification**: Identify objects outside the gaze
- **Object Transition Prediction**: Predict object state changes
- **Scene Recall**: Recall scene details from memory

### Present (Current Context)

- **Object Identification (Easy/Hard)**: Identify objects inside or outside the FOV
- **Object Attribute Recognition**: Recognize object attributes
- **Future Action Prediction**: Predict upcoming actions

### Proactive (Future-Oriented)

- **Gaze-Triggered Alert**: Alert based on gaze patterns
- **Object Appearance Alert**: Alert on object appearance

## 📥 Usage

### Extract Videos

```bash
# Create target directories
mkdir -p videos/{egtea,egoexolearn,holoassist}/{original,viz}

# Extract EGTEA videos
tar -xzf videos_egtea_original.tar.gz -C videos/egtea/original/
tar -xzf videos_egtea_viz.tar.gz -C videos/egtea/viz/

# Extract EgoExoLearn videos
tar -xzf videos_egoexolearn_original.tar.gz -C videos/egoexolearn/original/
tar -xzf videos_egoexolearn_viz.tar.gz -C videos/egoexolearn/viz/

# Extract HoloAssist videos
tar -xzf videos_holoassist_original.tar.gz -C videos/holoassist/original/
tar -xzf videos_holoassist_viz.tar.gz -C videos/holoassist/viz/
```
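
If you prefer to script the extraction, the same layout can be produced with Python's standard `tarfile` module. This is a convenience sketch, not part of the dataset tooling.

```python
# Batch extraction equivalent to the shell commands above.
import tarfile
from pathlib import Path

for source in ["egtea", "egoexolearn", "holoassist"]:
    for variant in ["original", "viz"]:
        archive = Path(f"videos_{source}_{variant}.tar.gz")
        target = Path("videos") / source / variant
        target.mkdir(parents=True, exist_ok=True)
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(path=target)
```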

## 🔑 Metadata Format

Each metadata CSV contains the following columns (a loading sketch follows the list):

- `video_source`: Video identifier
- `fixation_id`: Fixation segment ID
- `start_time_seconds` / `end_time_seconds`: Temporal boundaries of the fixation segment, in seconds
- `center_x` / `center_y`: Gaze center coordinates (normalized)
- `representative_object`: Primary object at the gaze point
- `other_objects_in_cropped_area`: Objects within the FOV
- `other_objects_outside_fov`: Objects outside the FOV
- `scene_caption`: Scene description
- `action_caption`: Action description
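
A minimal sketch of reading one metadata file with `pandas`, assuming only the columns listed above and the layout from the structure section:

```python
# Load EGTEA fixation metadata and inspect a few fields.
import pandas as pd

fixations = pd.read_csv("metadata/egtea.csv")

# Duration of each fixation segment in seconds (derived from the listed columns).
fixations["duration"] = (
    fixations["end_time_seconds"] - fixations["start_time_seconds"]
)

print(fixations[["video_source", "fixation_id",
                 "representative_object", "duration"]].head())
```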

πŸ“ QA Format

Each QA JSON file contains entries of the following form (a parsing sketch follows the example):

```json
{
  "response_time": "[00:08 - 09:19]",
  "questions": [
    {
      "question": "Among {milk, spoon, pan, phone}, which did the user never gaze at?",
      "time_stamp": "03:14",
      "answer": "A",
      "options": [
        "A. milk",
        "B. spoon",
        "C. pan",
        "D. phone"
      ]
    }
  ],
  "video_path": "OP01-R03-BaconAndEggs.mp4"
}
```
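
To iterate over the questions in one task file, a minimal sketch based on the schema shown above (whether each file holds a single entry or a list of entries is an assumption handled in the code):

```python
# Read one QA task file and print each question with its ground-truth answer.
import json

with open("qa/past_scene_recall.json") as f:
    data = json.load(f)

# Assumption: a file may hold either a list of entries or a single entry
# shaped like the example above; normalize to a list either way.
entries = data if isinstance(data, list) else [data]

for entry in entries:
    for q in entry["questions"]:
        print(entry["video_path"], q["time_stamp"], q["question"])
        print("  options:", q["options"])
        print("  answer:", q["answer"])
```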

## 📄 License

This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

See https://creativecommons.org/licenses/by/4.0/

## 🔗 Links

## 📧 Contact

For questions or issues, please contact: [email protected]