# SWE-Game Datasets
A collection of web-based mini-games used to build a software-engineering benchmark for coding agents. The core task is: given a small, self-contained game project, write Playwright end-to-end tests for it.
This repository ships three components of the data pipeline so that downstream users can reproduce the filtering, re-label games, or extend the benchmark.
## Dataset Structure

```text
swe-game/
├── README.md
├── quality_results.json   # 316 manual quality labels (high / low)
├── raw_games/             # 1,069 raw mini-game projects (HTML / JS / CSS)
│   ├── [1000]吃豆豆/
│   ├── [1001]各种测试/
│   └── ...
└── benchmark_91/          # 91 curated samples, each with a Playwright task
    ├── [1008]小鸟飞飞飞/
    │   ├── xiaoniaofeifei/  # game source code
    │   └── task.md          # task description (write 10 Playwright tests)
    └── ...
```
## Components
### 1. `raw_games/` — Raw Game Pool (1,069 games, ~6.7 GB)
The unfiltered pool. Each directory is a self-contained web project (usually an `index.html` plus static assets). The folder-name pattern is `[ID]<chinese_name>`, where `<chinese_name>` is the game's Chinese title.
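For scripting over the pool, a folder name can be split into its numeric ID and title with a small regex helper. This is a sketch (`parse_game_dir` is not a shipped utility; the pattern is inferred from the folder names shown above):

```python
import re

# Folder names look like "[1008]小鸟飞飞飞": a numeric ID in square
# brackets, followed by the game's Chinese title.
FOLDER_RE = re.compile(r"^\[(\d+)\](.+)$")

def parse_game_dir(name: str) -> tuple[int, str]:
    """Split a raw_games folder name into (id, title)."""
    m = FOLDER_RE.match(name)
    if m is None:
        raise ValueError(f"unexpected folder name: {name!r}")
    return int(m.group(1)), m.group(2)

print(parse_game_dir("[1008]小鸟飞飞飞"))
```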
### 2. `quality_results.json` — Manual Quality Labels (316 entries)
Labels assigned by a human annotator who played each game in a browser:
| Label | Count |
|---|---|
| `high` | 91 |
| `low` | 225 |
| **Total labeled** | **316** |
Games not present in this JSON are unlabeled: the annotation run stopped after 316 of the ~836 candidates that survived a first automatic prefilter.
```json
{
  "[1000]吃豆豆": "low",
  "[1008]小鸟飞飞飞": "high",
  ...
}
```
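The counts in the table above can be verified by loading the labels file and tallying its values. A minimal sketch, assuming `quality_results.json` sits in the working directory (the inline `example` dict stands in for the real file here):

```python
import json
from collections import Counter
from pathlib import Path

def label_counts(path: str = "quality_results.json") -> Counter:
    """Tally high/low labels in the quality-results file."""
    labels: dict[str, str] = json.loads(Path(path).read_text(encoding="utf-8"))
    return Counter(labels.values())

# Illustrative data shaped like the file; on the shipped file the
# tally should come out to 225 low and 91 high.
example = {"[1000]吃豆豆": "low", "[1008]小鸟飞飞飞": "high"}
print(Counter(example.values()))
```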
### 3. `benchmark_91/` — Curated Benchmark (91 samples)
All games labeled `high` in `quality_results.json`, each augmented with a `task.md` that defines a Playwright testing task (write exactly 10 `test()` cases covering the game's core loop).
This is the set used in the main experiments.
## Data Pipeline
```text
   raw crawl                               quality_results.json
┌───────────────┐   auto prefilter    ┌───────────────┐   manual annotation   ┌───────────────┐
│  raw_games    │ ──────────────────▶ │836 candidates │ ────────────────────▶ │  316 labels   │
│  1,069 games  │                     │ (intermediate,│                       │  high=91      │
│               │                     │  not shipped) │                       │  low=225      │
└───────────────┘                     └───────────────┘                       └───────┬───────┘
                                                                                      │
                                                                                      │ keep `high`
                                                                                      ▼
                                                                              ┌───────────────┐
                                                                              │ benchmark_91  │
                                                                              │  + task.md    │
                                                                              └───────────────┘
```
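The final "keep `high`" step amounts to a one-line filter over the labels file. A sketch, assuming the labels have been loaded as a dict (the commented lines show how that would look against the shipped `quality_results.json`):

```python
import json
from pathlib import Path

def select_benchmark_games(labels: dict[str, str]) -> list[str]:
    """Return the folder names labeled 'high', i.e. the benchmark set."""
    return sorted(name for name, label in labels.items() if label == "high")

# Against the shipped file this should yield the 91 benchmark games:
# labels = json.loads(Path("quality_results.json").read_text(encoding="utf-8"))
# games = select_benchmark_games(labels)
example = {"[1000]吃豆豆": "low", "[1008]小鸟飞飞飞": "high"}
print(select_benchmark_games(example))
```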
## Reproducing the Annotation
The interactive labeling server that produced `quality_results.json` is available in the companion GitHub repo:
https://github.com/YuyaoGe/swe-game-datasets → `scripts/stage0/game_quality_filter.py`
Usage:

```shell
# Label a folder of games (default args match the original run)
python scripts/stage0/game_quality_filter.py \
  --game-dir ./raw_games \
  --result-file ./my_labels.json \
  --port 8765

# Then open http://localhost:8765 in a browser.
# Keys: 1/L = low, 2/H = high, S = skip, ←/→ = nav
```
## Companion Code
| Resource | Link |
|---|---|
| Experiment & benchmark code | https://github.com/YuyaoGe/swe-game-datasets |
## License
Apache 2.0.
## Citation
If you find this dataset useful, please cite the companion repository (see the GitHub README for the preferred BibTeX once the paper is released).