--info-extract reports sidecar coverage % per format, but doesn't say
which files are missing. This command makes that actionable:
wowee_editor --list-missing-sidecars Data/
Output: one path per line, prefixed with the missing extension so
shell tools can filter:
png Data/Textures/Skybox/Sky01.blp
json Data/DBFilesClient/SoundEntries.dbc
wom Data/Character/Human/Male/HumanMale.m2
Pipe into xargs to drive a targeted re-extract:
wowee_editor --list-missing-sidecars Data/ |
awk '/^png/ {print $2}' |
xargs asset_extract --emit-png-only
Skips WMO group files (Foo_NNN.wmo) since only the parent file gets
a .wob sidecar — they would otherwise inflate the missing list with
hundreds of false positives per WMO.
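The group-file skip boils down to a filename check; a minimal sketch
(hypothetical helper name, matching the Foo_NNN.wmo convention described
above):

```python
import re

# WMO group sub-files are named <base>_NNN.wmo (e.g. Foo_012.wmo);
# only the parent <base>.wmo gets a .wob sidecar, so groups are skipped.
GROUP_RE = re.compile(r"_\d{3}\.wmo$", re.IGNORECASE)

def is_wmo_group_file(path: str) -> bool:
    return GROUP_RE.search(path) is not None
```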
Exit 1 when anything is missing (so CI can gate). JSON mode emits
arrays per format type for programmatic consumption. Verified
against synthetic dir with 5 files (4 lacking sidecars + 1 with
.png present): all 4 reported, the one with sidecar correctly
omitted.
Round out the per-format validator suite. Open-format zone validation
now covers all four binary formats:
--validate-wom Tree
--validate-wob House
--validate-woc terrain.woc
--validate-whm Zone_28_30
--validate-all custom_zones/Zone1 # runs everything
WOC checks: finite vertex coords on every triangle, no degenerate
triangles (two verts identical), known flag bits only (0x0F mask),
tile coords within WoW grid (< 64), bounds.min <= bounds.max.
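A minimal sketch of the per-triangle WOC checks (hypothetical helper;
the 0x0F known-flag mask is taken from the text, the rest of the checks
are elided):

```python
import math

KNOWN_FLAG_MASK = 0x0F  # known flag bits per the text above

def woc_triangle_errors(tri, flags):
    """tri: three (x, y, z) vertex tuples; returns a list of error strings."""
    errs = []
    if any(not math.isfinite(c) for v in tri for c in v):
        errs.append("non-finite vertex coordinate")
    if len({tuple(v) for v in tri}) < 3:
        errs.append("degenerate triangle (two verts identical)")
    if flags & ~KNOWN_FLAG_MASK:
        errs.append(f"unknown flag bits 0x{flags & ~KNOWN_FLAG_MASK:02X}")
    return errs
```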
WHM/WOT checks: finite heights across all 145 verts/chunk, finite
chunk position vectors, tile coord in [0, 64), reasonable height
envelope ([-10000, 10000] is a generous outer bound — beyond that
suggests units confusion), placements have finite positions and
nameId within doodadNames/wmoNames table size.
validate-all now reports all four format counts (WOM/WOB/WOC/WHM)
and aggregates errors. Verified end-to-end: a fresh scaffolded zone
with --build-woc yields 256/256 chunks loaded, 32768 walkable
triangles, validate-all PASSED. Synthesized WOC with 0xFF flags
correctly fails with 'unknown flag bits 0xFF' and exit 1.
Walks a directory recursively and runs the deep validators on every
.wom and .wob it finds. Single CI gate for an entire zone tree:
wowee_editor --validate-all custom_zones/MyZone --json
Reports per-file failures (capped at first 20 to keep output bounded)
plus aggregate counts so you know which file to drill into with
--validate-wom or --validate-wob individually.
Refactor: pulled the validation bodies out of --validate-wom and
--validate-wob into static helpers (validateWomErrors / validateWobErrors)
returning vector<string>. The per-file commands now share the same
logic as --validate-all — fix one, fix all three. ~200 lines of
duplicate validation code consolidated.
Verified end-to-end: seeded /tmp dir with 2 WOMs (1 with DAG bone
violation) + 1 valid WOB, --validate-all reports 'WOM: 2 total, 1
failed' / 'WOB: 1 total, 0 failed' with the bad file's full path and
error printed below. JSON mode emits per-file failure list for CI.
Companion to --validate-wom; same load-is-lenient story for buildings:
wowee_editor --validate-wob House [--json]
Cross-references checked per group:
- indices.size() divisible by 3
- every index < vertices.size()
- material textures non-empty
- group boundMin <= boundMax per axis
Building-level:
- portal groupA/groupB references real groups (or -1 = exterior)
- portal polygon has >= 3 vertices
- doodad modelPath non-empty
- doodad scale finite and > 0
- boundRadius >= 0
Errors are batched (at most 3 listed plus '... and N more') so a thousand
bad indices in one group don't drown the report. JSON mode emits the
same error list for CI consumption. Exit 1 on any failure so shell
scripts can gate on it.
Verified against a synthesized 2-group building with intentional
flaws across every category — caught all 4 (group indices count,
empty material texture, short portal polygon, empty doodad model)
and exited 1.
The WOM loader is intentionally lenient — it clamps out-of-range
indices to 0, resets bad bone parents to -1, and skips invalid
batches. That keeps broken files from crashing the renderer, but
also hides corruption that authoring scripts should catch BEFORE
the file ships.
wowee_editor --validate-wom Tree [--json]
Cross-references checked (none auto-fixed by load()):
- bone DAG order: parent must be strictly less than self index
- animation boneKeyframes count == bone count when both nonzero
- batch indexStart+Count <= total indexCount
- batch indexCount divisible by 3
- batch textureIndex < texturePaths size
- boundMin <= boundMax per axis, boundRadius >= 0
- header version in [1,3], indices count divisible by 3
Verified against a synthesized 3-bone model with parent=self+1
(invalid DAG order) — load() preserves it as written, validator
reports 'bone 1 parent=2 not strictly less (DAG order)' and
exits 1. JSON mode emits errorCount + errors[] + passed boolean
for CI scripts to gate on.
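The DAG-order rule is just "parent index strictly less than own index";
a sketch (hypothetical helper, with -1 meaning root as in the loader's
reset behavior):

```python
def bone_dag_errors(parents):
    """parents[i] is bone i's parent index, or -1 for a root bone."""
    errs = []
    for i, p in enumerate(parents):
        # a parent at or above the bone's own index breaks DAG order
        if p != -1 and not (0 <= p < i):
            errs.append(f"bone {i} parent={p} not strictly less (DAG order)")
    return errs
```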
Duplicate an existing zone to a new slug:
wowee_editor --copy-zone custom_zones/Original "My New Zone"
Workflow this enables: scaffold one base zone, populate it with
creatures/objects/quests, then copy-zone N times to create variants
without re-scaffolding each. Designers can template a 'forest base'
zone and stamp it into Dark Forest, Frozen Forest, etc.
What it does:
- Recursive copy preserves any subdirs (e.g. data/ for DBC sidecars)
- Reads source slug from zone.json (not the dir name) to know what
prefix to rewrite — handles users who renamed dirs without
touching the manifest
- Renames slug-prefixed files (Original_28_30.whm -> NewSlug_28_30.whm,
matches both _-suffixed and .-suffixed forms)
- Saves a fresh zone.json via ZoneManifest::save which rebuilds the
files-block from mapName, so the manifest references the renamed
files correctly
Verified end-to-end: scaffolded Original, added creature + quest,
copied to 'My New Zone'. Result: 2 files renamed, zone.json
mapName/displayName/files all updated, creatures.json + quests.json
copied verbatim.
Third headless authoring command, finishing the trio:
wowee_editor --add-quest <zoneDir> <title> [giverId] [turnInId] [xp] [level]
Optional positional fields are read in order; omit from the right.
Bare 'wowee_editor --add-quest zone Title' produces a valid quest
with default values, matching the editor GUI's New Quest behavior.
Verified: appended 3 quests (250+100+0 XP) to a scaffolded zone,
--info-quests --json reports total=3, totalXp=450, withReward=3.
Mirrors --add-creature for the object placer:
wowee_editor --add-object <zoneDir> <m2|wmo> <gamePath> <x> <y> <z> [scale]
Lets shell scripts populate objects/buildings without the GUI:
for tree in 100,200 150,250 200,300; do
x=${tree%,*}; y=${tree#*,}
wowee_editor --add-object "$zone" m2 'World/Doodad/Tree.m2' $x $y 0
done
Loads the existing objects.json first, then appends, so multiple
invocations accumulate. The optional scale slot lets the caller set a
non-default size (clamped to >0 by the load-time guard).
Verified end-to-end: scaffolded zone → added m2 + wmo → info-objects
reports total=2 with the correct scale range.
Appends a single creature spawn to a zone's creatures.json. First
real authoring tool that doesn't need the GUI placement system —
useful for batch-populating zones via shell script:
for npc in goblin spider wolf; do
wowee_editor --add-creature "$zone" "$npc" 100 200 50
done
Args: <zoneDir> <name> <x> <y> <z> [displayId] [level]
- displayId 0 → SQL exporter substitutes 11707 (generic humanoid)
- level defaults to 1
- Coordinates are render-space (renderX=wowY, renderY=wowX)
Loads any existing creatures.json first then appends, so multiple
invocations build up the file. The standard NPC spawner caps
(50k creatures) protect against runaway scripts.
Schema:
asset_extract --purge-proprietary Data --json
{
"dir": "Data",
"confirmed": false,
"candidates": 21570,
"removed": 0,
"totalBytes": 17175328768
}
"candidates" = files where a sidecar exists and is at least as
new (would be deleted on --confirm-purge). "removed" reflects the
actual delete count when --confirm-purge is set; 0 in dry-run.
Lets CI scripts gate on the dry-run candidate count before actually
purging:
count=$(asset_extract --purge-proprietary Data --json | jq .candidates)
[ "$count" -gt 1000 ] && asset_extract --purge-proprietary Data --confirm-purge
The asset_extract --json flag now applies to both upgrade and purge
modes.
Mirrors the wowee_editor --info-extract --json schema so CI scripts
can use a single jq filter for both pre-check and post-upgrade:
asset_extract --upgrade-extract Data --json
{
"dir": "Data",
"elapsedSeconds": 47.2,
"skipped": 0,
"png": { "ok": 12340, "failed": 0 },
"jsonDbc": { "ok": 240, "failed": 0 },
"wom": { "ok": 8500, "failed": 0 },
"wob": { "ok": 730, "failed": 0 },
"terrain": { "ok": 5660, "failed": 0 }
}
Now both wowee_editor and asset_extract have JSON-mode summaries
covering every command CI is likely to gate on. The extraction
pipeline is fully scriptable end-to-end.
Adds --json to --info-creatures, --info-objects, --info-quests so
the per-content inspectors match the per-zone aggregator. Schemas
match the existing --zone-summary --json sub-objects.
Now 9 inspectors support --json:
--info-extract --info-wcp --validate
--info-creatures --info-objects --info-quests
--zone-summary --list-zones --diff-wcp
CI can now drill into per-content reports the same way as the
top-level summary, e.g. fail a build if a zone's quests have any
chain errors:
wowee_editor --info-quests "$zone/quests.json" --json \
| jq -e '.chainErrors | length == 0'
Adds JSON mode to the zone discovery scanner. Returns an array of
zone objects, each with name/dir/mapId/author/description/tiles/
hasCreatures/hasQuests.
Lets CI scripts iterate every available zone and run a per-zone
gate, e.g.:
for zone in $(wowee_editor --list-zones --json | jq -r '.[].dir'); do
wowee_editor --validate "$zone" --json | jq -e '.score == 7'
done
Fifth and last commonly-used inspector to gain --json mode (after
--info-extract, --validate, --info-wcp, --zone-summary).
Adds --json output to the one-shot zone-summary aggregator. Refactor
also moves creature/object/quest data reads to a shared step before
either branch so both human and JSON outputs use the same numbers.
Schema:
{
"zone": "custom_zones/Foo",
"score": 3, "maxScore": 7,
"formats": "WOT WHM zone.json ",
"counts": { "wot":1, "whm":1, "wom":0, "wob":0, "woc":0, "png":0 },
"creatures": { "total":N, "hostile":N, "questgiver":N, "vendor":N },
"objects": { "total":N, "m2":N, "wmo":N },
"quests": { "total":N, "chainWarnings":N }
}
Now CI can gate on any combination — open-format coverage, NPC
counts, quest chain health — from a single command. Fourth and
last commonly-CI'd inspector to gain --json mode (after
--info-extract, --validate, --info-wcp).
Mirrors --info-extract --json and --validate --json. Schema:
{
"wcp": "/path/to/zone.wcp",
"name": "Wj",
"author": "...",
"description": "...",
"version": "1.0",
"format": "wcp-1.0",
"mapId": 9000,
"fileCount": 3,
"totalBytes": 177671,
"categories": { "terrain": 2, "data": 1 }
}
Lets CI scripts inspect packed zones — e.g. fail a release if a
zone's WCP doesn't include a creature category, or auto-tag a
release with the totalBytes field.
Adds an optional --json flag that emits a structured nlohmann JSON
object instead of the human-readable text. Schema:
{
"dir": "...",
"totalBytes": N, "proprietaryBytes": N, "openBytes": N,
"overallCoverage": 100.0,
"blp_png": { "proprietary": N, "sidecar": N, "coverage": % },
"dbc_json": { ... },
"m2_wom": { ... },
"wmo_wob": { ... },
"adt_whm": { ... }
}
Lets CI scripts gate on coverage:
cov=$(wowee_editor --info-extract Data --json | jq .overallCoverage)
if [ "$cov" != "100" ]; then asset_extract --upgrade-extract Data; fi
Walks an extracted tree and removes every BLP/DBC/M2/skin/WMO/ADT
that has a confirmed open-format sidecar at least as new. Dry-run
by default — requires --confirm-purge to actually delete:
asset_extract --purge-proprietary Data/expansions/wotlk
Dry-run: would purge proprietary files...
would remove: 21570 files (16380.4 MB)
(re-run with --confirm-purge to actually delete)
asset_extract --purge-proprietary Data/expansions/wotlk --confirm-purge
Purging...
removed: 21570 files (16380.4 MB)
Pairing rules:
.blp → .png
.dbc → .json
.m2 → .wom
.skin → matching .m2's .wom (foo00.skin pairs with foo.wom)
.wmo → .wob (root)
.wmo group sub-files (foo_NNN.wmo) → parent foo.wob
.adt → .whm
Servers can't run without proprietary files, so this only makes
sense for wowee-runtime-only setups. Files without a sidecar are
left untouched.
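The pairing rules above can be sketched as a single lookup (hypothetical
helper; extension handling only, no sidecar-freshness check):

```python
import re

def sidecar_for(path: str):
    """Map a proprietary file to the open-format sidecar that replaces it."""
    p = path.lower()
    if p.endswith(".blp"):
        return path[:-4] + ".png"
    if p.endswith(".dbc"):
        return path[:-4] + ".json"
    if p.endswith(".m2"):
        return path[:-3] + ".wom"
    if p.endswith(".adt"):
        return path[:-4] + ".whm"
    if p.endswith(".skin"):
        # fooNN.skin pairs with foo.wom (strip trailing LOD digits)
        return re.sub(r"\d+$", "", path[:-5]) + ".wom"
    if p.endswith(".wmo"):
        # group sub-files foo_NNN.wmo pair with the parent foo.wob
        m = re.match(r"(.*)_\d{3}\.wmo$", path, re.IGNORECASE)
        return (m.group(1) if m else path[:-4]) + ".wob"
    return None  # not a proprietary format we purge
```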
Adds two summary lines so users can see how much a 'purge proprietary
after open conversion' workflow would shrink their tree (or how
much extra a dual-format extraction costs):
proprietary bytes: 18432.4 MB
open-format bytes: 21340.7 MB (115.8% of proprietary)
Counts every BLP/DBC/M2/WMO/ADT into the proprietary bucket and
every PNG/JSON/WOM/WOB/WHM/WOT/WOC into the open bucket. The
ratio surfaces things like 'PNG is bigger than DXT-compressed BLP'
or 'JSON DBC is much smaller than the binary' without the user
having to run du themselves.
Compares the source file's mtime against the sidecar's; if the
sidecar is newer, the conversion is skipped and counted into
stats.skipped. Re-running --upgrade-extract on a fully-converted
tree is now nearly free (just an mtime check per file).
asset_extract --upgrade-extract Data/expansions/wotlk
Walking ... (first run)
JSON (DBC→JSON) : 240 ok
asset_extract --upgrade-extract Data/expansions/wotlk
Walking ... (second run, all sidecars up to date)
up-to-date (skip) : 240
JSON (DBC→JSON) : 0 ok
emitOpenFormats() takes a new optional 'incremental' flag (default
false to preserve the asset_extract main-loop's overwrite behavior
since fresh extraction always wants new sidecars).
Verified end-to-end with a hand-built DBC: first run converts,
second run reports 'up-to-date (skip): 1'.
emitOpenFormats now takes an optional threadCount parameter (0 =
auto). The asset_extract --upgrade-extract path forwards opts.threads
so users can override the auto-detect on a CI machine with limited
cores, or when they want deterministic timing.
Also wraps the upgrade pass with a chrono timer and prints elapsed
seconds so the parallelization payoff is visible at a glance:
asset_extract --upgrade-extract Data/expansions/wotlk --threads 8
Walking Data/expansions/wotlk for open-format upgrades...
elapsed : 47.2 s
PNG (BLP→PNG) : 12340 ok
...
Verified end-to-end: --threads 2 on 5 hand-built DBCs converts all
5 in well under a second.
Conversions are CPU-bound (BLP decode, M2/WMO parse, WOM/WOB
serialize) so the serial walk leaves cores idle. Now collects
every job into a vector during the directory walk, then dispatches
across hardware_concurrency() workers via an atomic next-index
queue. Stats use atomics to avoid the per-job mutex.
Expected ~5-8x speedup for full-tree --upgrade-extract on a
modern desktop. Existing test_open_format_emitter still passes
(it exercises both single-file emit*From* helpers and the parallel
emitOpenFormats walker).
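The dispatch pattern, sketched in Python with a lock standing in for the
C++ std::atomic next-index counter (hypothetical helper; the real walker
collects conversion jobs, not callables):

```python
import threading

def run_jobs(jobs, workers=4):
    """Dispatch jobs across workers via a shared next-index counter."""
    next_index = 0
    lock = threading.Lock()
    results = [None] * len(jobs)

    def worker():
        nonlocal next_index
        while True:
            with lock:                 # claim the next job index
                i = next_index
                next_index += 1
            if i >= len(jobs):
                return                 # queue drained
            results[i] = jobs[i]()     # run the conversion job

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```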
Standalone post-extract pass: walks an existing extracted asset
tree and writes open-format sidecars in place, without re-running
MPQ extraction.
asset_extract --upgrade-extract Data/expansions/wotlk
Lets users with old extractions opt into the open-format pipeline
without losing the extracted state. Implies --emit-open if no
individual --emit-* flag is set.
Verified end-to-end: created a hand-built DBC in a temp dir, ran
--upgrade-extract, observed test.json appear with correct
metadata. Servers continue to read .dbc from manifest; the
runtime client picks up the new .json sidecar via the existing
pickup path.
Walks an extracted asset tree and reports per-format counts plus
how many proprietary files have a wowee open-format sidecar.
Lets users (or CI) see at a glance whether asset_extract was run
with --emit-open and how complete the open-format coverage is:
BLP textures : 12340 (12340 PNG sidecar = 100.0% open)
DBC tables : 240 (240 JSON sidecar = 100.0% open)
M2 models : 8500 (0 WOM sidecar = 0.0% open)
...
overall open-format coverage: 41.2%
(run `asset_extract --emit-open` to fill missing sidecars)
Skips _NNN group sub-files when counting WMOs (only the root WMO
ships with a WOB sidecar). The headless CLI is now at 22 commands.
Final piece of the open-format emit pipeline:
--emit-terrain foo.adt → foo.whm + foo.wot + foo.woc
With this, --emit-open now produces a fully open-format zone
alongside every Blizzard MPQ extraction:
BLP → PNG (textures)
DBC → JSON (data tables)
M2 → WOM (models, with skin merge)
WMO → WOB (buildings, with group merge)
ADT → WHM/WOT (terrain heights + metadata)
→ WOC (collision mesh derived from heights)
Originals stay on disk, still indexed by manifest.json, so private
servers continue to load proprietary formats; wowee runtime/editor
read the open formats directly. One extraction now feeds both
audiences with no separate conversion pass.
Implementation:
- Inline WHM+WOT writer in open_format_emitter.cpp (mirrors the
editor's WoweeTerrain::exportOpen but without the PNG-preview /
normal-map deps so the extractor stays editor-independent).
- Tile coords (x,y) parsed from <map>_<x>_<y>.adt filename.
- Collision mesh derived via WoweeCollisionBuilder::fromTerrain
(terrain triangles only — WMO collision overlays would need
asset manager and aren't worth the extractor complexity).
Extends asset_extract with two more open-format emitters:
--emit-wom foo.m2 (+ foo00.skin) → foo.wom
--emit-wob foo.wmo (+ foo_NNN.wmo groups) → foo.wob
--emit-open now also turns these on
Originals are preserved so private servers still load .m2/.wmo
through the manifest path; the wowee runtime/editor pick up the
.wom/.wob next to them via the existing open-format search rules.
Implementation:
- New WoweeModelLoader::fromM2Bytes(m2Data, skinData) shares the
conversion body with fromM2(path, am) via a static helper
(convertM2ToWom). Lets the extractor convert without standing
up an AssetManager.
- fromM2(path, am) moved to a separate translation unit
(wowee_model_fromm2.cpp) so asset_extract doesn't have to
link the AssetManager dependency.
- WoweeBuildingLoader::fromWMO already takes a WMOModel directly,
so emitWobFromWmo just needs to read root + group files and
call save().
- Group sub-files (<base>_NNN.wmo) are skipped during the walk
since they're merged into the root WMO.
The asset_extract tool now optionally writes wowee open-format
copies next to each extracted proprietary file:
--emit-png foo.blp → foo.png
--emit-json-dbc foo.dbc → foo.json
--emit-open shortcut for both
Originals are left untouched, so private servers (AzerothCore,
TrinityCore) that load from the manifest's .blp/.dbc paths
continue to work unchanged. The wowee runtime / editor can now
consume the open formats directly without an extra conversion pass.
Implementation:
- New tools/asset_extract/open_format_emitter.{hpp,cpp} encapsulates
the post-extract walk + per-file conversion.
- BLP→PNG uses BLPLoader::load + stbi_write_png with the same
dimension/buffer-size sanity guards the editor's texture exporter
applies.
- DBC→JSON mirrors the editor's DBCExporter::exportAsJson schema
(string/float/uint heuristic) so the runtime DBC overlay loader
can consume the output drop-in.
applyBrush already early-outs on NaN brush position, but the stored
worldPos_ would still capture NaN and surface it to UI panels and
the brush-circle indicator (which renders a NaN ring). Reject NaN
at the setter so the editor state itself stays sane.
ADTTerrain.chunks is std::array<MapChunk, 256> — out-of-range
indexing is undefined behaviour. Reject indices outside [0, 255]
and return empty / no-op rather than crashing on a stale undo
record from a future-version terrain layout.
EditorCamera::setSpeed accepted any float, including NaN/inf and
values outside the wheel-zoom clamp range [10, 2000]. NaN speed
would propagate into the per-tick position update (NaN * dt =
NaN) and corrupt the camera view matrix. Match the wheel clamp
so external setters can't bypass the UI's bounds.
Loads + immediately re-saves zone.json, creatures.json,
objects.json, and quests.json. The load-time scrubs (NaN,
out-of-range, oversize) and save-time caps fire on the round-trip,
producing a cleaned-up zone without ever opening the GUI.
Useful when an old zone was created before recent hardening
batches — running this once normalizes the on-disk state to match
what the current loaders expect.
Same defense pattern as the other editor JSON loaders. WoW only
supports 65535 maps total and the editor loads one tile at a
time, so 1024 zones per project is plenty. A stale autosave or
hand edit could otherwise grow the zone list unbounded and slow
the project picker UI.
readInfo iterated the info JSON's files array without bounding;
a malicious WCP could declare more entries than the header
fileCount allows and grow info.files unbounded. Cap to 1M
matching the header check so both readInfo callers and
--list-wcp/--info-wcp stay bounded.
Same defense pattern as QuestEditor (4096) and ObjectPlacer (100k).
A stale autosave or scatter-runaway could carry millions of NPCs;
each emits creature_template + creature + optional addon/waypoint
rows, drowning the SQL output and the M2 marker mesh.
Every editor JSON loader now has a matched-to-cost upper bound
(NPCs 50k, quests 4k, objects 100k, waypoints 256).
A stale autosave or hand-edited JSON could carry an unbounded list:
- 100k quests would emit 100k quest_template + queststarter/ender
INSERTs (huge SQL, slow validate, slow chain walks).
- 1M+ objects bloats the M2 instance SSBO and drags editor framerate
to single digits.
Caps mirror the 256-waypoint cap added in the previous batch — log
a warning and drop the rest so the editor stays responsive.
A stale autosave or hand-edited creatures.json could carry an
unbounded patrolPath. The SQL exporter would emit one waypoint_data
INSERT per entry and produce huge SQL files. 256 waypoints covers
any realistic route.
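The cap-and-warn pattern, sketched (hypothetical helper; the 256 cap is
from the text):

```python
MAX_WAYPOINTS = 256  # cap from the text; covers any realistic route

def clamp_patrol_path(path, log=print):
    """Keep the first MAX_WAYPOINTS entries, warning once when truncating."""
    if len(path) > MAX_WAYPOINTS:
        log(f"warning: patrolPath has {len(path)} waypoints, "
            f"keeping first {MAX_WAYPOINTS}")
        return path[:MAX_WAYPOINTS]
    return path
```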
Same defect as the empty-Patrol case: Wander behavior with 0
radius would spawn a creature that pretends to wander but never
moves. Downgrade to stationary so the export reflects the actual
in-game behavior, matching the Patrol-without-waypoints fix.
A creature with behavior=Patrol but an empty patrolPath would emit
movement_type=2 (waypoint) without any waypoint_data rows.
AzerothCore would log 'creature X has no waypoints' on every spawn
and the NPC would behave erratically. Fall back to stationary so
the spawn appears cleanly; user can fix the missing path after.
Pre-scans the quest list and emits a single header note when any
quest uses ExploreArea / EscortNPC / UseObject — those have no
direct quest_template column and need AzerothCore script_quest
hooks. Prevents objectives from being silently dropped, leaving an
unfinished quest in-game; the user sees the warning once at the top
of 02_spawns.sql instead of having to grep through editor logs.
fs::relative can return '../foo' when the pack source is a symlink
that resolves outside the pack root. The unpacker rejects '..' or
absolute paths wholesale, so a single rogue symlink would ruin the
whole archive. Skip the offending file at pack with a warning so
the rest of the zone still ships.
Quest.nextQuestId was captured by the editor and used by
validateChains for cycle detection, but never made it into the
AzerothCore quest_template SQL. Now resolves the editor-relative
ID to the matching SQL entry (startEntry + nextQuestId) and
writes it to the NextQuestInChain column. Players can now
auto-progress through quest chains in-game.
Quest.reward.itemRewards entries were captured in the editor JSON
but never made it into the AzerothCore SQL export. Parse each
entry as a numeric item ID and emit RewardItem1-4 + count columns;
unparseable entries become 0 (skipped at quest grant time). 4 slot
limit matches AzerothCore's quest_template schema.
Previously only KillCreature and CollectItem objectives translated
to SQL. AzerothCore reuses RequiredNpcOrGo for talk objectives
(count=1 indicates an interaction rather than a kill), so wire that
through and add a comment about which objective types need server
scripts (ExploreArea/EscortNPC/UseObject).
Mirrors the other --info-* family inspectors. Accepts either a
zone directory or the zone.json path directly. Prints every
manifest field: name, mapId, biome, baseHeight, tiles, flags,
audio config. Useful when diffing two zones or auditing the
audio/flag setup before packing.
Same per-cell range-check pattern as the JSON DBC fix: if a
waypoint's waitTime field is negative or > UINT32_MAX, the
.get<uint32_t> throws json::type_error and the outer try-catch
aborts the entire NPC file load on a single bad waypoint. Read
it as int64 and clamp to [0, 600000] ms (a 10-minute cap).
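The read-as-int64-then-clamp fix, sketched (hypothetical helper; the
600000 ms cap is from the text):

```python
MAX_WAIT_MS = 600_000  # 10-minute cap from the text

def read_wait_time(value) -> int:
    """Clamp a JSON waitTime into range instead of letting an
    unsigned-typed get throw on negative or oversized values."""
    try:
        v = int(value)
    except (TypeError, ValueError):
        return 0
    return max(0, min(v, MAX_WAIT_MS))
```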
Pack previously trusted recursive_directory_iterator to terminate
naturally — fine on most zones but a hostile symlink loop or a
giant accidental subdirectory would produce an archive with > 1M
files, which the unpack header check rejects wholesale. Cap at the
unpack limit and log a warning so the resulting WCP is at least
loadable, even if incomplete.
Pack previously accepted any file < 4GB and wrote it raw. Unpack
caps at 256MB and rejects the whole archive on overflow — so a
huge file in the source dir would silently produce an unpackable
WCP. Cap at pack and skip the body (size=0 entry) so the rest of
the pack remains usable.