Walks an extracted asset tree and reports per-format counts plus
how many proprietary files have a wowee open-format sidecar.
Lets users (or CI) see at a glance whether asset_extract was run
with --emit-open and how complete the open-format coverage is:
BLP textures : 12340 (12340 PNG sidecar = 100.0% open)
DBC tables : 240 (240 JSON sidecar = 100.0% open)
M2 models : 8500 (0 WOM sidecar = 0.0% open)
...
overall open-format coverage: 41.2%
(run `asset_extract --emit-open` to fill missing sidecars)
Skips _NNN group sub-files when counting WMOs (only the root WMO
ships with a WOB sidecar). The headless CLI is now at 22 commands.
Closes the loop on the asset_extract --emit-terrain pipeline. The
runtime terrain loader now probes for a .whm/.wot/.woc trio in the
same directory as the resolved ADT (e.g. <data>/world/maps/foo/
foo_30_30.{whm,wot,woc}) before falling back to ADTLoader.
Hits the open-format path when:
custom_zones/<map>/<map>_X_Y.{whm,wot} exists (zone author override)
output/<map>/<map>_X_Y.{whm,wot} exists (editor export)
<data>/world/maps/<map>/<map>_X_Y.{whm,wot} exists (asset extractor)
-- otherwise falls through to ADTLoader::load(adtData)
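The probe order above can be sketched as follows. This is illustrative only, not the actual runtime API: the function name and signature are invented, and the filesystem check is abstracted behind a predicate so the priority ordering is the only thing shown.

```cpp
#include <functional>
#include <optional>
#include <string>
#include <vector>

// Hypothetical sketch of the terrain sidecar probe. `exists` stands in for
// a real filesystem check so the ordering logic can be shown in isolation.
std::optional<std::string> probeTerrainSidecar(
    const std::string& map, int x, int y,
    const std::string& dataPath,
    const std::function<bool(const std::string&)>& exists) {
    // Highest priority first: zone author override, editor export,
    // then the asset extractor's own tree.
    const std::vector<std::string> prefixes = {
        "custom_zones/" + map + "/",
        "output/" + map + "/",
        dataPath + "/world/maps/" + map + "/",
    };
    const std::string stem =
        map + "_" + std::to_string(x) + "_" + std::to_string(y);
    for (const auto& prefix : prefixes) {
        // Both .whm and .wot must be present for the open-format path.
        if (exists(prefix + stem + ".whm") && exists(prefix + stem + ".wot"))
            return prefix + stem;
    }
    return std::nullopt;  // caller falls through to ADTLoader::load(adtData)
}
```

The predicate makes the priority chain unit-testable without touching disk.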
Promotes AssetManager::resolveFile to public so callers (terrain
sidecar probe here, anything else later) can locate an extracted
file's directory without reading the bytes.
Servers/private servers continue to read .adt via manifest paths
unchanged. Runtime sidecar coverage now matches the extractor's
emit set across all five binary open formats.
terrain_manager already attempts WOM/WOB via tryLoadByGamePath but
its prefix list only included custom_zones/ and output/. The asset
extractor's --emit-wom/--emit-wob writes sidecars next to the M2/WMO
in the asset tree itself (e.g. <data>/world/maps/foo/foo.wom).
Pass the AssetManager's data path as an extra prefix so the runtime
picks the open-format sidecar up there before falling back to the
proprietary M2/WMO load path. Completes the runtime side of the
dual-format extraction:
AssetManager::loadTexture → tries .png sidecar first, then BLP.
AssetManager::loadDBC → tries .json sidecar first, then DBC.
TerrainManager M2/WMO → tries .wom/.wob sidecar first, then m2/wmo.
Servers/private servers see no change — they read from the proprietary
files via manifest paths and don't touch the sidecars.
AssetManager::loadDBC now probes for a .json sidecar in the same
directory as the binary DBC the manifest resolves to. asset_extract
--emit-json-dbc writes that sidecar on extraction, so the runtime
client transparently falls back to it when the binary DBC is absent
(e.g. open-format-only extraction for end-to-end format testing).
Order: binary DBC > JSON sidecar > custom_zones JSON > CSV > error.
Servers (AzerothCore/TrinityCore) only read the binary DBC, so this
change is invisible to them — the wowee runtime is the only consumer
that tries the JSON path.
The PNG sidecar pickup (tryLoadPngOverride) was already in place from
prior work — this completes the symmetric runtime-side wiring for
both BLP→PNG and DBC→JSON open-format outputs.
4 tests covering the open_format_emitter:
- emitJsonFromDbc round-trips a hand-built 2-record DBC through
DBCFile::load (which auto-detects JSON via the '{' prefix) and
recovers identical record/field/value data.
- Missing input file → graceful failure (no JSON written).
- Bad DBC magic → graceful failure.
- emitOpenFormats walks a subdirectory and writes the sidecar file
in the right place (matches the extractor's recursive walk).
Brings ctest target count to 31.
Final piece of the open-format emit pipeline:
--emit-terrain foo.adt → foo.whm + foo.wot + foo.woc
With this, --emit-open now produces a fully open-format zone
alongside every Blizzard MPQ extraction:
BLP → PNG (textures)
DBC → JSON (data tables)
M2 → WOM (models, with skin merge)
WMO → WOB (buildings, with group merge)
ADT → WHM/WOT (terrain heights + metadata)
→ WOC (collision mesh derived from heights)
Originals stay on disk and indexed by manifest.json so private
servers continue to load proprietary formats; wowee runtime/editor
read the open formats directly. One extraction now feeds both
audiences with no separate conversion pass.
Implementation:
- Inline WHM+WOT writer in open_format_emitter.cpp (mirrors the
editor's WoweeTerrain::exportOpen but without the PNG-preview /
normal-map deps so the extractor stays editor-independent).
- Tile coords (x,y) parsed from <map>_<x>_<y>.adt filename.
- Collision mesh derived via WoweeCollisionBuilder::fromTerrain
(terrain triangles only — WMO collision overlays would need
asset manager and aren't worth the extractor complexity).
Extends asset_extract with two more open-format emitters:
--emit-wom foo.m2 (+ foo00.skin) → foo.wom
--emit-wob foo.wmo (+ foo_NNN.wmo groups) → foo.wob
--emit-open now also turns these on
Originals are preserved so private servers still load .m2/.wmo
through the manifest path; the wowee runtime/editor pick up the
.wom/.wob next to them via the existing open-format search rules.
Implementation:
- New WoweeModelLoader::fromM2Bytes(m2Data, skinData) shares the
conversion body with fromM2(path, am) via a static helper
(convertM2ToWom). Lets the extractor convert without standing
up an AssetManager.
- fromM2(path, am) moved to a separate translation unit
(wowee_model_fromm2.cpp) so asset_extract doesn't have to
link the AssetManager dependency.
- WoweeBuildingLoader::fromWMO already takes a WMOModel directly,
so emitWobFromWmo just needs to read root + group files and
call save().
- Group sub-files (<base>_NNN.wmo) are skipped during the walk
since they're merged into the root WMO.
The asset_extract tool now optionally writes wowee open-format
copies next to each extracted proprietary file:
--emit-png foo.blp → foo.png
--emit-json-dbc foo.dbc → foo.json
--emit-open shortcut for both
Originals are left untouched, so private servers (AzerothCore,
TrinityCore) that load from the manifest's .blp/.dbc paths
continue to work unchanged. The wowee runtime / editor can now
consume the open formats directly without an extra conversion pass.
Implementation:
- New tools/asset_extract/open_format_emitter.{hpp,cpp} encapsulates
the post-extract walk + per-file conversion.
- BLP→PNG uses BLPLoader::load + stbi_write_png with the same
dimension/buffer-size sanity guards the editor's texture exporter
applies.
- DBC→JSON mirrors the editor's DBCExporter::exportAsJson schema
(string/float/uint heuristic) so the runtime DBC overlay loader
can consume the output drop-in.
applyBrush already early-outs on NaN brush position, but the stored
worldPos_ would still capture NaN and surface it to UI panels and
the brush-circle indicator (which renders a NaN ring). Reject NaN
at the setter so the editor state itself stays sane.
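A minimal sketch of rejecting NaN at the setter, assuming a simplified state object; `BrushState` and the member layout here are stand-ins, not the editor's actual types:

```cpp
#include <cmath>

// Illustrative only: reject non-finite input before it reaches the stored
// state, so UI panels and the brush-circle indicator never see NaN.
struct BrushState {
    float worldPos_[3] = {0.0f, 0.0f, 0.0f};

    void setWorldPos(float x, float y, float z) {
        // NaN compares unequal to everything, so test with isfinite.
        if (!std::isfinite(x) || !std::isfinite(y) || !std::isfinite(z))
            return;  // keep the last sane position
        worldPos_[0] = x;
        worldPos_[1] = y;
        worldPos_[2] = z;
    }
};
```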
ADTTerrain.chunks is std::array<MapChunk, 256> — out-of-range
indexing is undefined behaviour. Reject indices outside [0, 255]
and return empty / no-op rather than crashing on a stale undo
record from a future-version terrain layout.
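The bounds guard can be sketched like this; `MapChunk` is reduced to a stub and the accessor name is invented for illustration:

```cpp
#include <array>
#include <cstddef>

struct MapChunk { float baseHeight = 0.0f; };  // stub for illustration

// ADTTerrain.chunks is std::array<MapChunk, 256>, so direct indexing with
// anything outside [0, 255] is undefined behaviour. Return nullptr instead
// and let the caller treat it as empty / no-op.
MapChunk* chunkAt(std::array<MapChunk, 256>& chunks, long long idx) {
    if (idx < 0 || idx >= static_cast<long long>(chunks.size()))
        return nullptr;
    return &chunks[static_cast<std::size_t>(idx)];
}
```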
EditorCamera::setSpeed accepted any float, including NaN/inf and
values outside the wheel-zoom clamp range [10, 2000]. NaN speed
would propagate into the per-tick position update (NaN * dt =
NaN) and corrupt the camera view matrix. Match the wheel clamp
so external setters can't bypass the UI's bounds.
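A sketch of the hardened setter under the clamp range stated above; the class name and default speed are assumptions, not the real EditorCamera:

```cpp
#include <algorithm>
#include <cmath>

// Illustrative: mirror the wheel-zoom clamp [10, 2000] and reject
// non-finite input so NaN can't reach the per-tick position update.
class EditorCameraSketch {
public:
    void setSpeed(float s) {
        if (!std::isfinite(s))
            return;  // NaN/inf: keep the current speed
        speed_ = std::clamp(s, 10.0f, 2000.0f);
    }
    float speed() const { return speed_; }

private:
    float speed_ = 300.0f;  // assumed default, for illustration only
};
```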
Loads + immediately re-saves zone.json, creatures.json,
objects.json, and quests.json. The load-time scrubs (NaN,
out-of-range, oversize) and save-time caps fire on the round-trip,
producing a cleaned-up zone without ever opening the GUI.
Useful when an old zone was created before recent hardening
batches — running this once normalizes the on-disk state to match
what the current loaders expect.
A JSON DBC with a malformed record (object instead of array, or
a string entry) would call row[col] which throws on non-arrays —
the outer try-catch treated this as a hard failure for the whole
DBC. Skip the row (stays zero-initialized) so a single malformed
record doesn't lose all the rest.
Both name lists used n.get<std::string> which throws on non-string
entries (would abort the entire WOT load). Real zones use ~5k names
at most; cap at 65536 (the uint16 nameId upper bound), generous but
still bounded. Guard with is_string so a single bad entry
just gets skipped instead of failing the file.
Three issues:
- textures vector was unbounded (cap at 1024).
- Per-chunk layers vector was unbounded (cap at 8 — WoW ADT
format supports 4, doubling for headroom).
- texId.get<uint32_t> and holes.get<uint16_t> would throw
json::type_error on negative or oversize values, aborting the
entire WOT load. Read as int64, clamp to the target range.
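The read-as-int64-then-clamp pattern used in place of the throwing `.get<uint32_t>()` / `.get<uint16_t>()` calls can be captured in one helper; the function name is illustrative:

```cpp
#include <cstdint>
#include <limits>

// Clamp a value already read as int64 into an unsigned target range,
// instead of letting json::type_error abort the whole load.
template <typename T>
T clampToUnsigned(std::int64_t v) {
    if (v < 0)
        return 0;
    const std::int64_t max =
        static_cast<std::int64_t>(std::numeric_limits<T>::max());
    return v > max ? std::numeric_limits<T>::max() : static_cast<T>(v);
}
```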
Same defense pattern as the editor JSON loaders. Real ADTs top out
around 64k MDDF entries (~5k in practice); 100k matches the editor
ObjectPlacer cap so an extreme WOT can't bloat the in-memory
terrain past what the editor itself would accept.
Same defense pattern as the other editor JSON loaders. WoW only
supports 65535 maps total and the editor loads one tile at a
time; 1024 zones per project is plenty. Stale autosave or hand-
edit could otherwise grow zones unbounded and slow the project
picker UI.
readInfo iterated the info JSON's files array without bounding;
a malicious WCP could declare more entries than the header
fileCount allows and grow info.files unbounded. Cap at 1M to match
the header check so both readInfo callers and
--list-wcp/--info-wcp stay bounded.
Same defense pattern as QuestEditor (4096) and ObjectPlacer (100k).
A stale autosave or scatter-runaway could carry millions of NPCs;
each emits creature_template + creature + optional addon/waypoint
rows, drowning the SQL output and the M2 marker mesh.
Every editor JSON loader now has a matched-to-cost upper bound
(NPCs 50k, quests 4k, objects 100k, waypoints 256).
A stale autosave or hand-edited JSON could carry an unbounded list:
- 100k quests would emit 100k quest_template + queststarter/ender
INSERTs (huge SQL, slow validate, slow chain walks).
- 1M+ objects bloats the M2 instance SSBO and drags editor framerate
to single digits.
Caps mirror the 256-waypoint cap added in the previous batch — log
a warning and drop the rest so the editor stays responsive.
A stale autosave or hand-edited creature.json could carry an
unbounded patrolPath. The SQL exporter would emit one waypoint_data
INSERT per entry and produce huge SQL files. 256 waypoints covers
any realistic route.
Same defect as the empty-Patrol case: Wander behavior with 0
radius would spawn a creature that pretends to wander but never
moves. Downgrade to stationary so the export reflects the actual
in-game behavior, matching the Patrol-without-waypoints fix.
A creature with behavior=Patrol but an empty patrolPath would emit
movement_type=2 (waypoint) without any waypoint_data rows.
AzerothCore would log 'creature X has no waypoints' on every spawn
and the NPC would behave erratically. Fall back to stationary so
the spawn appears cleanly; user can fix the missing path after.
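Both downgrades (Wander with zero radius, Patrol with no waypoints) reduce to one decision table. The sketch below is illustrative; only the movement_type values are real AzerothCore semantics (0 = idle, 1 = random, 2 = waypoint), the names are invented:

```cpp
#include <cstddef>

enum class Behavior { Stationary, Wander, Patrol };  // illustrative names

int movementTypeFor(Behavior b, std::size_t patrolPathSize, float wanderRadius) {
    switch (b) {
        case Behavior::Patrol:
            // Patrol with an empty path would emit movement_type=2 with no
            // waypoint_data rows; fall back to stationary instead.
            return patrolPathSize > 0 ? 2 : 0;
        case Behavior::Wander:
            // Wander with radius 0 never moves; export as stationary.
            return wanderRadius > 0.0f ? 1 : 0;
        default:
            return 0;
    }
}
```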
Pre-scans the quest list and emits a single header note when any
quest uses ExploreArea / EscortNPC / UseObject — those have no
direct quest_template column and need AzerothCore script_quest
hooks. Prevents silent dropping of objectives leaving an unfinished
quest in-game; the user sees the warning once at the top of
02_spawns.sql instead of having to grep through editor logs.
fs::relative can return '../foo' when the pack source is a symlink
that resolves outside the pack root. The unpacker rejects '..' or
absolute paths wholesale, so a single rogue symlink would ruin the
whole archive. Skip the offending file at pack with a warning so
the rest of the zone still ships.
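The escape check can be sketched as pure path arithmetic. This uses `lexically_relative` so it is testable without a filesystem; the real packer operates on resolved paths (which is exactly how a symlink target ends up outside the root), and the function name is invented:

```cpp
#include <filesystem>

namespace fs = std::filesystem;

// Illustrative guard: a relative path that starts with ".." means the
// resolved file escapes the pack root. Skip it (with a warning in real
// code) instead of letting one rogue entry poison the whole archive.
bool isInsidePackRoot(const fs::path& resolved, const fs::path& root) {
    const fs::path rel = resolved.lexically_relative(root);
    if (rel.empty())
        return false;  // unrelated roots: treat as outside
    return rel.begin()->string() != "..";
}
```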
Quest.nextQuestId was captured by the editor and used by
validateChains for cycle detection, but never made it into the
AzerothCore quest_template SQL. Now resolves the editor-relative
ID to the matching SQL entry (startEntry + nextQuestId) and
writes it to the NextQuestInChain column. Players can now
auto-progress through quest chains in-game.
Quest.reward.itemRewards entries were captured in the editor JSON
but never made it into the AzerothCore SQL export. Parse each
entry as a numeric item ID and emit RewardItem1-4 + count columns;
unparseable entries become 0 (skipped at quest grant time). 4 slot
limit matches AzerothCore's quest_template schema.
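The parse rule (unparseable or out-of-range entries become 0, which is skipped at grant time) can be sketched as; the function name is illustrative:

```cpp
#include <cstdint>
#include <string>

// Illustrative: parse an editor itemRewards entry as a numeric item ID.
// Anything non-numeric, negative, or past uint32 range becomes 0.
std::uint32_t parseItemId(const std::string& entry) {
    try {
        std::size_t pos = 0;
        const long long v = std::stoll(entry, &pos);
        // Reject trailing junk, negatives, and values past uint32 range.
        if (pos != entry.size() || v < 0 || v > 0xFFFFFFFFLL)
            return 0;
        return static_cast<std::uint32_t>(v);
    } catch (...) {
        return 0;  // non-numeric entry: emit 0 (skipped at quest grant)
    }
}
```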
Previously only KillCreature and CollectItem objectives translated
to SQL. AzerothCore reuses RequiredNpcOrGo for talk objectives
(count=1 indicates an interaction rather than a kill), so wire that
through and add a comment about which objective types need server
scripts (ExploreArea/EscortNPC/UseObject).
Mirrors the other --info-* family inspectors. Accepts either a
zone directory or the zone.json path directly. Prints every
manifest field: name, mapId, biome, baseHeight, tiles, flags,
audio config. Useful when diffing two zones or auditing the
audio/flag setup before packing.
Same per-cell range-check pattern as the JSON DBC fix: if a
waypoint's waitTime field is negative or > UINT32_MAX, the
.get<uint32_t> throws json::type_error and the outer try-catch
aborts the entire NPC file load on a single bad waypoint. Read
as int64, clamp to [0, 600000] ms (a 10-minute cap).
- ObjectPlacer load: scale=0 clamped to 1.0, missing scale field
defaults to 1.0 (locks in the load-side guard at line 346).
- ObjectPlacer save→load: uniqueId values survive a full round
trip (locks in the explicit uniqueId preservation behavior the
ADT round-trip relies on).
Adds object_placer.cpp to test_editor_units sources.
Pack previously trusted recursive_directory_iterator to terminate
naturally — fine on most zones but a hostile symlink loop or a
giant accidental subdirectory would produce an archive with > 1M
files, which the unpack header check rejects wholesale. Cap at the
unpack limit and log a warning so the resulting WCP is at least
loadable, even if incomplete.
Pack previously accepted any file < 4GB and wrote it raw. Unpack
caps at 256MB and rejects the whole archive on overflow — so a
huge file in the source dir would silently produce an unpackable
WCP. Cap at pack and skip the body (size=0 entry) so the rest of
the pack remains usable.
Locks in the recent DBC overflow guards:
- recordCount=1B + recordSize=1024 (would overflow uint32 product)
- fieldCount=65535 (would multiply to 256KB record size)
Both load() calls return false instead of allocating tiny buffers
that get memcpy'd from TB of memory.
val.get<uint32_t>() throws on negative or > UINT32_MAX. The
outer try-catch would then abort the entire JSON DBC load on a
single bad cell. Read as int64_t, clamp to [0, UINT32_MAX], and
zero out anything out of range — matches the per-field NaN scrub
applied to floats one branch up.
Same load-desync pattern as elsewhere — alphaSize > 65536 silently
skipped the read but the actual alpha bytes were still on disk, so
the next chunk's baseHeight float read would parse alpha bytes.
Now rejects the load with LOG_ERROR.
DBCFile::load multiplied recordCount * recordSize as uint32 (line
108), so a header with recordCount=1B and recordSize=1024 would
wrap to a tiny size: resize allocates a tiny buffer, then memcpy
tries to read terabytes of memory and crashes.
Reject impossible header values up front (10M records / 1024
fields / 16KB record / 256MB string block) and use uint64_t for
the file-size sanity check + size_t for the resize/memcpy product
so the bounds-check is the only path that allows large counts.
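A sketch of the header check under the limits listed above; the function name is invented and the 20-byte header constant assumes the classic WDBC layout (magic plus four counts):

```cpp
#include <cstdint>

// Illustrative: validate DBC header fields before any allocation.
// Limits mirror the ones above (10M records, 1024 fields, 16KB record,
// 256MB string block); the product is computed in uint64_t so it can't
// wrap the way the old uint32 multiply did.
bool dbcHeaderSane(std::uint32_t recordCount, std::uint32_t fieldCount,
                   std::uint32_t recordSize, std::uint32_t stringBlockSize,
                   std::uint64_t fileSize) {
    if (recordCount > 10'000'000 || fieldCount > 1024 ||
        recordSize > 16 * 1024 || stringBlockSize > 256u * 1024 * 1024)
        return false;
    const std::uint64_t dataBytes =
        static_cast<std::uint64_t>(recordCount) * recordSize;
    const std::uint64_t headerBytes = 20;  // 'WDBC' magic + 4 counts
    return headerBytes + dataBytes + stringBlockSize <= fileSize;
}
```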
Locks in the recent silent-corruption fix:
- Overlong building name (5000 > 1024) → load returns invalid
instead of silently zeroing the length and reading 5000 stale
bytes as the next group's name+counts.
- Overlong group name (9999 > 1024) → same.
Catches regression of the silent-desync defect that affected 7
length fields across the WoB and WOM loaders.
Same silent-corruption pattern as WoB: model.name had no length
check at all (would happily allocate 64KB), and texture paths
silently zeroed pathLen on overflow leaving the actual bytes on
disk to shift the rest of the file. Now reject with LOG_ERROR.
Building name, group name, group texture path, material texture
path, and doodad model path all had the same defect: when the
length field exceeded 1024 the loader silently set the local
counter to 0 and skipped the read — but the actual string bytes
were still on disk, so the next read interpreted them as the next
length+data pair and the whole rest of the file desynced.
Now reject the whole load on each oversize length with an explicit
LOG_ERROR. Save caps at 1024 so this only triggers on hand-crafted
or future-version files, but the failure mode was severe enough
(silent zone corruption, not a clean error) to warrant the fix.
Previously load silently skipped the materials block when mc > 256,
leaving the file pointer right after the count — the next group's
name would then read material bytes as garbage and the rest of the
file would shift. Save now caps at 256 (so the asymmetry shouldn't
trigger from our own writer), but a hand-crafted or future-version
WoB could still hit it.
Locks in the recent save-side count caps:
- WoB save with 1500 texture paths → load reads exactly 1024,
first/last entries match what was written before the cap fired.
- WOC save with tileX/tileY=200 → load reads tileX/tileY=32
(clamped at write, no warning on the second reload).
Catches the asymmetry that would silently drop everything past
the load limit.
Top-level WOM save was writing raw model.vertices/.indices/.texturePaths
sizes; load enforces 1M / 4M / 1024 limits. A pathological model would
emit a header rejected on load, leaking the rest of the file body.
Cap each count at the load limit and iterate the WOM1 vertex block +
texture-path block by index so the body matches the header.
Same per-section cap pattern. The loader caps batchCount at 4096;
save iterated all validBatches without checking. A model with
>4096 batches would write a header rejected on round-trip.
Same per-section cap pattern. Real portals carry 4-12 verts; the
load enforces 4096 max. Save previously wrote raw size() so a
huge portal would write a header the loader rejects.
Per-group counts were uncapped on save while load enforced 1M
vertices, 4M indices, 1024 texture paths. A single huge group
exceeding any cap would write a header the loader rejects, leaking
the rest of the file body into a misread chain.
Cap counts at the load limits and iterate the texture-path block
by index so the body matches the header on round-trip.
WoB load enforces 4096 groups / 8192 portals / 65536 doodads. Save
previously wrote raw size() and iterated all entries — a build
exceeding any cap would be rejected wholesale on round-trip.
Cap each count at the load limit and use indexed loops so the
written body matches the header count even if the in-memory data
goes over.
WOC load caps tris at 2M and clamps tile coords to 0..63. Save
previously wrote raw size() and tileX/Y — a >2M-tri collision
would be silently rejected on round-trip, and OOR tile coords
would log a warning every reload. Cap at save and reuse the
load-side clamp so the on-disk file is round-trip clean.
WOM load caps bones at 512 and animations at 1024. Save previously
wrote raw size() and iterated all entries — a model with >512 bones
would write fine but truncate on round-trip, and the post-truncation
keyframe data would be misread as the next animation.
Cap both counts at save and iterate using the capped value so the
per-bone keyframe block stays aligned with what load expects.
Save previously wrote raw materials.size() as the count, then iterated
all materials. Load caps at 256, so a build with >256 materials would
write fine but truncate on round-trip and the post-truncation block
would be misread as the next group's data. Cap at save and only write
the first 256.