Top-level WOM save was writing raw model.vertices/.indices/.texturePaths
sizes; load enforces 1M / 4M / 1024 limits. A pathological model would
emit a header rejected on load, leaking the rest of the file body.
Cap each count at the load limit and iterate the WOM1 vertex block +
texture-path block by index so the body matches the header.
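The cap-then-iterate shape reduces to something like this (a sketch; the
struct, function names, and limits here are illustrative stand-ins for
the real save code, only the limits come from the load side):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Load-side limits, mirrored at save time so the header can never
// promise more than the loader will accept.
constexpr uint32_t kMaxVertices     = 1000000;  // 1M
constexpr uint32_t kMaxIndices      = 4000000;  // 4M
constexpr uint32_t kMaxTexturePaths = 1024;

struct WomHeader {  // illustrative, not the real on-disk layout
    uint32_t vertexCount;
    uint32_t indexCount;
    uint32_t texCount;
};

WomHeader buildCappedHeader(size_t vertices, size_t indices,
                            size_t texturePaths) {
    WomHeader h;
    h.vertexCount = static_cast<uint32_t>(std::min<size_t>(vertices, kMaxVertices));
    h.indexCount  = static_cast<uint32_t>(std::min<size_t>(indices, kMaxIndices));
    h.texCount    = static_cast<uint32_t>(std::min<size_t>(texturePaths, kMaxTexturePaths));
    return h;
}
// The body is then written as
//   for (uint32_t i = 0; i < h.vertexCount; ++i) writeVertex(model.vertices[i]);
// so the byte count always matches the header even when the in-memory
// containers are larger.
```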
Same per-section cap pattern. The loader caps batchCount at 4096;
save iterated all validBatches without checking. A model with
>4096 batches would write a header rejected on round-trip.
Same per-section cap pattern. Real portals carry 4-12 verts; the
load enforces 4096 max. Save previously wrote raw size() so a
huge portal would write a header the loader rejects.
Per-group counts were uncapped on save while load enforced 1M
vertices, 4M indices, 1024 texture paths. A single huge group
exceeding any cap would write a header the loader rejects, leaking
the rest of the file body into a misread chain.
Cap counts at the load limits and iterate the texture-path block
by index so the body matches the header on round-trip.
WoB load enforces 4096 groups / 8192 portals / 65536 doodads. Save
previously wrote raw size() and iterated all entries — a build
exceeding any cap would be rejected wholesale on round-trip.
Cap each count at the load limit and use indexed loops so the
written body matches the header count even if the in-memory data
goes over.
WOC load caps tris at 2M and clamps tile coords to 0..63. Save
previously wrote raw size() and tileX/Y — a >2M-tri collision
would be silently rejected on round-trip, and OOR tile coords
would log a warning every reload. Cap at save and reuse the
load-side clamp so the on-disk file is round-trip clean.
WOM load caps bones at 512 and animations at 1024. Save previously
wrote raw size() and iterated all entries — a model with >512 bones
would write fine but truncate on round-trip, and the post-truncation
keyframe data would be misread as the next animation.
Cap both counts at save and iterate using the capped value so the
per-bone keyframe block stays aligned with what load expects.
Save previously wrote raw materials.size() as the count, then iterated
all materials. Load caps at 256, so a build with >256 materials would
write fine but truncate on round-trip and the post-truncation block
would be misread as the next group's data. Cap at save and only write
the first 256.
If forward is parallel to (0,0,1) — camera staring straight up or
down — the cross product is zero and glm::normalize returned NaN.
That NaN flowed into glm::lookAt and produced a NaN view matrix.
The editor camera clamps pitch to +/-89 so it doesn't trigger,
but other call sites or scripted test paths could construct a
Camera at +/-90 and immediately blow up. Length-check the cross
and fall back to world +X / +Z.
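The length check could look roughly like this (minimal vec3 math stands
in for glm; the world +X fallback matches the description above, the
epsilon is an assumption):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

float length2(const Vec3& v) { return v.x * v.x + v.y * v.y + v.z * v.z; }

// Right vector for a Z-up camera: forward x worldUp, with a world +X
// fallback when forward is (anti)parallel to the up axis and the cross
// product degenerates to ~zero (the NaN-from-normalize case).
Vec3 safeRight(const Vec3& forward) {
    const Vec3 worldUp{0.0f, 0.0f, 1.0f};
    Vec3 r = cross(forward, worldUp);
    float l2 = length2(r);
    if (l2 < 1e-12f) return Vec3{1.0f, 0.0f, 0.0f};  // pitch at +/-90
    float inv = 1.0f / std::sqrt(l2);
    return Vec3{r.x * inv, r.y * inv, r.z * inv};
}
```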
glm::min/max on NaN is implementation-defined, so a single bad
vertex would propagate NaN into the camera-occlusion and culling
AABB used by the runtime. WOM/M2 loaders already scrub but defense
in depth catches anything they miss. Falls back to a unit box if
every vertex is bad.
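One NaN-proof accumulation: explicit comparisons (any comparison with
NaN is false, so a bad component can never replace a good bound) plus
the unit-box fallback. Sketch with assumed names; the +/-0.5 extent is
an assumption about what "unit box" means here:

```cpp
#include <cmath>
#include <vector>

struct Bounds { float min[3], max[3]; };

Bounds computeBounds(const std::vector<float>& xyz) {  // flat x,y,z triples
    Bounds b{{ 1e30f,  1e30f,  1e30f},
             {-1e30f, -1e30f, -1e30f}};
    bool any = false;
    for (size_t v = 0; v + 2 < xyz.size(); v += 3) {
        if (!std::isfinite(xyz[v]) || !std::isfinite(xyz[v + 1]) ||
            !std::isfinite(xyz[v + 2]))
            continue;  // scrub the bad vertex entirely
        for (int i = 0; i < 3; ++i) {
            float c = xyz[v + i];
            if (c < b.min[i]) b.min[i] = c;  // NaN < x is false: NaN never wins
            if (c > b.max[i]) b.max[i] = c;
        }
        any = true;
    }
    if (!any)  // every vertex was bad: unit box so culling stays sane
        for (int i = 0; i < 3; ++i) { b.min[i] = -0.5f; b.max[i] = 0.5f; }
    return b;
}
```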
The matrix-transformed normal could be near-zero if the M2 instance
has a degenerate scale; glm::normalize then returns NaN that
contaminates the slope check (NaN < 0.35 is false → no early-out)
and bestNormalZ goes NaN, breaking the walkable-floor heuristic.
Length-check the transformed normal and fall back to the (0,0,1)
flat default — same pattern as the WMO renderer.
A WMO vertex with zero-length or NaN normal would produce a NaN
normalized normal, contaminating the Gram-Schmidt tangent for the
whole vertex and producing visibly broken normal mapping for the
affected face. Length-check before normalize and fall back to
(0,0,1) when degenerate.
A portal whose first three vertices are coincident or collinear
produces a zero cross product and glm::normalize returns NaN. The
NaN propagates into the portal-frustum cull (every interior group
either always-visible or never-visible depending on plane orientation).
Use the same length-check pattern as the editor's spline/path code:
zero cross → fall back to (0,0,1) up-axis.
Mirrors the createInstance guard. position drives the dedup hash
key (std::round of NaN is implementation-defined) and the matrix
flows into the GPU UBO.
Same boundary-rejection pattern as createInstance. NaN in either
function would corrupt the spatial grid (stale cells pointing at
NaN-bounded instances) and the GPU model-matrix UBO.
Even with all the upstream guards I've been adding, internal callers
or addon-style scripted spawns could pass NaN. Reject at the API
boundary so we never hash-key with NaN coords (std::round of NaN
is implementation-defined) or push a NaN instance into the model-
matrix uniform buffer (GPU crash / origin render).
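The boundary check itself is just a finite test on every float before
anything touches the hash key or the UBO; a sketch with hypothetical
names:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

bool allFinite(const Vec3& v) {
    return std::isfinite(v.x) && std::isfinite(v.y) && std::isfinite(v.z);
}

// Hypothetical API-boundary guard: reject before the position feeds
// the dedup hash key (std::round of NaN is implementation-defined) or
// the model matrix reaches the GPU uniform buffer.
bool validateInstanceTransform(const Vec3& pos, const Vec3& rot, float scale) {
    return allFinite(pos) && allFinite(rot) && std::isfinite(scale);
}
```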
Three issues addressed:
- NaN portal vertices break the WMO portal-frustum cull, defeating
the indoor optimization and forcing the whole interior to draw.
- Out-of-range portal.groupA/groupB indices walk past wmo.groups
during cull; clamp to -1 (invalid) on load.
- Symmetric save-side scrub so we don't persist bad in-memory data.
WHM load already scrubs, but mid-edit terrain can briefly carry
NaN before stitchEdges runs. A single NaN vertex propagates into
the normal computation and the chunk's frustum cull, poisoning both.
WoB allows uint32 indices but WMO format is uint16. The previous
static_cast would silently wrap a >65k index into a wrong-but-
valid value — producing visible mis-stitched triangles in the
renderer. Now log a warning once per group and clamp to 0
(degenerate triangle) so the bug is visible.
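The narrowing loop might look like this (illustrative names; the warning
text is a placeholder):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Narrow uint32 indices to the uint16 WMO format: out-of-range entries
// become 0 (a visibly degenerate triangle) with one warning per group,
// instead of a silent static_cast wrap to a wrong-but-valid index.
std::vector<uint16_t> narrowIndices(const std::vector<uint32_t>& in,
                                    const char* groupName) {
    std::vector<uint16_t> out;
    out.reserve(in.size());
    bool warned = false;
    for (uint32_t idx : in) {
        if (idx > 0xFFFFu) {
            if (!warned) {
                std::fprintf(stderr,
                    "WMO group %s: index %u exceeds uint16, clamping to 0\n",
                    groupName, idx);
                warned = true;
            }
            out.push_back(0);
        } else {
            out.push_back(static_cast<uint16_t>(idx));
        }
    }
    return out;
}
```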
Symmetric with the load-side index clamp. A WOM whose indices
reference past the vertex buffer would crash the GPU vertex shader;
the save side now clamps to 0 (degenerate triangle) so the file
matches what the load guard would produce.
Symmetric with the load-side validation. A WOM3 batch whose
indexStart+indexCount exceeds the index buffer, or whose texture
index points past the texture array, would otherwise emit an
invalid file that the load-time guard then has to drop.
Filter at save instead so the on-disk file stays compact and
self-consistent.
Symmetric with the existing load-side guards. A bone with a NaN
pivot poisons its child bones' world matrices; an out-of-range
parent index would walk past the bones array during evaluation.
Symmetric scrub on the WoB save path matching the existing load
guard. A manually-constructed WoweeBuilding with NaN vertices
would otherwise persist them and force the load-time scrub to
re-clean the same data on every reload.
The load side already scrubs keyframe translation/rotation/scale
floats, but fromM2 → save → load is the typical path: a corrupt
M2 source would write NaN keyframes that the load-time guard would
have to clean up on every subsequent load. Symmetric scrub here
ensures the file is clean from the start.
movingSpeed also defaults to 0 if non-finite (matches load).
A WMO with a NaN doodad quaternion would produce NaN euler angles
through glm::eulerAngles() and persist them into the WoB. Identity
quaternion fallback for a non-finite source, plus NaN scrub on
position/rotation/scale separately so the converted WoB is always
load-safe.
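A minimal version of the identity fallback (plain struct standing in for
glm::quat, for illustration only):

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };  // stand-in for glm::quat

// Non-finite source quaternion -> identity, so glm::eulerAngles never
// sees NaN and the converted rotation stays load-safe.
Quat sanitizeQuat(const Quat& q) {
    if (!std::isfinite(q.w) || !std::isfinite(q.x) ||
        !std::isfinite(q.y) || !std::isfinite(q.z)) {
        return Quat{1.0f, 0.0f, 0.0f, 0.0f};  // identity
    }
    return q;
}
```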
WoW liquid types are 0=water/1=ocean/2=magma/3=slime. A user-edited
WOT could carry an out-of-range value that the editor renderer
silently maps to plain water but the server treats as undefined.
A WOT water entry with non-finite height would push NaN through
the water mesh builder and produce a degenerate Vulkan draw
(invisible water at best, GPU hang at worst).
Same scrub now applied symmetrically on the save side, so a
corrupted in-memory doodad transform is never persisted into a
WoB that the load path then has to re-clean on every load.
Already had a guard for scale; extending to position/rotation too.
A WoB with non-finite doodad transforms produces NaN model matrices
that propagate into the M2 instance SSBO and crash the GPU.
Mirrors the editor-side ZoneManifest sanitize on the discovery
scanner used by the launcher and asset manager. A custom_zones/
zone with bad mapId or out-of-range tile coords would otherwise
appear in the picker and silently fail when the user selects it.
A WOT JSON could carry tile coords outside 0..63 (would compute
chunk world positions tens of thousands of units off-grid) or NaN
position/rotation values on doodad/WMO placements (would propagate
into rendering matrices and produce invisible geometry).
Same float-NaN scrub for WoB save matching the WHM/WOC/WOM saves.
Building boundRadius defaults to 1.0 if non-finite; per-group bounds
zero out non-finite components (would otherwise corrupt the cull
frustum).
Mirrors the WHM and WOC save sanitize. boundRadius defaults to 1.0 if
non-finite (matches load-time default); boundMin/boundMax components
zero out non-finite values. Prevents an in-memory model with a NaN
spike (e.g. mid-edit) from being persisted into the WOM and requiring
load-time cleanup forever after.
In-memory collision can be polluted by addMesh on bad input (e.g. a
WMO with NaN vertex positions). Without this, the save would persist
that NaN into the WOC and the load-time guards would have to clean
it up forever. Now scrubs vertices and bounds at write time, matching
the WHM save sanitize.
Same fix the WoB save just got. Without truncation, a model name or
texture path over 65535 chars would silently get a wrap-around length
and corrupt the file.
Without truncation, names over 65535 chars would silently get a wrap-
around length value and produce a corrupt file (the actual string would
be longer than the saved length, shifting everything after it). Single
shared writeStr lambda replaces five copy-pasted (uint16 length + bytes)
write blocks.
JSON DBC values are mostly small integers, but a malicious file could
stuff 100MB strings into every cell to OOM the stringBlock (which has
no per-string or total cap). Added 4KB per-string + 64MB total caps —
fields exceeding either are zeroed. Float fields also get NaN scrub
(same pattern as the earlier vertex/keyframe guards).
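The capped intern step could be sketched like this (names assumed;
offset 0 as the conventional empty string is also an assumption about
this stringBlock layout):

```cpp
#include <cstdint>
#include <string>

constexpr size_t kMaxStringLen        = 4 * 1024;    // per-string cap
constexpr size_t kMaxStringBlockBytes = 64ull << 20; // 64MB total cap

// Intern a JSON cell value into the DBC string block. Offset 0 points
// at the empty string, so zeroing an oversized field just makes it "".
uint32_t internString(std::string& block, const std::string& s) {
    if (block.empty()) block.push_back('\0');  // slot 0 = empty string
    if (s.size() > kMaxStringLen ||
        block.size() + s.size() + 1 > kMaxStringBlockBytes)
        return 0;  // field zeroed instead of growing the block
    uint32_t off = static_cast<uint32_t>(block.size());
    block.append(s);
    block.push_back('\0');
    return off;
}
```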
Same defensive check as the WoB doodad path guard. Texture paths from
hostile WOM/WoB are passed to the asset manager; '..' or absolute paths
could probe outside the assets/ tree. Now cleared on detection — slot
survives but loads no texture (renderer falls back to white).
Single shared rejectTraversal lambda in WoB to avoid copy-paste.
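A free-function sketch of what that shared lambda might check (the exact
predicates are assumptions):

```cpp
#include <string>

// Conservative traversal check shared by the WoB texture/doodad paths:
// anything absolute or containing ".." is rejected. "foo..bar" also
// trips the substring test, deliberately: a false positive just means
// the slot loads no texture.
bool rejectTraversal(const std::string& p) {
    if (p.empty()) return false;
    if (p[0] == '/' || p[0] == '\\') return true;   // absolute (POSIX/UNC)
    if (p.size() >= 2 && p[1] == ':') return true;  // Windows drive letter
    return p.find("..") != std::string::npos;
}
```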
Doodad model paths from a WoB are passed to the asset manager via
outModel.doodadNames. The asset manager only reads files, but '..' or
absolute paths from a hostile WoB could probe for files outside the
expected assets/ tree. Now clears the modelPath on traversal — the
doodad slot survives but loads no model.
stbi_load happily decodes any PNG up to 32K x 32K — at 4 bytes/pixel
that's 4GB which OOMs the editor before the override even returns.
WoW textures top out at 4K; 8K cap leaves headroom for HD upgrades
without enabling abuse. Also widens the wxh multiplication to size_t
to defeat int overflow on 8K x 8K images.
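The pre-decode check might look like (constants from the text; the
function name is illustrative):

```cpp
#include <cstddef>

constexpr int kMaxTextureDim = 8192;  // 8K: headroom over WoW's 4K max

// Reject before stbi_load decodes anything: the dimension cap stops
// the 32K x 32K case outright, and the byte count is widened to size_t
// before the multiply so width * height * 4 can never overflow int for
// any dimensions that get this far.
bool textureDimsOk(int w, int h) {
    if (w <= 0 || h <= 0 || w > kMaxTextureDim || h > kMaxTextureDim)
        return false;
    size_t bytes = static_cast<size_t>(w) * static_cast<size_t>(h) * 4u;
    return bytes <=
           static_cast<size_t>(kMaxTextureDim) * kMaxTextureDim * 4u;
}
```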
Real DBCs cap at ~250 fields and a few million records (Spell.dbc is
the biggest at ~50K rows). A malicious JSON DBC declaring fieldCount=
1G or recordCount * recordSize > 256MB would OOM the recordData
allocation. Now rejects upfront — JSON DBCs are user-shareable so a
zone export downloaded from a forum should not be able to OOM the
client by including a bad data table.
ADT placement positions and rotations are loaded as raw floats from the
binary chunks. A corrupted MDDF/MODF entry could feed NaN into the
M2/WMO instance transform and crash render. Now position/rotation are
scrubbed for both MDDF doodads and MODF buildings; MODF also scrubs the
extentLower/extentUpper bounding box used for cull tests.
Three more per-record sanity bounds that the per-group sweep didn't
cover:
- material.texturePath length cap (1KB) — was unbounded
- doodad.modelPath length cap (1KB) — was unbounded
- portal.vertexCount cap (4096) — real portals are 4-12 verts;
>4K is corrupt and would OOM the resize
The WoB top-level header sanity bounds catch obviously-bad totals, but
each group's vc/ic/tc was still unbounded. A corrupted group could
declare 4G vertices and OOM the resize before the next group even
started. Now per-group: vc<=1M, ic<=4M, tc<=1K, name<=1KB.
Per-bone-anim cap of 10K keyframes still let a malicious file allocate
up to 1024 anims × 512 bones × 10K keys = 5.24B keyframes — multi-GB
pre-OOM allocation. Now tracks total across the whole model and stops
allocating after 10M (real models stay well under 100K). When the cap
is hit we still seek past the remaining payload to keep file alignment
intact for whatever follows.
Header sanity bounds for WOC matching the WOM/WoB ones. A whole-tile
collision mesh maxes at ~32K terrain tris + a few thousand building
overlay tris; triCount > 2M is corrupted and would OOM. Tile coords
are 0..63 in WoW; clamp to 32 with warning when bogus.
Adds upfront sanity bounds to both WoB and WOM load:
WOM: vert<=1M, index<=4M, tex<=1K
WOB: groups<=4K, portals<=8K, doodads<=64K
Real WoW models stay well under these limits (M2 vert is uint16 anyway).
Without these checks a corrupted header could trigger a multi-GB
allocation and OOM the process before we finish reading the body. Also
caps name length to 1KB on WoB load (already done on WOM).
Mirrors the WOM index-clamp + texture-path-length guards. Out-of-range
indices into the WMO group's vertex buffer would crash the GPU draw.
Texture path length over 1KB indicates a corrupted/truncated WoB; clamp
to 0 to prevent allocating 65KB-string buffers per bad entry.
Out-of-range indices were a silent vector overrun on the GPU side that
could crash the vertex shader on some drivers. Replace with 0 rather
than dropping so triangle counts stay aligned (a degenerate triangle is
harmless, an off-by-one indexing the wrong vertex is silent corruption).
Texture path length over 1KB is almost certainly a corrupted or
truncated file — was previously read into a 65KB-string allocation per
entry which could exhaust memory on a malicious file.