fix(dbc): cap JSON DBC fieldCount/recordCount to prevent OOM on hostile file

Real DBCs stay under ~250 fields, and even the largest tables hold
only tens of thousands of records (Spell.dbc tops out at ~50K rows).
A malicious JSON DBC declaring fieldCount = 1G, or dimensions where
recordCount * recordSize exceeds 256MB, would OOM the recordData
allocation. The loader now rejects such files up front. JSON DBCs are
user-shareable, so a zone export downloaded from a forum should not
be able to OOM the client by including a bad data table.
Kelsi 2026-05-06 06:07:09 -07:00
parent 5b6f59bbbd
commit 2d8c843704


@@ -398,9 +398,22 @@ bool DBCFile::loadJSON(const std::vector<uint8_t>& jsonData) {
fieldCount = static_cast<uint32_t>(records[0].size());
}
if (fieldCount == 0) return false;
// Sanity caps. Real DBCs stay under ~250 fields and tens of
// thousands of records (Spell.dbc is the biggest at ~50K rows).
// Without these checks, a hostile fieldCount or recordCount would
// OOM the recordData allocation below.
if (fieldCount > 1024) {
LOG_ERROR("JSON DBC: fieldCount ", fieldCount, " too large");
return false;
}
recordSize = fieldCount * 4;
recordCount = static_cast<uint32_t>(records.size());
if (recordCount > 5'000'000 ||
static_cast<uint64_t>(recordCount) * recordSize > (256ull << 20)) {
LOG_ERROR("JSON DBC: recordCount ", recordCount, " * recordSize ",
recordSize, " exceeds 256MB cap");
return false;
}
stringBlock.clear();
stringBlock.push_back(0);