Restrict atom count in deserializer to 1 million

Otherwise it's too easy to tie up resources (CPU, memory) by
crafting inputs with a very large atom count (up to 4 billion).

This may need some fine-tuning. If the limit proves too restrictive for
very large snapshots, we can make it relative to the size of the input.
Ben Noordhuis 2024-10-17 20:28:46 +02:00
parent a1d1bce0b7
commit 7be9d99d15
2 changed files with 7 additions and 1 deletion
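
To make the failure mode concrete: an embedder reaches this code by deserializing untrusted bytes through the public JS_ReadObject API. Below is a minimal embedder-side sketch, not part of the commit; the helper name is made up, and the error text comes from the new check in the hunk that follows.

    #include <stdio.h>
    #include "quickjs.h"

    /* Hypothetical sketch: deserialize untrusted input. With this commit,
     * a snapshot carrying an absurd LEB128 atom count fails fast with
     * InternalError "unreasonable atom count: ..." instead of driving
     * js_mallocz toward a multi-gigabyte allocation. */
    static int read_untrusted(JSContext *ctx, const uint8_t *buf, size_t len)
    {
        JSValue v = JS_ReadObject(ctx, buf, len, 0);
        if (JS_IsException(v)) {
            JSValue e = JS_GetException(ctx);
            const char *msg = JS_ToCString(ctx, e);
            fprintf(stderr, "rejected: %s\n", msg ? msg : "(unknown error)");
            JS_FreeCString(ctx, msg);
            JS_FreeValue(ctx, e);
            return -1;
        }
        JS_FreeValue(ctx, v); /* a real embedder would use the value */
        return 0;
    }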

@@ -35571,8 +35571,13 @@ static int JS_ReadObjectAtoms(BCReaderState *s)
     }
     if (bc_get_leb128(s, &s->idx_to_atom_count))
         return -1;
-    bc_read_trace(s, "%d atom indexes {\n", s->idx_to_atom_count);
+    if (s->idx_to_atom_count > 1000*1000) {
+        JS_ThrowInternalError(s->ctx, "unreasonable atom count: %u",
+                              s->idx_to_atom_count);
+        return -1;
+    }
+    bc_read_trace(s, "%u atom indexes {\n", s->idx_to_atom_count);
     if (s->idx_to_atom_count != 0) {
         s->idx_to_atom = js_mallocz(s->ctx, s->idx_to_atom_count *
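
The "up to 4 billion" figure in the commit message follows from the count being a 32-bit value read with bc_get_leb128: LEB128 stores 7 payload bits per byte plus a continuation bit, so five bytes in a tiny input already cover the full u32 range. A minimal decoding sketch, assuming bc_get_leb128 follows the standard scheme (the real helper also bounds-checks against the input buffer):

    #include <stdint.h>

    /* Standard unsigned LEB128 sketch (assumed to match bc_get_leb128):
     * the low 7 bits of each byte are payload, the high bit means
     * "another byte follows". Five bytes reach 2^32 - 1, i.e. counts
     * up to ~4.29 billion; this sketch silently drops bits beyond 32. */
    static int leb128_u32(const uint8_t **pp, const uint8_t *end, uint32_t *out)
    {
        uint32_t v = 0;
        const uint8_t *p = *pp;
        for (int shift = 0; shift < 32 && p < end; shift += 7) {
            uint8_t b = *p++;
            v |= (uint32_t)(b & 0x7f) << shift;
            if (!(b & 0x80)) {
                *pp = p;
                *out = v;
                return 0;
            }
        }
        return -1; /* truncated, or longer than five bytes */
    }

With the new guard above, any decoded count beyond 1000*1000 is rejected before js_mallocz sizes an allocation from it.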

@@ -231,6 +231,7 @@ function bjson_test_fuzz()
 {
     var corpus = [
         "EBAAAAAABGA=",
+        "EObm5oIt",
     ];
     for (var input of corpus) {
         var buf = base64decode(input);
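
The hunk ends at the base64 decode; downstream, the test presumably feeds the buffer to the deserializer and expects a clean error rather than resource exhaustion. As a cross-check on the new corpus entry, a standalone sketch that reads its bytes the same way as the LEB128 sketch above; the byte values were base64-decoded by hand, and the version-byte-then-count layout is an assumption about the bjson format:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* base64decode("EObm5oIt"), decoded by hand -- verify before use */
        static const uint8_t corpus[] = { 0x10, 0xe6, 0xe6, 0xe6, 0x82, 0x2d };
        /* skip the presumed version byte, then read the LEB128 count */
        uint32_t count = 0;
        int shift = 0;
        for (size_t i = 1; i < sizeof(corpus) && shift < 32; i++, shift += 7) {
            count |= (uint32_t)(corpus[i] & 0x7f) << shift;
            if (!(corpus[i] & 0x80))
                break;
        }
        /* prints a count in the billions, far above the new 1,000,000 cap */
        printf("decoded atom count: %u\n", count);
        return 0;
    }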