#users-public
Newskooler

11/03/2022, 7:19 PM
Hi, I am getting the following error when making a query.
could not mmap [size=6720, offset=0, fd=65133, memUsed=16222810218, fileLen=8192]
I remember there was a way to “fix” this in the config by increasing some of the default values. Can you point me to where I should read more about this, and what in particular seems to be the problem here? 🤔
Amy Wang

11/03/2022, 7:28 PM
Newskooler

11/03/2022, 7:36 PM
Yes, will look into this. Thanks!
Andrey Pechkurov

11/04/2022, 6:09 AM
Hi Stelian, we have a handy troubleshooting guide for such situations: https://questdb.io/docs/troubleshooting/faq/
Newskooler

11/04/2022, 8:30 AM
Thanks @Andrey Pechkurov
11:11 AM
Hi @Andrey Pechkurov I read these and understand the problem better now; however, I still don’t know how much to change the mmap limit by.
11:12 AM
Q1: Does `vm.max_map_count` set itself automatically based on machine size? I.e. if I increase the machine, will that value increase dynamically as well?
11:13 AM
Q2: How do I know the recommended limit for how much I can allocate to `vm.max_map_count`? I read this: “Each mapped area needs kernel memory, and it’s recommended to have around 128 bytes available per 1 map count.” but don’t quite understand what it means 😕
Andrey Pechkurov

11/04/2022, 11:52 AM
A1: No, the default on most Linux distros is something like 64K.
A2: The kernel needs a metadata struct called `vm_struct` to track each mmapped file (or a segment of a file). Since QDB uses mmap for most disk I/O, it mmaps all of the necessary column files whenever you run a query. This is done by so-called table readers, which are reused when possible. So each time a table reader opens the necessary partitions, it mmaps the column files, and the OS allocates a `vm_struct` to track each mapped file. Each `vm_struct` requires 128 bytes. If you have 1M column files in a table and you do a full scan, you'd need 128MB of RAM just for the metadata.
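For reference, checking and raising the limit on Linux looks roughly like this. The 262144 value is only an illustration (4× the common 65530 default), not a recommendation from this thread:

```
# Check the current limit (commonly 65530 on Linux)
sysctl vm.max_map_count

# Raise it for the running system (illustrative value: 4x the default)
sudo sysctl -w vm.max_map_count=262144

# Persist the setting across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
```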
Newskooler

11/04/2022, 12:15 PM
Okay, thanks! That’s very useful 😃
12:15 PM
I increased the 64K default to 4x that. I am running a machine with 8 CPUs and 32GB RAM, so I think that should be okay.
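(For the arithmetic, using the 128-bytes-per-map rule above: 4 × 65530 ≈ 262120 map entries, and at 128 bytes each that is roughly 32MB of kernel memory for the metadata, a tiny fraction of 32GB of RAM.)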
Andrey Pechkurov

11/04/2022, 1:36 PM
Yeah, that sounds ok
1:39 PM
Just to make sure that your fix is the correct one, did you check the server logs when you got this error? Did they contain error code 12?
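For anyone checking the same thing: assuming QuestDB runs in Docker under a container named questdb (an assumption, not confirmed in this thread), the server log can be searched like this:

```
# Per this thread, logs also go to the container's stdout, so search there
docker logs questdb 2>&1 | grep -i "could not mmap"
# error code 12 (ENOMEM) in the matching line points at the map count limit
```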
Newskooler

11/04/2022, 1:39 PM
I did not find any errors in the server logs 😕 which was confusing…
1:40 PM
I found other errors, so my grep worked, but not these ones.
Andrey Pechkurov

11/04/2022, 1:44 PM
That's really weird. It's very unlikely that we don't log an error message on such an error.
Newskooler

11/04/2022, 1:46 PM
I thought so too… I do have a log issue, though. I log to a file outside of Docker, but my Docker image also keeps growing, as logs seem to be “printed” there. Could that be related? That’s another problem which I need to “fix”.
Andrey Pechkurov

11/04/2022, 1:51 PM
I'm not that good with Docker, but if you mount a volume and write logs there, the image shouldn't be growing, no?
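If the growth actually comes from Docker's own capture of the container's stdout/stderr rather than the image layers (an assumption, not confirmed here), the json-file log driver can be capped. A sketch, with illustrative paths, ports, and sizes:

```
# Sketch: cap Docker's stdout/stderr log capture for the container.
# Host path, ports, and size limits here are illustrative only.
docker run -d --name questdb \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  -v /host/questdb:/var/lib/questdb \
  -p 9000:9000 -p 8812:8812 \
  questdb/questdb
```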
Newskooler

11/04/2022, 2:22 PM
That's what I thought, but it is. I will write and ask your colleague who seems to be the Docker expert 😃 I certainly don't know why that's happening.