#users-market-data
Jiri Pokorny

10/21/2022, 11:41 AM
Hi everyone, hopefully someone will be able to help me here. 🙂 I am running a local docker instance of QuestDB and I am trying to send quite a large message via InfluxDB Line Protocol (a few megabytes in a single field). QuestDB was dropping my connection, so I started to tune some parameters and used something like:
-e QDB_LINE_TCP_MSG_BUFFER_SIZE=5000000 -e QDB_LINE_TCP_MAX_MEASUREMENT_SIZE=5000000
However, now the container just starts and exits without any explicit error, maybe it runs out of memory? Any advice on handling large messages, how to tune these kinds of parameters, and how they add up to the total memory consumption of the QuestDB process?
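For context, a minimal sketch of how these settings might be passed on the command line, assuming the stock questdb/questdb image and default port mappings (the container name is only illustrative):

docker run -d --name questdb \
  -p 9000:9000 -p 9009:9009 \
  -e QDB_LINE_TCP_MSG_BUFFER_SIZE=5000000 \
  -e QDB_LINE_TCP_MAX_MEASUREMENT_SIZE=5000000 \
  questdb/questdb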
Jaromir Hamala

10/21/2022, 12:01 PM
Hello Jiri, with these settings does the container die even before you send a massive message, or does it die the moment you send the message?
12:03 PM
if you are on a recent version then you can see a rough memory allocation distribution by running this SQL:
select memory_tag, bytes / (1024 * 1024) from memory_metrics() order by bytes desc;
bear in mind this is just allocated virtual memory, not necessarily backed by physical memory.
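The same query can also be sent over the REST API instead of the web console; a sketch assuming the default HTTP port 9000:

curl -G "http://localhost:9000/exec" \
  --data-urlencode "query=select memory_tag, bytes / (1024 * 1024) from memory_metrics() order by bytes desc;"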
Jiri Pokorny

10/21/2022, 12:28 PM
With these settings it dies on container startup, last lines:
2022-10-21T12:25:27.220044Z I i.q.c.l.t.LineTcpReceiver using default context
2022-10-21T12:25:27.220246Z A tcp-line-server listening on 0.0.0.0:9009 [fd=93 backlog=256]
2022-10-21T12:25:27.187851Z A http-min-server listening on 0.0.0.0:9003 [fd=88 backlog=4]
And then it just ends. Exit code is 137 suggesting it was somehow killed.
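A quick way to confirm whether the kernel OOM killer was responsible, assuming the container is named questdb:

# prints true if the kernel OOM killer terminated the container, plus the exit code
docker inspect --format '{{.State.OOMKilled}} (exit code {{.State.ExitCode}})' questdb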
Jaromir Hamala

10/21/2022, 12:29 PM
yeah, exit code 137 is the OOM killer
12:30 PM
let me check the units
12:49 PM
ok, it seems QDB_LINE_TCP_MAX_MEASUREMENT_SIZE is the culprit here. Memory consumption is
SMALL_CONSTANT * QDB_LINE_TCP_MAX_MEASUREMENT_SIZE * QDB_LINE_TCP_WRITER_QUEUE_CAPACITY
and QDB_LINE_TCP_WRITER_QUEUE_CAPACITY is 128 by default. So 5M * 128 * constant -> we are already in GBs territory.
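Back-of-the-envelope arithmetic for the numbers above (the exact constant is an internal detail of QuestDB, so this only shows the scale):

# 5 MB per measurement * 128 writer-queue slots
echo $(( 5000000 * 128 ))   # 640000000 bytes, i.e. ~640 MB before the constant factor
# even a small constant multiplier pushes this into GBs of off-heap buffers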
Jiri Pokorny

10/21/2022, 12:51 PM
ah, ok, I thought that it would be something like that...
Jaromir Hamala

10/21/2022, 12:51 PM
QDB_LINE_TCP_WRITER_QUEUE_CAPACITY is a queue between network I/O and the writer jobs. if you decrease it then memory consumption will go down, but ingestion performance might suffer too, as it allows fewer in-flight measurements between network I/O and the (disk) writer
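For completeness, a sketch of lowering the queue capacity in the same env-var style as above (64 is only an example value; the default is 128):

-e QDB_LINE_TCP_WRITER_QUEUE_CAPACITY=64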
Jiri Pokorny

10/21/2022, 12:52 PM
however, it still seems that the overall memory allocation does not scale linearly with the buffer size. I can run this:
-e QDB_LINE_TCP_MSG_BUFFER_SIZE=10m -e QDB_LINE_TCP_MAX_MEASUREMENT_SIZE=2m -m 1.5g
but this still crashes:
-e QDB_LINE_TCP_MSG_BUFFER_SIZE=10m -e QDB_LINE_TCP_MAX_MEASUREMENT_SIZE=3m -m 10g
12:53 PM
the -m docker parameter limits the memory...
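One way to compare the container's actual memory consumption against the -m limit, again assuming the container is named questdb:

# one-shot snapshot of memory usage vs. the configured limit
docker stats --no-stream questdb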
Jaromir Hamala

10/21/2022, 1:05 PM
with QDB_LINE_TCP_MSG_BUFFER_SIZE=10m and QDB_LINE_TCP_MAX_MEASUREMENT_SIZE=2m it allocates 1426 MB of memory. with 3m it allocates 2450 MB, and with 4m it also allocates 2450 MB. I assume there is also some rounding to a power of 2 involved. with 8m it’s 4498 MB. with 16m it’s 8753 MB (I also had to increase QDB_LINE_TCP_MSG_BUFFER_SIZE to 16m).
1:05 PM
so it looks fairly linear to me. there might be some steps due to pow-of-2 rounding, but the overall shape is linear.
1:06 PM
fwiw: with 32m it allocates 17314 MB
1:09 PM
and one last bit: with 32m and QDB_LINE_TCP_WRITER_QUEUE_CAPACITY=64 it allocated +- 9 GB. so that’s roughly half compared to the default queue capacity (=128)
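Summarising the figures above (QDB_LINE_TCP_MSG_BUFFER_SIZE at least as large as the measurement size, writer queue capacity 128 unless noted):

  max measurement size    allocated
  2m                      1426 MB
  3m                      2450 MB
  4m                      2450 MB
  8m                      4498 MB
  16m                     8753 MB
  32m                     17314 MB
  32m (queue=64)          ~9 GB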
Jiri Pokorny

10/21/2022, 1:10 PM
Hmm, I see, maybe there's something else that constrains my container size... as I am still unable to run it with >2m.
1:10 PM
Thanks a lot. 🙂 One more question: do you think I am abusing QuestDB by storing multi-megabyte messages, or is this a normal use case?
Jaromir Hamala

10/21/2022, 1:11 PM
this is just off-heap memory allocated by QuestDB. there is more to it: the JVM will also allocate a couple of hundred MB of RAM.
1:11 PM
multi-megabyte rows are unusual. are these big strings/symbols? or millions of fixed-size columns?
Jiri Pokorny

10/21/2022, 1:14 PM
Row has just a few columns and one of them might occasionally become quite large. It is a string containing a document.
Jaromir Hamala

10/21/2022, 1:17 PM
when it’s just occasional then it’s OK. questdb is not very disk-efficient at storing massive strings. strings are stored UTF-16 encoded, so depending on the content they might use up to twice as much space as you would expect (e.g. a 3 MB ASCII document would take roughly 6 MB on disk). if it’s rare then it’s probably not a big problem.
Jiri Pokorny

10/21/2022, 1:24 PM
I see, that's good to know...
1:24 PM
ok, thanks for your help! 🙂
Jaromir Hamala

10/21/2022, 1:27 PM
you are very welcome. happy questing! 🙂