I have a question regarding throughput in QuestDB. We are writing data from 5 sources (each at approx. 2k rows/s). We want to add a new source that can burst ~100k rows at once (its rate is limited by nothing other than QuestDB), but we only achieve a throughput of ~1k rows/s. Each source is writing to its own table. Here is our configuration:
and server configuration:
16 CPUs and 64 GB RAM.
We are using ILP for writing. There is only light traffic on the read side.
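For context, a minimal sketch of what an ILP (InfluxDB Line Protocol) writer over TCP looks like, using only the Python standard library. The table and column names are hypothetical, and the default QuestDB ILP port 9009 is assumed; the point is that batching many rows into a single socket send usually matters a lot for throughput, compared to one send per row.

```python
import socket

def ilp_line(table, symbols, columns, ts_nanos):
    """Format one ILP row: table,sym1=v1 col1=1.5 <nanos>\n
    (float columns shown; other types need type suffixes,
    e.g. 'i' for integers)."""
    sym = ",".join(f"{k}={v}" for k, v in symbols.items())
    cols = ",".join(f"{k}={v}" for k, v in columns.items())
    head = f"{table},{sym}" if sym else table
    return f"{head} {cols} {ts_nanos}\n"

def send_batch(host, port, lines):
    # One large send per batch; many tiny per-row sends
    # tend to throttle ingestion badly.
    payload = "".join(lines).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)

# Example (hypothetical table/column names):
line = ilp_line("sensor_data", {"source": "s6"},
                {"value": 42.5}, 1664868000000000000)
# line == 'sensor_data,source=s6 value=42.5 1664868000000000000\n'
```

In practice one would keep the connection open and flush batches of a few thousand lines at a time rather than reconnecting per batch.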
10/04/2022, 9:29 AM
Hi @Michal Zeman, let us look into this.
10/04/2022, 9:30 AM
Hi Michal, is the table partitioned? Are timestamps assigned by the clients or by the server? Are there any out-of-order inserts? How big (in bytes, roughly) is a single row?
10/04/2022, 9:35 AM
• 3 sources have client timestamps, and 2 of them have server timestamps. For the 6th we tried both ways (client timestamps, which we prefer, but we also tried server timestamps).
• The table is partitioned by day; we also tried partitioning by hour (since we expect ~millions of rows per hour).
• There should be no out-of-order insertion (99.9% sure; we have already checked it many times).
• Each row is a few kB.
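Rough arithmetic on the numbers above (assuming ~2 kB per row as "a few kB") shows the bandwidth gap between the target and observed rates, and why the hardware path is worth checking:

```python
# Assumptions: "a few kB" per row taken as ~2 kB;
# target and observed rates from the thread above.
ROW_BYTES = 2_000
TARGET_ROWS_PER_S = 100_000
OBSERVED_ROWS_PER_S = 1_000

target_mb_s = ROW_BYTES * TARGET_ROWS_PER_S / 1e6    # 200.0 MB/s
observed_mb_s = ROW_BYTES * OBSERVED_ROWS_PER_S / 1e6  # 2.0 MB/s

print(f"target:   {target_mb_s:.0f} MB/s")
print(f"observed: {observed_mb_s:.0f} MB/s")
```

At ~200 MB/s sustained, a 1 Gbit/s network link (~125 MB/s) or a modest disk could plausibly be the ceiling, which is why the network/disk question below matters.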
10/04/2022, 10:04 AM
Hi Michal, have you checked network and disk I/O rates? It would be great to confirm that network and disk aren't the bottleneck.
10/04/2022, 10:15 AM
we will try
10/04/2022, 10:38 AM
20 io/s: does that mean 20 disk I/O operations per second? Are those reads or writes? Could you check the throughput metrics too?
20 operations per second doesn't sound normal considering that you're writing >1,000 rows/s.
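One way to disambiguate "20 io/s" (operations vs. bytes) on Linux is to sample `/proc/diskstats` directly. A sketch of a parser for one line of that file, with field positions taken from the kernel's iostats documentation; the sample values below are purely illustrative:

```python
def parse_diskstats_line(line):
    """Extract write stats from one /proc/diskstats line.
    Per Linux Documentation/admin-guide/iostats.rst, after
    major/minor/device-name: field 4 = reads completed,
    field 8 = writes completed, field 10 = sectors written
    (sectors are 512 bytes regardless of device block size)."""
    f = line.split()
    return {
        "device": f[2],
        "writes_completed": int(f[7]),
        "bytes_written": int(f[9]) * 512,
    }

# Illustrative sample line (not real measurements):
sample = ("259 0 nvme0n1 120000 30 4800000 90000 "
          "500000 700 64000000 210000 0 80000 300000")
stats = parse_diskstats_line(sample)
```

Sampling twice, one second apart, and diffing `writes_completed` and `bytes_written` gives write IOPS and MB/s respectively, which can be compared against the dashboard numbers.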
10/04/2022, 12:37 PM
we are checking in the dashboards where the problem is