users-public
  • b

    benrush

    11/03/2022, 3:47 AM
    Strongly recommend adding comments on CairoConfiguration, just like server.conf, so that we can understand how to configure the embedded QuestDB instance 🙂
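    For context, a minimal sketch of what an embedded setup looks like, based on the embedded Java API around 6.x (the data directory and table are placeholders, and exact class names and signatures may differ between versions). The usual pattern is to subclass DefaultCairoConfiguration and override the getters you want to change, which is exactly why comments on CairoConfiguration would help:

        import io.questdb.cairo.CairoEngine;
        import io.questdb.cairo.DefaultCairoConfiguration;
        import io.questdb.griffin.SqlCompiler;
        import io.questdb.griffin.SqlExecutionContextImpl;

        public class EmbeddedQuestDb {
            public static void main(String[] args) throws Exception {
                // configuration knobs live on CairoConfiguration; subclass and override getters to tune them
                var configuration = new DefaultCairoConfiguration("/tmp/questdb-data");
                try (CairoEngine engine = new CairoEngine(configuration);
                     SqlCompiler compiler = new SqlCompiler(engine)) {
                    var ctx = new SqlExecutionContextImpl(engine, 1);
                    compiler.compile("CREATE TABLE t (ts TIMESTAMP, x DOUBLE) TIMESTAMP(ts)", ctx);
                }
            }
        }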
  • w

    WCKD

    11/03/2022, 8:37 AM
    Hello, I'm facing a slowdown issue with QuestDB ingestion. Every minute I insert exactly 1,200,000 (1.2M) rows, using the InfluxDB line protocol method from Python. The table schema is: pair (symbol, capacity 512, CACHE, INDEX with index capacity 8_388_608), price (double), ts (timestamp). The data is ALWAYS ordered by timestamp before insert, so it's not O3. The problem I'm facing is that in the first minute, the ingestion measured on the java process takes only 20MB of RAM (over the 800MB idle footprint) and runs really fast. After 6 hours of ingestion, each one-minute batch takes 6GB of RAM (over the 800MB idle) and 90-100 seconds to complete. Can someone advise me on which settings I should tweak? I would like constant ingestion performance.
  • s

    Suri Zhang

    11/03/2022, 8:52 AM
    RE https://github.com/questdb/questdb/issues/595 (sqlancer for QuestDB): I have submitted 1 bug report found by sqlancer (and it is fixed 🎉)
    `work done`: test INSERT into tables, ALTER TABLE (only add or drop index), TRUNCATE TABLE; generate data for types - numbers (int & float), BOOLEAN and NULL
    `WIP`: test SELECT WHERE with operators (not, basic binary logical operators, binary arithmetic operators, binary comparators) --- running the current version of sqlancer, it seems to find some logical bugs. I need to investigate a bit to see if they are actual QuestDB bugs and then maybe file bug reports
    `future work`: test more SQL keywords (e.g. JOIN), test more datatypes (especially SYMBOL and time series), etc. @Andrey Pechkurov
  • n

    Newskooler

    11/03/2022, 11:37 AM
    Hi, I get the following error: Could not process line data. Writer is in error. Could not parse measurement. May be mangled due to partial parsing. Do you know how I can debug this and what the cause might be?
  • j

    Justin Bojarski

    11/03/2022, 1:42 PM
    Hi, I'm getting a bit of a quirky error I was hoping someone could help with. For some reason my QuestDB started throwing this error when I try to run a query on a particular table: Partition '2022-03-16.17' does not exist in table 'NFT_HISTORICAL_DATA' directory. Run [ALTER TABLE NFT_HISTORICAL_DATA DROP PARTITION LIST '2022-03-16.17'] to repair the table or restore the partition directory. But that date isn't valid, and when I try to drop that partition I get another error: 'YYYY-MM-DD' expected [errno=0]. Any ideas how I can force this drop and correct the problem? I'm also not sure how this cropped up in the first place, since it's clearly an invalid date, so that's another problem.
  • n

    Newskooler

    11/03/2022, 7:19 PM
    Hi, I am getting the following error when making a query.
    could not mmap [size=6720, offset=0, fd=65133, memUsed=16222810218, fileLen=8192]
    I remember there was a way to "fix" this in the config by increasing some of the default values. Can you please point me to where I should read more about this, and what in particular seems to be the problem here? 🤔
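    If memory serves, the usual culprit behind "could not mmap" with a large memUsed value is the kernel limit on memory-mapped areas, which the capacity planning docs suggest raising. A sketch, assuming Linux (the value is only an example):

        # check the current limit on memory-mapped areas
        sysctl vm.max_map_count
        # raise it for the running system (example value); persist it via /etc/sysctl.conf
        sudo sysctl -w vm.max_map_count=1048576
        # open-file limits can produce similar symptoms, so they are worth checking too
        ulimit -n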
  • s

    Shubham Jain

    11/04/2022, 8:04 AM
    Hi, I am testing out QuestDB for my financial data application via the official QuestDB AWS AMI. I keep getting mail in the file
    /var/spool/mail/root
    with the contents below -
    From root@ip-172-31-24-139.ap-south-1.compute.internal  Fri Nov  4 06:05:02 2022
    From: "(Cron Daemon)" <root@ip-172-31-24-139.ap-south-1.compute.internal>
    To: root@ip-172-31-24-139.ap-south-1.compute.internal
    Subject: Cron <root@ip-172-31-24-139> /etc/cron.daily/logrotate
    Content-Type: text/plain; charset=UTF-8
    Auto-Submitted: auto-generated
    Precedence: bulk
    error: Ignoring questdb because of bad file mode - must be 0644 or 0444.
    Any thoughts on this?
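    The logrotate message itself points at the fix: the questdb logrotate config file has permissions other than 0644/0444. A sketch, assuming the file shipped by the AMI lives under /etc/logrotate.d/ (the path is an assumption):

        # inspect and fix the permissions on the questdb logrotate config (path assumed)
        ls -l /etc/logrotate.d/questdb
        sudo chmod 0644 /etc/logrotate.d/questdb
        # dry-run logrotate to confirm the warning is gone
        sudo logrotate -d /etc/logrotate.conf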
  • j

    javier ramirez

    11/04/2022, 10:54 AM
    Hi. QuestDB was featured earlier today on the AWS Twitch show "Build on Open Source". We covered what QuestDB is, then did a demo of installing via Docker, ingesting a CSV, running some sample queries with interesting time-series capabilities, ingesting streaming data via the official Go client, and integrating with Grafana for near-realtime dashboards. The recording is available at https://www.twitch.tv/videos/1643077489, and if you want to replicate the demo yourself, instructions and source code can be found at https://github.com/javier/questdb-quickstart
  • m

    Michael

    11/04/2022, 7:17 PM
    Hi all, I get the following errors after including #include <questdb/ilp/line_sender.hpp> in Xcode: Undefined symbol: _SecRandomCopyBytes and Undefined symbol: _kSecRandomDefault. I have built the C++ interface according to https://github.com/questdb/c-questdb-client/blob/main/doc/BUILD.md#pre-requisites-and-dependencies. Running macOS 12.6 Monterey, installed questdb via brew install questdb. Any ideas? Thank you
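    _SecRandomCopyBytes and _kSecRandomDefault come from the macOS Security framework, so this looks like a missing link flag in the consuming target rather than a problem with the client build itself. A sketch of the likely fix, assuming a CMake consumer (target and library names are placeholders); in Xcode the equivalent is adding Security.framework under the target's linked frameworks:

        # CMakeLists.txt fragment - link the macOS Security framework next to the client library
        target_link_libraries(my_app
            PRIVATE
            questdb_client            # placeholder for however the c-questdb-client target is named locally
            "-framework Security"
            "-framework CoreFoundation")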
  • n

    Nicolas Hourcard

    11/04/2022, 10:19 PM
    welcome @Dave Kilian and @Michael
  • a

    Ahmad Abbasi

    11/06/2022, 1:39 AM
    Has anyone experienced an error like this:
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: #
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: # A fatal error has been detected by the Java Runtime Environment:
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: #
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: #  SIGSEGV (0xb) at pc=0x00007f7e4c86e460, pid=114765, tid=114954
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: #
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: # JRE version: OpenJDK Runtime Environment Corretto-11.0.17.8.1 (11.0.17+8) (build 11.0.17+8-LTS)
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: # Java VM: OpenJDK 64-Bit Server VM Corretto-11.0.17.8.1 (11.0.17+8-LTS, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: # Problematic frame:
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: # J 3786 c2 io.questdb.cairo.vm.api.MemoryCR.getLong(J)J io.questdb@6.5.4 (44 bytes) @ 0x00007f7e4c86e460 [0x00007f7e4c86e420+0x0000000000000040]
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: #
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: # No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: #
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: # An error report file with more information is saved as:
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: # //hs_err_pid114765.log
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: Could not load hsdis-amd64.so; library not loadable; PrintAssembly is disabled
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: #
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: # If you would like to submit a bug report, please visit:
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: #   <https://github.com/corretto/corretto-11/issues/>
    Nov 06 01:39:20 ip-172-31-42-190.us-east-2.compute.internal questdb[114765]: #
    Nov 06 01:39:21 ip-172-31-42-190.us-east-2.compute.internal systemd[1]: questdb.service: main process exited, code=killed, status=6/ABRT
    Nov 06 01:39:21 ip-172-31-42-190.us-east-2.compute.internal systemd[1]: Unit questdb.service entered failed state.
    Nov 06 01:39:21 ip-172-31-42-190.us-east-2.compute.internal systemd[1]: questdb.service failed.
  • a

    Ahmad Abbasi

    11/06/2022, 1:40 AM
    This is on version
    6.5.4
  • n

    Nicolas Hourcard

    11/07/2022, 8:46 AM
    welcome @Pavlos Bountagkidis @Randy Sun and @Weibo Lei
  • j

    Jone Qiang

    11/07/2022, 9:28 AM
    There is a question about SQL from Grafana. When I run SELECT $__time(timestamp), name FROM stra WHERE ("bk" = 'test') AND $__timeFilter(timestamp) SAMPLE BY $__interval in Grafana, Grafana converts $__interval to 200ms, for example, which is not supported in QuestDB - it should be 200T. How can we fix this issue? Could you give me some advice, thanks.
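    For reference, SAMPLE BY takes QuestDB's own duration suffixes (T for milliseconds, plus s, m, h and d), so the Grafana-generated "200ms" has to end up as "200T" in the final SQL. A sketch of the target syntax, using the table from the message and count() as a stand-in aggregate:

        -- 200T = 200 milliseconds; s, m, h and d are the other common units
        SELECT timestamp, count()
        FROM stra
        WHERE bk = 'test'
        SAMPLE BY 200T;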
  • m

    Michal Stovicek

    11/07/2022, 1:07 PM
    Hi guys, I am trying to drop partitions with ALTER TABLE 'mytable' DROP PARTITION WHERE timestamp < to_timestamp('2022-07-01:00:00:00', 'yyyy-MM-dd:HH:mm:ss'); and getting the following error:
    async command/event queue buffer overflow
    Can you suggest what to do about it? Thanks! (Using version 6.5.3)
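    That error usually means the WHERE clause resolved to more partition-drop commands than the writer's async command queue could hold at once. Two things that may help, both hedged: raise the queue capacity in server.conf (key name as I remember it from the configuration reference, worth double-checking for 6.5.3), and/or split the drop into smaller date ranges so fewer partitions are queued per statement.

        # server.conf - enlarge the async writer command queue (values are typically powers of two)
        cairo.writer.command.queue.capacity=128

        -- or drop in smaller chunks instead of one large range
        ALTER TABLE 'mytable' DROP PARTITION WHERE timestamp < to_timestamp('2022-04-01:00:00:00', 'yyyy-MM-dd:HH:mm:ss');
        ALTER TABLE 'mytable' DROP PARTITION WHERE timestamp < to_timestamp('2022-07-01:00:00:00', 'yyyy-MM-dd:HH:mm:ss');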
  • p

    Pei

    11/08/2022, 9:06 AM
    Hi @arun T welcome!
  • h

    Hà Hữu

    11/08/2022, 10:55 AM
    Hi QuestDB Team, I'm using Kafka Connect to sync data from Debezium to QuestDB. I have one question: does the schema registry make the connector slow? I'm always getting lagging messages in the QuestDB ILP connector, using 3 brokers with 100 partitions. Anyway, thanks for your help!
  • w

    Weibo Lei

    11/09/2022, 2:43 AM
    Hi QuestDB Team, there are 37,317,008 rows in my table. When I query the table, the query speed looks a bit slow.
    SELECT * FROM kline_item WHERE stock_id = '603501' and market_type = '1' and k_type = '6' and candle_mode = '2' order by market_date desc limit 500;
    The first query may take more than 1 second:
    500 rows in 4.35s
    Execute: 4.28s  Network: 68.74ms  Total: 4.35s
    It normally takes around 600ms on the next couple of runs of the same SQL. The schema of the table is:
    CREATE TABLE kline_item(
        market_date TIMESTAMP, 
        update_date TIMESTAMP,
        market_date_int int,
        update_date_int int,
        market_type SYMBOL CAPACITY 4 NOCACHE INDEX,
        stock_id SYMBOL CAPACITY 28000 NOCACHE INDEX,
        k_type SYMBOL CAPACITY 5 NOCACHE INDEX,
        highest_price double,
        lowest_price double,
        open_price double,
        close_price double,
        trade_val double,
        trade_amount long,
        change_amount double,
        change_pct double,
        exchange_ratio double,
        candle_mode SYMBOL CAPACITY 2 NOCACHE INDEX,
        up_or_down_limt byte,
        ma60 double,
        ma120 double,
        ma250 double,
        state byte,
        create_time TIMESTAMP,
        update_time TIMESTAMP
    ), INDEX(stock_id) TIMESTAMP(market_date) 
    PARTITION BY DAY;
    PS: I indexed the fields 'market_type', 'stock_id', 'k_type' and 'candle_mode' because I want to make the query faster. How can I speed up my query?
  • j

    Jaromir Hamala

    11/09/2022, 1:10 PM
    Hello everyone 👋, some time ago I ran a little survey about streaming technologies used together with QuestDB. The survey shows Apache Kafka is by far the most popular tech. That's not really surprising, is it? QuestDB has had a guide for Kafka -> QuestDB ingestion for a long time. The guide was based on the generic Kafka Connect JDBC Sink and relied on QuestDB's Postgres compatibility. It worked, but it had some drawbacks: performance was not great due to the combination of JDBC and parallelism always set to 1, and schema management was somewhat complicated - it required events in topics to always carry an explicit schema, etc. I am happy to announce we released the official QuestDB Connector for Kafka! The connector uses the Influx Line Protocol (ILP) under the hood. ILP is the way to achieve the massive ingestion speeds QuestDB is famous for - it can easily ingest hundreds of thousands of rows per second. 🔥 ILP also simplifies schema management, especially during development. The connector is documented at: https://questdb.io/docs/third-party-tools/kafka/questdb-kafka/ If you are using both Kafka and QuestDB, give it a try! As always, feedback is very much appreciated - either here, or if you don't feel like sharing your Kafka + QuestDB story publicly, feel free to send me a private message.
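    A minimal sketch of what a sink configuration might look like, under the assumption that the property names match the linked connector docs (the host, topic and table values are placeholders):

        # connect-questdb-sink.properties (illustrative values)
        name=questdb-sink
        connector.class=io.questdb.kafka.QuestDBSinkConnector
        host=localhost:9009
        topics=example-topic
        table=example_table
        key.converter=org.apache.kafka.connect.storage.StringConverter
        value.converter=org.apache.kafka.connect.json.JsonConverter
        value.converter.schemas.enable=false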
  • w

    WCKD

    11/09/2022, 3:29 PM
    Hello, is there a way to retrieve a QuestDB query result with the timestamp in Unix format instead of the default ISO format?
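    One way, if I'm not mistaken, is to cast the timestamp to a long, which yields microseconds since the Unix epoch (divide by 1000 for milliseconds); table and column names below are placeholders:

        -- epoch microseconds and milliseconds from a TIMESTAMP column (placeholder names)
        SELECT cast(ts AS long) AS ts_micros,
               cast(ts AS long) / 1000 AS ts_millis
        FROM my_table;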
  • w

    Weibo Lei

    11/09/2022, 3:34 PM
    Hello, is there a way to alter the table's column data type from symbol to int or double? I checked out the docs and found nothing.
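    As far as I know there is no ALTER COLUMN TYPE, so the usual workaround is add-copy-drop-rename using UPDATE with a cast (UPDATE has been available since 6.4). A sketch with placeholder names; the exact cast path from SYMBOL may vary by version:

        -- placeholder table/column names; assumes the symbol values are numeric text
        ALTER TABLE my_table ADD COLUMN my_col_int INT;
        -- depending on the version, the cast may need to go through STRING first
        UPDATE my_table SET my_col_int = cast(my_col AS int);
        ALTER TABLE my_table DROP COLUMN my_col;
        ALTER TABLE my_table RENAME COLUMN my_col_int TO my_col;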
  • n

    Nicolas Hourcard

    11/09/2022, 4:40 PM
    welcome @Zev R
  • b

    Benjamin Böck

    11/10/2022, 9:54 AM
    Hi QuestDB Team, I ran into a little trouble with my database 😞 After a power loss on the server I can only access the first 1/3 of my data. Some information:
    • QuestDB running on Windows
    • we use partitioned data
    • some tables are corrupt (timestamps binary 00...)
    • the main data table files seem OK (checked the content with a hex viewer)
    • all column file sizes in the partition folders match in size (seem OK)
    • the data range check is performed with "select * from tab limit -10"; here I see only the first part of the data, based on timestamp
    At this point I'm reaching out for help. In the past Tim Borowski was part of the company and fixed a problem with broken data; he wrote a script to remove bad data from the binary files. Now my question: do you have any hints for fixing this kind of problem? If it makes sense, I'm able to edit the binaries. I'm happy for any suggestions... Benjamin
  • h

    Holger

    11/10/2022, 3:19 PM
    Hi,
  • h

    Holger

    11/10/2022, 3:20 PM
    I'm getting the following error using the questdb docker image:
  • h

    Holger

    11/10/2022, 3:20 PM
    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    #  SIGSEGV (0xb) at pc=0x00007fa8ba5d1460, pid=1, tid=45
    #
    # JRE version: OpenJDK Runtime Environment Corretto-17.0.3.6.1 (17.0.3+6) (build 17.0.3+6-LTS)
    # Java VM: OpenJDK 64-Bit Server VM Corretto-17.0.3.6.1 (17.0.3+6-LTS, mixed mode, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
    # Problematic frame:
    # C  [libc.so.6+0x161460]
    #
    # No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
    #
    # An error report file with more information is saved as:
    # /root/.questdb/db/hs_err_pid+1.log
    #
    # If you would like to submit a bug report, please visit:
    #   https://github.com/corretto/corretto-17/issues/
    #
    # The crash happened outside the Java Virtual Machine in native code.
    # See problematic frame for where to report the bug.
    #
  • h

    Holger

    11/10/2022, 3:20 PM
    Any ideas where I should look?
  • c

    cl

    11/10/2022, 5:45 PM
    Hi, in the ILP Java client API, how can I check whether the socket is closed before I flush data? I would also like to know how many records are in memory before I flush. Is there an API I can use for that?
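    As far as I can tell, the Java ILP Sender does not expose the socket state or a buffered-row count directly, so a common workaround is to count rows yourself and flush in fixed-size batches; a sketch under that assumption (address, table and column names are placeholders, API shape as of the 6.x client):

        import io.questdb.client.Sender;

        public class IlpBatchingExample {
            public static void main(String[] args) {
                try (Sender sender = Sender.builder().address("localhost:9009").build()) {
                    int buffered = 0;                     // the client does not report this, so track it manually
                    for (int i = 0; i < 10_000; i++) {
                        sender.table("trades")
                              .symbol("pair", "BTC-USD")
                              .doubleColumn("price", 100.0 + i)
                              .atNow();
                        if (++buffered >= 1_000) {        // flush in fixed-size batches
                            sender.flush();               // a broken connection surfaces here as an exception
                            buffered = 0;
                        }
                    }
                }
            }
        }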
  • j

    Jan Pojer

    11/10/2022, 10:12 PM
    Hello everyone, we are seeing a strange behaviour when detaching partitions. The partition appears detached - the *.detached folder is present, but the original partition folder also remains in place. Once the pod is restarted, the table purges the original partition folder as a 'non-attached' partition. We are running embedded QuestDB. Is there a way to either prevent this from happening (so the original partition gets removed as promised by the docs 🙂) or to trigger the purge of non-attached partitions from code?
  • n

    Nicolas Hourcard

    11/10/2022, 10:30 PM
    welcome @George UA