• Baha

    9 months ago
    Hi everyone, I need to insert millions of records from a table A into a table B. The designated timestamp of A is the ingestion time, while the designated timestamp of B is another timestamp column. I've tried many combinations, but it always takes more than 150 seconds to do the job (~1M records). Here are some examples:
    CREATE TABLE B AS (
    SELECT date, b, c, d
    FROM A
    LATEST BY c ORDER BY date) TIMESTAMP(date)
    And here, where table A is already created:
    INSERT INTO B
    SELECT date, b, c, d
    FROM A
    LATEST BY c ORDER BY date
    Is this expected behavior? For that use case, would it be preferable to re-populate table B using the CSV bulk upload or the InfluxDB line protocol?
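    A hedged sketch of one alternative worth trying: create B up front, partitioned by day on its own designated timestamp, then bulk-insert the LATEST BY result pre-sorted on that timestamp. The column types below are assumptions, and the batch size is a tuning knob rather than a recommendation (the batch keyword needs a server version that supports it):
    -- assumed schema; match the types to table A
    CREATE TABLE B (date TIMESTAMP, b DOUBLE, c SYMBOL, d DOUBLE) TIMESTAMP(date) PARTITION BY DAY;
    -- batched bulk insert of the deduplicated rows
    INSERT batch 100000 INTO B
    SELECT date, b, c, d
    FROM A
    LATEST BY c
    ORDER BY date;
    Sorting ~1M rows on a non-ingestion timestamp is where much of the time tends to go, so partitioning B and keeping the ORDER BY are the two levers to experiment with.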
    Alex Pelagenko
    8 replies
  • Kendrick

    9 months ago
    hi, instead of using 127.0.0.1 in Kafka Connect's connection.url, I would like to use another server URL, e.g. GCP's internal URL 10.0.0.0. Would it be sufficient to just swap out 127.0.0.1 for 10.0.0.0? It didn't seem to work when I tried it:
    name=local-jdbc-sink
    connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
    connection.url=jdbc:postgresql://10.0.0.0:8812/qdb?useSSL=false  <--- like so
    connection.user=admin
    connection.password=quest
    
    topics=example-topic
    insert.mode=insert
    dialect.name=PostgreSqlDatabaseDialect
    pk.mode=none
    auto.create=true
    Edit: I verified that Kafka Connect works, but there seem to be errors in the QuestDB logs. How do I resolve this? I'm using QuestDB 6.0.9 on Linux and JDBC connector 10.2.5.
    2021-12-10T08:50:45.826638Z I i.q.c.p.PGConnectionContext parse [fd=17, q=SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, c.relname AS TABLE_NAME,  CASE n.nspname ~ '^pg_' OR n.nspname = 'information_schema'  WHEN true THEN CASE  WHEN n.nspname = 'pg_catalog' OR n.nspname = 'information_schema' THEN CASE c.relkind   WHEN 'r' THEN 'SYSTEM TABLE'   WHEN 'v' THEN 'SYSTEM VIEW'   WHEN 'i' THEN 'SYSTEM INDEX'   ELSE NULL   END  WHEN n.nspname = 'pg_toast' THEN CASE c.relkind   WHEN 'r' THEN 'SYSTEM TOAST TABLE'   WHEN 'i' THEN 'SYSTEM TOAST INDEX'   ELSE NULL   END  ELSE CASE c.relkind   WHEN 'r' THEN 'TEMPORARY TABLE'   WHEN 'p' THEN 'TEMPORARY TABLE'   WHEN 'i' THEN 'TEMPORARY INDEX'   WHEN 'S' THEN 'TEMPORARY SEQUENCE'   WHEN 'v' THEN 'TEMPORARY VIEW'   ELSE NULL   END  END  WHEN false THEN CASE c.relkind  WHEN 'r' THEN 'TABLE'  WHEN 'p' THEN 'PARTITIONED TABLE'  WHEN 'i' THEN 'INDEX'  WHEN 'S' THEN 'SEQUENCE'  WHEN 'v' THEN 'VIEW'  WHEN 'c' THEN 'TYPE'  WHEN 'f' THEN 'FOREIGN TABLE'  WHEN 'm' THEN 'MATERIALIZED VIEW'  ELSE NULL  END  ELSE NULL  END  AS TABLE_TYPE, d.description AS REMARKS,  '' as TYPE_CAT, '' as TYPE_SCHEM, '' as TYPE_NAME, '' AS SELF_REFERENCING_COL_NAME, '' AS REF_GENERATION  FROM pg_catalog.pg_namespace n, pg_catalog.pg_class c  LEFT JOIN pg_catalog.pg_description d ON (c.oid = d.objoid AND d.objsubid = 0)  LEFT JOIN pg_catalog.pg_class dc ON (d.classoid=dc.oid AND dc.relname='pg_class')  LEFT JOIN pg_catalog.pg_namespace dn ON (dn.oid=dc.relnamespace AND dn.nspname='pg_catalog')  WHERE c.relnamespace = n.oid  AND c.relname LIKE E'processed_swap_txs' AND (false  OR ( c.relkind = 'r' AND n.nspname !~ '^pg_' AND n.nspname <> 'information_schema' ) )  ORDER BY TABLE_TYPE,TABLE_SCHEM,TABLE_NAME ]
    2021-12-10T08:50:45.842027Z I i.q.c.p.PGConnectionContext parse [fd=17, q=SELECT        result.TABLE_CAT,        result.TABLE_SCHEM,        result.TABLE_NAME,        result.COLUMN_NAME,        result.KEY_SEQ,        result.PK_NAME FROM      (SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM,   ct.relname AS TABLE_NAME, a.attname AS COLUMN_NAME,   (information_schema._pg_expandarray(i.indkey)).n AS KEY_SEQ, ci.relname AS PK_NAME,   information_schema._pg_expandarray(i.indkey) AS KEYS, a.attnum AS A_ATTNUM FROM pg_catalog.pg_class ct   JOIN pg_catalog.pg_attribute a ON (ct.oid = a.attrelid)   JOIN pg_catalog.pg_namespace n ON (ct.relnamespace = n.oid)   JOIN pg_catalog.pg_index i ON ( a.attrelid = i.indrelid)   JOIN pg_catalog.pg_class ci ON (ci.oid = i.indexrelid) WHERE true  AND ct.relname = E'processed_swap_txs' AND i.indisprimary  ) result where  result.A_ATTNUM = (result.KEYS).x  ORDER BY result.table_name, result.pk_name, result.key_seq]
    2021-12-10T08:50:45.846389Z I i.q.g.SqlCompiler plan [q=`select-choose TABLE_CAT, TABLE_SCHEM, TABLE_NAME, COLUMN_NAME, KEY_SEQ, PK_NAME from (select-virtual [NULL TABLE_CAT, TABLE_SCHEM, TABLE_NAME, COLUMN_NAME, _pg_expandarray . n KEY_SEQ, PK_NAME] NULL TABLE_CAT, TABLE_SCHEM, TABLE_NAME, COLUMN_NAME, _pg_expandarray . n KEY_SEQ, PK_NAME, _pg_expandarray KEYS, A_ATTNUM from (select-choose [n.nspname TABLE_SCHEM, ct.relname TABLE_NAME, a.attname COLUMN_NAME, _pg_expandarray, ci.relname PK_NAME] n.nspname TABLE_SCHEM, ct.relname TABLE_NAME, a.attname COLUMN_NAME, _pg_expandarray, ci.relname PK_NAME, _pg_expandarray KEYS, a.attnum A_ATTNUM from (select [relname, oid, relnamespace] from pg_catalog.pg_class() ct join select [attname, attrelid, attnum] from pg_catalog.pg_attribute() a on a.attrelid = ct.oid join select [nspname, oid] from pg_catalog.pg_namespace() n on n.oid = ct.relnamespace join (select [indrelid, indisprimary, indexrelid] from pg_catalog.pg_index() i where indisprimary) i on i.indrelid = a.attrelid join select [relname, oid] from pg_catalog.pg_class() ci on ci.oid = i.indexrelid cross join select-cursor [information_schema._pg_expandarray(i.indkey) _pg_expandarray] information_schema._pg_expandarray(i.indkey) _pg_expandarray from (information_schema._pg_expandarray(i.indkey)) _xQdbA0 post-join-where a.attnum = _pg_expandarray . x where relname = E'processed_swap_txs' const-where true) ct) ct) result order by TABLE_NAME, PK_NAME, KEY_SEQ`, fd=17]
    2021-12-10T08:50:45.915674Z E server-main unhandled error [job=io.questdb.cutlass.pgwire.PGWireServer$1@cd1d761, ex=
    java.lang.AssertionError
            at io.questdb.cairo.vm.MemoryCARWImpl.checkAndExtend(MemoryCARWImpl.java:155)
            at io.questdb.cairo.vm.MemoryCARWImpl.appendAddressFor(MemoryCARWImpl.java:110)
            at io.questdb.cairo.vm.api.MemoryCARW.putStr(MemoryCARW.java:216)
            at io.questdb.cairo.vm.api.MemoryCARW.putStr(MemoryCARW.java:209)
            at io.questdb.cairo.RecordChain.putStr(RecordChain.java:231)
            at io.questdb.cairo.sink/0x000000010025a440.copy(Unknown Source)
            at io.questdb.cairo.RecordChain.put(RecordChain.java:156)
            at io.questdb.griffin.engine.join.HashOuterJoinRecordCursorFactory.buildMap(HashOuterJoinRecordCursorFactory.java:85)
            at io.questdb.griffin.engine.join.HashJoinRecordCursorFactory.buildMapOfSlaveRecords(HashJoinRecordCursorFactory.java:99)
            at io.questdb.griffin.engine.join.HashJoinRecordCursorFactory.getCursor(HashJoinRecordCursorFactory.java:84)
            at io.questdb.griffin.engine.join.HashJoinRecordCursorFactory.getCursor(HashJoinRecordCursorFactory.java:89)
            at io.questdb.griffin.engine.join.HashJoinRecordCursorFactory.getCursor(HashJoinRecordCursorFactory.java:89)
            at io.questdb.griffin.engine.join.HashJoinRecordCursorFactory.getCursor(HashJoinRecordCursorFactory.java:89)
            at io.questdb.griffin.engine.join.CrossJoinRecordCursorFactory.getCursor(CrossJoinRecordCursorFactory.java:61)
            at io.questdb.griffin.engine.table.FilteredRecordCursorFactory.getCursor(FilteredRecordCursorFactory.java:55)
            at io.questdb.griffin.engine.table.SelectedRecordCursorFactory.getCursor(SelectedRecordCursorFactory.java:58)
            at io.questdb.griffin.engine.table.VirtualRecordCursorFactory.getCursor(VirtualRecordCursorFactory.java:73)
            at io.questdb.griffin.engine.orderby.SortedRecordCursorFactory.getCursor(SortedRecordCursorFactory.java:73)
            at io.questdb.cutlass.pgwire.PGConnectionContext.setupFactoryAndCursor(PGConnectionContext.java:2219)
            at io.questdb.cutlass.pgwire.PGConnectionContext.processExecute(PGConnectionContext.java:1836)
            at io.questdb.cutlass.pgwire.PGConnectionContext.processExec(PGConnectionContext.java:1829)
            at io.questdb.cutlass.pgwire.PGConnectionContext.parse(PGConnectionContext.java:1411)
            at io.questdb.cutlass.pgwire.PGConnectionContext.handleClientOperation(PGConnectionContext.java:371)
            at io.questdb.cutlass.pgwire.PGJobContext.handleClientOperation(PGJobContext.java:69)
            at io.questdb.cutlass.pgwire.PGWireServer$1.lambda$$0(PGWireServer.java:76)
            at io.questdb.network.AbstractIODispatcher.processIOQueue(AbstractIODispatcher.java:162)
            at io.questdb.cutlass.pgwire.PGWireServer$1.run(PGWireServer.java:91)
            at io.questdb.mp.Worker.run(Worker.java:112)
    ]
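    Swapping the IP in connection.url is the right idea in principle; it is worth double-checking that QuestDB's Postgres wire protocol is bound to an interface the Connect worker can reach. A minimal server.conf sketch (these are standard keys, but the bind address is an assumption for this deployment):
    # conf/server.conf -- listen on all interfaces rather than loopback only
    pg.enabled=true
    pg.net.bind.to=0.0.0.0:8812
    The AssertionError above is thrown while the server executes the JDBC driver's metadata query, so it looks like a server-side bug rather than a connectivity issue; trying a release newer than 6.0.9 would be a reasonable next step.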
    Vlad +1
    7 replies
  • John

    9 months ago
    Hi, all you clever guys. If I want to create an ILP file from, say, a CSV, is there an 'easy' way? I'm assuming not. If not, what is the standard way? I come from the old school, before 'continuous feeds' existed. Some of the references I have followed were a bit daunting. Nevertheless, I do wish to learn this process. Thanks.
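    For a one-off conversion, a short script is the usual route. A minimal sketch in Python, assuming a CSV with columns ts (ISO-8601 timestamp, naive UTC), sensor, and value; those column names and the target table 'readings' are all hypothetical:
    import csv
    from datetime import datetime, timezone

    # Each ILP line has the shape: table,tag1=v1 field1=v2 timestamp_in_nanos
    with open("input.csv") as src, open("out.ilp", "w") as dst:
        for row in csv.DictReader(src):
            ts = datetime.fromisoformat(row["ts"]).replace(tzinfo=timezone.utc)
            # exact nanosecond epoch: whole seconds plus the microsecond part
            ns = int(ts.timestamp()) * 1_000_000_000 + ts.microsecond * 1_000
            dst.write(f"readings,sensor={row['sensor']} value={row['value']} {ns}\n")
    Watch out for ILP escaping (spaces and commas in tag values, double quotes around string fields); for anything beyond a quick test, a client library handles that for you.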
    Alex Pelagenko +1
    3 replies
  • jfcalvo

    9 months ago
    Hi everybody. I have recently asked a question on Stack Overflow about how to generate a simple monthly report (filling missing days with zero): https://stackoverflow.com/questions/70308080/how-to-generate-a-monthly-report-filling-missing-days-of-the-month-with-zero Not sure if this is something easy to do, or if I need to do some kind of JOIN against a generated sequence of the days of the month. In any case, I wanted to ask here too. The idea is to use that data as input to a graph. Thanks!
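    For the archive: in QuestDB this kind of gap filling is typically SAMPLE BY with FILL. A hedged sketch, using 'events' and 'ts' as stand-ins for the real table and designated timestamp column:
    SELECT ts, count() AS total
    FROM events
    WHERE ts IN '2021-12'
    SAMPLE BY 1d FILL(0);
    One caveat: FILL generates rows between the first and last samples that actually exist, so fully empty days at the edges of the month may still need padding on the client side.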
    Andrey Pechkurov +2
    32 replies
  • PONG LI

    9 months ago
    Hi team, does QuestDB support views / materialized views? https://en.wikipedia.org/wiki/View_(SQL)
    Pei
    2 replies
  • Kendrick

    9 months ago
    Hi, regarding the new zero-day vulnerability in Log4j: is QuestDB affected by it?
    1 reply
  • terry

    9 months ago
    Can Fluentd save data from a file into QuestDB?
    Pei +1
    6 replies
  • Lonnie Cumberland

    9 months ago
    Hi All, I'm exploring QuestDB for possible use in a project and would like to initially run the native binaries for testing; later I will probably want to set up Docker containers. My question is how to set up a cluster of nodes (perhaps 3) on different physical machines so that data can be queried from any node. Mostly trying to learn about the clustering capabilities of QuestDB at the moment. Any thoughts or suggestions would be appreciated.
    Nicolas Hourcard
    2 replies
  • Alihan Koç

    9 months ago
    Hello everybody. I have a feature request: specify the user data retention period by data size. 😄 Use case: for a data warehouse application, we would be able to tell our customers: we store your data for up to 5 GB or the last 6 months, whichever runs out first…
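    Until something like that exists, the time half of the policy can be scripted with partition drops. A sketch, assuming a hypothetical table 'metrics' partitioned by MONTH on a designated timestamp column named 'timestamp'; the size half would still need external scripting against disk usage:
    -- drop partitions older than 6 months
    ALTER TABLE metrics DROP PARTITION
    WHERE timestamp < dateadd('M', -6, now());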
    1 reply
  • David G. Simmons

    9 months ago
    Looks like Docusaurus loves your docs as much as everyone else!
    1 reply