• Muhammad Amal

    Muhammad Amal

    9 months ago
    I tried to use a query with min value > 0, but why is the result false and not a number?
    1 reply
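    Since min(value) > 0 is a comparison, the expression evaluates to a boolean rather than a number, which would explain the true/false result. Below is a minimal sketch against the REST /exec endpoint; the host, table and column names are assumptions, not taken from the question.
    import requests

    HOST = "http://localhost:9000"  # assumed local QuestDB instance

    # 'readings' and 'value' are hypothetical names; min(value) returns a number,
    # while min(value) > 0 is a comparison and therefore returns a boolean.
    sql = "SELECT min(value) AS min_value, min(value) > 0 AS min_is_positive FROM readings"

    resp = requests.get(HOST + "/exec", params={"query": sql, "fmt": "json"})
    resp.raise_for_status()
    print(resp.json()["dataset"])  # e.g. [[0.42, True]]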
  • Kendrick

    Kendrick

    9 months ago
    hi, I'd like to ask:
    1. Is it safe to create a GCP persistent disk snapshot on a VM that QuestDB is running on, without shutting it down and while writes are happening?
    2. If that isn't safe, is there a way to make point-in-time backups more efficient or complete faster? (we want to back up QuestDB while it's still running)
    My team is currently using point-in-time backups on a table with around 5 million rows in QuestDB running on Google Cloud's Compute Engine (2 vCPU, 8 GB RAM). It takes about 3-5 minutes to complete the backup (10 qps on average, querying large chunks of the table), during which QuestDB lags, causing requests to time out. Persistent disk snapshots seem the most promising, but the docs did mention:
    To run a reliable filesystem backup, the QuestDB instance must be shut down, or no write operations should be running while the disk backup is being created.
    Vlad
    3 replies
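    For reference, QuestDB's built-in point-in-time backup can be triggered over the same REST API while the server is running; a minimal sketch, assuming a QuestDB 6.x instance with cairo.sql.backup.root configured and a hypothetical table name:
    import requests

    HOST = "http://localhost:9000"  # assumed local QuestDB instance

    # BACKUP TABLE writes a point-in-time copy under the directory set by
    # cairo.sql.backup.root in server.conf; 'trades' is a hypothetical table name.
    resp = requests.get(HOST + "/exec", params={"query": "BACKUP TABLE trades"})
    resp.raise_for_status()
    print(resp.json())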
  • Orrin

    Orrin

    9 months ago
    At the moment, 100% of my Python queries are performed using this section of code.
    import json
    import requests

    def rq(sql_query):
        # send the SQL to QuestDB's REST /exec endpoint and return the parsed JSON
        query_params = {'query': sql_query, 'fmt': 'json'}
        try:
            response = requests.post(host + '/exec', params=query_params)
            return json.loads(response.text)
        except requests.exceptions.RequestException as e:
            print("Error: %s" % e)
    Are there other formats that are more suitable for heavy loads? Zipped binary encoding or something like that?
    Vlad
    3 replies
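    One option for heavier result sets is QuestDB's /exp endpoint, which streams query results as CSV instead of building a JSON document in memory; the PostgreSQL wire protocol (port 8812) is another route. A minimal sketch reusing the host variable from the snippet above:
    import csv
    import requests

    def rq_csv(sql_query):
        # /exp streams the result set as CSV, which avoids JSON parsing overhead
        # on large responses; rows are returned as lists of strings.
        with requests.get(host + '/exp', params={'query': sql_query}, stream=True) as response:
            response.raise_for_status()
            lines = (line for line in response.iter_lines(decode_unicode=True) if line)
            return list(csv.reader(lines))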
  • Hasitha Dharmasiri

    Hasitha Dharmasiri

    9 months ago
    Hi folks, quick question - do null values in tables take up appreciable space on disk? We have a pretty sparse table and we've noticed our disk space filling up dramatically fast. I haven't really instrumented everything super closely yet, but from eyeballing it, it seems like we're producing quite a bit more data on disk than the total bytes of the uncompressed Influx Line Protocol being written in.
    Andrey Pechkurov
    +1
    6 replies
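    One quick way to instrument this is to sum the sizes of the column files under the table's directory and compare the result with the ILP bytes being sent; a minimal sketch, where both the data root and the table name are assumptions to adjust:
    import os

    # Assumed paths: point db_root at your QuestDB data directory ('db' under the
    # server root) and set table to the sparse table in question.
    db_root = "/var/lib/questdb/db"
    table = "sensor_data"

    total = 0
    for dirpath, _, filenames in os.walk(os.path.join(db_root, table)):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))

    print(f"{table}: {total / 1024 / 1024:.1f} MiB on disk")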
  • Sourav Patra

    Sourav Patra

    9 months ago
    1 reply
  • Muhammad Amal

    Muhammad Amal

    9 months ago
    [ask] Can I export a CSV file in a different timezone, like from UTC to Asia/Jakarta?
    1 reply
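    One way to do this is to convert the timestamps inside the query and export the result as CSV through the /exp endpoint; a minimal sketch, assuming a local instance and hypothetical table/column names:
    import requests

    HOST = "http://localhost:9000"  # assumed local QuestDB instance

    # to_timezone() shifts the stored UTC timestamps into the requested zone;
    # 'trades', 'ts' and 'price' are hypothetical names.
    sql = "SELECT to_timezone(ts, 'Asia/Jakarta') AS ts_jakarta, price FROM trades"

    resp = requests.get(HOST + "/exp", params={"query": sql})
    resp.raise_for_status()

    with open("trades_jakarta.csv", "w") as f:
        f.write(resp.text)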
  • John

    John

    9 months ago
    Hi. I am trying to implement the use of 'text_loader.json' in the 'conf' directory. I can't get the curl command to act according to a date format I have added into the 'text_loader.json' file. Does it matter in which order new entries are added to this file? I restarted QDB after adding the file. Permissions on the file in Windows seem to be correct. As stated, I'm running this on Windows.
    Vlad
    2 replies
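    For comparison, the date/timestamp format can also be passed per request through the /imp endpoint's schema field instead of relying on text_loader.json; a minimal sketch, where the file name, table name and column details are assumptions:
    import json
    import requests

    HOST = "http://localhost:9000"  # assumed local QuestDB instance

    # Hypothetical CSV file and column; the 'pattern' entry tells the importer how
    # to parse the timestamp values for this upload only.
    schema = [{"name": "ts", "type": "TIMESTAMP", "pattern": "yyyy-MM-dd HH:mm:ss"}]

    with open("readings.csv", "rb") as f:
        resp = requests.post(
            HOST + "/imp",
            params={"name": "readings"},
            files={"schema": (None, json.dumps(schema)), "data": ("readings.csv", f)},
        )

    print(resp.text)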
  • Kendrick

    Kendrick

    9 months ago
    hi, I'm having trouble using Kafka Connect with QuestDB. My QuestDB schema has symbol, long, string and timestamp columns, and I use kafkajs to push messages into Kafka following this example (using type: string for the symbol type, and type: int64 for the long type). I'm currently getting this error when I run the Kafka connector - any idea what is causing this?
    [2021-12-08 18:35:46,294] INFO JdbcDbWriter Connected (io.confluent.connect.jdbc.sink.JdbcDbWriter:56)
    [2021-12-08 18:35:46,326] DEBUG Records is empty (io.confluent.connect.jdbc.sink.BufferedRecords:176)
    [2021-12-08 18:35:46,328] DEBUG Using PostgreSql dialect to check support for [TABLE] (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:467)
    [2021-12-08 18:35:46,353] DEBUG Used PostgreSql dialect to find table types: [TABLE] (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:486)
    [2021-12-08 18:35:46,354] INFO Checking PostgreSql dialect for existence of TABLE "test" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:575)
    [2021-12-08 18:35:46,386] INFO Using PostgreSql dialect TABLE "test" present (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:583)
    [2021-12-08 18:35:46,386] DEBUG Querying PostgreSql dialect column metadata for catalog:null schema:null table:test (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:619)
    [2021-12-08 18:35:46,518] WARN Write of 500 records failed, remainingRetries=10 (io.confluent.connect.jdbc.sink.JdbcSinkTask:92)
    org.postgresql.util.PSQLException: ERROR: String is outside of file boundary [offset=448, len=7274605, size=4096]
    	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2553)
    	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2285)
    	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:323)
    	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:481)
    	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:401)
    	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:322)
    	at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:308)
    	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:284)
    	at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:236)
    	at org.postgresql.jdbc.PgDatabaseMetaData.getPrimaryKeys(PgDatabaseMetaData.java:2168)
    	at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.primaryKeyColumns(GenericDatabaseDialect.java:791)
    	at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.describeColumns(GenericDatabaseDialect.java:628)
    	at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.describeTable(GenericDatabaseDialect.java:827)
    	at io.confluent.connect.jdbc.util.TableDefinitions.get(TableDefinitions.java:62)
    	at io.confluent.connect.jdbc.sink.DbStructure.createOrAmendIfNecessary(DbStructure.java:64)
    	at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:123)
    	at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:74)
    	at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:84)
    	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:563)
    	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)
    	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:229)
    	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
    	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
    	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:239)
    	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    	at java.base/java.lang.Thread.run(Thread.java:829)
    Vlad
    2 replies
  • Kendrick

    Kendrick

    9 months ago
    hi, does anyone know how to get the row with maximum value from a table? I have tried the following on a test table with 5m rows:
    1. select * from table where cast(timestamp as symbol) in (select cast(max(timestamp) as symbol) from table );
    2. select * from table inner join (select max(timestamp) mm from table ) on timestamp >= mm
    3. select * from table where timestamp = max(timestamp)
    4. select * from table where timestamp = (select max(timestamp) from table )
    where 1 is correct but runs in ~5s, 2 is correct and runs in ~500ms but looks unnecessarily verbose for a query, 3 compiles but returns an empty table, and 4 is incorrect syntax although that's how sql usually does it
    Alex Pelagenko
    +1
    12 replies
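    For comparison, on a table with a designated timestamp the same result can be expressed by ordering on the timestamp and keeping a single row (QuestDB also accepts a negative LIMIT to count rows from the end of the result); a minimal sketch over REST, with a hypothetical table name:
    import requests

    HOST = "http://localhost:9000"  # assumed local QuestDB instance

    # 'my_table' is a hypothetical name; ordering by the designated timestamp
    # descending and keeping one row returns the row with the maximum timestamp.
    sql = "SELECT * FROM my_table ORDER BY timestamp DESC LIMIT 1"

    resp = requests.get(HOST + "/exec", params={"query": sql, "fmt": "json"})
    resp.raise_for_status()
    print(resp.json()["dataset"])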
  • Jakob

    Jakob

    9 months ago
    hi - would QuestDB be suited for storing user credentials (besides time series data) for authentication purposes, or would I be better off looking elsewhere?
    Vlad
    +1
    3 replies