• Domnic Amalan

    2 months ago
    Can we use DISTINCT on columns?
    4 replies
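On the DISTINCT question above: QuestDB does support `SELECT DISTINCT` over one or more columns. A minimal sketch (the `trades` table and its columns are hypothetical, for illustration only):

```sql
-- Hypothetical table: trades(symbol SYMBOL, side SYMBOL, price DOUBLE, ts TIMESTAMP)

-- Distinct values of a single column
SELECT DISTINCT symbol FROM trades;

-- Distinct combinations of several columns
SELECT DISTINCT symbol, side FROM trades;
```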
  • Raza

    2 months ago
    Hey team, for the last example in the docs: https://questdb.io/docs/reference/sql/update/
    After creating the two tables using:
    CREATE TABLE spreads(symbol SYMBOL, ts TIMESTAMP, spread DOUBLE);
    CREATE TABLE temp_spreads(symbol SYMBOL, ts TIMESTAMP, spread DOUBLE);
    I'm getting "'=' expected" after running the query in the example:
    WITH up AS (
        SELECT symbol, spread FROM temp_spreads
        WHERE timestamp BETWEEN '2022-01-02' AND '2022-01-03'
    )
    UPDATE spreads s
    SET s.spread = up.spread
    FROM up
    WHERE s.timestamp = up.timestamp AND s.symbol = up.symbol;
    8 replies
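One detail worth noting in the thread above: the CREATE statements name the timestamp column `ts`, while the query filters and joins on `timestamp`, and the CTE doesn't select the timestamp column at all. A sketch of the query adjusted to match the actual column name, assuming a QuestDB version that supports `UPDATE ... FROM`:

```sql
-- Select ts in the CTE so it can be joined on below
WITH up AS (
    SELECT symbol, ts, spread FROM temp_spreads
    WHERE ts BETWEEN '2022-01-02' AND '2022-01-03'
)
UPDATE spreads s
SET spread = up.spread
FROM up
WHERE s.ts = up.ts AND s.symbol = up.symbol;
```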
  • Domnic Amalan

    2 months ago
    GROUP BY / HAVING issue:
    SELECT min(sale_price),  collection_floor_price, (min(sale_price) /  collection_floor_price) , floor_timestamp, collection_floor_opensea_event_id from 'sale_history' 
    where collection_slug = 'proof-moonbirds'
    group by floor_timestamp, collection_floor_price,collection_floor_opensea_event_id
    HAVING (min(sale_price) /  collection_floor_price) > 1.2
    order by floor_timestamp asc
    1 reply
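On the HAVING issue above: if the server version doesn't accept `HAVING`, the same filter can usually be expressed by wrapping the aggregation in a subquery and filtering on the aliased result. A sketch based on the query in the thread:

```sql
SELECT * FROM (
    SELECT min(sale_price) AS min_price,
           collection_floor_price,
           min(sale_price) / collection_floor_price AS ratio,
           floor_timestamp,
           collection_floor_opensea_event_id
    FROM 'sale_history'
    WHERE collection_slug = 'proof-moonbirds'
    GROUP BY floor_timestamp, collection_floor_price, collection_floor_opensea_event_id
)
WHERE ratio > 1.2
ORDER BY floor_timestamp ASC;
```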
  • Pei

    2 months ago
    Hi @Taras @Greg Fragin welcome!
  • Newskooler

    2 months ago
    I have a problem where data is saved to a table very slowly and I can't find out why. When the table was empty, insert speed was perfect, as expected. The table now has ~250 million rows and things are slowing down, and I don't know why. The data is saved via ILP. The table is partitioned by month, maxUncommittedRows is 10k, commitLag is 10 million, and the table has 8 columns. What can I do to debug this?
    42 replies
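For the ingestion question above, one starting point is to inspect and adjust the commit parameters mentioned in the message. A sketch, assuming QuestDB's `ALTER TABLE ... SET PARAM` syntax and using `my_table` as a placeholder name:

```sql
-- Inspect table metadata, including commit settings ('my_table' is a placeholder)
SELECT * FROM tables() WHERE name = 'my_table';

-- Raise the out-of-order commit threshold so commits happen in larger batches
ALTER TABLE my_table SET PARAM maxUncommittedRows = 100000;
```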
  • Andy

    2 months ago
    Apache Arrow: Powering In-Memory Analytics. Apache Arrow is a development platform for in-memory analytics. It contains a set of technologies that enable big data systems to process and move data fast. Major components of the project include:
    • The Arrow Columnar In-Memory Format: a standard and efficient in-memory representation of various datatypes, plain or nested
    • The Arrow IPC Format: an efficient serialization of the Arrow format and associated metadata, for communication between processes and heterogeneous environments
    • The Arrow Flight RPC protocol: based on the Arrow IPC format, a building block for remote services exchanging Arrow data with application-defined semantics (for example a storage server or a database)
    • C++ libraries
    • C bindings using GLib
    • C# .NET libraries
    • Gandiva: an LLVM-based Arrow expression compiler, part of the C++ codebase
    • Go libraries
    • Java libraries
    • JavaScript libraries
    • Plasma Object Store: a shared-memory blob store, part of the C++ codebase
    • Python libraries
    • R libraries
    13 replies
  • Andy

    2 months ago
    Could QuestDB integrate with Apache Arrow?
  • Pei

    2 months ago
    Hi @Jelle Nabuurs welcome!
    3 replies
  • Taras

    2 months ago
    Hi everyone. I started creating some data notebooks on Observable this month. What's a good doc to check out on QuestDB to get the list of tables & references? I'd like to create a diagram similar to this for your JS data users: https://observablehq.com/@randomfractals/sqlite-er-diagram?collection=@randomfractals/tables
    11 replies
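For listing tables and their columns (the raw material for an ER-style diagram like the one linked above), QuestDB exposes metadata through SQL functions. A sketch, using the `sale_history` table mentioned earlier in this log as an example:

```sql
-- All tables in the database
SELECT * FROM tables();

-- Column metadata for one table
SELECT * FROM table_columns('sale_history');
```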
  • Newskooler

    2 months ago
    Is there a way to see what SQL command was used to create a table? In particular I am interested in the settings per column (especially for symbol columns).
    14 replies
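Even if the original CREATE TABLE statement can't be retrieved, per-column settings (including symbol capacity and index flags) can be read from table metadata. A sketch, assuming the `SHOW COLUMNS` statement and a placeholder table name:

```sql
-- Per-column metadata: type, indexed, indexBlockCapacity,
-- symbolCached, symbolCapacity, designated ('my_table' is a placeholder)
SHOW COLUMNS FROM my_table;
```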