• Help with sqlite statement

    From Luc@21:1/5 to All on Sat Nov 19 09:55:22 2022
    Hello. Very simple question. I think.

    foreach i $columns {
        blah blah blah,
        create list of columns and types,
        set eval [list of columns]
    }

    sqlwrite eval "CREATE TABLE $tablename($eval)"

    I am really struggling with that last line. I've tried all kinds of
    syntax and every one fails. It seems to me that Tcl interprets the form
    $variable(parentheses) as an element of an array. I can only make
    it work hard coding the table's name, which is not what I want.

    What would the correct way for that be?


    You want more code, don't you? Alright.

    set datatypes {
    Dirname text
    Contains int
    Data real
    X text
    Y int
    Z real
    }

    set fe 0
    foreach i $datatypes {
        append eval " [lindex $datatypes $fe]"
        incr fe
        append eval " [lindex $datatypes $fe]"
        if {[expr $fe + 1] < [llength $datatypes]} {
            set eval "$eval,"
        } else {set eval "$eval"}
        incr fe
    }

    puts $eval

    sqlite3 sqlwrite /path/to/file.db
    set tablename "nicetable"
    sqlwrite eval "CREATE TABLE $tablename($eval)"

    --
    Luc


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From fr@21:1/5 to Luc on Sat Nov 19 07:17:31 2022
    On Saturday, November 19, 2022 at 1:55:30 PM UTC+1, Luc wrote:
    Hello. Very simple question. I think.
    [snip]
    sqlwrite eval "CREATE TABLE $tablename($eval)"

    I am really struggling with that last line. I've tried all kinds of
    syntax and every one fails.
    [snip]

    Better avoid eval as the name of a variable, as eval is a Tcl command.
    # one possible way without using lists could be
    set description "name1 type1, name2 type2"
    # then append description to tablename like
    append tablename ( $description )
    # note the 3 parameters to the append command
    sqlwrite eval "CREATE TABLE $tablename"
    # you probably want to create the variable description from your list
    # variable datatypes, to insert the commas required by SQL syntax.
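    A minimal sketch of that suggestion (column names taken from Luc's post; the variable names are illustrative):

```tcl
# Build the column-definition string from a flat {name type ...} list,
# inserting the commas SQL needs, then wrap it in parentheses using
# append's multi-argument form.
set datatypes {Dirname text Contains int Data real}

set defs {}
foreach {name type} $datatypes {
    lappend defs "$name $type"
}
set description [join $defs ", "]

set tablename "nicetable"
append tablename ( $description )  ;# three arguments: "(", the defs, ")"
# tablename is now: nicetable(Dirname text, Contains int, Data real)
```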

    Roland Frank

  • From Gerald Lester@21:1/5 to Luc on Sat Nov 19 10:01:21 2022
    On 11/19/22 06:55, Luc wrote:
    Hello. Very simple question. I think.
    [snip]
    sqlwrite eval "CREATE TABLE $tablename($eval)"
    [snip]


    Correct code:
    set datatypes {
    Dirname text
    Contains int
    Data real
    X text
    Y int
    Z real
    }
    sqlite3 sqlwrite /path/to/file.db
    set tablename "nicetable"
    sqlwrite eval "CREATE TABLE $tablename ( [join $eval {,}] )"

    --
    +----------------------------------------------------------------------+
    | Gerald W. Lester, President, KNG Consulting LLC                      |
    | Email: Gerald.Lester@kng-consulting.net                              |
    +----------------------------------------------------------------------+

  • From clt.to.davebr@dfgh.net@21:1/5 to All on Sat Nov 19 16:28:06 2022
    More ideas:

    # make sure the db is closed and deleted (just in case)
    catch {sqlwrite close}
    catch {file delete file.db}


    set datatypes {
    Dirname text
    Contains int
    Data real
    X text
    Y int
    Z real
    }

    # make sure the list starts empty
    set colDefs {}
    # foreach can take more than one value from the list at a time
    foreach {name type} $datatypes {
    # first make a list of column definitions
    lappend colDefs "$name $type"
    }
    # use join to make a string from the list
    # with comma and space between each list entry
    set tableDef [join $colDefs ", "]
    puts $tableDef

    sqlite3 sqlwrite file.db
    set tablename "nicetable"
    # when the string being built looks like a tcl array,
    # you can use ${} to make the variable name clear to Tcl
    sqlwrite eval "CREATE TABLE ${tablename}($tableDef)"

    # an alternate way to make the table definition string from the datatypes list
    set tableDef [join [lmap {name type} $datatypes {concat $name $type}] ", "]
    puts $tableDef


    Dave B

  • From saitology9@21:1/5 to Luc on Sat Nov 19 11:19:46 2022
    On 11/19/2022 7:55 AM, Luc wrote:

    sqlwrite eval "CREATE TABLE $tablename($eval)"

    I am really struggling with that last line. I've tried all kinds of syntax

    A couple of options: You can use space around the parentheses:
    "$tablename ($eval)". Or you can escape them: "$tablename\($eval\)".

  • From Rich@21:1/5 to Luc on Sat Nov 19 19:08:30 2022
    Luc <no@no.no> wrote:
    Hello. Very simple question. I think.

    foreach i $columns {
    blah blah blah,
    create list of columns and types,
    set eval [list of columns]
    }

    sqlwrite eval "CREATE TABLE $tablename($eval)"

    I am really struggling with that last line. I've tried all kinds of
    syntax and every one fails. It seems to me that Tcl interprets the form
    $variable(parentheses) as an element of an array. I can only make
    it work hard coding the table's name, which is not what I want.

    What would the correct way for that be?

    Is there a reason you want to dynamically create SQL create table
    statements?

    Most often, the SQL table structure is pre-determined, and one's code
    simply inserts/deletes/updates with the existing tables.

  • From Luc@21:1/5 to Gerald Lester on Sat Nov 19 18:19:23 2022
    On Sat, 19 Nov 2022 10:01:21 -0600, Gerald Lester wrote:

    Correct code:
    sqlwrite eval "CREATE TABLE $tablename ( [join $eval {,}] )"


    That works for the insertion of data, but not for 'create table.'
    For 'create table' I need two words before each comma: the name of
    the field and the data type.



    On Sat, 19 Nov 22 16:28:06 GMT, clt.to.davebr@dfgh.net wrote:

    More ideas:

    # foreach can take more than one value from the list at a time
    foreach {name type} $datatypes {
    # first make a list of column definitions
    lappend colDefs "$name $type"


    Oooh, I didn't know that. That's useful. I could have used it so many
    times if I'd known that before.


    Many other great tips there. Thank you all!


    On Sat, 19 Nov 2022 19:08:30 -0000 (UTC), Rich wrote:

    Is there a reason you want to dynamically create SQL create table
    statements?

    Most often, the SQL table structure is pre-determined, and one's code
    simply inserts/deletes/updates with the existing tables.


    Well, it's not pre-determined until I determine it. I am creating the
    table from scratch. Since I make many mistakes (I didn't use sqlite
    for some 15 years and have to learn everything again), it's definitely
    better to automate the creation of the database, tables and everything.
    Every time I screw up, the script rebuilds the test pad and I can
    pick it up where I left off.


    Again, thank you all for the tips. They really help.


    --
    Luc


  • From Luc@21:1/5 to All on Sat Nov 19 18:45:09 2022
    Hey, since you people seem to be in the mood for sharing wisdom,
    I would like to ask you about something else.

    I was doing some kind of test that was very repetitive, testing
    multiple possible conditions and assessing the results. This test
    had to read from a few hundred thousand database lines.

    Yes, that many. Please don't ask. Let's just say it's a good number
    crunching workout. I think I'm going to call this one TkCruncher.

    Anyway, I wasn't very happy about the performance. I thought it
    was too slow. Like, almost 10 minutes for all passes and iterations.
    And that was using only one table and only one whole table lookup
    at each pass. I was disappointed.

    So I had this bold idea. The script now reads THE ENTIRE DATABASE,
    five tables in total, at startup. And everything is stored in
    arrays. The time it takes to run all the tests now dropped from
    almost 10 minutes to less than one.

    Another benefit is that selecting from multiple tables has become
    a lot simpler compared to how complicated SQL joins can be. A
    little proc with sugar to parse my arrays and the code certainly
    looks better now.

    I tested the lookups on Tkcon and well, every look up returns the
    result instantly. I was doing 'for' loops over multiple hundred
    thousand lines on each look up and never felt like there was any
    delay at all so bravo, Tcl!

    The entire .db file is only 80MB so I figure that's how much RAM
    I'm using to slurp the whole database into arrays?

    Anyway, do you think there could be some hazard or pitfall in my
    way of doing this? What do the elders of the village say?


    --
    Luc


  • From Rich@21:1/5 to Luc on Sat Nov 19 23:07:09 2022
    Luc <no@no.no> wrote:
    Hey, since you people seem to be in the mood for sharing wisdom,
    I would like to ask you about something else.

    I was doing some kind of test that was very repetitive, testing
    multiple possible conditions and assessing the results. This test
    had to read from a few hundred thousand database lines.

    Yes, that many. Please don't ask. Let's just say it's a good number
    crunching workout. I think I'm going to call this one TkCruncher.

    Anyway, I wasn't very happy about the performance. I thought it
    was too slow. Like, almost 10 minutes for all passes and iterations.
    And that was using only one table and only one whole table lookup
    at each pass. I was disappointed.

    Without you telling us more details, at best we can only offer common
    pitfalls.

    First question: Are you talking about an Sqlite db, or another
    (Oracle/PostgreSQL/Mysql/etc.) db?

    One very common pitfall: Did you create indexes on the columns that
    were part of the 'where' clauses used for selecting rows from the
    database?

    Without indexes, the database engine will be doing what is known as
    'full table scans' - looking through every row in the table being
    selected from for any rows that match the 'where' clause limits. Full
    table scans, esp. if you repeat them for every select, produce in the
    end an O(N^2) complexity factor.

    So I had this bold idea. The script now reads THE ENTIRE DATABASE,
    five tables in total, at startup. And everything is stored in
    arrays. The time it takes to run all the tests now dropped from
    almost 10 minutes to less than one.

    This implies you had no indexes -- but you've not told us enough to
    know any more than that.

    Another benefit is that selecting from multiple tables has become
    a lot simpler compared to how complicated SQL joins can be. A
    little proc with sugar to parse my arrays and the code certainly
    looks better now.

    Don't feel so happy on that one. For simple cartesian joins, yes, your
    code might look simpler than the corresponding SQL statement. But add
    in grouping and 'with' clauses to limit by the grouped elements, and
    suddenly the 'code' you'll need to write to produce the equivalent
    results to a straightforward SQL statement will dwarf the SQL
    statement, and look nowhere nearly as simple as the SQL.

    The entire .db file is only 80MB so I figure that's how much RAM
    I'm using to slurp the whole database into arrays?

    Not necessarily. It might be more, it might be less. More if the
    Tcl_Obj overhead is a significant portion of the total data size.
    Could be less if the db has a lot of indexes that you've not replicated
    in your in memory version.

    Anyway, do you think there could be some hazard or pitfall in my
    way of doing this? What do the elders of the village say?

    Yes, you could eventually encounter a dataset too large to "slurp" into
    RAM all up front.

    You also may end up writing half (or better) of a poorly optimized SQL
    query engine to satisfy lookups or data output patterns that in SQL
    are simply the addition of another statement to the query.

    Long term, if your project will grow to do more than just exactly what
    it does now, you'll be 'foot-gunning' yourself more often than not this
    way.

  • From Rich@21:1/5 to Luc on Sat Nov 19 22:53:23 2022
    Luc <no@no.no> wrote:
    On Sat, 19 Nov 2022 10:01:21 -0600, Gerald Lester wrote:

    Correct code:
    sqlwrite eval "CREATE TABLE $tablename ( [join $eval {,}] )"


    That works for the insertion of data, but not for 'create table.'
    For 'create table' I need two words before each comma: the name of
    the field and the data type.



    On Sat, 19 Nov 22 16:28:06 GMT, clt.to.davebr@dfgh.net wrote:

    More ideas:

    # foreach can take more than one value from the list at a time
    foreach {name type} $datatypes {
    # first make a list of column definitions
    lappend colDefs "$name $type"


    Oooh, I didn't know that. That's useful. I could have used it so many
    times if I'd known that before.

    It /is/ right there in the documentation:

    SYNOPSIS
    foreach varname list body
    foreach varlist1 list1 ?varlist2 list2 ...? body

    Note the second version.

    And a bit further down in the manpage:

    In the general case there can be more than one value list (e.g.,
    list1 and list2), and each value list can be associated with a
    list of loop variables (e.g., varlist1 and varlist2). During
    each iteration of the loop the variables of each varlist are
    assigned consecutive values from the corresponding list. Values
    in each list are used in order from first to last, and each
    value is used exactly once. The total number of loop iterations
    is large enough to use up all the values from all the value
    lists. If a value list does not contain enough elements for
    each of its loop variables in each iteration, empty values are
    used for the missing elements.

    You can do all kinds of 'magic' with foreach.
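    For instance, pairwise consumption of one list, or two lists walked in parallel (illustrative values):

```tcl
# Two loop variables over one list: foreach takes the elements in pairs.
set defs {}
foreach {name type} {Dirname text Contains int Data real} {
    lappend defs "$name $type"
}
# defs is now: {Dirname text} {Contains int} {Data real}

# Two lists, one variable each: walked in parallel.
set pairs {}
foreach letter {a b c} number {1 2 3} {
    lappend pairs $letter$number
}
# pairs is now: a1 b2 c3
```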

    On Sat, 19 Nov 2022 19:08:30 -0000 (UTC), Rich wrote:

    Is there a reason you want to dynamically create SQL create table
    statements?

    Most often, the SQL table structure is pre-determined, and one's code
    simply inserts/deletes/updates with the existing tables.


    Well, it's not pre-determined until I determine it. I am creating the
    table from scratch. Since I make many mistakes (I didn't use sqlite
    for some 15 years and have to learn everything again), it's definitely
    better to automate the creation of the database, tables and everything.
    Every time I screw up, the script rebuilds the test pad and I can
    pick it up where I left off.

    By 'predetermined' I mean that in the most common instance, you know
    the Sqlite DB will represent, say, an address book (this below will be
    very US centric, but is meant as an example). That fact
    'pre-determines' the table structure, so your code would do this to
    'initialize' the database (assuming "db" is the command already set up
    to point to a file on disk):

    db eval {create table addresses (first_name text, last_name text,
    address1 text, address2 text,
    city text, state text, zip text);}
    db eval {create index zip_index on addresses(zip);} ;# assuming you
    # planned to 'look up by zip code' often

    No need to dynamically 'write' SQL create table statements, just write
    out the full create table statement in full once up front and be done.

    Then, later, you'd do things like:

    db eval {select * from addresses where zip = '12345';}

    to get all addresses in zipcode '12345'.

    Or:

    db eval {insert into addresses (first_name, last_name, address1,
    city, state, zip) values (:first, :last, :addr, :city,
    :state, :zip);}

    To insert the contents of the tcl variables "first" "last" "addr"
    "city" "state" "zip", visible in the current variable context, into a
    new row in the database.

    For 98% of most uses of SQL databases, there is no need to dynamically
    write the create table statements (as in, the code 'writes' the create
    table statements as opposed to a person just writing them out in full
    up front). The other 2% is when one is doing something like building a
    "SQL query interface app" for others to use to build databases -- and
    you've not told us that is what you are trying to do yet.

  • From Gerald Lester@21:1/5 to Gerald Lester on Sat Nov 19 22:23:54 2022
    On 11/19/22 10:01, Gerald Lester wrote:
    On 11/19/22 06:55, Luc wrote:
    [snip]
    sqlwrite eval "CREATE TABLE $tablename($eval)"
    [snip]


    Correct code:
     set datatypes {
         {Dirname      text}
         {Contains    int}
         {Data        real}
         {X            text}
         {Y            int}
         {Z            real}
     }
     sqlite3 sqlwrite /path/to/file.db
     set tablename "nicetable"
     sqlwrite eval "CREATE TABLE $tablename ( [join $eval {,}] )"




  • From Luc@21:1/5 to Gerald Lester on Sun Nov 20 11:13:10 2022
    On Sat, 19 Nov 2022 22:23:54 -0600, Gerald Lester wrote:

    Correct code:
     set datatypes {
         {Dirname      text}
         {Contains    int}
         {Data        real}
         {X            text}
         {Y            int}
         {Z            real}
     }
     sqlite3 sqlwrite /path/to/file.db
     set tablename "nicetable"
     sqlwrite eval "CREATE TABLE $tablename ( [join $eval {,}] )"


    Thank you again. I love syntactic sugar! And Tcl is the best for that.

    Just for the sake of discussion, I noticed that the resulting $eval by
    that approach preserves all the tabs I used for "drawing" a neat table
    in the code. So the resulting $eval looks messy.

    That doesn't matter for creating database tables, but in other
    situations I would have to do something like this:

    set eval [join $datatypes ","]
    regsub -all {\s+} $eval { } eval
    regsub -all {,} $eval {, } eval

    Unless you have some other less obvious trick up your sleeve. :-)

    --
    Luc


  • From Luc@21:1/5 to Rich on Sun Nov 20 11:35:14 2022
    On Sat, 19 Nov 2022 23:07:09 -0000 (UTC), Rich wrote:

    First question: Are you talking about an Sqlite db, or another
    (Oracle/PostgreSQL/Mysql/etc.) db?

    Strictly Sqlite. Nothing corporate. Very humble home desktop application.



    One very common pitfall: Did you create indexes on the columns that
    were part of the 'where' clauses used for selecting rows from the
    database?

    Without indexes, the database engine will be doing what is known as
    'full table scans' - looking through every row in the table being
    selected from for any rows that match the 'where' clause limits. Full
    table scans, esp. if you repeat them for every select, produce in the
    end an O(N^2) complexity factor.

    Excuse me, did you say 'indexes' in the plural? Yes, you did. How many
    am I supposed to create then? It's an honest question. I suspect I have
    been overestimating the efficiency of databases. Do I have to create one
    index per column?

    My database has an index, but it's a 'unique' column which I thought
    would be good enough because I admit I don't fully understand indices.
    That index is not used in my WHERE queries.

    So, 1) How many indices do I have to create? 2) Should I use indices
    or views? I did some reading at bed time yesterday and found this page
    (among many others):

    https://stackoverflow.com/questions/439056/is-a-view-faster-than-a-simple-query

    You may skip to "Update 2" where he mentions Lithuania. He argues that
    indices may not be the cat's pajamas after all.



    Anyway, do you think there could be some hazard or pitfall in my
    way of doing this? What do the elders of the village say?

    Yes, you could eventually encounter a dataset too large to "slurp" into
    RAM all up front.


    If hundreds of thousands of lines take up 80MB, how many lines does it
    take to cause someone to run out of RAM with modern computers?
    How does a database like Sqlite manage its own RAM use footprint?


    --
    Luc


  • From Rich@21:1/5 to Luc on Sun Nov 20 15:48:27 2022
    Luc <no@no.no> wrote:
    On Sat, 19 Nov 2022 23:07:09 -0000 (UTC), Rich wrote:

    First question: Are you talking about an Sqlite db, or another
    (Oracle/PostgreSQL/Mysql/etc.) db?

    Strictly Sqlite. Nothing corporate. Very humble home desktop application.

    Neither PostgreSQL nor Mysql imply 'corporate' -- both being 'free to
    use'. Oracle, well, at the price for an oracle license, yes, it would
    imply corporate.

    One very common pitfall: Did you create indexes on the columns that
    were part of the 'where' clauses used for selecting rows from the
    database?

    Without indexes, the database engine will be doing what is known as
    'full table scans' - looking through every row in the table being
    selected from for any rows that match the 'where' clause limits.
    Full table scans, esp. if you repeat them for every select, produce
    in the end an O(N^2) complexity factor.

    Excuse me, did you say 'indexes' in the plural?

    Yes, intentionally so.

    Yes, you did. How many am I supposed to create then?

    As many as are necessary for your needs (yes, I know, a vague answer,
    but it /is/ the answer I can give, since you've revealed insufficient
    information about your actual design for me to provide more than that
    level of 'general answer').

    It's an honest question. I suspect I have been overestimating the
    efficiency of databases. Do I have to create one index per column?

    The answer there depends upon what kinds of queries you plan to run.

    For example, let's take a 'music database' (i.e., a catalog of CDs,
    vinyl records, compressed audio files). You might have a table like
    (note: all made up here on the fly):

    create table music (artist text, album text, disk_num text,
    song text, format text, shelf text, row integer,
    column integer);

    Where 'format' might be "CD" or "Vinyl" or "mp3" or "flac".

    If all you ever did was retrieve all records from this table, you would
    'need' zero indexes (note, I'm skipping the 'need' for a unique index
    if you want to avoid duplicate rows).

    But, let's say you often want to retrieve just your collection of Vinyl
    entries. You'd run this query:

    db eval {select * from music where format = 'Vinyl';}

    And with no indexes, that would have to look at every row to decide
    which rows had 'Vinyl' in the format column.

    But, if you indexed the format column:

    db eval {create index format_idx on music(format);}

    Then the same query can utilize the index to (essentially) directly
    retrieve only the rows with "Vinyl" as format, without ever looking at
    any other rows.

    If you commonly want to look for "music by artist A, and in flac
    format" (note, made up query again), you would do:

    db eval {select * from music where artist = 'Frank Sinatra' and
    format = 'flac';}

    Again, with no index this looks through the entire set of rows. But
    with an index:

    db eval {create index my_new_idx on music(artist, format);}

    Then the same query can retrieve all of Frank's music, that is in flac
    format, without looking at all the other rows (and it is the 'without
    looking at the other rows' that provides the speedup, it literally does
    not look at the other non-Sinatra rows).

    So the answer to your question is: you need indexes that cover the
    queries you are performing and that you want to speed up. But we don't
    know enough details to recommend more than that.

    My database has an index, but it's a 'unique' column which I thought
    would be good enough because I admit I don't fully understand indices.
    That index is not used in my WHERE queries.

    If the unique column is never part of a where clause, it is never used
    for a retrieval query.

    So, 1) How many indices do I have to create?

    The number necessary to cover your slow queries.

    2) Should I use indices or views?

    Given that your questions imply this is your initial foray into the
    world of SQL queries, just stick to indexes for the moment. You can
    level-up to views once you've become more proficient at the next level
    down basics.

    I did some reading at bed time
    yesterday and found this page (among many others):

    https://stackoverflow.com/questions/439056/is-a-view-faster-than-a-simple-query

    You may skip to "Update 2" where he mentions Lithuania. He argues that
    indices may not be the cat's pajamas after all.

    He's discussing a rather advanced 'view' there where the view looks to
    be an actual materialized view containing only the subset of the
    records to which the create view statement pertains. This is several
    levels up in SQL knowledge from where your postings imply you are now,
    and likely unnecessary for what you are trying to do. Second, I'm not
    sure Sqlite has added support for materialized views, and if not, you
    can't (yet) do with sqlite what this poster is discussing.

    Anyway, do you think there could be some hazard or pitfall in my
    way of doing this? What do the elders of the village say?

    Yes, you could eventually encounter a dataset too large to "slurp"
    into RAM all up front.

    If hundreds of thousands of lines take up 80MB, how many lines does it
    take to cause someone to run out of RAM with modern computers?

    Note that 80MB on disk does not equate to 80MB of RAM if you hoover
    everything into TCL lists or arrays. This is due to Tcl_Obj overhead
    (see https://www.tcl.tk/man/tcl/TclLib/Object.html for the C definition
    of a Tcl_Obj). On a 64-bit machine, this is somewhere around at least
    5*8 bytes (40 bytes) of overhead for each Tcl_Obj. If you retrieve each
    row as a single string, and keep it a single string, then your in-Tcl
    memory footprint will be the byte length of each string, plus 40 bytes
    per string, so for 100,000 rows you'd have 4,000,000 bytes just in
    Tcl_Obj overhead.

    If, instead, you convert each row into a nested list, and you have
    100,000 rows, each row having 8 columns, then you get somewhere around
    100000*40 + 100000*8*40 = 36,000,000 bytes for the Tcl_Obj overhead
    (each row is a list, 40-byte Tcl_Obj per row, each element of each row
    is itself another Tcl_Obj, 40 bytes each). And that does not count the
    space taken up by the actual data in each list element.
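    As a quick back-of-the-envelope check of those figures (the 40 bytes is the rough per-Tcl_Obj estimate above, not an exact size):

```tcl
set rows 100000
set cols 8
set overhead 40   ;# rough bytes of Tcl_Obj bookkeeping per value

# one Tcl_Obj per row (the list itself) plus one per element
set bytes [expr {$rows*$overhead + $rows*$cols*$overhead}]
puts $bytes       ;# prints 36000000
```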

    As for "how much does your dataset take" -- well on that point you'll
    have to do some of the math yourself.

    How does a database like Sqlite manage its own RAM use footprint?

    You'd have to peruse the Sqlite source to know that with certainty.
    But one way it does so is by not (intentionally) hoovering everything
    into RAM up front. Now, with sufficient spare cache, eventually much of
    the file will be cached by the OS, giving the effect of "it is all in
    RAM now" as far as disk reads go, but OS caching of the file is not
    Sqlite intentionally "hoovering it all in up front".

  • From Gerald Lester@21:1/5 to Luc on Sun Nov 20 17:32:28 2022
    On 11/20/22 08:13, Luc wrote:
    On Sat, 19 Nov 2022 22:23:54 -0600, Gerald Lester wrote:

    Correct code:
     set datatypes {
         {Dirname      text}
         {Contains    int}
         {Data        real}
         {X            text}
         {Y            int}
         {Z            real}
     }
     sqlite3 sqlwrite /path/to/file.db
     set tablename "nicetable"
     sqlwrite eval "CREATE TABLE $tablename ( [join $eval {,}] )"


    Thank you again. I love syntactic sugar! And Tcl is the best for that.

    Just for the sake of discussion, I noticed that the resulting $eval by
    that approach preserves all the tabs I used for "drawing" a neat table
    in the code. So the resulting $eval looks messy.

    That doesn't matter for creating database tables, but in other
    situations I would have to do something like this:

    set eval [join $datatypes ","]
    regsub -all {\s+} $eval { } eval
    regsub -all {,} $eval {, } eval

    Unless you have some other less obvious trick up your sleeve. :-)


    I would never use regsub for that -- way overkill. String map is where
    I'd go, but why bother?
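    For what it's worth, a sketch combining the two (regsub still collapses the alignment runs, since string map only does fixed replacements; the datatypes are the braced pairs from Gerald's post, and the variable is named colString rather than eval, per Roland's earlier advice):

```tcl
set datatypes {
    {Dirname      text}
    {Contains     int}
}
set colString [join $datatypes ","]
# the elements keep their internal alignment spaces at this point
regsub -all {\s+} $colString { } colString        ;# collapse whitespace runs
set colString [string map {"," ", "} $colString]  ;# add a space after commas
# colString is now: Dirname text, Contains int
```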

