• VMS

    From c186282@21:1/5 to All on Sat Jun 14 01:15:35 2025
    I've still got my 3-inch (now painfully) small-type VMS
    manual.

    This was one of the genius systems - WAY beyond its time.

    If you were, maybe, the Hilton hotel chain and wanted to
    keep current with systems world-wide - over SLOW modems -
    VMS was set up to do it, even late 70s.

    This was a WELL thought-out operating system.

    Now, alas, somebody BOUGHT all the code and BIOS
    stuff. No longer 'free' for development. They will
    hold it hostage for the last nickel until it's
    utterly obsolete.

    Tragic.

    I still have HOPE there will be a New Linus - someone
    who sees the value of the system/approach and writes
    an updated work-alike.

    Various corps DO seem to be scheming against Linux.
    They somehow want to claim ownership and then
    absorb/destroy the system. The increasing M$ content
    is part of that scheme. A *FREE* OS - horrors !!!

    Some OTHER capable system is Disaster-Proofing the future.
    The other oddball is Plan-9 ... but it was never meant
    for 'home/small-biz' computers. They DID get it to
    run on the latest IBM mainframes though - there
    are celebration videos.

    Yea yea, there are a few other potentials, even
    BeOS, but they're just not nearly as capable
    as Linux or VMS. Amiga-OS ... sorry, no. Have
    less experience with the Control Data systems.
    MIGHT be useful.

    Just saying - Linux/BSD is great, but there ARE
    people legally conspiring against them. Really
    good alts DO need to Be There, SOON.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bobbie Sellers@21:1/5 to All on Sat Jun 14 10:05:24 2025
    On 6/13/25 22:15, c186282 wrote:
    I've still got my 3-inch (now painfully) small-type VMS
    manual.

    This was one of the genius systems - WAY beyond its time.

    If you were, maybe, the Hilton hotel chain and wanted to
    keep current with systems world-wide - over SLOW modems -
    VMS was set up to do it, even late 70s.

    This was a WELL thought-out operating system.

    Now, alas, somebody BOUGHT all the code and BIOS
    stuff. No longer 'free' for development. They will
    hold it hostage for the last nickel until it's
    utterly obsolete.

    Tragic.

    I still have HOPE there will be a New Linus - someone
    who sees the value of the system/approach and writes
    an updated work-alike.

    Various corps DO seem to be scheming against Linux.
    They somehow want to claim ownership and then
    absorb/destroy the system. The increasing M$ content
    is part of that scheme. A *FREE* OS - horrors !!!

    Some OTHER capable system is Disaster-Proofing the future.
    The other oddball is Plan-9 ... but it was never meant
    for 'home/small-biz' computers. They DID get it to
    run on the latest IBM mainframes though - there
    are celebration videos.

    Yea yea, there are a few other potentials, even
    BeOS, but they're just not nearly as capable
    as Linux or VMS. Amiga-OS ... sorry, no. Have
    less experience with the Control Data systems.
    MIGHT be useful.

    Just saying - Linux/BSD is great, but there ARE
    people legally conspiring against them. Really
    good alts DO need to Be There, SOON.

    Keep up with Distrowatch: They reported a month ago
    that some group is writing a kernel in Rust to go with
    a new OS. Sorry but I lost the name of this one.

    bliss

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andreas Eder@21:1/5 to Bobbie Sellers on Sat Jun 14 20:30:34 2025
    On Sat 14 Jun 2025 at 10:05, Bobbie Sellers <bliss-sf4ever@dslextreme.com> wrote:

    Keep up with Distrowatch: They reported a month ago
    that some group is writing a kernel in Rust to go with
    a new OS. Sorry but I lost the name of this one.

    You may be thinking of Redox OS https://www.redox-os.org/

    'Andreas

    --
    ceterum censeo redmondinem esse delendam

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Andreas Eder on Sat Jun 14 23:27:38 2025
    On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:

    You may be thinking of Redox OS https://www.redox-os.org/

    That name is obviously meant to be a kind of word play on “Rust”. As I recall from my high-school chemistry lessons, a “redox reaction” is one where one reactant is “reduced” (gains electrons) while the other is “oxidized” (loses them). This may or may not involve actual oxygen atoms (which are notorious eaters of electrons), but the concept has been
    generalized from that.

    The slight irony is that the name “Rust” does not come from the well-known redox reaction that iron undergoes with water in the presence of oxygen (catalyzed by a little bit of polar contaminants such as common salt), but
    from the name of a kind of fungus.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Lawrence D'Oliveiro on Sun Jun 15 00:57:32 2025
    On Sat, 14 Jun 2025 23:27:38 -0000 (UTC), Lawrence D'Oliveiro wrote:

    The slight irony is that the name “Rust” does not come from the well-known redox reaction that iron undergoes with water in the presence
    of oxygen (catalyzed by a little bit of polar contaminants such as
    common salt), but from the name of a kind of fungus.

    Even more ironical, rust is a pathogen that the Romans sacrificed a dog in hopes of preventing,

    https://penelope.uchicago.edu/encyclopaedia_romana/calendar/robigalia.html

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Bobbie Sellers on Sat Jun 14 22:57:49 2025
    On 6/14/25 1:05 PM, Bobbie Sellers wrote:


    On 6/13/25 22:15, c186282 wrote:
    I've still got my 3-inch (now painfully) small-type VMS
    manual.

    This was one of the genius systems - WAY beyond its time.

    If you were, maybe, the Hilton hotel chain and wanted to
    keep current with systems world-wide - over SLOW modems -
    VMS was set up to do it, even late 70s.

    This was a WELL thought-out operating system.

    Now, alas, somebody BOUGHT all the code and BIOS
    stuff. No longer 'free' for development. They will
    hold it hostage for the last nickel until it's
    utterly obsolete.

    Tragic.

    I still have HOPE there will be a New Linus - someone
    who sees the value of the system/approach and writes
    an updated work-alike.

    Various corps DO seem to be scheming against Linux.
    They somehow want to claim ownership and then
    absorb/destroy the system. The increasing M$ content
    is part of that scheme. A *FREE* OS - horrors !!!

    Some OTHER capable system is Disaster-Proofing the future.
    The other oddball is Plan-9 ... but it was never meant
    for 'home/small-biz' computers. They DID get it to
    run on the latest IBM mainframes though - there
    are celebration videos.

    Yea yea, there are a few other potentials, even
    BeOS, but they're just not nearly as capable
    as Linux or VMS. Amiga-OS ... sorry, no. Have
    less experience with the Control Data systems.
    MIGHT be useful.

    Just saying - Linux/BSD is great, but there ARE
    people legally conspiring against them. Really
    good alts DO need to Be There, SOON.

    Keep up with Distrowatch: They reported a month ago
    that some group is writing a kernel in Rust to go with
    a new OS. Sorry but I lost the name of this one.

    I've nothing AGAINST Rust ... though frankly it seems
    redundant, you could do it almost as easily in 'C'.
    Too many 'new languages' just seem to be 'C' knock-offs
    with crappier syntax.

    The "new OS" is the more interesting bit. But what
    is it based on ... and is it REALLY new and unique
    and beyond the reach of corporate lawyers ?

    OS-9 kind of co-evolved with Unix back in the day.
    Some describe it as Unix-ish, but fast and compact
    and un-sucky. I used it once or twice back in the
    day and it WAS impressive. You can STILL buy it
    and it's expanded its base from just the 6809.
    Thing is, it's "Unix-LIKE" but in no way Unix.
    Good IDEAS were kept and improved, but none of
    it is original Unix code.

    What we NEED is a new Linus who can combine lots
    of the good IDEAS into a clearly new OS that
    no corp can claim ownership of. Is there such
    a person these days ?

    LONGER term ... will there be any OS's AT ALL ???
    Fair chance it'll ALL be 'AI' stuff that kind of
    pretends to be an OS while plotting world overthrow.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Lawrence D'Oliveiro on Sat Jun 14 23:03:34 2025
    On 6/14/25 7:27 PM, Lawrence D'Oliveiro wrote:
    On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:

    You may be thinking of Redox OS https://www.redox-os.org/

    That name is obviously meant to be a kind of word play on “Rust”. As I recall from my high-school chemistry lessons, a “redox reaction” is one where one reactant is “reduced” (gains electrons) while the other is “oxidized” (loses them). This may or may not involve actual oxygen atoms (which are notorious eaters of electrons), but the concept has been generalized from that.

    The slight irony is that the name “Rust” does not come from the well-known
    redox reaction that iron undergoes with water in the presence of oxygen (catalyzed by a little bit of polar contaminants such as common salt), but from the name of a kind of fungus.

    "Fungus" ??? TOO CRUEL !

    Rust is perfectly OK ... but I don't see much advantage
    over plain 'C'. Lots of 'new langs' are like that, just
    'C' with nastier syntax.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Sat Jun 14 23:32:37 2025
    On 6/14/25 8:57 PM, rbowman wrote:
    On Sat, 14 Jun 2025 23:27:38 -0000 (UTC), Lawrence D'Oliveiro wrote:

    The slight irony is that the name “Rust” does not come from the
    well-known redox reaction that iron undergoes with water in the presence
    of oxygen (catalyzed by a little bit of polar contaminants such as
    common salt), but from the name of a kind of fungus.

    Even more ironical, rust is a pathogen that the Romans sacrificed a dog in hopes of preventing,

    https://penelope.uchicago.edu/encyclopaedia_romana/calendar/robigalia.html

    Hmmmmmmmm ... in THEORY a dose of iron-containing
    hemoglobin in the vicinity COULD delay rusting ...

    I don't think the Old People were very aware of
    zinc. Bronze came early, but brass didn't really
    show up until much later.

    Probably because they didn't have any VMS units
    to help with analysis :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Sun Jun 15 08:26:07 2025
    On 15/06/2025 04:32, c186282 wrote:
    I don't think the Old People were very aware of
      zinc. Bronze came early, but brass didn't really
      show up until much later.

    Mm. Iron came and hordes of bronze bars became worthless.
    Talk about disruptive technology.
    The history of technology is fascinating


      Probably because they didn't have any VMS units
      to help with analysis  🙂

    Very likely true
    --
    The higher up the mountainside
    The greener grows the grass.
    The higher up the monkey climbs
    The more he shows his arse.

    Traditional

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to c186282@nnada.net on Sun Jun 15 14:24:57 2025
    c186282 <c186282@nnada.net> wrote:
    I've nothing AGAINST Rust ... though frankly it seems
    redundant, you could do it almost as easily in 'C'.
    Too many 'new languages' just seem to be 'C' knock-offs
    with crappier syntax.

    Rust's big claim to fame was/is memory safety -- that you can't have
    buffer overflows or writes to unallocated memory. By making such
    actions impossible, Rust programs cannot suffer from the security
    breaches that occur when someone exploits a buffer overflow in an
    existing C program.

    In essence it does for you all the "checking error codes" and "checking
    buffer sizes for sufficient space before writing" that C programmers
    have had to do manually, and sometimes forget to include.
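
    A minimal C sketch of those two manual disciplines (the function and
    names here are invented for illustration); Rust's compiler enforces
    the equivalent of both checks automatically:

        #include <stdio.h>
        #include <string.h>

        /* Copy src into dst only after proving it fits -- the buffer-size
           check a C programmer must remember to write by hand. */
        static int safe_copy(char *dst, size_t dst_size, const char *src)
        {
            if (dst == NULL || src == NULL)
                return -1;
            if (strlen(src) + 1 > dst_size)
                return -1;              /* would overflow: refuse */
            memcpy(dst, src, strlen(src) + 1);
            return 0;
        }

        int main(void)
        {
            char buf[8];
            /* ... and the error code must actually be checked, too. */
            if (safe_copy(buf, sizeof buf, "hello") != 0)
                fprintf(stderr, "copy refused\n");
            else
                puts(buf);
            return 0;
        }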

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Sun Jun 15 18:49:31 2025
    On Sat, 14 Jun 2025 23:32:37 -0400, c186282 wrote:

    On 6/14/25 8:57 PM, rbowman wrote:
    On Sat, 14 Jun 2025 23:27:38 -0000 (UTC), Lawrence D'Oliveiro wrote:

    The slight irony is that the name “Rust” does not come from the
    well-known redox reaction that iron undergoes with water in the
    presence of oxygen (catalyzed by a little bit of polar contaminants
    such as common salt), but from the name of a kind of fungus.

    Even more ironical, rust is a pathogen that the Romans sacrificed a dog
    in hopes of preventing,

    https://penelope.uchicago.edu/encyclopaedia_romana/calendar/
    robigalia.html

    Hmmmmmmmm ... in THEORY a dose of iron-containing hemoglobin in the
    vicinity COULD delay rusting ...

    afaik wheat rust has nothing to do with iron. Odd to name your programming language after a fungus that has been destroying crops for millennia.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Sun Jun 15 21:12:04 2025
    On 6/15/25 3:26 AM, The Natural Philosopher wrote:
    On 15/06/2025 04:32, c186282 wrote:
    I don't think the Old People were very aware of
       zinc. Bronze came early, but brass didn't really
       show up until much later.

    Mm. Iron came and hordes of bronze bars became worthless.
    Talk about disruptive technology.
    The history of technology is fascinating

    Bronze is STILL valuable ... but not in the major
    military sense as back in the old days. Of course
    bronze cannons were still made into the 1800s, but
    usually for small mobile applications.

    Iron was indeed a 'disruptive technology', I'll
    agree with that ! Even fairly crappy steel swords
    and spears were still better than bronze.

       Probably because they didn't have any VMS units
       to help with analysis  🙂

    Very likely true

    Babbage was making his computers using BRASS gears
    and cogs - not bronze or steel. Lovelace didn't
    live long enough to invent VMS alas.

    Hmm, how WOULD you network Babbage AEs using the
    tech of the time ? The telegraph was demonstrated
    just a few years after he proposed the AE ... maybe
    a two baud connection ? :-)

    Steel micro-factoid - the famous Damascus steel that
    allowed the Arabs to make light thin fast ultra-
    sharp swords was not actually MADE in Damascus or
    anywhere near. It came as ingots from outfits in
    eastern INDIA ... where the 'magic contaminant',
    vanadium, was introduced by accident because they
    lined their steel kilns with the plentiful seashells.
    The particular species tended to absorb and concentrate
    vanadium and it'd get into the steel.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Sun Jun 15 22:45:18 2025
    On 6/15/25 2:49 PM, rbowman wrote:
    On Sat, 14 Jun 2025 23:32:37 -0400, c186282 wrote:

    On 6/14/25 8:57 PM, rbowman wrote:
    On Sat, 14 Jun 2025 23:27:38 -0000 (UTC), Lawrence D'Oliveiro wrote:

    The slight irony is that the name “Rust” does not come from the
    well-known redox reaction that iron undergoes with water in the
    presence of oxygen (catalyzed by a little bit of polar contaminants
    such as common salt), but from the name of a kind of fungus.

    Even more ironical, rust is a pathogen that the Romans sacrificed a dog
    in hopes of preventing,

    https://penelope.uchicago.edu/encyclopaedia_romana/calendar/
    robigalia.html

    Hmmmmmmmm ... in THEORY a dose of iron-containing hemoglobin in the
    vicinity COULD delay rusting ...

    afaik wheat rust has nothing to do with iron. Odd to name your programming language after a fungus that has been destroying crops for millennia.

    As the subject seemed to be IRON I was commenting
    on adding 'free Fe' into an environment where lots
    of iron was involved.

    WHEAT rust is a very different subject. Doubt blood
    would REALLY help there.

    But, superstition IS often stronger than 100 True Facts.
    Reason and 'gut feeling' are two entirely different
    brain systems - the latter being MUCH older and very
    much Darwin-Tested.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Rich on Sun Jun 15 22:26:33 2025
    On 6/15/25 10:24 AM, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    I've nothing AGAINST Rust ... though frankly it seems
    redundant, you could do it almost as easily in 'C'.
    Too many 'new languages' just seem to be 'C' knock-offs
    with crappier syntax.

    Rust's big claim to fame was/is memory safety -- that you can't have
    buffer overflows or writes to unallocated memory. By making such
    actions impossible, Rust programs cannot suffer from the security
    breaches that occur when someone exploits a buffer overflow in an
    existing C program.

    That IS important these days. A huge percentage of
    hacks seem to be exploitation of buffer overflows -
    and M$ has NEVER stamped-out that problem. There
    are 'C' programming practices that can reduce the
    problem, but it seems few USE those even to this day
    no matter how much the manuals scream.
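
    For what it's worth, the practices being alluded to look roughly like
    this (a sketch, not a complete catalogue) - length-bounded calls, and
    results that actually get checked:

        #include <stdio.h>

        int main(void)
        {
            char line[64];
            char out[80];

            /* fgets() is length-bounded, unlike the notorious gets() */
            if (fgets(line, sizeof line, stdin) == NULL)
                return 1;

            /* snprintf() truncates instead of overflowing, unlike sprintf() */
            if (snprintf(out, sizeof out, "you typed: %s", line)
                    >= (int)sizeof out)
                fprintf(stderr, "warning: output truncated\n");

            fputs(out, stdout);
            return 0;
        }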

    In essence it does for you all the "checking error codes" and "checking
    buffer sizes for sufficient space before writing" that C programmers
    have had to do manually, and sometimes forget to include.

    If writing long boring code, esp if it's just PART
    of some larger app you're not in control of, it IS
    tempting to cut corners.

    REALLY good code - 'C' or otherwise - can often be
    as much as one third 'fuck-up prevention'. I did lots
    of custom code for company apps and just dealing with
    every way clueless users could screw up was literally
    25-33% of the code. Defending against Vlad's boyz
    now makes it even more difficult.

    Otherwise however, the RUST syntax in general just
    seems more unpleasant than 'C'. It's like someone
    deliberately wanted to screw with people.

    There are SOME in these groups who really HATE
    Rust, treat it like an invasion of demonic powers.
    It's not nearly THAT bad IMHO ... I'm just never
    likely to use it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Mon Jun 16 04:30:56 2025
    On Sun, 15 Jun 2025 22:26:33 -0400, c186282 wrote:

    Otherwise however, the RUST syntax in general just seems more
    unpleasant than 'C'. It's like someone deliberately wanted to screw
    with people.

    println!() would be a show stopper for me. Why everybody has to come up
    with their special snowflake function to write to the console is beyond
    me. Won't even go into cout << "hello world" << endl;
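
    For comparison, the competing spellings collected in one place - the
    working line is plain C, the rest are left as comments (the ! is there
    because println! is a Rust macro rather than a function):

        #include <stdio.h>

        int main(void)
        {
            printf("hello world\n");                     /* C              */
            /* println!("hello world");                     Rust (a macro) */
            /* std::cout << "hello world" << std::endl;     C++            */
            /* writeln('hello world');                      Pascal         */
            /* print("hello world")                         Python         */
            return 0;
        }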

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Mon Jun 16 04:35:47 2025
    On Sun, 15 Jun 2025 22:45:18 -0400, c186282 wrote:

    As the subject seemed to be IRON I was commenting on adding 'free Fe'
    into an environment where lots of iron was involved.

    I believe it all started with the rust programming language which was
    named after a fungus. No iron involved.

    Now if the discussion were about IronPython...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Mon Jun 16 01:31:45 2025
    On 6/16/25 12:30 AM, rbowman wrote:
    On Sun, 15 Jun 2025 22:26:33 -0400, c186282 wrote:

    Otherwise however, the RUST syntax in general just seems more
    unpleasant than 'C'. It's like someone deliberately wanted to screw
    with people.

    println!() would be a show stopper for me. Why everybody has to come up
    with their special snowflake function to write to the console is beyond
    me. Won't even go into cout << "hello world" << endl;


    Know exactly what you mean !

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Mon Jun 16 01:35:25 2025
    On 6/16/25 12:35 AM, rbowman wrote:
    On Sun, 15 Jun 2025 22:45:18 -0400, c186282 wrote:

    As the subject seemed to be IRON I was commenting on adding 'free Fe'
    into an environment where lots of iron was involved.

    I believe it all started with the rust programming language which was
    named after a fungus. No iron involved.

    I thought it began with sacrificing dogs ....

    Now if the discussion were about IronPython...

    Isn't that mostly a pointless Win/C# thing ?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to c186282@nnada.net on Mon Jun 16 18:15:31 2025
    On 2025-06-16, c186282 <c186282@nnada.net> wrote:

    Babbage was making his computers using BRASS gears
    and cogs - not bronze or steel. Lovelace didn't
    live long enough to invent VMS alas.

    Hmm, how WOULD you network Babbage AEs using the
    tech of the time ? The telegraph was demonstrated
    just a few years after he proposed the AE ... maybe
    a two baud connection ? :-)

    Well, Teletypes managed 110 baud (even 150 on the model 37
    but that was pushing it). I have a 35RO on which I did a
    complete adjustment and lubrication schedule according to
    the manual. In the process I got a good look at how it
    decoded incoming data with nothing more than a honking big
    solenoid and a bunch of very clever little cams and pawls.
    Pretty awesome, actually.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Charlie Gibbs on Tue Jun 17 23:20:24 2025
    On 6/16/25 2:15 PM, Charlie Gibbs wrote:
    On 2025-06-16, c186282 <c186282@nnada.net> wrote:

    Babbage was making his computers using BRASS gears
    and cogs - not bronze or steel. Lovelace didn't
    live long enough to invent VMS alas.

    Hmm, how WOULD you network Babbage AEs using the
    tech of the time ? The telegraph was demonstrated
    just a few years after he proposed the AE ... maybe
    a two baud connection ? :-)

    Well, Teletypes managed 110 baud (even 150 on the model 37
    but that was pushing it). I have a 35RO on which I did a
    complete adjustment and lubrication schedule according to
    the manual. In the process I got a good look at how it
    decoded incoming data with nothing more than a honking big
    solenoid and a bunch of very clever little cams and pawls.
    Pretty awesome, actually.

    Old telegraphs were interesting - because the data
    was essentially 'binary' - ones and zeros, contact
    or not. This made it possible to use simple relays
    as repeater/amplifiers. Easy 1800s tech.

    So, in theory, they COULD have networked Babbage
    Analytical Engines. Very low speed, but it really
    would have worked.

    I wonder what protocol Ada would have envisioned ?
    Babbage was the hardware guy, but Lovelace understood
    the Full Potential a LOT better.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Wed Jun 18 04:14:42 2025
    On Tue, 17 Jun 2025 23:20:24 -0400, c186282 wrote:

    Old telegraphs were interesting - because the data was essentially
    'binary' - ones and zeros, contact or not. This made it possible to
    use simple relays as repeater/amplifiers. Easy 1800s tech.

    Sort of. The first attempts were complex.

    https://en.wikipedia.org/wiki/Needle_telegraph

    Morse and the refiners of his system introduced a time element, with a dah being three dits.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to c186282@nnada.net on Wed Jun 18 05:30:06 2025
    c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
    On 6/14/25 7:27 PM, Lawrence D'Oliveiro wrote:
    On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:

    You may be thinking of Redox OS https://www.redox-os.org/

    That name is obviously meant to be a kind of word play on “Rust”. As I
    recall from my high-school chemistry lessons, a “redox reaction” is one
    where one reactant is “reduced” (gains electrons) while the other is
    “oxidized” (loses them). This may or may not involve actual oxygen atoms
    (which are notorious eaters of electrons), but the concept has been
    generalized from that.

    The slight irony is that the name “Rust” does not come from the well-known
    redox reaction that iron undergoes with water in the presence of oxygen
    (catalyzed by a little bit of polar contaminants such as common salt), but
    from the name of a kind of fungus.

    "Fungus" ??? TOO CRUEL !

    Rust is perfectly OK ... but I don't see much advantage
    over plain 'C'. Lots of 'new langs' are like that, just
    'C' with nastier syntax.


    Rust I personally dislike the syntax of, AND its development team is
    apparently pretty controversial.
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to All on Wed Jun 18 02:09:19 2025
    On 6/18/25 1:30 AM, candycanearter07 wrote:
    c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
    On 6/14/25 7:27 PM, Lawrence D'Oliveiro wrote:
    On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:

    You may be thinking of Redox OS https://www.redox-os.org/

    That name is obviously meant to be a kind of word play on “Rust”. As I
    recall from my high-school chemistry lessons, a “redox reaction” is one
    where one reactant is “reduced” (gains electrons) while the other is
    “oxidized” (loses them). This may or may not involve actual oxygen atoms
    (which are notorious eaters of electrons), but the concept has been
    generalized from that.

    The slight irony is that the name “Rust” does not come from the well-known
    redox reaction that iron undergoes with water in the presence of oxygen
    (catalyzed by a little bit of polar contaminants such as common salt), but
    from the name of a kind of fungus.

    "Fungus" ??? TOO CRUEL !

    Rust is perfectly OK ... but I don't see much advantage
    over plain 'C'. Lots of 'new langs' are like that, just
    'C' with nastier syntax.


    Rust I personally dislike the syntax of, AND its development team is apparently pretty controversial.


    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Wed Jun 18 02:34:23 2025
    On 6/18/25 12:14 AM, rbowman wrote:
    On Tue, 17 Jun 2025 23:20:24 -0400, c186282 wrote:

    Old telegraphs were interesting - because the data was essentially
    'binary' - ones and zeros, contact or not. This made it possible to
    use simple relays as repeater/amplifiers. Easy 1800s tech.

    Sort of. The first attempts were complex.

    https://en.wikipedia.org/wiki/Needle_telegraph

    Morse and the refiners of his system introduced a time element, with a dah being three dits.

    Yea, required a few tweaks - but what doesn't ?

    In any case, using telegraph to network Babbage
    machines WAS possible by about 1850.

    Alas the machines WEREN'T THERE YET. The theory
    was perfect, the physical MEANS had not been
    realized. THAT had to wait for valves to replace
    brass gears.

    As best I've been able to tell, Lovelace never
    described any intercommunication scheme between
    Babbage computers. Her health went bad ...
    basically crushing the last half of her career
    alas.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to rbowman on Wed Jun 18 17:40:13 2025
    rbowman <bowman@montana.com> wrote:
    On Sun, 15 Jun 2025 22:26:33 -0400, c186282 wrote:

    Otherwise however, the RUST syntax in general just seems more
    unpleasant than 'C'. It's like someone deliberately wanted to screw
    with people.

    println!() would be a show stopper for me. Why everybody has to come up
    with their special snowflake function to write to the console is beyond
    me. Won't even go into cout << "hello world" << endl;

    "println" (without the !) makes me think someone was very much a Pascal disciple (with it's write/writeln) for output.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to c186282@nnada.net on Wed Jun 18 19:00:02 2025
    c186282 <c186282@nnada.net> wrote at 06:09 this Wednesday (GMT):
    On 6/18/25 1:30 AM, candycanearter07 wrote:
    c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
    On 6/14/25 7:27 PM, Lawrence D'Oliveiro wrote:
    On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:

    You may be thinking of Redox OS https://www.redox-os.org/

    That name is obviously meant to be a kind of word play on “Rust”. As I
    recall from my high-school chemistry lessons, a “redox reaction” is one
    where one reactant is “reduced” (gains electrons) while the other is
    “oxidized” (loses them). This may or may not involve actual oxygen atoms
    (which are notorious eaters of electrons), but the concept has been
    generalized from that.

    The slight irony is that the name “Rust” does not come from the well-known
    redox reaction that iron undergoes with water in the presence of oxygen
    (catalyzed by a little bit of polar contaminants such as common salt), but
    from the name of a kind of fungus.

    "Fungus" ??? TOO CRUEL !

    Rust is perfectly OK ... but I don't see much advantage
    over plain 'C'. Lots of 'new langs' are like that, just
    'C' with nastier syntax.


    Rust I personally dislike the syntax of, AND its development team is
    apparently pretty controversial.


    IMHO, stick to 'C' ... but use GOOD PRACTICES.


    Makes sense to me.
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to candycanearter07@candycanearter07.n on Wed Jun 18 20:23:21 2025
    candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
    c186282 <c186282@nnada.net> wrote at 06:09 this Wednesday (GMT):
    On 6/18/25 1:30 AM, candycanearter07 wrote:
    c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
    On 6/14/25 7:27 PM, Lawrence D'Oliveiro wrote:
    On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:

    You may be thinking of Redox OS https://www.redox-os.org/

    That name is obviously meant to be a kind of word play on “Rust”. As I
    recall from my high-school chemistry lessons, a “redox reaction” is one
    where one reactant is “reduced” (gains electrons) while the other is
    “oxidized” (loses them). This may or may not involve actual oxygen atoms
    (which are notorious eaters of electrons), but the concept has been
    generalized from that.

    The slight irony is that the name “Rust” does not come from the well-known
    redox reaction that iron undergoes with water in the presence of oxygen
    (catalyzed by a little bit of polar contaminants such as common salt), but
    from the name of a kind of fungus.

    "Fungus" ??? TOO CRUEL !

    Rust is perfectly OK ... but I don't see much advantage
    over plain 'C'. Lots of 'new langs' are like that, just
    'C' with nastier syntax.


    Rust I personally dislike the syntax of, AND its development team is
    apparently pretty controversial.


    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    Makes sense to me.

    Yes, assuming a perfectly infallible programmer, C can be "memory safe"
    as well.

    Unfortunately, there is no such "perfectly infallible programmer", and
    trying to be one is much like trying to remain anonymous online when the
    FBI, CIA and NSA are all out to find you. You have to be *absolutely
    perfect* in your OPSEC, every single time. The FBI, CIA and NSA can
    just patiently wait for that one time you slightly slip up, and
    *gotcha*.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to Rich on Wed Jun 18 20:30:15 2025
    On 2025-06-18, Rich <rich@example.invalid> wrote:

    candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:

    c186282 <c186282@nnada.net> wrote at 06:09 this Wednesday (GMT):

    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    Makes sense to me.

    Yes, assuming a perfectly infallible programmer, C can be "memory safe"
    as well.

    Unfortunately, there is no such "perfectly infallible programmer", and
    trying to be one is much like trying to remain anonymous online when the
    FBI, CIA and NSA are all out to find you. You have to be *absolutely
    perfect* in your OPSEC, every single time. The FBI, CIA and NSA can
    just patiently wait for that one time you slightly slip up, and
    *gotcha*.

    Ditto for web scammers (including those "Do not sell my data" buttons
    that they're waiting for you to forget to click).

    Pretty much the same for life in general, actually.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Rich on Wed Jun 18 23:06:49 2025
    On Wed, 18 Jun 2025 17:40:13 -0000 (UTC), Rich wrote:

    rbowman <bowman@montana.com> wrote:
    On Sun, 15 Jun 2025 22:26:33 -0400, c186282 wrote:

    Otherwise however, the RUST syntax in general just seems more
    unpleasant than 'C'. It's like someone deliberately wanted to screw
    with people.

    println!() would be a show stopper for me. Why everybody has to come up
    with their special snowflake function to write to the console is beyond
    me. Won't even go into cout << "hello world" << endl;

    "println" (without the !) makes me think someone was very much a Pascal disciple (with it's write/writeln) for output.

    <rant>
    So in one swell foop they manage to confuse C and Java people (printf), Go people (Printf), Pascal people (writeln), C# people (WriteLine), Python
    people (print), JavaScript people (log), Fortran people (write) and
    probably other languages I'm not familiar with.

    Then there is fn, function, func, def, and a few others. I hate languages
    that are sort of like other languages but not quite. At least JavaScript
    doesn't get its knickers in a knot over semicolons, although its casual
    approach to ' and " burns me when I use not so casual languages.
    </rant>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Rich on Wed Jun 18 23:09:53 2025
    On Wed, 18 Jun 2025 20:23:21 -0000 (UTC), Rich wrote:

    candycanearter07 <candycanearter07@candycanearter07.nomail.afraid>
    wrote:
    c186282 <c186282@nnada.net> wrote at 06:09 this Wednesday (GMT):
    On 6/18/25 1:30 AM, candycanearter07 wrote:
    c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
    On 6/14/25 7:27 PM, Lawrence D'Oliveiro wrote:
    On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:

    You may be thinking of Redox OS https://www.redox-os.org/

    That name is obviously meant to be a kind of word play on “Rust”.
    As I recall from my high-school chemistry lessons, a “redox
    reaction” is one where one reactant is “reduced” (gains electrons)
    while the other is “oxidized” (loses them). This may or may not
    involve actual oxygen atoms (which are notorious eaters of
    electrons), but the concept has been generalized from that.

    The slight irony is that the name “Rust” does not come from the
    well-known redox reaction that iron undergoes with water in the
    presence of oxygen (catalyzed by a little bit of polar contaminants
    such as common salt), but from the name of a kind of fungus.

    "Fungus" ??? TOO CRUEL !

    Rust is perfectly OK ... but I don't see much advantage over
    plain 'C'. Lots of 'new langs' are like that, just 'C' with
    nastier syntax.


    Rust I personally dislike the syntax of, AND its development team is
    apparently pretty controversial.


    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    Makes sense to me.

    Yes, assuming a perfectly infallible programmer, C can be "memory safe"
    as well.

    I don't know rust at all but I wonder if it's like Stroustrup's comment on
    C++ -- it's harder to shoot yourself in the foot but when you do you blow
    your whole leg off.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Rich on Wed Jun 18 19:43:49 2025
    On 6/18/25 1:40 PM, Rich wrote:
    rbowman <bowman@montana.com> wrote:
    On Sun, 15 Jun 2025 22:26:33 -0400, c186282 wrote:

    Otherwise however, the RUST syntax in general just seems more
    unpleasant than 'C'. It's like someone deliberately wanted to screw
    with people.

    println!() would be a show stopper for me. Why everybody has to come up
    with their special snowflake function to write to the console is beyond
    me. Won't even go into cout << "hello world" << endl;

    "println" (without the !) makes me think someone was very much a Pascal disciple (with it's write/writeln) for output.

    I still do Pascal ... writeln() simply tacks
    on a '\n' to each line. It's a convenience.
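
    C grew the same split, for what it's worth: puts() appends the
    newline for you, fputs() does not:

        #include <stdio.h>

        int main(void)
        {
            fputs("no newline appended", stdout);   /* like Pascal's write   */
            puts(" <- newline appended");           /* like Pascal's writeln */
            return 0;
        }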

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to c186282@nnada.net on Thu Jun 19 01:08:58 2025
    c186282 <c186282@nnada.net> wrote:
    On 6/18/25 1:40 PM, Rich wrote:
    rbowman <bowman@montana.com> wrote:
    On Sun, 15 Jun 2025 22:26:33 -0400, c186282 wrote:

    Otherwise however, the RUST syntax in general just seems more
    unpleasant than 'C'. It's like someone deliberately wanted to screw
    with people.

    println!() would be a show stopper for me. Why everybody has to come up
    with their special snowflake function to write to the console is beyond
    me. Won't even go into cout << "hello world" << endl;

    "println" (without the !) makes me think someone was very much a Pascal
    disciple (with its write/writeln) for output.

    I still do Pascal ... writeln() simply tacks on a '\n' to each
    line. It's a convenience.

    My point was the chosen spelling makes it look like someone liked
    Pascal's function name style, but preferred "print" to "write" for some
    reason.

    Wrote my fair share of Pascal back in the day (Apple II UCSD, Turbo
    Pascal 4 on a PC clone, some University's Pascal compiler (I've long
    since forgotten the name) for the CDC Cyber 7600 during college). I
    know what the 'ln' suffix on Pascal's write (and read) does.

    Now, what would be surprising would be if a Pascal disciple decided on
    "println" (why the ! I don't know) but then made it not append a new
    line to the output.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Rich on Thu Jun 19 00:46:35 2025
    On 6/18/25 9:08 PM, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 6/18/25 1:40 PM, Rich wrote:
    rbowman <bowman@montana.com> wrote:
    On Sun, 15 Jun 2025 22:26:33 -0400, c186282 wrote:

    Otherwise however, the RUST syntax in general just seems more
    unpleasant than 'C'. It's like someone deliberately wanted to screw
    with people.

    println!() would be a show stopper for me. Why everybody has to come up
    with their special snowflake function to write to the console is beyond
    me. Won't even go into cout << "hello world" << endl;

    "println" (without the !) makes me think someone was very much a Pascal
    disciple (with its write/writeln) for output.

    I still do Pascal ... writeln() simply tacks on a '\n' to each
    line. It's a convenience.

    My point was the chosen spelling makes it look like someone liked
    Pascal's function name style, but preferred "print" to "write" for some reason.

    Wrote my fair share of Pascal back in the day (Apple II UCSD, Turbo
    Pascal 4 on a PC clone, some University's Pascal compiler (I've long
    since forgotten the name) for the CDC Cyber 7600 during college). I
    know what the 'ln' suffix on Pascal's write (and read) does.

    Now, what would be surprising would be if a Pascal disciple decided on "println" (why the ! I don't know) but then made it not append a new
    line to the output.

    'B'/'C' was a little ahead of Pascal ... so my guess is
    that Wirth was annoyed having to physically add '\n' and
    made a function that'd do it automatically.

    You can still get a 'B' compiler for Linux BTW.

    Still LOVE Pascal however, code poetry. Wirth DID have
    a vision to amp Algol (you can still get an ALGOL
    compiler and manuals too).

    Python sometimes annoys me for NOT having a println()

    DO look into Lazarus/FPC - THE fastest way to make a
    good capable basic GUI in Linux AND portable to Win.
    Not 'modern artistic', but VERY USABLE. Works like
    Tkinter and others ... a pgm within a pgm ... but
    with MAJOR EZ event/options control and such. I've used
    it for a LONG time - once the Borland/Delphi stuff
    became Giant $$$.

    Have had SOME problems of late with the distro libs,
    wrong versions of the various components. Try the
    home site. Correct install order is important.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Thu Jun 19 06:36:40 2025
    On Thu, 19 Jun 2025 00:46:35 -0400, c186282 wrote:

    Python sometimes annoys me for NOT having a println()

    Yes, but it has f strings now. I never liked the old style formatting. At
    least f is similar to the C# $ rather than some of the stranger ways to signify string interpolation.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to c186282@nnada.net on Thu Jun 19 08:40:31 2025
    (Counting C++ as a dialect of C for the purposes of this posting, which
    isn’t true in general, but doesn’t really affect the point.)

    c186282 <c186282@nnada.net> writes:
    On 6/18/25 1:30 AM, candycanearter07 wrote:
    c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):

    Rust is perfectly OK ... but I don't see much advantage
    over plain 'C'. Lots of 'new langs' are like that, just
    'C' with nastier syntax.

    It is absolutely not C with different syntax. Language designers have
    learned a lot since C.

    Rust I personally dislike the syntax of, AND its development team is
    apparently pretty controversial.

    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    The software industry has been trying this for decades now. It does not
    work.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Richard Kettlewell on Fri Jun 20 00:43:17 2025
    On 6/19/25 3:40 AM, Richard Kettlewell wrote:
    (Counting C++ as a dialect of C for the purposes of this posting, which isn’t true in general, but doesn’t really affect the point.)

    c186282 <c186282@nnada.net> writes:
    On 6/18/25 1:30 AM, candycanearter07 wrote:
    c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):

    Rust is perfectly OK ... but I don't see much advantage
    over plain 'C'. Lots of 'new langs' are like that, just
    'C' with nastier syntax.

    It is absolutely not C with different syntax. Language designers have
    learned a lot since C.


    Ummmmmmmm ... nothing GOOD that I can tell :-)


    Rust I personally dislike the syntax of, AND its development team is
    apparently pretty controversial.

    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    The software industry has been trying this for decades now. It does not
    work.

    At some point, soon, they need to start flagging
    the unsafe functions as ERRORS, not just WARNINGS.
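
    GCC and Clang can already be pushed partway in that direction; a
    sketch using the (real) poison pragma - the function list here is
    just an example - plus -Wall -Werror to upgrade remaining warnings:

        #include <stdio.h>

        /* Any use of these identifiers past this point is a hard
           compile error, not a warning. */
        #pragma GCC poison gets strcpy sprintf strcat

        int main(void)
        {
            char buf[16];
            /* gets(buf);  <-- would now refuse to compile at all */
            if (fgets(buf, sizeof buf, stdin))
                fputs(buf, stdout);
            return 0;
        }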

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to c186282@nnada.net on Fri Jun 20 09:00:18 2025
    c186282 <c186282@nnada.net> writes:
    On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
    c186282 <c186282@nnada.net> writes:
    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    The software industry has been trying this for decades now. It does
    not work.

    At some point, soon, they need to start flagging the unsafe functions
    as ERRORS, not just WARNINGS.

    The problem is not just a subset of unsafe functions. The whole language
    is riddled with unsafe semantics.

    There is some movement towards fixing the easy issues, e.g. [1]. But the
    wider issues are a lot harder to truly fix, so much so that one of the
    more promising options is an architecture extension[2]; and there
    remains considerable resistance[3] in the standards body to fixing other issues, despite their recurring role in defects and vulnerabilities.

    [1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
    [2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
    [3] https://www.youtube.com/watch?v=DRgoEKrTxXY

    Most languages after C designed these issues out, one way or another.
    The clever bit is figuring out how to combine performance and safety,
    and that’s what language designers have been working out, increasingly successfully.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Fri Jun 20 10:12:28 2025
    On 20/06/2025 05:43, c186282 wrote:
    The software industry has been trying this for decades now. It does not
    work.

      At some point, soon, they need to start flagging
      the unsafe functions as ERRORS, not just WARNINGS.

    The problem is that C was designed by two smart people to run on small
    hardware for use by other smart people.

    Although they didn't *invent* stack-based temporary variables, I was
    totally impressed when I discovered how they worked.

    But the potential for overrunning *any* piece of memory allocated for a variable is always there unless you are using the equivalent of a whole
    other CPU to manage memory and set hard limits.

    You can get rid of using the program stack which helps, but the problem
    remains
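
    The overrun in miniature - deliberately broken C, shown only to make
    the point; building with -fsanitize=address (a real GCC/Clang option)
    buys run-time policing of exactly this, at a cost:

        #include <string.h>

        int main(void)
        {
            char small[8];

            /* 16 bytes into an 8-byte stack buffer: undefined behaviour,
               silently clobbering whatever happens to live next to it. */
            memcpy(small, "0123456789abcdef", 16);

            return 0;
        }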

    --
    The biggest threat to humanity comes from socialism, which has utterly
    diverted our attention away from what really matters to our existential survival, to indulging in navel gazing and faux moral investigations
    into what the world ought to be, whilst we fail utterly to deal with
    what it actually is.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Richard Kettlewell on Fri Jun 20 08:57:09 2025
    On Thu, 19 Jun 2025 08:40:31 +0100, Richard Kettlewell wrote:

    The software industry has been trying this for decades now. It does not
    work.

    There is a safety-critical spec for writing C code (MISRA C). It has
    been in production use in the automotive industry for decades now. How
    often do you hear about software bugs in safety-critical car systems?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Richard Kettlewell on Fri Jun 20 10:19:08 2025
    On 20/06/2025 09:00, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
    c186282 <c186282@nnada.net> writes:
    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    The software industry has been trying this for decades now. It does
    not work.

    At some point, soon, they need to start flagging the unsafe functions
    as ERRORS, not just WARNINGS.

    The problem is not just a subset of unsafe functions. The whole language
    is riddled with unsafe semantics.

    There is some movement towards fixing the easy issues, e.g. [1]. But the wider issues are a lot harder to truly fix, so much so that one of the
    more promising options is an architecture extension[2]; and there
    remains considerable resistance[3] in the standards body to fixing other issues, despite their recurring role in defects and vulnerabilities.

    [1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
    [2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
    [3] https://www.youtube.com/watch?v=DRgoEKrTxXY

    Most languages after C designed these issues out, one way or another.
    The clever bit is figuring out how to combine performance and safety,
    and that’s what language designers have been working out, increasingly successfully.

    I don't really see how you can have a program that cannot write or read
    memory beyond the intentions of the original programmer.

    Sure if it's a different process, but catching a mere one-byte read
    past the end of a buffer is going to be hard.

    And probably make the language very hard to use when you are dealing
    with multi-typed data.

    I do like, at a cursory glance, the second link. Hardware that protects
    memory is a great leap forward

    --
    For every complex problem there is an answer that is clear, simple, and
    wrong.

    H.L.Mencken

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to The Natural Philosopher on Fri Jun 20 15:15:25 2025
    The Natural Philosopher <tnp@invalid.invalid> writes:
    On 20/06/2025 09:00, Richard Kettlewell wrote:
    Most languages after C designed these issues out, one way or
    another.
    The clever bit is figuring out how to combine performance and safety,
    and that’s what language designers have been working out, increasingly
    successfully.

    I don't really see how you can have a program that cannot write or
    read memory beyond the intentions of the original programmer.

    It’s not particularly difficult to describe (building a complete
    language is more effort, of course). Some relevant techniques are:

    * Automatic bounds-checking on arrays. Found in practically everything
    more recent than C (and some earlier languages, e.g. Algol). If you
    try to access an array out of bounds then you’re guaranteed a runtime
    error (what that means depends on the language, but it’s definitely
    not going to read or write something it shouldn’t). The application
    may fail if you don’t handle the error, but it does so in a
    predictable way, and it doesn’t represent an attack vector.

    * Either eliminate pointers entirely, or replace with some kind of
    reference type and automated memory management.
    In concrete terms automated memory management usually means a garbage
    collector, but as Rust shows, it’s not the only option.

    You write your programs slightly differently to C as a result, and there
    is a performance cost, although not necessarily as you might think. But
    there are huge numbers of applications written in languages that use
    these strategies.
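
    What the automatic bounds-checking amounts to can even be spelled out
    by hand in C (an illustrative wrapper, not any standard API) - checked
    languages in effect generate the guard in at() for every indexing
    operation:

        #include <stdio.h>
        #include <stdlib.h>

        struct checked_array {
            int    *data;
            size_t  len;
        };

        /* Every access goes through the guard: out of range becomes a
           predictable runtime error instead of a silent overwrite. */
        static int at(struct checked_array a, size_t i)
        {
            if (i >= a.len) {
                fprintf(stderr, "index %zu out of bounds (len %zu)\n",
                        i, a.len);
                abort();
            }
            return a.data[i];
        }

        int main(void)
        {
            int storage[4] = { 1, 2, 3, 4 };
            struct checked_array a = { storage, 4 };

            printf("%d\n", at(a, 2));   /* fine            */
            printf("%d\n", at(a, 9));   /* aborts, cleanly */
            return 0;
        }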


    Now, if you are writing an OS kernel then at least at certain points
    you’re going to need rather free access to memory, and that’s hard if
    you stay purely within constraints like the above.

    In some languages that’s not really an issue. Nobody is writing an OS
    kernel in Python, for example. But we need to write kernels and drivers
    in something...

    A common approach is to segregate these operations (e.g. pointer
    arithmetic) into ‘unsafe’ sections of some kind, meaning only very small parts of the application need the extra care associated with raw
    pointers etc. Everywhere else you can devote your full attention to
    getting the business logic right and not worry too much about the
    consequences of an array overrun or whatever.
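
    A C-flavoured sketch of that segregation (purely illustrative): one
    tiny routine owns the raw pointer arithmetic, and everything else
    goes through a checked wrapper:

        #include <stdio.h>
        #include <stddef.h>

        /* UNSAFE CORE: the only place raw pointer arithmetic happens. */
        static unsigned char peek_raw(const unsigned char *base, size_t off)
        {
            return *(base + off);
        }

        /* SAFE WRAPPER: bounds are validated before the core is called. */
        static int peek(const unsigned char *buf, size_t len, size_t off,
                        unsigned char *out)
        {
            if (off >= len)
                return -1;          /* refuse, predictably */
            *out = peek_raw(buf, off);
            return 0;
        }

        int main(void)
        {
            unsigned char buf[4] = { 10, 20, 30, 40 };
            unsigned char v;

            if (peek(buf, sizeof buf, 2, &v) == 0)
                printf("%u\n", v);
            if (peek(buf, sizeof buf, 9, &v) != 0)
                puts("out of range refused");
            return 0;
        }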

    Consider electricity as an analogy. In the home it’s insulated cables,
    nice friendly plugs, shuttered sockets, etc, at least in civilized
    countries. The really dangerous stuff is kept locked away in a
    substation with high walls, DANGER OF DEATH signs, etc.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to The Natural Philosopher on Fri Jun 20 13:36:10 2025
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 20/06/2025 09:00, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
    c186282 <c186282@nnada.net> writes:
    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    The software industry has been trying this for decades now. It does
    not work.

    At some point, soon, they need to start flagging the unsafe functions
    as ERRORS, not just WARNINGS.

    The problem is not just a subset of unsafe functions. The whole language
    is riddled with unsafe semantics.

    There is some movement towards fixing the easy issues, e.g. [1]. But the
    wider issues are a lot harder to truly fix, so much so that one of the
    more promising options is an architecture extension[2]; and there
    remains considerable resistance[3] in the standards body to fixing other
    issues, despite their recurring role in defects and vulnerabilities.

    [1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
    [2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
    [3] https://www.youtube.com/watch?v=DRgoEKrTxXY

    Most languages after C designed these issues out, one way or another.
    The clever bit is figuring out how to combine performance and safety,
    and that’s what language designers have been working out, increasingly
    successfully.

    I don't really see how you can have a program that cannot write or read memory beyond the intentions of the original programmer.

    Ada accomplished it years ago (i.e., Rust is nothing new in that
    regard). But.... it did so by inserting in the compiled output all
    the checks for buffer sizes before use and checks of error return codes
    that so often get omitted in C code. And the performance hit was
    sufficient that Ada only found a niche in very safety critical
    environments (aircraft avionics, etc.).
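
    In C terms the inserted checks come to roughly this (a sketch of the
    idea only; the names are invented, and real Ada raises
    Constraint_Error rather than calling a function like this):

        #include <stdio.h>
        #include <stdlib.h>

        enum { A_LEN = 4 };
        static int a[A_LEN];

        /* Stand-in for Ada's Constraint_Error handling. */
        static void constraint_error(void)
        {
            fputs("Constraint_Error\n", stderr);
            exit(1);
        }

        /* The programmer writes "a(i) := x"; the compiler effectively
           emits the range check as well -- that's the performance hit. */
        static void store(size_t i, int x)
        {
            if (i >= A_LEN)
                constraint_error();
            a[i] = x;
        }

        int main(void)
        {
            store(2, 42);   /* in range       */
            store(7, 99);   /* range violated */
            return 0;
        }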

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to The Natural Philosopher on Fri Jun 20 13:39:42 2025
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 20/06/2025 05:43, c186282 wrote:
    The software industry has been trying this for decades now. It does not
    work.

      At some point, soon, they need to start flagging
      the unsafe functions as ERRORS, not just WARNINGS.

    The problem is that C was designed by two smart people to run on small hardware for use by other smart people.

    Although they didn't *invent* stack based temporary variables I was
    totally impressed when I discovered how they worked.

    But the potential for overrunning *any* piece of memory allocated for a variable is always there unless you are using the equivalent of a whole
    other CPU to manage memory and set hard limits.

    You can get rid of using the program stack which helps, but the problem remains

    Any language which provides the programmer the equivalent of BASIC's
    "poke" provides the programmer with the ability to corrupt memory.

    Whether that corruption leads to a security issue instead of the
    ordering of 32456 widgets when 42 were intended, depends upon the
    nature of the corruption.
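
    C hands you BASIC's poke almost verbatim (the address below is made
    up for the example, and on any protected-memory OS the write will
    simply fault):

        #include <stdint.h>

        int main(void)
        {
            /* BASIC:  POKE 49152, 66 */
            volatile uint8_t *p = (volatile uint8_t *)(uintptr_t)0xC000u;
            *p = 66;    /* raw one-byte write to an arbitrary address */
            return 0;
        }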

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to c186282@nnada.net on Fri Jun 20 13:30:48 2025
    c186282 <c186282@nnada.net> wrote:
    On 6/19/25 3:40 AM, Richard Kettlewell wrote:
    (Counting C++ as a dialect of C for the purposes of this posting, which
    isn’t true in general, but doesn’t really affect the point.)

    c186282 <c186282@nnada.net> writes:
    On 6/18/25 1:30 AM, candycanearter07 wrote:
    c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):

    Rust is perfectly OK ... but I don't see much advantage
    over plain 'C'. Lots of 'new langs' are like that, just
    'C' with nastier syntax.

    It is absolutely not C with different syntax. Language designers have
    learned a lot since C.


    Ummmmmmmm ... nothing GOOD that I can tell :-)


    Rust I personally dislike the syntax of, AND its development team is
    apparently pretty controversial.

    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    The software industry has been trying this for decades now. It does not
    work.

    At some point, soon, they need to start flagging
    the unsafe functions as ERRORS, not just WARNINGS.

    That's not enough. It is very easy in C to use a "safe" function
    unsafely. Writing "safe" C code requires a very knowledgeable (about
    C), very careful programmer. The vast majority of those writing C are
    neither.
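
    A classic sketch of the point: strncpy() is the textbook "safe"
    replacement for strcpy(), yet used naively it leaves the destination
    unterminated:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char dst[8];
        const char *src = "exactly8";   /* 8 chars: fills dst, no room for NUL */

        strncpy(dst, src, sizeof dst);  /* "safe" call, unsafe result:
                                           dst is not NUL-terminated */
        dst[sizeof dst - 1] = '\0';     /* the step that is so often forgotten */
        printf("%s\n", dst);            /* truncated ("exactly") but safe */
        return 0;
    }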

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Rich on Fri Jun 20 16:14:38 2025
    On 20/06/2025 14:30, Rich wrote:
    c186282 <c186282@nnada.net> wrote:
    On 6/19/25 3:40 AM, Richard Kettlewell wrote:
    (Counting C++ as a dialect of C for the purposes of this posting, which
    isn’t true in general, but doesn’t really affect the point.)

    c186282 <c186282@nnada.net> writes:
    On 6/18/25 1:30 AM, candycanearter07 wrote:
    c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):

    Rust is perfectly OK ... but I don't see much advantage
    over plain 'C'. Lots of 'new langs' are like that, just
    'C' with nastier syntax.

    It is absolutely not C with different syntax. Language designers have
    learned a lot since C.


    Ummmmmmmm ... nothing GOOD that I can tell :-)


    Rust I personally dislike the syntax of, AND its development team is
    apparently pretty controversial.

    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    The software industry has been trying this for decades now. It does not
    work.

    At some point, soon, they need to start flagging
    the unsafe functions as ERRORS, not just WARNINGS.

    That's not enough. It is very easy in C to use a "safe" function
    unsafely. Writing "safe" C code requires a very knowledgeable (about
    C), very careful programmer. The vast majority of those writing C are
    neither.

    Which raises the question of whether they ought to be allowed to write
    code at all.


    --
    Any fool can believe in principles - and most of them do!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Rich on Fri Jun 20 16:15:45 2025
    On 20/06/2025 14:36, Rich wrote:
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 20/06/2025 09:00, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
    c186282 <c186282@nnada.net> writes:
    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    The software industry has been trying this for decades now. It does
    not work.

    At some point, soon, they need to start flagging the unsafe functions
    as ERRORS, not just WARNINGS.

    The problem is not just a subset of unsafe functions. The whole language
    is riddled with unsafe semantics.

    There is some movement towards fixing the easy issues, e.g. [1]. But the
    wider issues are a lot harder to truly fix, so much so that one of the
    more promising options is an architecture extension[2]; and there
    remains considerable resistance[3] in the standards body to fixing other
    issues, despite their recurring role in defects and vulnerabilities.

    [1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
    [2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
    [3] https://www.youtube.com/watch?v=DRgoEKrTxXY

    Most languages after C designed these issues out, one way or another.
    The clever bit is figuring out how to combine performance and safety,
    and that’s what language designers have been working out, increasingly
    successfully.

    I don't really see how you can have a program that cannot write or read
    memory beyond the intentions of the original programmer.

    Ada accomplished it years ago (i.e., Rust is nothing new in that
    regard). But.... it did so by inserting in the compiled output all
    the checks for buffer sizes before use and checks of error return codes
    that so often get omitted in C code. And the performance hit was
    sufficient that Ada only found a niche in very safety critical
    environments (aircraft avionics, etc.).

    I bet a bad (or extremely good) programmer could circumvent that

    --
    Any fool can believe in principles - and most of them do!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to Rich on Fri Jun 20 21:19:35 2025
    Rich <rich@example.invalid> writes:
    Ada accomplished it years ago (i.e., Rust is nothing new in that
    regard). But.... it did so by inserting in the compiled output all
    the checks for buffer sizes before use and checks of error return codes
    that so often get omitted in C code. And the performance hit was
    sufficient that Ada only found a niche in very safety critical
    environments (aircraft avionics, etc.).

    I don’t know what Ada’s approach was in detail, but I have a few points
    to make here.

    First, just because an automated check isn’t reflected in comparable C
    code doesn’t mean the check isn’t necessary; and as the stream of
    vulnerabilities over the last few decades shows, often-omitted checks
    _are_ necessary. Comparing buggy C code with correctly functioning Ada
    code is not really an argument for using C.

    Secondly, many checks can be optimized out. e.g. iterating over an array
    (or a prefix of it) doesn’t need a check on every access, it just needs
    a check that the loop bound doesn’t exceed the array bound[1]. This kind
    of optimization is easy mode for compilers;
    https://godbolt.org/z/Tz5KGq6vais shows an example in C++ (the at()
    method is bounds-checked array indexing).

    [1] provided of course that the array can’t change size during the
    loop; experience doesn’t really support the idea that humans are
    good at noticing whether this condition is true.
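
    A hand-written C sketch of the same transformation (a bounds-checking
    compiler would perform it itself; the names here are illustrative):

    #include <stdlib.h>

    /* Sum the first n elements of a[0..len-1]. */
    long sum_prefix(const int *a, size_t len, size_t n)
    {
        if (n > len)          /* one check at the loop boundary... */
            abort();
        long total = 0;
        for (size_t i = 0; i < n; i++)
            total += a[i];    /* ...so no per-iteration bounds check needed */
        return total;
    }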

    Finally, on all but the least powerful microprocessors, a correctly
    predicted branch is almost free, and a passed bounds check is easy mode
    for a branch predictor.

    With that in mind, with compilers and microprocessors from this century,
    the impact of this sort of thing is rather small. (Ada dates back to
    1980, at which time a lot of these technologies were much less mature.)

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to The Natural Philosopher on Fri Jun 20 23:07:20 2025
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 20/06/2025 14:36, Rich wrote:
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 20/06/2025 09:00, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
    c186282 <c186282@nnada.net> writes:
    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    The software industry has been trying this for decades now. It does
    not work.

    At some point, soon, they need to start flagging the unsafe functions
    as ERRORS, not just WARNINGS.

    The problem is not just a subset of unsafe functions. The whole language
    is riddled with unsafe semantics.

    There is some movement towards fixing the easy issues, e.g. [1]. But the
    wider issues are a lot harder to truly fix, so much so that one of the
    more promising options is an architecture extension[2]; and there
    remains considerable resistance[3] in the standards body to fixing other
    issues, despite their recurring role in defects and vulnerabilities.

    [1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
    [2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
    [3] https://www.youtube.com/watch?v=DRgoEKrTxXY

    Most languages after C designed these issues out, one way or another.
    The clever bit is figuring out how to combine performance and safety,
    and that’s what language designers have been working out, increasingly
    successfully.

    I don't really see how you can have a program that cannot write or read
    memory beyond the intentions of the original programmer.

    Ada accomplished it years ago (i.e., Rust is nothing new in that
    regard). But.... it did so by inserting in the compiled output all
    the checks for buffer sizes before use and checks of error return codes
    that so often get omitted in C code. And the performance hit was
    sufficient that Ada only found a niche in very safety critical
    environments (aircraft avionics, etc.).

    I bet a bad (or extremely good) programmer could circumvent that

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return codes
    or buffer lengths, etc.). I.e. the typical 9-5 contract programmer,
    not the Dennis Ritchies of the world.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to Richard Kettlewell on Fri Jun 20 23:17:23 2025
    Richard Kettlewell <invalid@invalid.invalid> wrote:
    Rich <rich@example.invalid> writes:
    Ada accomplished it years ago (i.e., Rust is nothing new in that
    regard). But.... it did so by inserting in the compiled output all
    the checks for buffer sizes before use and checks of error return codes
    that so often get omitted in C code. And the performance hit was
    sufficient that Ada only found a niche in very safety critical
    environments (aircraft avionics, etc.).

    I don’t know what Ada’s approach was in detail, but I have a few points to make here.

    First, just because an automated check isn’t reflected in comparable C
    code doesn’t mean the check isn’t necessary; and as the stream of vulnerabilities over the last few decades shows, often-omitted checks
    _are_ necessary. Comparing buggy C code with correctly functioning Ada
    code is not really an argument for using C.

    I never said it was an argument for using C. I was responding to this
    part of TNP's post with that, which you've left out above:

    I don't really see how you can have a program that cannot write or
    read memory beyond the intentions of the original programmer.

    Rust's "memory safety" is nothing new. New maybe to "Today's 10,000" (https://xkcd.com/1053/) but not new to the world of programming.

    Secondly, many checks can be optimized out. e.g. iterating over an array
    (or a prefix of it) doesn’t need a check on every access, it just needs
    a check that the loop bound doesn’t exceed the array bound[1]. This kind
    of optimization is easy mode for compilers;
    https://godbolt.org/z/Tz5KGq6vais shows an example in C++ (the at()
    method is bounds-checked array indexing).

    Yes; whether the Ada compilers of yesteryear (or the modern ones today)
    do so I cannot say.

    Finally, on all but the least powerful microprocessors, a correctly
    predicted branch is almost free, and a passed bounds check is easy mode
    for a branch predictor.

    With that in mind, with compilers and microprocessors from this century,
    the impact of this sort of thing is rather small. (Ada dates back to
    1980, at which time a lot of these technologies were much less mature.)

    Indeed, yes, on a modern CPU much of the runtime checking is less
    performance eventful than it was on 1980's CPUs. It is not free by any
    measure either; some short number of cycles are consumed by that
    correctly predicted branch. For all but the most performance-critical
    code the loss is well worth the gain in safety. And one could argue that
    "performance critical" code which in the end results in some significant
    security breach might not be as "performance critical" as it seems when
    the whole picture is taken into account.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Rich on Sat Jun 21 01:07:06 2025
    On 21/06/2025 00:07, Rich wrote:
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 20/06/2025 14:36, Rich wrote:
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 20/06/2025 09:00, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
    c186282 <c186282@nnada.net> writes:
    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    The software industry has been trying this for decades now. It does
    not work.

    At some point, soon, they need to start flagging the unsafe functions
    as ERRORS, not just WARNINGS.

    The problem is not just a subset of unsafe functions. The whole language
    is riddled with unsafe semantics.

    There is some movement towards fixing the easy issues, e.g. [1]. But the
    wider issues are a lot harder to truly fix, so much so that one of the
    more promising options is an architecture extension[2]; and there
    remains considerable resistance[3] in the standards body to fixing other
    issues, despite their recurring role in defects and vulnerabilities.

    [1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
    [2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
    [3] https://www.youtube.com/watch?v=DRgoEKrTxXY

    Most languages after C designed these issues out, one way or another.
    The clever bit is figuring out how to combine performance and safety,
    and that’s what language designers have been working out, increasingly
    successfully.

    I don't really see how you can have a program that cannot write or read
    memory beyond the intentions of the original programmer.

    Ada accomplished it years ago (i.e., Rust is nothing new in that
    regard). But.... it did so by inserting in the compiled output all
    the checks for buffer sizes before use and checks of error return codes
    that so often get omitted in C code. And the performance hit was
    sufficient that Ada only found a niche in very safety critical
    environments (aircraft avionics, etc.).

    I bet a bad (or extremely good) programmer could circumvent that

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return codes
    or buffer lengths, etc.). I.e. the typical 9-5 contract programmer,
    not the Dennis Ritchies of the world.

    the 9-5 contract programmers WERE the Dennis Ritchies.

    The idiots were the permies.

    --
    “Ideas are inherently conservative. They yield not to the attack of
    other ideas but to the massive onslaught of circumstance"

    - John K Galbraith

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Rich on Sat Jun 21 03:09:12 2025
    On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return codes
    or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not
    the Dennis Ritchies of the world.

    I'm paranoid enough that I check the return of malloc and try to log the problem even though I'm probably screwed at that point. It has pointed out errors for calloc if you've managed to come up with a negative size.

    I have worked with programmers that assumed nothing bad would ever happen. Sadly, some had years of experience.
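
    That habit is commonly wrapped up so no call site can forget it; a
    minimal sketch (the name xmalloc is a widespread convention, not
    anything from this thread):

    #include <stdio.h>
    #include <stdlib.h>

    /* malloc that logs and exits on failure instead of returning NULL. */
    static void *xmalloc(size_t n)
    {
        void *p = malloc(n);
        if (p == NULL) {
            /* log it, even though at this point you're probably screwed */
            fprintf(stderr, "malloc(%zu) failed\n", n);
            exit(EXIT_FAILURE);
        }
        return p;
    }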

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Robert Riches@21:1/5 to rbowman on Sat Jun 21 03:43:56 2025
    On 2025-06-21, rbowman <bowman@montana.com> wrote:
    On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return codes
    or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not
    the Dennis Ritchies of the world.

    I'm paranoid enough that I check the return of malloc and try to log the problem even though I'm probably screwed at that point. It has pointed out errors for calloc if you've managed to come up with a negative size.

    I have worked with programmers that assumed nothing bad would ever happen. Sadly, some had years of experience.

    Some years ago, I heard of a bug related to use of malloc. The
    code had _intended_ to dynamically allocate storage for a string
    and the terminating null byte. It was _intended_ to do this:

    dest = malloc(strlen(src)+1);

    Instead, a paren was misplaced:

    dest = malloc(strlen(src))+1;

    IIUC, the next line copied the src string into the newly-
    allocated destination.

    Those who had worked on that project longer said the bug had been
    latent in the code for several years, most likely with alignment
    padding masking the bug from being discovered. Curiously, the
    bug made itself manifest immediately upon changing from a 32-bit
    build environment to a 64-bit build environment.

    --
    Robert Riches
    spamtrap42@jacob21819.net
    (Yes, that is one of my email addresses.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Richard Kettlewell on Sat Jun 21 01:10:05 2025
    On 6/20/25 4:00 AM, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
    c186282 <c186282@nnada.net> writes:
    IMHO, stick to 'C' ... but use GOOD PRACTICES.

    The software industry has been trying this for decades now. It does
    not work.

    At some point, soon, they need to start flagging the unsafe functions
    as ERRORS, not just WARNINGS.

    The problem is not just a subset of unsafe functions. The whole language
    is riddled with unsafe semantics.

    That's where human IQ is *supposed* to cut in ...

    There is some movement towards fixing the easy issues, e.g. [1]. But the wider issues are a lot harder to truly fix, so much so that one of the
    more promising options is an architecture extension[2]; and there
    remains considerable resistance[3] in the standards body to fixing other issues, despite their recurring role in defects and vulnerabilities.

    [1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
    [2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
    [3] https://www.youtube.com/watch?v=DRgoEKrTxXY

    Most languages after C designed these issues out, one way or another.
    The clever bit is figuring out how to combine performance and safety,
    and that’s what language designers have been working out, increasingly successfully.

    Oh, just a question ... how do we KNOW they've
    successfully designed-out all these problems ???
    How many more obscure issues were created ???

    Just sayin' ....

    Any stuff THIS complex, EXPECT lots and lots of
    hidden problems - so bad humans and even 'AI'
    are not going to detect it up-front. It's in the
    nature of what we've made and is getting worse.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Sat Jun 21 01:23:51 2025
    On 6/20/25 5:12 AM, The Natural Philosopher wrote:
    On 20/06/2025 05:43, c186282 wrote:
    The software industry has been trying this for decades now. It does not
    work.

       At some point, soon, they need to start flagging
       the unsafe functions as ERRORS, not just WARNINGS.

    The problem is that C was designed by two smart people to run on small hardware for use by other smart people.

    Agreed.

    Although they didn't *invent* stack based temporary variables I was
    totally impressed when I discovered how they worked.

    For the time it WAS kinda clever.

    But the potential for overrunning *any* piece of memory allocated for a variable is always there unless you are using the equivalent of a whole
    other CPU to manage memory and set hard limits.

    You can avoid using the program stack, which helps, but the problem remains

    Eliminating the stack will also cut performance.

    There's always GW-BASIC .......

    Anyway, I think all our stuff has become TOO complex
    to properly de-bug or even anticipate all fail modes.
    That's software AND CPUs and support chips/FPGAs.
    We made it, so now ........

    And SUPPOSED fixes ... do THEY also need fixes ? :-)

    At BEST I think we can do more to prevent the EASY
    problems. Too many programmers don't seem to do that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Lawrence D'Oliveiro on Sat Jun 21 01:17:53 2025
    On 6/20/25 4:57 AM, Lawrence D'Oliveiro wrote:
    On Thu, 19 Jun 2025 08:40:31 +0100, Richard Kettlewell wrote:

    The software industry has been trying this for decades now. It does not
    work.

    There is a safety-critical spec for writing C code. It has been in
    production use in the automotive industry for decades now. How often do you hear about software bugs in safety-critical car systems?

    DOES happen, but very rarely. Hurried 'updates' seem
    to be the main source.

    However, for now, car systems aren't a major target. It's
    banks, govt, infrastructure.

    As I said elsewhere - how do we KNOW the supposed "fixes"
    by post-'C' compilers didn't introduce a lot more potential
    flaws/errors ???

    Face it, our stuff is just TOO COMPLEX these days. It is
    NOT possible to pre-ID all possible flaws/interactions.

    Now a MAJOR cyber-war ... they COULD go after all the
    'connected' cars too - paralyze almost everything. May
    be good to have a pre-Y2K car handy, '64 Ford pickup ...
    all 'dumb-ware' :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Sat Jun 21 01:27:34 2025
    On 6/20/25 5:19 AM, The Natural Philosopher wrote:
    On 20/06/2025 09:00, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
    c186282 <c186282@nnada.net> writes:
        IMHO, stick to 'C' ... but use GOOD PRACTICES.

    The software industry has been trying this for decades now. It does
    not work.

    At some point, soon, they need to start flagging the unsafe functions
    as ERRORS, not just WARNINGS.

    The problem is not just a subset of unsafe functions. The whole language
    is riddled with unsafe semantics.

    There is some movement towards fixing the easy issues, e.g. [1]. But the
    wider issues are a lot harder to truly fix, so much so that one of the
    more promising options is an architecture extension[2]; and there
    remains considerable resistance[3] in the standards body to fixing other
    issues, despite their recurring role in defects and vulnerabilities.

    [1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
    [2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
    [3] https://www.youtube.com/watch?v=DRgoEKrTxXY

    Most languages after C designed these issues out, one way or another.
    The clever bit is figuring out how to combine performance and safety,
    and that’s what language designers have been working out, increasingly
    successfully.

    I don't really see how you can have a program that cannot write or read memory beyond the intentions of the original programmer.

    Sure, if it's a different process, but catching a read of just one byte
    beyond the end of a buffer is going to be hard.

    And probably make the language very hard to use when you are dealing
    with multi-typed data.

    I do like, at a cursory glance, the second link. Hardware that protects memory is a great leap forward.


    Yet SOME attacks exist to take advantage of those
    very inbuilt 'smart' functions :-)

    CPUs have become SO complex now ... they are like
    a whole busy multi-user/tasking near-AI program unto
    themselves.

    Z80's anyone ? :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Robert Riches on Sat Jun 21 01:36:09 2025
    On 6/20/25 11:43 PM, Robert Riches wrote:
    On 2025-06-21, rbowman <bowman@montana.com> wrote:
    On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return codes
    or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not
    the Dennis Ritchies of the world.

    I'm paranoid enough that I check the return of malloc and try to log the
    problem even though I'm probably screwed at that point. It has pointed
    out errors for calloc if you've managed to come up with a negative size.

    I have worked with programmers that assumed nothing bad would ever
    happen. Sadly, some had years of experience.

    Some years ago, I heard of a bug related to use of malloc. The
    code had _intended_ to dynamically allocate storage for a string
    and the terminating null byte. It was _intended_ to do this:

    dest = malloc(strlen(src)+1);

    Instead, a paren was misplaced:

    dest = malloc(strlen(src))+1;

    IIUC, the next line copied the src string into the newly-
    allocated destination.

    Those who had worked on that project longer said the bug had been
    latent in the code for several years, most likely with alignment
    padding masking the bug from being discovered. Curiously, the
    bug made itself manifest immediately upon changing from a 32-bit
    build environment to a 64-bit build environment.


    Surely megatons of such stuff in all code/compilers.
    Our shit just became TOO complex to really debug.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Robert Riches on Sat Jun 21 05:53:15 2025
    On 21 Jun 2025 03:43:56 GMT, Robert Riches wrote:

    On 2025-06-21, rbowman <bowman@montana.com> wrote:
    On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return
    codes or buffer lengths, etc.). I.e. the typical 9-5 contract
    programmer, not the Dennis Ritchies of the world.

    I'm paranoid enough that I check the return of malloc and try to log
    the problem even though I'm probably screwed at that point. It has
    pointed out errors for calloc if you've managed to come up with a
    negative size.

    I have worked with programmers that assumed nothing bad would ever
    happen.
    Sadly, some had years of experience.

    Some years ago, I heard of a bug related to use of malloc. The code had _intended_ to dynamically allocate storage for a string and the
    terminating null byte. It was _intended_ to do this:

    dest = malloc(strlen(src)+1);

    Instead, a paren was misplaced:

    dest = malloc(strlen(src))+1;

    IIUC, the next line copied the src string into the newly-allocated destination.

    Those who had worked on that project longer said the bug had been latent
    in the code for several years, most likely with alignment padding
    masking the bug from being discovered. Curiously, the bug made itself manifest immediately upon changing from a 32-bit build environment to a 64-bit build environment.

    We picked up quite a few bugs moving from AIX to Linux. AIX was very
    tolerant of null pointers. Building for Windows using the MKS toolkit was
    also interesting. I've fixed 20 year old bugs that were lurking there
    waiting for the right alignment of the planets.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Sat Jun 21 05:59:00 2025
    On Sat, 21 Jun 2025 01:10:05 -0400, c186282 wrote:

    Oh, just a question ... how do we KNOW they've successfully
    designed-out all these problems ???
    How many more obscure issues were created ???

    That remains to be seen as rust becomes more widely used and the new
    generation of programmers becomes complacent thinking the language will
    clean up after them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Sat Jun 21 02:10:39 2025
    On 6/21/25 1:59 AM, rbowman wrote:
    On Sat, 21 Jun 2025 01:10:05 -0400, c186282 wrote:

    Oh, just a question ... how do we KNOW they've successfully
    designed-out all these problems ???
    How many more obscure issues were created ???

    That remains to be seen as rust becomes more widely used and the new generation of programmers becomes complacent thinking the language will
    clean up after them.

    Won't. Betcha.

    The 'fixes' require more code ... which WILL have
    its OWN probs.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Richard Kettlewell on Sat Jun 21 07:02:05 2025
    On Fri, 20 Jun 2025 21:19:35 +0100, Richard Kettlewell wrote:

    Secondly, many checks can be optimized out. e.g. iterating over an array
    (or a prefix of it) doesn’t need a check on every access, it just needs
    a check that the loop bound doesn’t exceed the array bound[1].

    And remember, Ada has subrange types, just like Pascal before it. This
    means, if you have something like (excuse any errors in Ada syntax)

    nr_elements : constant integer := 10;
    subtype index is integer range 1 .. nr_elements;

    buffer : array (index) of elt_type;
    buffer_index : index;

    then an array access like

    buffer(buffer_index)

    doesn’t actually need to be range-checked at that point, because the
    value of buffer_index is already known to be within the valid range.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to The Natural Philosopher on Sat Jun 21 06:57:21 2025
    On Fri, 20 Jun 2025 10:12:28 +0100, The Natural Philosopher wrote:

    Although they didn't *invent* stack based temporary variables I was
    totally impressed when I discovered how they worked.

    ALGOL 60 had that worked out years earlier. The implementors even figured
    out how to declare one routine inside another, such that the inner one
    could access the outer one’s locals.

    C never had that, even to this day†. Even after Pascal showed how to implement the same idea.

    †Not officially. But GNU C does. (Oddly, GNU C++ does not.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Lawrence D'Oliveiro on Sat Jun 21 03:07:26 2025
    On 6/21/25 2:57 AM, Lawrence D'Oliveiro wrote:
    On Fri, 20 Jun 2025 10:12:28 +0100, The Natural Philosopher wrote:

    Although they didn't *invent* stack based temporary variables I was
    totally impressed when I discovered how they worked.

    ALGOL 60 had that worked out years earlier. The implementors even figured
    out how to declare one routine inside another, such that the inner one
    could access the outer one’s locals.

    C never had that, even to this day†. Even after Pascal showed how to implement the same idea.

    †Not officially. But GNU C does. (Oddly, GNU C++ does not.)


    Bright minds everywhere, even WAY back (ESPECIALLY
    way back ?).

    Early ALGOL wasn't really intended to be a practical
    language alas ... more a 'demonstration of principle'.

    Algol-68 ... which you CAN get for Linux ... was much
    better.

    Ok ok ... 'better' is relative here :-)

    In early CPUs there was often limited EZ stack space.
    This was a limitation in passing lots of stuff back
    and forth using the stack. A few pointers yes, but
    like full strings and arrays ..........

    Seems like some of the most clever stuff comes from
    environments where resources are very limited/slow.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to Rich on Sat Jun 21 08:42:11 2025
    Rich <rich@example.invalid> writes:
    Rust's "memory safety" is nothing new. New maybe to "Today's 10,000" (https://xkcd.com/1053/) but not new to the world of programming.

    ??? Nobody is claiming that memory safety is new with Rust, nor even
    that the techniques it uses are particularly new (although they are
    distinct from the approaches found in well-known languages such as Java,
    Python, C#, Go, etc). The novelty is in their delivery in a widely
    adopted systems programming language.

    Finally, on all but the least powerful microprocessors, a correctly
    predicted branch is almost free, and a passed bounds check is easy mode
    for a branch predictor.

    With that in mind, with compilers and microprocessors from this century,
    the impact of this sort of thing is rather small. (Ada dates back to
    1980, at which time a lot of these technologies were much less mature.)

    Indeed, yes, on a modern CPU much of the runtime checking is less
    performance eventful than it was on 1980's CPUs. It is not free by any
    measure either; some short number of cycles are consumed by that
    correctly predicted branch. For all but the most performance-critical
    code the loss is well worth the gain in safety. And one could argue that
    "performance critical" code which in the end results in some significant
    security breach might not be as "performance critical" as it seems when
    the whole picture is taken into account.

    Have a look at https://en.algorithmica.org/hpc/pipelining/branching/. In
    the P=0 case the loop achieves 1 cycle per iteration. The branch is
    free.

    Certainly this is a best case that won’t always be achieved, but it’s
    quite a good fit to the array bounds checking case. If your program has
    had the first round of bugs shaken out, and isn’t receiving adversarial input, then most array bounds checks will pass.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Lawrence D'Oliveiro on Sat Jun 21 03:23:25 2025
    On 6/21/25 3:02 AM, Lawrence D'Oliveiro wrote:
    On Fri, 20 Jun 2025 21:19:35 +0100, Richard Kettlewell wrote:

    Secondly, many checks can be optimized out. e.g. iterating over an array
    (or a prefix of it) doesn’t need a check on every access, it just needs
    a check that the loop bound doesn’t exceed the array bound[1].

    And remember, Ada has subrange types, just like Pascal before it. This
    means, if you have something like (excuse any errors in Ada syntax)

    nr_elements : constant integer := 10;
    subtype index is integer range 1 .. nr_elements;

    buffer : array (index) of elt_type;
    buffer_index : index;

    then an array access like

    buffer(buffer_index)

    doesn’t actually need to be range-checked at that point, because the
    value of buffer_index is already known to be within the valid range.

    "Already known" is great, I pref it when possible ... but
    the modern stuff tends to be built with the 'infinitely
    extensible' mindset. Little data sets, a few small params,
    become HUGE sets and params 20 years on.

    I've done ONE borderline 'medium' Ada app ... never again.
    Just TOO anal-retentive - half your work is DEFEATING that
    so you can get ANYTHING useful done.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to The Natural Philosopher on Sat Jun 21 08:45:38 2025
    The Natural Philosopher <tnp@invalid.invalid> writes:
    On 20/06/2025 05:43, c186282 wrote:

    The software industry has been trying this for decades now. It does
    not work.
      At some point, soon, they need to start flagging
      the unsafe functions as ERRORS, not just WARNINGS.

    The problem is that C was designed by two smart people to run on small hardware for use by other smart people.

    Well, maybe, but the original Unix team still ended up with buffer
    overruns in their code. There’s a famous one in V7 mkdir, which ran with elevated privileged due to the inadequate kernel API. I’ve not tried to exploit it but it’s a pretty straightforward array overrun so almost certainly exploitable to escalate from a mortal user to root.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Richard Kettlewell on Sun Jun 22 02:32:29 2025
    On 6/21/25 3:45 AM, Richard Kettlewell wrote:
    The Natural Philosopher <tnp@invalid.invalid> writes:
    On 20/06/2025 05:43, c186282 wrote:

    The software industry has been trying this for decades now. It does
    not work.
      At some point, soon, they need to start flagging
      the unsafe functions as ERRORS, not just WARNINGS.

    The problem is that C was designed by two smart people to run on small
    hardware for use by other smart people.

    Well, maybe, but the original Unix team still ended up with buffer
    overruns in their code. There’s a famous one in V7 mkdir, which ran with elevated privileges due to the inadequate kernel API. I’ve not tried to exploit it but it’s a pretty straightforward array overrun so almost certainly exploitable to escalate from a mortal user to root.


    I'm gonna say ... today's code is just way WAY too
    complex to properly test/diagnose.

    This is a SERIOUS problem.

    Even 'AI' can only help just SO much.

    These aren't the CP/M Z80 days. More of everything
    meant a kind of UNLIMITED code size/complexity
    and many MORE semi-pros with deadlines involved
    in writing it all.

    1965 ... arrow tie 'Dilberts' - SUPER good.

    Now ... ????

    There's a path, paths, to total DOOM now ......

    Vlad & Xi's boyz will find them all.

    Next year or two - it's ALL gonna go down
    REALLY hard. Just wait until your toilet
    won't flush - THEN you'll really GET it ....

    Got recent PAPER statements for your bank
    and similar accounts - something to stuff
    in their faces ? Tuff titty ........

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to Robert Riches on Sun Jun 22 13:50:03 2025
    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:43 this Saturday (GMT):
    On 2025-06-21, rbowman <bowman@montana.com> wrote:
    On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return codes
    or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not
    the Dennis Ritchies of the world.

    I'm paranoid enough that I check the return of malloc and try to log the
    problem even though I'm probably screwed at that point. It has pointed
    out errors for calloc if you've managed to come up with a negative size.

    I have worked with programmers that assumed nothing bad would ever
    happen. Sadly, some had years of experience.

    Some years ago, I heard of a bug related to use of malloc. The
    code had _intended_ to dynamically allocate storage for a string
    and the terminating null byte. It was _intended_ to do this:

    dest = malloc(strlen(src)+1);

    Instead, a paren was misplaced:

    dest = malloc(strlen(src))+1;

    IIUC, the next line copied the src string into the newly-
    allocated destination.

    Aren't you supposed to multiply by sizeof as well?

    Those who had worked on that project longer said the bug had been
    latent in the code for several years, most likely with alignment
    padding masking the bug from being discovered. Curiously, the
    bug made itself manifest immediately upon changing from a 32-bit
    build environment to a 64-bit build environment.


    I'm more surprised it didn't segfault. Any idea what caused it to not?
    I know strlen doesn't account for the terminating character, but it
    seems like it should've been TWO bytes shorter...
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Richard Kettlewell on Sun Jun 22 15:56:37 2025
    On 22/06/2025 15:27, Richard Kettlewell wrote:
    Segmentation faults don’t happen for all out of bounds accesses, they happen if you access a page which isn’t mapped at all or if you don’t have permission on that page for the operation you’re attempting. The example discussed here would only trigger a segmentation fault if the allocation finished at the end of a page, otherwise you’ll just read or write padding bytes, or the header of the next allocation.

    That, is a really useful factoid...

    Thanks

    --
    Socialism is the philosophy of failure, the creed of ignorance and the
    gospel of envy.

    Its inherent virtue is the equal sharing of misery.

    Winston Churchill

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to All on Sun Jun 22 15:27:22 2025
    candycanearter07 <candycanearter07@candycanearter07.nomail.afraid>
    writes:
    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:43 this Saturday (GMT):
    Some years ago, I heard of a bug related to use of malloc. The code
    had _intended_ to dynamically allocate storage for a string and the
    terminating null byte. It was _intended_ to do this:

    dest = malloc(strlen(src)+1);

    Instead, a paren was misplaced:

    dest = malloc(strlen(src))+1;

    IIUC, the next line copied the src string into the newly-
    allocated destination.

    Aren't you supposed to multiply by sizeof as well?

    No, because strlen already gives you the number of bytes, excluding the
    0 terminator.

    It’s also worth noting that in general malloc(n * sizeof something) is
    a vulnerability if there’s any possibility of adversarial control over
    the length ‘n’; the multiply operation can overflow size_t and lead to
    allocating a lot less space than required. This isn’t particularly
    relevant to strings on most platforms (because multiplying by 1 can’t
    overflow) but if you are multiplying anything by a size and passing the
    product to malloc or realloc, you may have a problem.

    In principle the fix is to use calloc(), and your C runtime will return
    an error if an overflow would occur. That said, in practice C runtimes
    were still being found to get this wrong as recently as 2021 so
    depending on how mainstream your target platform is, you might want to
    check...

    Those who had worked on that project longer said the bug had been
    latent in the code for several years, most likely with alignment
    padding masking the bug from being discovered. Curiously, the
    bug made itself manifest immediately upon changing from a 32-bit
    build environment to a 64-bit build environment.

    I'm more surprised it didn't segfault. Any idea what caused it to not?
    I know strlen doesn't account for the terminating character, but it
    seems like it should've been TWO bytes shorter...

    Segmentation faults don’t happen for all out of bounds accesses, they
    happen if you access a page which isn’t mapped at all or if you don’t
    have permission on that page for the operation you’re attempting. The
    example discussed here would only trigger a segmentation fault if the allocation finished at the end of a page, otherwise you’ll just read or
    write padding bytes, or the header of the next allocation.
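
    A deliberately buggy sketch that usually demonstrates this (the overrun
    is undefined behaviour, so no particular outcome is guaranteed):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *p = malloc(5);
        if (p == NULL)
            return 1;
        /* One byte past the end: undefined behaviour, but the byte is
           almost always on the same mapped page (allocator padding or the
           next chunk's header), so there is typically no SIGSEGV. */
        char c = p[5];
        printf("read %d without faulting (typically)\n", (int)c);
        free(p);
        return 0;
    }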

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Sun Jun 22 19:23:02 2025
    On Sun, 22 Jun 2025 13:50:03 -0000 (UTC), candycanearter07 wrote:

    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:43 this Saturday
    (GMT):
    On 2025-06-21, rbowman <bowman@montana.com> wrote:
    On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return
    codes or buffer lengths, etc.). I.e. the typical 9-5 contract
    programmer, not the Dennis Ritchies of the world.

    I'm paranoid enough that I check the return of malloc and try to log
    the problem even though I'm probably screwed at that point. It has
    pointed out errors for calloc if you've managed to come up with a
    negative size.

    I have worked with programmers that assumed nothing bad would ever
    happen.
    Sadly, some had years of experience.

    Some years ago, I heard of a bug related to use of malloc. The code
    had _intended_ to dynamically allocate storage for a string and the
    terminating null byte. It was _intended_ to do this:

    dest = malloc(strlen(src)+1);

    Instead, a paren was misplaced:

    dest = malloc(strlen(src))+1;

    IIUC, the next line copied the src string into the newly-allocated
    destination.

    Aren't you supposed to multiply by sizeof as well?

    No, malloc is N bytes. calloc is N elements of sizeof(foo). Also
    malloc() doesn't initialize the memory but calloc() zeroes it out. That
    can be another pitfall if you're using something like memcpy() with
    strings and don't copy in the terminating NUL. If you try something like
    printf("%s", my_string) then, if you're really lucky, there will have
    been a NUL in the garbage; if not, the string will be terminated
    somewhere, maybe.

    calloc() is to be preferred imnsho. In many cases you're going to memset()
    the malloc'd memory to 0 so you might as well get it over with.
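
    A sketch of the two equivalent forms (names illustrative):

    #include <stdlib.h>
    #include <string.h>

    /* The long way: allocate, then clear. */
    char *make_zeroed(size_t n)
    {
        char *p = malloc(n);
        if (p != NULL)
            memset(p, 0, n);
        return p;
    }

    /* Same result in one call; calloc also checks the count * size
       arithmetic for overflow. */
    char *make_zeroed2(size_t n)
    {
        return calloc(n, 1);
    }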

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Mon Jun 23 00:18:57 2025
    On 6/22/25 10:56 AM, The Natural Philosopher wrote:
    On 22/06/2025 15:27, Richard Kettlewell wrote:
    Segmentation faults don’t happen for all out of bounds accesses, they
    happen if you access a page which isn’t mapped at all or if you don’t
    have permission on that page for the operation you’re attempting. The
    example discussed here would only trigger a segmentation fault if the
    allocation finished at the end of a page, otherwise you’ll just read or
    write padding bytes, or the header of the next allocation.

    That, is a really useful factoid...

    Thanks

    It *is* very useful info ... spread it around.

    Memory/stack overwrites are THE most common
    malicious hack approaches these days - look
    at the M$ reports especially.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to rbowman on Mon Jun 23 18:10:02 2025
    rbowman <bowman@montana.com> wrote at 19:23 this Sunday (GMT):
    On Sun, 22 Jun 2025 13:50:03 -0000 (UTC), candycanearter07 wrote:

    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:43 this Saturday
    (GMT):
    On 2025-06-21, rbowman <bowman@montana.com> wrote:
    On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return
    codes or buffer lengths, etc.). I.e. the typical 9-5 contract
    programmer, not the Dennis Ritchies of the world.

    I'm paranoid enough that I check the return of malloc and try to log
    the problem even though I'm probably screwed at that point. It has
    pointed out errors for calloc if you've managed to come up with a
    negative size.

    I have worked with programmers that assumed nothing bad would ever
    happen.
    Sadly, some had years of experience.

    Some years ago, I heard of a bug related to use of malloc. The code
    had _intended_ to dynamically allocate storage for a string and the
    terminating null byte. It was _intended_ to do this:

    dest = malloc(strlen(src)+1);

    Instead, a paren was misplaced:

    dest = malloc(strlen(src))+1;

    IIUC, the next line copied the src string into the newly- allocated
    destination.

    Aren't you supposed to multiply by sizeof as well?

    No, malloc is N bytes. calloc is N elements of sizeof(foo). Also
    malloc() doesn't initialize the memory but calloc() zeroes it out. That
    can be another pitfall if you're using something like memcpy() with
    strings and don't copy in the terminating NUL. If you try something like
    printf("%s", my_string) then, if you're really lucky, there will have
    been a NUL in the garbage; if not, the string will be terminated
    somewhere, maybe.

    Right, and since malloc uses byte counts, you have to multiply by
    sizeof to get the proper amount to allocate.

    calloc() is to be preferred imnsho. In many cases you're going to memset() the malloc'd memory to 0 so you might as well get it over with.


    Fair, but some might find it redundant to set the memory to 0 and
    immediately write data over those null bytes.
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Mon Jun 23 19:27:02 2025
    On Mon, 23 Jun 2025 18:10:02 -0000 (UTC), candycanearter07 wrote:

    calloc() is to be preferred imnsho. In many cases you're going to
    memset()
    the malloc'd memory to 0 so you might as well get it over with.


    Fair, but some might find it redundant to set the memory to 0 and
    immediately write data over those null bytes.

    It depends on what you plan to do with the memory. Typically you wouldn't memset() calloc'd memory. It comes in handy when you're going to copy in
    data that you want to use as a string without worrying about adding a
    NUL.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Robert Riches@21:1/5 to candycanearter07@candycanearter07.n on Tue Jun 24 03:34:09 2025
    On 2025-06-22, candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:43 this Saturday (GMT):
    On 2025-06-21, rbowman <bowman@montana.com> wrote:
    On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return codes
    or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not
    the Dennis Ritchies of the world.

    I'm paranoid enough that I check the return of malloc and try to log the
    problem even though I'm probably screwed at that point. It has pointed
    out errors for calloc if you've managed to come up with a negative size.

    I have worked with programmers that assumed nothing bad would ever
    happen. Sadly, some had years of experience.

    Some years ago, I heard of a bug related to use of malloc. The
    code had _intended_ to dynamically allocate storage for a string
    and the terminating null byte. It was _intended_ to do this:

    dest = malloc(strlen(src)+1);

    Instead, a paren was misplaced:

    dest = malloc(strlen(src))+1;

    IIUC, the next line copied the src string into the newly-
    allocated destination.

    Aren't you supposed to multiply by sizeof as well?

    Multiply by sizeof what? sizeof(char)? This was in the
    pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
    still always 1.

    Those who had worked on that project longer said the bug had been
    latent in the code for several years, most likely with alignment
    padding masking the bug from being discovered. Curiously, the
    bug made itself manifest immediately upon changing from a 32-bit
    build environment to a 64-bit build environment.


    I'm more surprised it didn't segfault. Any idea what caused it to not?
    I know strlen doesn't account for the terminating character, but it
    seems like it should've been TWO bytes shorter...

    IIUC, heap-based malloc _usually_ returns a larger allocation
    block than you really asked for. As long as malloc gave you at
    least 2 extra bytes, you'd never see any misbehavior. Even if it
    didn't give you 2 or more extra bytes, it's fairly likely you'd
    just get lucky and never see the program crash or otherwise
    misbehave in a significant way. For example, if you stomped on
    the header of the next allocation block, as long as nothing ever
    read and acted upon the data in said header, you'd never see it.

    --
    Robert Riches
    spamtrap42@jacob21819.net
    (Yes, that is one of my email addresses.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to Robert Riches on Tue Jun 24 04:52:21 2025
    On 2025-06-24, Robert Riches <spamtrap42@jacob21819.net> wrote:

    IIUC, heap-based malloc _usually_ returns a larger allocation
    block than you really asked for. As long as malloc gave you at
    least 2 extra bytes, you'd never see any misbehavior. Even if it
    didn't give you 2 or more extra bytes, it's fairly likely you'd
    just get lucky and never see the program crash or otherwise
    misbehave in a significant way.

    Or if malloc() rounds the size of the block up to, say, the next
    multiple of 8, odds are good that you'll be clobbering an unused
    byte.

    For example, if you stomped on
    the header of the next allocation block, as long as nothing ever
    read and acted upon the data in said header, you'd never see it.

    If you wrote a NUL to a byte that's normally zero, you might
    still get away with it even if that header is referenced.

    But worst case, a program that ran flawlessly for years might
    suddenly bomb because you happened to write a different number
    of bytes to the area than you normally do. That one can be a
    nightmare to debug.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Charlie Gibbs on Tue Jun 24 05:14:54 2025
    On Tue, 24 Jun 2025 04:52:21 GMT, Charlie Gibbs wrote:

    On 2025-06-24, Robert Riches <spamtrap42@jacob21819.net> wrote:

    IIUC, heap-based malloc _usually_ returns a larger allocation block
    than you really asked for. As long as malloc gave you at least 2 extra
    bytes, you'd never see any misbehavior. Even if it didn't give you 2
    or more extra bytes, it's fairly likely you'd just get lucky and never
    see the program crash or otherwise misbehave in a significant way.

    Or if malloc() rounds the size of the block up to, say, the next
    multiple of 8, odds are good that you'll be clobbering an unused byte.

    For example, if you stomped on
    the header of the next allocation block, as long as nothing ever read
    and acted upon the data in said header, you'd never see it.

    If you wrote a NUL to a byte that's normally zero, you might still get
    away with it even if that header is referenced.

    But worst case, a program that ran flawlessly for years might suddenly
    bomb because you happened to write a different number of bytes to the
    area than you normally do. That one can be a nightmare to debug.


    "Do you feel lucky, punk?" isn't a great programming philosophy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Tue Jun 24 01:36:28 2025
    On 6/24/25 1:14 AM, rbowman wrote:
    On Tue, 24 Jun 2025 04:52:21 GMT, Charlie Gibbs wrote:

    On 2025-06-24, Robert Riches <spamtrap42@jacob21819.net> wrote:

    IIUC, heap-based malloc _usually_ returns a larger allocation block
    than you really asked for. As long as malloc gave you at least 2 extra
    bytes, you'd never see any misbehavior. Even if it didn't give you 2
    or more extra bytes, it's fairly likely you'd just get lucky and never
    see the program crash or otherwise misbehave in a significant way.

    Or if malloc() rounds the size of the block up to, say, the next
    multiple of 8, odds are good that you'll be clobbering an unused byte.

    For example, if you stomped on
    the header of the next allocation block, as long as nothing ever read
    and acted upon the data in said header, you'd never see it.

    If you wrote a NUL to a byte that's normally zero, you might still get
    away with it even if that header is referenced.

    But worst case, a program that ran flawlessly for years might suddenly
    bomb because you happened to write a different number of bytes to the
    area than you normally do. That one can be a nightmare to debug.


    "Do you feel lucky, punk?" isn't a great programming philosophy.


    Don't worry, you're out of business soon. 'AI' will
    program everything. The pointy-haired bosses reign
    supreme ... just roughly describe what they think
    they want. Any probs, blame the 'AI' - paycheck safe.

    Do you have ANY issues with this vision of the
    Near Future ???

    Oh WOW we're all screwed .......

    Hmmm ... maybe time for a C-64 Underground - human
    writ software on simple boxes as an emergency
    backup when the 'new' paradigm bombs horribly .....

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Tue Jun 24 06:49:09 2025
    On Tue, 24 Jun 2025 01:36:28 -0400, c186282 wrote:

    Don't worry, you're out of business soon. 'AI' will program
    everything. The pointy-haired bosses reign supreme ... just roughly
    describe what they think they want. Any probs, blame the 'AI' -
    paycheck safe.

    Do you have ANY issues with this vision of the Near Future ???

    None at all.

    “The leveling of the European man is the great process which cannot be obstructed; it should even be accelerated. The necessity of cleaving
    gulfs, distance, order of rank, is therefore imperative —not the necessity
    of retarding this process. This homogenizing species requires
    justification as soon as it is attained: its justification is that it lies
    in serving a higher and sovereign race which stands upon the former and
    can raise itself this task only by doing this. Not merely a race of
    masters whose sole task is to rule, but a race with its own sphere of
    life, with an overflow of energy for beauty, bravery, culture, and
    manners, even for the most abstract thought; a yea-saying race that may
    grant itself every great luxury —strong enough to have no need of the
    tyranny of the virtue-imperative, rich enough to have no need of economy
    or pedantry; beyond good and evil; a hothouse for rare and exceptional plants.”

    Friedrich Nietzsche

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to Robert Riches on Tue Jun 24 08:56:05 2025
    Robert Riches <spamtrap42@jacob21819.net> writes:
    On 2025-06-22, candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Aren't you supposed to multiply by sizeof as well?

    Multiply by sizeof what? sizeof(char)? This was in the
    pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
    still always 1.

    Yes. char is the unit in which sizeof measures things. Multiplying by
    ‘sizeof (char)’ is a completely incoherent thing to do.

    And as noted elsewhere, doing the multiplication yourself is generally
    the wrong approach.

    IIUC, heap-based malloc _usually_ returns a larger allocation
    block than you really asked for.

    Yes.

    As long as malloc gave you at least 2 extra bytes, you'd never see any misbehavior. Even if it didn't give you 2 or more extra bytes, it's
    fairly likely you'd just get lucky and never see the program crash or otherwise misbehave in a significant way. For example, if you
    stomped on the header of the next allocation block, as long as nothing
    ever read and acted upon the data in said header, you'd never see it.

    This is wrong. Exceeding the space allocated by even 1 byte is undefined behavior, even if the allocation happens to have been sufficiently
    padded. What this means in practice is very situational but
    optimizations exploiting the freedom that undefined behavior provides to
    the compiler routinely result in defects.
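
    As a concrete illustration, AddressSanitizer (gcc/clang with
    -fsanitize=address) turns this class of defect into an immediate
    report at the faulting write. A sketch of the misplaced-paren bug
    this thread has been chasing (the file name is hypothetical):

    /* build: cc -g -fsanitize=address oops.c */
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        const char *src = "hello";
        char *dest = malloc(strlen(src)) + 1;   /* misplaced paren */
        strcpy(dest, src);  /* writes past the 5-byte block; ASan
                               reports a heap-buffer-overflow here */
        free(dest - 1);
        return 0;
    }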

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to rbowman on Tue Jun 24 10:31:33 2025
    On 24/06/2025 07:49, rbowman wrote:
    On Tue, 24 Jun 2025 01:36:28 -0400, c186282 wrote:

    Don't worry, you're out of business soon. 'AI' will program
    everything. The pointy-haired bosses reign supreme ... just roughly
    describe what they think they want. Any probs, blame the 'AI' -
    paycheck safe.

    Do you have ANY issues with this vision of the Near Future ???

    None at all.

    “The leveling of the European man is the great process which cannot be obstructed; it should even be accelerated. The necessity of cleaving
    gulfs, distance, order of rank, is therefore imperative —not the necessity of retarding this process. This homogenizing species requires
    justification as soon as it is attained: its justification is that it lies
    in serving a higher and sovereign race which stands upon the former and
    can raise itself this task only by doing this. Not merely a race of
    masters whose sole task is to rule, but a race with its own sphere of
    life, with an overflow of energy for beauty, bravery, culture, and
    manners, even for the most abstract thought; a yea-saying race that may
    grant itself every great luxury —strong enough to have no need of the tyranny of the virtue-imperative, rich enough to have no need of economy
    or pedantry; beyond good and evil; a hothouse for rare and exceptional plants.”

    Friedrich Nietzsche


    What a wanker Nietzsche really was.

    --
    "Corbyn talks about equality, justice, opportunity, health care, peace, community, compassion, investment, security, housing...."
    "What kind of person is not interested in those things?"

    "Jeremy Corbyn?"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Robert Riches@21:1/5 to Richard Kettlewell on Wed Jun 25 03:01:33 2025
    On 2025-06-24, Richard Kettlewell <invalid@invalid.invalid> wrote:
    Robert Riches <spamtrap42@jacob21819.net> writes:
    On 2025-06-22, candycanearter07
    <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Aren't you supposed to multiply by sizeof as well?

    Multiply by sizeof what? sizeof(char)? This was in the
    pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
    still always 1.

    Yes. char is the unit in which sizeof measures things. Multiplying by ‘sizeof (char)’ is a completely incoherent thing to do.

    And as noted elsewhere, doing the multiplication yourself is generally
    the wrong approach.

    IIUC, heap-based malloc _usually_ returns a larger allocation
    block than you really asked for.

    Yes.

    As long as malloc gave you at least 2 extra bytes, you'd never see any
    misbehavior. Even if it didn't give you 2 or more extra bytes, it's
    fairly likely you'd just get lucky and never see the program crash or
    otherwise misbehave in a significant way. For example, if you
    stomped on the header of the next allocation block, as long as nothing
    ever read and acted upon the data in said header, you'd never see it.

    This is wrong. Exceeding the space allocated by even 1 byte is undefined behavior, even if the allocation happens to have been sufficiently
    padded. What this means in practice is very situational but
    optimizations exploiting the freedom that undefined behavior provides to
    the compiler routinely result in defects.

    Please remember that this was an unintended _BUG_ in some old
    code, _NOT_ a deliberately chosen strategy. What I was
    describing was one possible explanation for how the bug remained
    undetected for some number of years.

    --
    Robert Riches
    spamtrap42@jacob21819.net
    (Yes, that is one of my email addresses.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Wed Jun 25 01:36:29 2025
    On 6/24/25 5:31 AM, The Natural Philosopher wrote:
    On 24/06/2025 07:49, rbowman wrote:
    On Tue, 24 Jun 2025 01:36:28 -0400, c186282 wrote:

        Don't worry, you're out of business soon. 'AI' will program
        everything. The pointy-haired bosses reign supreme ... just roughly
        describe what they think they want. Any probs, blame the 'AI' -
        paycheck safe.

        Do you have ANY issues with this vision of the Near Future ???

    None at all.

    “The leveling of the European man is the great process which cannot be
    obstructed; it should even be accelerated. The necessity of cleaving
    gulfs, distance, order of rank, is therefore imperative —not the
    necessity
    of retarding this process. This homogenizing species requires
    justification as soon as it is attained: its justification is that it
    lies
    in serving a higher and sovereign race which stands upon the former and
    can raise itself this task only by doing this. Not merely a race of
    masters whose sole task is to rule, but a race with its own sphere of
    life, with an overflow of energy for beauty, bravery, culture, and
    manners, even for the most abstract thought; a yea-saying race that may
    grant itself every great luxury —strong enough to have no need of the
    tyranny of the virtue-imperative, rich enough to have no need of economy
    or pedantry; beyond good and evil; a hothouse for rare and exceptional
    plants.”

    Friedrich Nietzsche


    What a wanker Nietzsche really was.


    He WAS a wanker ... but kinda too often CORRECT.

    Even loons get SOME stuff right.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Robert Riches on Wed Jun 25 01:59:53 2025
    On 6/24/25 11:01 PM, Robert Riches wrote:
    On 2025-06-24, Richard Kettlewell <invalid@invalid.invalid> wrote:
    Robert Riches <spamtrap42@jacob21819.net> writes:
    On 2025-06-22, candycanearter07
    <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Aren't you supposed to multiply by sizeof as well?

    Multiply by sizeof what? sizeof(char)? This was in the
    pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
    still always 1.

    Yes. char is the unit in which sizeof measures things. Multiplying by
    ‘sizeof (char)’ is a completely incoherent thing to do.

    And as noted elsewhere, doing the multiplication yourself is generally
    the wrong approach.

    IIUC, heap-based malloc _usually_ returns a larger allocation
    block than you really asked for.

    Yes.

    As long as malloc gave you at least 2 extra bytes, you'd never see any
    misbehavior. Even if it didn't give you 2 or more extra bytes, it's
    fairly likely you'd just get lucky and never see the program crash or
    otherwise misbehave in a significant way. For example, if you
    stomped on the header of the next allocation block, as long as nothing
    ever read and acted upon the data in said header, you'd never see it.

    This is wrong. Exceeding the space allocated by even 1 byte is undefined
    behavior, even if the allocation happens to have been sufficiently
    padded. What this means in practice is very situational but
    optimizations exploiting the freedom that undefined behavior provides to
    the compiler routinely result in defects.

    Please remember that this was an unintended _BUG_ in some old
    code, _NOT_ a deliberately chosen strategy. What I was
    describing was one possible explanation for how the bug remained
    undetected for some number of years.

    Regardless, easy fix - always allocate at least
    one byte/word/whatever more than you THINK you
    need. Minimal penalty - possibly BIG gains.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Wed Jun 25 07:31:58 2025
    On 25/06/2025 06:36, c186282 wrote:
    On 6/24/25 5:31 AM, The Natural Philosopher wrote:
    On 24/06/2025 07:49, rbowman wrote:
    On Tue, 24 Jun 2025 01:36:28 -0400, c186282 wrote:

        Don't worry, you're out of business soon. 'AI' will program
        everything. The pointy-haired bosses reign supreme ... just roughly
        describe what they think they want. Any probs, blame the 'AI' -
        paycheck safe.

        Do you have ANY issues with this vision of the Near Future ???

    None at all.

    “The leveling of the European man is the great process which cannot be
    obstructed; it should even be accelerated. The necessity of cleaving
    gulfs, distance, order of rank, is therefore imperative —not the
    necessity of retarding this process. This homogenizing species requires
    justification as soon as it is attained: its justification is that it
    lies in serving a higher and sovereign race which stands upon the former
    and can raise itself this task only by doing this. Not merely a race of
    masters whose sole task is to rule, but a race with its own sphere of
    life, with an overflow of energy for beauty, bravery, culture, and
    manners, even for the most abstract thought; a yea-saying race that may
    grant itself every great luxury —strong enough to have no need of the
    tyranny of the virtue-imperative, rich enough to have no need of economy
    or pedantry; beyond good and evil; a hothouse for rare and exceptional
    plants.”

    Friedrich Nietzsche


    What a wanker Nietzsche really was.


      He WAS a wanker ... but kinda too often CORRECT.

      Even loons get SOME stuff right.

    Like king Donald?


    --
    "Fanaticism consists in redoubling your effort when you have
    forgotten your aim."

    George Santayana

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Wed Jun 25 06:52:19 2025
    On Wed, 25 Jun 2025 01:59:53 -0400, c186282 wrote:

    Regardless, easy fix - always allocate at least one
    byte/word/whatever more than you THINK you need. Minimal penalty -
    possibly BIG gains.

    I tend to think in powers of 2.

    char deviceId[64];
    char incidentNumber[32];
    char finalDisp[32];
    char comment[512];

    Those are generous. A typical incident number is 062525-1000. Then I
    follow up with

    fgets(incidentNumber, sizeof(incidentNumber)-1, stdin);

    incidentNumber has been memset to 0 so even if the user gets carried away
    it will get truncated with a NUL. Some of the legacy code was stingy like
    they had to pay for every byte. That's the sort of thinking that leads to
    a 2038 problem way down the road. To be fair, nobody expected the code to
    be merrily chugging along 30 years later.
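
    A self-contained version of that pattern (the array size and the
    output format are illustrative):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char incidentNumber[32];

        memset(incidentNumber, 0, sizeof(incidentNumber));
        /* fgets() NUL-terminates on its own; the -1 just keeps one
           extra guaranteed zero byte at the end no matter what */
        if (fgets(incidentNumber, sizeof(incidentNumber) - 1, stdin)) {
            incidentNumber[strcspn(incidentNumber, "\n")] = '\0';
            printf("incident: %s\n", incidentNumber);
        }
        return 0;
    }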




    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Wed Jun 25 03:08:08 2025
    On 6/25/25 2:31 AM, The Natural Philosopher wrote:
    On 25/06/2025 06:36, c186282 wrote:
    On 6/24/25 5:31 AM, The Natural Philosopher wrote:
    On 24/06/2025 07:49, rbowman wrote:
    On Tue, 24 Jun 2025 01:36:28 -0400, c186282 wrote:

        Don't worry, you're out of business soon. 'AI' will program
        everything. The pointy-haired bosses reign supreme ... just roughly
        describe what they think they want. Any probs, blame the 'AI' -
        paycheck safe.

        Do you have ANY issues with this vision of the Near Future ???
    None at all.

    “The leveling of the European man is the great process which cannot be
    obstructed; it should even be accelerated. The necessity of cleaving
    gulfs, distance, order of rank, is therefore imperative —not the
    necessity of retarding this process. This homogenizing species requires
    justification as soon as it is attained: its justification is that it
    lies in serving a higher and sovereign race which stands upon the former
    and can raise itself this task only by doing this. Not merely a race of
    masters whose sole task is to rule, but a race with its own sphere of
    life, with an overflow of energy for beauty, bravery, culture, and
    manners, even for the most abstract thought; a yea-saying race that may
    grant itself every great luxury —strong enough to have no need of the
    tyranny of the virtue-imperative, rich enough to have no need of economy
    or pedantry; beyond good and evil; a hothouse for rare and exceptional
    plants.”

    Friedrich Nietzsche


    What a wanker Nietzsche really was.


       He WAS a wanker ... but kinda too often CORRECT.

       Even loons get SOME stuff right.

    Like king Donald?

    Yep.

    AVERAGING "better" is a lot better than
    averaging WRONG.

    Don't like that - suck donkey dick. The
    world is NOT "about us". It's a constantly
    evolving equation.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to John Ames on Wed Jun 25 19:01:31 2025
    On 6/25/25 12:44 PM, John Ames wrote:
    On Wed, 25 Jun 2025 09:32:13 -0700
    John Ames <commodorejohn@gmail.com> wrote:

    Regardless, easy fix - always allocate at least one byte/word/
    whatever more than you THINK you need. Minimal penalty - possibly
    BIG gains.

    That strikes me as a terrible strategy - allocating N elements extra
    won't save you from overstepping into N+1 if and when you finally do

    (Additionally, it won't save you from whatever weird undefined behavior
    may result from reading an element N which isn't even part of the
    "true" range and may have uninitialized/invalid data.)

    I've had good luck doing it that way since K&R.
    Doesn't hurt to null the entire space after
    allocating. Leave a speck of extra space and
    it covers a lot of potential little write
    issues. You still have to take care when reading.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to Robert Riches on Fri Jun 27 06:00:03 2025
    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:34 this Tuesday (GMT):
    On 2025-06-22, candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:43 this Saturday (GMT):
    On 2025-06-21, rbowman <bowman@montana.com> wrote:
    On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return codes
    or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not
    the Dennis Ritchies of the world.

    I'm paranoid enough that I check the return of malloc and try to log the
    problem even though I'm probably screwed at that point. It has pointed
    out errors for calloc if you've managed to come up with a negative size.

    I have worked with programmers that assumed nothing bad would ever
    happen. Sadly, some had years of experience.

    Some years ago, I heard of a bug related to use of malloc. The
    code had _intended_ to dynamically allocate storage for a string
    and the terminating null byte. It was _intended_ to do this:

    dest = malloc(strlen(src)+1);

    Instead, a paren was misplaced:

    dest = malloc(strlen(src))+1;

    IIUC, the next line copied the src string into the newly-
    allocated destination.

    Aren't you supposed to multiply by sizeof as well?

    Multiply by sizeof what? sizeof(char)? This was in the
    pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
    still always 1.

    I still multiply by sizeof(char), half because of habit and half to make
    it clear to myself I'm making a char array, even if it's "redundant". I
    kinda thought that was the "canonical" way to do that, since you could
    have a weird edge case with a system defining char as something else?

    Those who had worked on that project longer said the bug had been
    latent in the code for several years, most likely with alignment
    padding masking the bug from being discovered. Curiously, the
    bug made itself manifest immediately upon changing from a 32-bit
    build environment to a 64-bit build environment.


    I'm more surprised it didn't segfault. Any idea what caused it to not?
    I know strlen doesn't account for the terminating character, but it
    seems like it should've been TWO bytes shorter...

    IIUC, heap-based malloc _usually_ returns a larger allocation
    block than you really asked for. As long as malloc gave you at
    least 2 extra bytes, you'd never see any misbehavior. Even if it
    didn't give you 2 or more extra bytes, it's fairly likely you'd
    just get lucky and never see the program crash or otherwise
    misbehave in a significant way. For example, if you stomped on
    the header of the next allocation block, as long as nothing ever
    read and acted upon the data in said header, you'd never see it.


    Oh, so it was some poorly written code being covered up by a weird quirk
    in the 32-bit version of the compiler? Always interesting hearing about
    "accidentally working" programs.
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to All on Fri Jun 27 08:37:24 2025
    candycanearter07 <candycanearter07@candycanearter07.nomail.afraid>
    writes:
    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:34 this Tuesday (GMT):
    <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Aren't you supposed to multiply by sizeof as well?

    Multiply by sizeof what? sizeof(char)? This was in the
    pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
    still always 1.

    I still multiply by sizeof(char), half because of habit and half to
    make it clear to myself I'm making a char array, even if it's
    "redundant". I kinda thought that was the "canonical" way to do that,
    since you could have a weird edge case with a system defining char as something else?

    Whatever the representation of char, sizeof(char)=1. That’s what the definition of sizeof is - char is the unit it counts in.

    From the language specification:

    When sizeof is applied to an operand that has type char, unsigned
    char, or signed char, (or a qualified version thereof) the result is
    1. When applied to an operand that has array type, the result is the
    total number of bytes in the array. When applied to an operand that
    has structure or union type, the result is the total number of bytes
    in such an object, including internal and trailing padding.

    A programmer can adopt a personal style of redundantly multiplying by 1
    if they like, it’ll be a useful hint to anyone else reading the code
    that the author didn’t know the language very well. But in no way is
    anyone ‘supposed’ to do it.
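
    For anyone who wants to see those rules in action, a small sketch
    (the struct layout in the comments is typical, not mandated):

    #include <stdio.h>

    struct sample {
        char c;             /* 1 byte                          */
        int  n;             /* commonly 4 bytes                */
    };                      /* padding often brings this to 8  */

    int main(void) {
        char buf[40];
        printf("%zu %zu %zu\n",
               sizeof(char),            /* always 1, by definition */
               sizeof buf,              /* whole array: 40         */
               sizeof(struct sample));  /* includes padding        */
        return 0;
    }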

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Richard Kettlewell on Fri Jun 27 08:45:43 2025
    On 27/06/2025 08:37, Richard Kettlewell wrote:
    A programmer can adopt a personal style of redundantly multiplying by 1
    if they like, it’ll be a useful hint to anyone else reading the code
    that the author didn’t know the language very well.

    Or a hint that the writer expected the maintainer would not know the
    language very well.


    But in no way is
    anyone ‘supposed’ to do it.

    Morality rarely enters into code writing, unless introduced by parties
    with 'political' aims.


    --
    How fortunate for governments that the people they administer don't think.

    Adolf Hitler

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to The Natural Philosopher on Fri Jun 27 08:14:36 2025
    On Fri, 27 Jun 2025 08:45:43 +0100, The Natural Philosopher wrote:

    On 27/06/2025 08:37, Richard Kettlewell wrote:

    But in no way is anyone ‘supposed’ to do it.

    Morality rarely enters into code writing, unless introduced by parties
    with 'political' aims.

    I must learn to use that as an excuse: the next time someone complains
    about the way I write my code, I can tell them that their criticism is “political”.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Lawrence D'Oliveiro on Fri Jun 27 13:27:56 2025
    On 6/27/25 4:14 AM, Lawrence D'Oliveiro wrote:
    On Fri, 27 Jun 2025 08:45:43 +0100, The Natural Philosopher wrote:

    On 27/06/2025 08:37, Richard Kettlewell wrote:

    But in no way is anyone ‘supposed’ to do it.

    Morality rarely enters into code writing, unless introduced by parties
    with 'political' aims.

    I must learn to use that as an excuse: the next time someone complains
    about the way I write my code, I can tell them that their criticism is “political”.

    Hey ... it could WORK ! :-)

    "Racist", "colonialist" and "gender-fascist" should work
    in some locales as well

    Alas, 'AI' can and HAS been tampered with to
    achieve PC results for political reasons already.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Fri Jun 27 17:40:02 2025
    On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:

    Some of us are old enough to remember when CPUs were
    not always 4/8/16/32/64 ... plus even now they've added a lot of new
    types like 128-bit ints. Simply ASSUMING an int is 16 bits is
    'usually safe' but not necessarily 'best practice' and limits future
    (or past) compatibility. 'C' lets you fly free ...
    but that CAN be straight into a window pane

    Assuming an int is 16 bits is not a good idea. I wouldn't even assume a
    short is 16 bits

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Richard Kettlewell on Fri Jun 27 13:24:06 2025
    On 6/27/25 3:37 AM, Richard Kettlewell wrote:
    candycanearter07 <candycanearter07@candycanearter07.nomail.afraid>
    writes:
    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:34 this Tuesday (GMT):
    <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Aren't you supposed to multiply by sizeof as well?

    Multiply by sizeof what? sizeof(char)? This was in the
    pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
    still always 1.

    I still multiply by sizeof(char), half because of habit and half to
    make it clear to myself I'm making a char array, even if it's
    "redundant". I kinda thought that was the "canonical" way to do that,
    since you could have a weird edge case with a system defining char as
    something else?

    Whatever the representation of char, sizeof(char)=1. That’s what the definition of sizeof is - char is the unit it counts in.

    From the language specification:

    When sizeof is applied to an operand that has type char, unsigned
    char, or signed char, (or a qualified version thereof) the result is
    1. When applied to an operand that has array type, the result is the
    total number of bytes in the array. When applied to an operand that
    has structure or union type, the result is the total number of bytes
    in such an object, including internal and trailing padding.

    A programmer can adopt a personal style of redundantly multiplying by 1
    if they like, it’ll be a useful hint to anyone else reading the code
    that the author didn’t know the language very well. But in no way is
    anyone ‘supposed’ to do it.

    "Best practice" sometimes means a little bit of
    redundant/clarifying code.

    Some of us are old enough to remember when CPUs were
    not always 4/8/16/32/64 ... plus even now they've
    added a lot of new types like 128-bit ints. Simply
    ASSUMING an int is 16 bits is 'usually safe' but
    not necessarily 'best practice' and limits future
    (or past) compatibility. 'C' lets you fly free ...
    but that CAN be straight into a window pane :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Fri Jun 27 19:13:15 2025
    On 27/06/2025 18:27, c186282 wrote:
    On 6/27/25 4:14 AM, Lawrence D'Oliveiro wrote:
    On Fri, 27 Jun 2025 08:45:43 +0100, The Natural Philosopher wrote:

    On 27/06/2025 08:37, Richard Kettlewell wrote:

    But in no way is anyone ‘supposed’ to do it.

    Morality rarely enters into code writing, unless introduced by parties
    with 'political' aims.

    I must learn to use that as an excuse: the next time someone complains
    about the way I write my code, I can tell them that their criticism is
    “political”.

    In your case it almost certainly will be.

      Hey ... it could WORK !  :-)

      "Racist", "colonialist" and "gender-fascist" should work
      in some locales as well

    Byteist, wordist, and non-boolean...

      Alas, 'AI' can and HAS been tampered with to
      achieve PC results for political reasons already.

    Indeed.

    --
    Of what good are dead warriors? … Warriors are those who desire battle
    more than peace. Those who seek battle despite peace. Those who thump
    their spears on the ground and talk of honor. Those who leap high the
    battle dance and dream of glory … The good of dead warriors, Mother, is
    that they are dead.
    Sheri S Tepper: The Awakeners.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lew Pitcher@21:1/5 to rbowman on Fri Jun 27 18:20:31 2025
    On Fri, 27 Jun 2025 17:40:02 +0000, rbowman wrote:

    On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:

    Some of us are old enough to remember when CPUs were
    not always 4/8/16/32/64 ... plus even now they've added a lot of new
    types like 128-bit ints. Simply ASSUMING an int is 16 bits is
    'usually safe' but not necessarily 'best practice' and limits future
    (or past) compatibility. 'C' lets you fly free ...
    but that CAN be straight into a window pane

    Assuming an int is 16 bits is not a good idea. I wouldn't even assume a
    short is 16 bits

    It would depend on the programming language you use, its conformance to standards, and which standard it conforms to.

    The ISO C standards, for instance, dictate that
    - a char is at least 8 bits wide,
    - an unsigned short int must be able to, at least, express values
    between 0 and 65535, and
    - an unsigned int must be able to, at least, express values between 0 and
    65535

    These last two imply that both unsigned short int and int are at least
    16 bits wide. At least, according to the standard.

    Now, you /can/ have a C compiler that DOES NOT comply, PARTIALLY complies,
    or complies (WHEN REQUESTED) with the ISO C standard; for those compilers,
    "you pay your money, and you take your chances"

    HTH
    --
    Lew Pitcher
    "In Skills We Trust"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to candycanearter07@candycanearter07.n on Fri Jun 27 19:40:12 2025
    candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:43 this Saturday (GMT):
    On 2025-06-21, rbowman <bowman@montana.com> wrote:
    On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return codes
    or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not
    the Dennis Ritchies of the world.

    I'm paranoid enough that I check the return of malloc and try to log the
    problem even though I'm probably screwed at that point. It has pointed
    out errors for calloc if you've managed to come up with a negative size.

    I have worked with programmers that assumed nothing bad would ever
    happen. Sadly, some had years of experience.

    Some years ago, I heard of a bug related to use of malloc. The
    code had _intended_ to dynamically allocate storage for a string
    and the terminating null byte. It was _intended_ to do this:

    dest = malloc(strlen(src)+1);

    Instead, a paren was misplaced:

    dest = malloc(strlen(src))+1;

    IIUC, the next line copied the src string into the newly-
    allocated destination.

    Aren't you supposed to multiply by sizeof as well?

    Only if you are allocating space for a group of things that are
    themselves larger than bytes. Since 'strlen' is a string operator, the individual pieces are "bytes" so you don't need to multiply in this
    instance.

    Those who had worked on that project longer said the bug had been
    latent in the code for several years, most likely with alignment
    padding masking the bug from being discovered. Curiously, the
    bug made itself manifest immediately upon changing from a 32-bit
    build environment to a 64-bit build environment.


    I'm more surprised it didn't segfault. Any idea what caused it to not?
    I know strlen doesn't account for the terminating character, but it
    seems like it should've been TWO bytes shorter...

    Almost all mallocs do not give you /exactly/ the number of bytes you
    request. The actual allocated buffer (by malloc) returned is often
    larger (common choices for rounding up are: cpu bus size, cpu cache
    line size, page size, or next higher power of two). So the buffer in
    use starting "one more" byte past the actual start of the allocated
    space would not be noticed if that "rounding up" actually added at
    least one extra byte on the end of the buffer.
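
    On glibc you can watch that rounding happen with the (glibc-specific)
    malloc_usable_size() extension:

    #include <stdio.h>
    #include <stdlib.h>
    #include <malloc.h>     /* glibc extension header */

    int main(void) {
        char *p = malloc(13);
        if (p == NULL)
            return 1;
        /* typically prints a value larger than the 13 requested */
        printf("asked for 13, usable: %zu\n", malloc_usable_size(p));
        free(p);
        return 0;
    }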

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Fri Jun 27 18:16:49 2025
    On 6/27/25 1:40 PM, rbowman wrote:
    On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:

    Some of us are old enough to remember when CPUs were
    not always 4/8/16/32/64 ... plus even now they've added a lot of new
    types like 128-bit ints. Simply ASSUMING an int is 16 bits is
    'usually safe' but not necessarily 'best practice' and limits future
    (or past) compatibility. 'C' lets you fly free ...
    but that CAN be straight into a window pane

    Assuming an int is 16 bits is not a good idea. I wouldn't even assume a
    short is 16 bits

    Voice of experience for sure. Things have been
    represented/handled just SO many ways over the
    years. Using sizeof() is 'best practice' even
    if you're Just Sure how wide an int or whatever
    may be. 24 bits are still found in some DSPs
    and you MAY be asked someday to patch or port
    one of the old 12/18/24/36/48 programs.

    Ah ! Found a list of many CPUs, starting with
    the Babbage engine (50-decimal-digit words) :

    https://en.wikipedia.org/wiki/Word_(computer_architecture)#Table_of_word_sizes

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Lew Pitcher on Fri Jun 27 23:03:11 2025
    On Fri, 27 Jun 2025 18:20:31 -0000 (UTC), Lew Pitcher wrote:

    These last two imply that both unsigned short int and int are at least
    16 bits wide. At least, according to the standard.

    Or, you know, just rely on the explicit definitions in stdint.h.
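
    For instance (the exact-width types are technically optional on
    exotic hardware, but present on any mainstream platform):

    #include <stdio.h>
    #include <inttypes.h>   /* includes stdint.h plus printf macros */

    int main(void) {
        uint16_t u = 65535;     /* exactly 16 bits, by definition */
        int32_t  n = -42;       /* exactly 32 bits                */
        printf("%" PRIu16 " %" PRId32 "\n", u, n);
        return 0;
    }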

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Lawrence D'Oliveiro on Sat Jun 28 01:13:18 2025
    On 6/27/25 7:03 PM, Lawrence D'Oliveiro wrote:
    On Fri, 27 Jun 2025 18:20:31 -0000 (UTC), Lew Pitcher wrote:

    These last two imply that both unsigned short int and int are at least
    16 bits wide. At least, according to the standard.

    Or, you know, just rely on the explicit definitions in stdint.h.

    sizeof() will give the right sizes.

    Simple, easy to write, 'best practice'.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Sat Jun 28 06:10:33 2025
    On Sat, 28 Jun 2025 01:13:18 -0400, c186282 wrote:

    On 6/27/25 7:03 PM, Lawrence D'Oliveiro wrote:
    On Fri, 27 Jun 2025 18:20:31 -0000 (UTC), Lew Pitcher wrote:

    These last two imply that both unsigned short int and int are at least
    16 bits wide. At least, according to the standard.

    Or, you know, just rely on the explicit definitions in stdint.h.

    sizeof() will give the right sizes.

    Simple, easy to write, 'best practice'.

    #include <stdio.h>
    #include <stdlib.h>


    int main(void) {
        /* sizeof yields a size_t, so %zu is the matching conversion */
        printf("sizeof(char) %zu \n", sizeof(char));
        printf("sizeof(short) %zu \n", sizeof(short));
        printf("sizeof(int) %zu \n", sizeof(int));
        printf("sizeof(long) %zu \n", sizeof(long));
        return 0;
    }

    $ ./sizeof
    sizeof(char) 1
    sizeof(short) 2
    sizeof(int) 4
    sizeof(long) 8

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to c186282@nnada.net on Sat Jun 28 08:52:20 2025
    c186282 <c186282@nnada.net> writes:
    On 6/27/25 1:40 PM, rbowman wrote:
    On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:
    Some of us are old enough to remember when CPUs were not always
    4/8/16/32/64 ... plus even now they've added a lot of new types like
    128-bit ints. Simply ASSUMING an int is 16 bits is 'usually safe'
    but not necessarily 'best practice' and limits future (or past)
    compatibility. 'C' lets you fly free ... but that CAN be straight
    into a window pane
    Assuming an int is 16 bits is not a good idea. I wouldn't even assume
    a short is 16 bits

    (Apart from c186282 who for some reason thinks it’s “usually safe”, nobody here is making any such assumption about int.)

    Voice of experience for sure. Things have been represented/handled
    just SO many ways over the years. Using sizeof() is 'best practice'
    even if you're Just Sure how wide an int or whatever may be. 24 bits
    are still found in some DSPs and you MAY be asked someday to patch or
    port one of the old 12/18/24/36/48 programs.

    The thread is not about the size of int, etc. It’s about the specific
    case of sizeof(char) in C, and that is always 1.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris Ahlstrom@21:1/5 to The Natural Philosopher on Sat Jun 28 09:16:10 2025
    The Natural Philosopher wrote this post while blinking in Morse code:

    On 27/06/2025 18:27, c186282 wrote:

    <snip>

      Alas, 'AI' can and HAS been tampered with to
      achieve PC results for political reasons already.

    Indeed.

    Not just "PC" (I assume that the aspie meant "politically correct" and not "personal computer"), but any point of view.

    It's who trains the AI that counts.

    --
    Humor in the Court:
    Q: What can you tell us about the truthfulness and veracity of this defendant? A: Oh, she will tell the truth. She said she'd kill that sonofabitch--and
    she did!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Richard Kettlewell on Sat Jun 28 23:16:53 2025
    On 6/28/25 3:52 AM, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/27/25 1:40 PM, rbowman wrote:
    On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:
    Some of us are old enough to remember when CPUs were not always
    4/8/16/32/64 ... plus even now they've added a lot of new types like
    128-bit ints. Simply ASSUMING an int is 16 bits is 'usually safe'
    but not necessarily 'best practice' and limits future (or past)
    compatibility. 'C' lets you fly free ... but that CAN be straight
    into a window pane
    Assuming an int is 16 bits is not a good idea. I wouldn't even assume
    a short is 16 bits

    (Apart from c186282 who for some reason thinks it’s “usually safe”, nobody here is making any such assumption about int.)

    Eh ? I've been saying no such thing - instead recommending
    using sizeof() kind of religiously. I remember processors
    with odd word sizes - and assume there may be more in
    the future for whatever reasons.

    Voice of experience for sure. Things have been represented/handled
    just SO many ways over the years. Using sizeof() is 'best practice'
    even if you're Just Sure how wide an int or whatever may be. 24 bits
    are still found in some DSPs and you MAY be asked someday to patch or
    port one of the old 12/18/24/36/48 programs.

    The thread is not about the size of int, etc. It’s about the specific
    case of sizeof(char) in C, and that is always 1.

    CHAR is ONE BYTE of however many bits, but beyond that ..........

    Use sizeof() ...

    One flaw of sizeof() is that it reports in BYTES ... so,
    for example, how many BITS is that ? I've done stuff with
    low-resource micro-controllers and you use bit-fields to
    really pack in the data. Dealing with that level of potential
    incompatibility is a little more difficult and 'limits.h'
    can be helpful. The really rude way would be to have a
    little roll-over test function ... keep counting up until
    the field rolls back to zero. You'd only need it once.
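
    A sketch of that roll-over probe, using a made-up 3-bit field
    (unsigned wrap-around is well-defined, so this is legal C):

    #include <stdio.h>

    struct packed {
        unsigned flags : 3;     /* hypothetical bit-field */
    };

    int main(void) {
        struct packed p = { 0 };
        unsigned long count = 0;
        do {                    /* increment until the field wraps... */
            p.flags++;
            count++;
        } while (p.flags != 0);
        unsigned bits = 0;      /* ...then count is 2 to the width */
        while (count > 1) {
            count >>= 1;
            bits++;
        }
        printf("field width: %u bits\n", bits);     /* prints 3 */
        return 0;
    }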

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to c186282@nnada.net on Sun Jun 29 08:18:00 2025
    c186282 <c186282@nnada.net> writes:
    On 6/28/25 3:52 AM, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/27/25 1:40 PM, rbowman wrote:
    On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:
    Some of us are old enough to remember when CPUs were not always
    4/8/16/32/64 ... plus even now they've added a lot of new types like
    128-bit ints. Simply ASSUMING an int is 16 bits is 'usually safe'
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    but not necessarily 'best practice' and limits future (or past)
    compatibility. 'C' lets you fly free ... but that CAN be straight
    into a window pane
    Assuming an int is 16 bits is not a good idea. I wouldn't even assume
    a short is 16 bits
    (Apart from c186282 who for some reason thinks it’s “usually safe”,
    nobody here is making any such assumption about int.)

    Eh ? I've been saying no such thing - instead recommending
    using sizeof() kind of religiously. I remember processors
    with odd word sizes - and assume there may be more in
    the future for whatever reasons.

    You wrote it above. Underlined to help you find it again.

    The thread is not about the size of int, etc. It’s about the specific
    case of sizeof(char) in C, and that is always 1.

    CHAR is ONE BYTE of however many bits, but beyond that ..........

    Use sizeof() ...

    One flaw of sizeof() is that it reports in BYTES ... so,
    for example, how many BITS is that ?

    I don’t see a flaw there. If you want to know the number of bytes (in
    the C sense) then that’s what sizeof does. If you want to know the
    number of bits, multiply by CHAR_BIT. If you already have a number of
    bytes, and you want a number of bytes, no need to multiply by anything
    at all.
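
    In code that multiplication is simply:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* bits in an int = bytes * bits-per-byte */
        printf("int: %zu bytes, %zu bits\n",
               sizeof(int), sizeof(int) * CHAR_BIT);
        return 0;
    }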

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Richard Kettlewell on Sun Jun 29 19:09:23 2025
    On 6/29/25 3:18 AM, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/28/25 3:52 AM, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/27/25 1:40 PM, rbowman wrote:
    On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:
    Some of us are old enough to remember when CPUs were not always
    4/8/16/32/64 ... plus even now they've added a lot of new types like
    128-bit ints. Simply ASSUMING an int is 16 bits is 'usually safe'
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    but not necessarily 'best practice' and limits future (or past)
    compatibility. 'C' lets you fly free ... but that CAN be straight
    into a window pane
    Assuming an int is 16 bits is not a good idea. I wouldn't even assume
    a short is 16 bits
    (Apart from c186282 who for some reason thinks it’s “usually safe”, >>> nobody here is making any such assumption about int.)

    Eh ? I've been saying no such thing - instead recommending
    using sizeof() kind of religiously. I remember processors
    with odd word sizes - and assume there may be more in
    the future for whatever reasons.

    You wrote it above. Underlined to help you find it again.


    Mysteriously MISSING THE OTHER HALF OF THE LINE :-)

    The paragraph was :

    "Some of us are old enough to remember when CPUs were
    not always 4/8/16/32/64 ... plus even now they've
    added a lot of new types like 128-bit ints. Simply
    ASSUMING an int is 16 bits is 'usually safe' but
    not necessarily 'best practice' and limits future
    (or past) compatibility. 'C' lets you fly free ...
    but that CAN be straight into a window pane :-) "

    Fri, 27 Jun 2025 13:24:06 -0400

    These days you CAN 'usually' get away with assuming an
    int is 16 bits - but that won't always turn out well.


    The thread is not about the size of int, etc. It’s about the specific
    case of sizeof(char) in C, and that is always 1.

    CHAR is ONE BYTE of however many bits, but beyond that ..........

    Use sizeof() ...

    One flaw of sizeof() is that it reports in BYTES ... so,
    for example, how many BITS is that ?

    I don’t see a flaw there. If you want to know the number of bytes (in
    the C sense) then that’s what sizeof does. If you want to know the
    number of bits, multiply by CHAR_BIT. If you already have a number of
    bytes, and you want a number of bytes, no need to multiply by anything
    at all.

    That'll work fine.

    Not sure EVERY compiler has CHAR_BIT however ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Mon Jun 30 08:36:33 2025
    On 30/06/2025 00:09, c186282 wrote:
    These days you CAN 'usually' get away with assuming an
      int is 16 bits - but that won't always turn out well.

    I thought the default int was 32 bits or 64 bits these days.
    ISTR there is a definition of uint16_t somewhere if that is what you want


    A rapid google shows no one talking about a 16-bit int. Today it's
    reckoned to be 32 bits.
    But if it matters, use int16_t or uint16_t

    I can find no agreement as to what counts as a short, long, int, at all.
    If it matters, use the length specific variable names.


    --
    There is nothing a fleet of dispatchable nuclear power plants cannot do
    that cannot be done worse and more expensively and with higher carbon
    emissions and more adverse environmental impact by adding intermittent renewable energy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to The Natural Philosopher on Mon Jun 30 08:51:21 2025
    The Natural Philosopher <tnp@invalid.invalid> writes:
    On 30/06/2025 00:09, c186282 wrote:
    These days you CAN 'usually' get away with assuming an
      int is 16 bits - but that won't always turn out well.

    I thought the default int was 32 bits or 64 bits these days.
    ISTR there is a definition of uint16_t somewhere if that is what you want


    A rapid google shows no one talking about a 16-bit int. Today it's
    reckoned to be 32 bits. But if it matters, use int16_t or uint16_t

    I can find no agreement as to what counts as a short, long, int, at all.
    If it matters, use the length specific variable names.

    The language spec guarantees:
    char is at least 8 bits
    short and int are at least 16 bits
    long is at least 32 bits
    long long is at least 64 bits

    There are also some constraints on representation.

    Server/desktop platforms usually have int=32 bits; long is a bit more
    variable. It’d probably be 16 bits on a Z80 or similar where memory and computation are in short supply.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to The Natural Philosopher on Mon Jun 30 07:54:36 2025
    On Mon, 30 Jun 2025 08:36:33 +0100, The Natural Philosopher wrote:

    I can find no agreement as to what counts as a short, long, int, at
    all.

    The convention in the 64-bit *nix world is called “LP64”. This means that “long int” and pointers are 64 bits, while int is 32 bits.

    Microsoft compilers for 64-bit, on the other hand, follow “LLP64”. This means that “int” and “long int” are both 32 bits; if you want a 64-bit integer, you have to say “long long int”.

    If it matters, use the length specific variable names.

    #include <stdint.h> gives you explicit names for the various sizes, in
    both signed and unsigned alternatives.
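
    A quick probe of which model you are on (the output is
    platform-dependent, of course):

    #include <stdio.h>

    int main(void) {
        /* LP64 prints 4 8 8; LLP64 prints 4 4 8 */
        printf("%zu %zu %zu\n",
               sizeof(int), sizeof(long), sizeof(void *));
        return 0;
    }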

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Richard Kettlewell on Mon Jun 30 08:59:25 2025
    On 30/06/2025 08:51, Richard Kettlewell wrote:
    The Natural Philosopher <tnp@invalid.invalid> writes:
    On 30/06/2025 00:09, c186282 wrote:
    These days you CAN 'usually' get away with assuming an
      int is 16 bits - but that won't always turn out well.

    I thought the default int was 32 bits or 64 bits these days.
    ISTR there is a definition of uint16_t somewhere if that is what you want


    A rapid google shows no one talking about a 16-bit int. Today it's
    reckoned to be 32 bits. But if it matters, use int16_t or uint16_t

    I can find no agreement as to what counts as a short, long, int, at all.
    If it matters, use the length specific variable names.

    The language spec guarantees:
    char is at least 8 bits
    short and int are at least 16 bits
    long is at least 32 bits
    long long is at least 64 bits

    There are also some constraints on representation.

    Server/desktop platforms usually have int=32 bits; long is a bit more variable. It’d probably be 16 bits on a Z80 or similar where memory and computation are in short supply.

    I don't remember that at all. Short and int were 16 bits on 8 bit
    compilers, and long was 32 bits

    But it all goes to show that if it's in any way important, you should be specific as to how large your variables are.


    --
    “it should be clear by now to everyone that activist environmentalism
    (or environmental activism) is becoming a general ideology about humans,
    about their freedom, about the relationship between the individual and
    the state, and about the manipulation of people under the guise of a
    'noble' idea. It is not an honest pursuit of 'sustainable development,'
    a matter of elementary environmental protection, or a search for
    rational mechanisms designed to achieve a healthy environment. Yet
    things do occur that make you shake your head and remind yourself that
    you live neither in Joseph Stalin’s Communist era, nor in the Orwellian utopia of 1984.”

    Vaclav Klaus

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to c186282@nnada.net on Mon Jun 30 08:56:12 2025
    c186282 <c186282@nnada.net> writes:
    On 6/29/25 3:18 AM, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/28/25 3:52 AM, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 6/27/25 1:40 PM, rbowman wrote:
    On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:
    Some of us are old enough to remember when CPUs were not always
    4/8/16/32/64 ... plus even now they've added a lot of new types like
    128-bit ints. Simply ASSUMING an int is 16 bits is 'usually safe'
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    but not necessarily 'best practice' and limits future (or past)
    compatibility. 'C' lets you fly free ... but that CAN be straight
    into a window pane
    Assuming an int is 16 bits is not a good idea. I wouldn't even assume
    a short is 16 bits
    (Apart from c186282 who for some reason thinks it’s “usually safe”,
    nobody here is making any such assumption about int.)

    Eh ? I've been saying no such thing - instead recommending
    using sizeof() kind of religiously. I remember processors
    with odd word sizes - and assume there may be more in
    the future for whatever reasons.
    You wrote it above. Underlined to help you find it again.


    Mysteriously MISSING THE OTHER HALF OF THE LINE :-)

    It’s quoted above in full, and the bit I didn’t underline doesn’t change your claim that it’s “usually safe”.

    If you disagree with that then argue with your past self, not me, I’m
    just quoting what you wrote.

    The thread is not about the size of int, etc. It’s about the specific
    case of sizeof(char) in C, and that is always 1.

    CHAR is ONE BYTE of however many bits, but beyond that ..........

    Use sizeof() ...

    One flaw of sizeof() is that it reports in BYTES ... so,
    for example, how many BITS is that ?
    I don’t see a flaw there. If you want to know the number of bytes (in
    the C sense) then that’s what sizeof does. If you want to know the
    number of bits, multiply by CHAR_BIT. If you already have a number of
    bytes, and you want a number of bytes, no need to multiply by anything
    at all.

    That'll work fine.

    Not sure EVERY compiler has CHAR_BIT however ...

    It was introduced in 1989 as a requirement for all C implementations;
    anything that doesn’t have it doesn’t really have a good claim to be C
    any more.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to Richard Kettlewell on Mon Jun 30 09:00:11 2025
    Richard Kettlewell <invalid@invalid.invalid> writes:
    The Natural Philosopher <tnp@invalid.invalid> writes:
    I can find no agreement as to what counts as a short, long, int, at all.
    If it matters, use the length specific variable names.

    The language spec guarantees:
    char is at least 8 bits
    short and int are at least 16 bits
    long is at least 32 bits
    long long is at least 64 bits

    There are also some constraints on representation.

    Server/desktop platforms usually have int=32 bits; long is a bit more variable. It’d probably be 16 bits on a Z80 or similar where memory and computation are in short supply.

    To clarify: int would probably be 16 bits on a Z80. long still has to be
    at least 32 bits.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to The Natural Philosopher on Mon Jun 30 08:33:37 2025
    On Mon, 30 Jun 2025 08:59:25 +0100, The Natural Philosopher wrote:

    ... long was 32 bits

    On Microsoft Windows.

    Remember, Unix systems were fully 32-bit right from the 1980s onwards, and embraced 64-bit early on with the DEC Alpha in 1992. So “long” would have been 64 bits from at least that time, because why waste an occurrence of
    the “long” qualifier?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Richard Kettlewell on Mon Jun 30 09:24:35 2025
    On 30/06/2025 09:00, Richard Kettlewell wrote:
    Richard Kettlewell <invalid@invalid.invalid> writes:
    The Natural Philosopher <tnp@invalid.invalid> writes:
    I can find no agreement as to what counts as a short, long, int, at all.
    If it matters, use the length specific variable names.

    The language spec guarantees:
    char is at least 8 bits
    short and int are at least 16 bits
    long is at least 32 bits
    long long is at least 64 bits

    There are also some constraints on representation.

    Server/desktop platforms usually have int=32 bits; long is a bit more
    variable. It’d probably be 16 bits on a Z80 or similar where memory and
    computation are in short supply.

    To clarify: int would probably be 16 bits on a Z80. long still has to be
    at least 32 bits.

    Yes.
    It was a pain because the natural manipulation on an 8-bit chip is 8 bits. Making it 16-bit led to a lot of extra code.
    --
    Gun Control: The law that ensures that only criminals have guns.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to The Natural Philosopher on Mon Jun 30 08:34:37 2025
    On Mon, 30 Jun 2025 09:24:35 +0100, The Natural Philosopher wrote:

    On 30/06/2025 09:00, Richard Kettlewell wrote:

    To clarify: int would probably be 16 bits on a Z80. long still has to
    be at least 32 bits.

    Yes.
    It was a pain because the natural manipulation on an 8-bit chip is 8
    bits. Making it 16-bit led to a lot of extra code.

    I guess 8-bit ints were not considered useful on any system.

    You still had “unsigned char”, though.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Lawrence D'Oliveiro on Mon Jun 30 18:10:25 2025
    On Mon, 30 Jun 2025 07:54:36 -0000 (UTC), Lawrence D'Oliveiro wrote:

    #include <stdint.h> gives you explicit names for the various sizes, in
    both signed and unsigned alternatives.

    #if __WORDSIZE == 64
    typedef long int int_fast16_t;
    typedef long int int_fast32_t;
    typedef long int int_fast64_t;
    #else
    typedef int int_fast16_t;
    typedef int int_fast32_t;
    __extension__
    typedef long long int int_fast64_t;
    #endif

    I'll admit it isn't clear to me what it's doing there.
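
    For what it's worth, those typedefs select, for each minimum width,
    whichever type the platform considers fastest that is at least that
    wide; on 64-bit glibc that comes out as plain long for all three. A
    quick check, as a sketch:

    #include <stdint.h>
    #include <stdio.h>

    int main( void ) {
        /* on a typical 64-bit glibc system all three print 8 bytes */
        printf( "int_fast16_t: %zu bytes\n", sizeof( int_fast16_t ) );
        printf( "int_fast32_t: %zu bytes\n", sizeof( int_fast32_t ) );
        printf( "int_fast64_t: %zu bytes\n", sizeof( int_fast64_t ) );
        return 0;
    }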

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Mon Jun 30 23:12:21 2025
    On 6/30/25 3:36 AM, The Natural Philosopher wrote:
    On 30/06/2025 00:09, c186282 wrote:
    These days you CAN 'usually' get away with assuming an
       int is 16 bits - but that won't always turn out well.

    I thought the default int was 32 bits or 64 bits these days.
    ISTR there is a definition of uint16_t somewhere if that is what you want

    Well, you can TEST that easily enough with your
    favorite compiler. Declare an unsigned int, init
    to zero, then count up until it wraps back to zero.
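
    A sketch of that test; unsigned overflow wraps by definition, though
    shifting a single bit off the end is much quicker than counting every
    value (and printing UINT_MAX from <limits.h> is quicker still):

    #include <stdio.h>

    int main( void ) {
        /* shift a 1 bit left until it falls off the end; the number
           of shifts is the width of unsigned int */
        unsigned int probe = 1;
        int bits = 0;
        while ( probe != 0 ) {
            probe <<= 1;
            bits++;
        }
        printf( "unsigned int is %d bits wide\n", bits );
        return 0;
    }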

    MOST compilers, I think ints are still almost always
    16-bits as a holdover from the good old days. You
    can declare long and long long ints of course, but
    int alone, expect it to be 16-bit.

    Actually it's become rather annoying ... seems
    like there are way TOO many 'types' these days.
    Everybody invents new ones, and then there's the
    ones M$ invents. Often same thing by many names.
    Anybody for int8, int16, int32, int64, int128
    and that's that ??? It'd make things LOTS easier,
    a lot less conversion/casting involved.

    Guess we'll have to add int256 ... but keep the
    naming simple and no BS.

    A rapid google shows no one talking about a 16-bit int. Today it's
    reckoned to be 32-bit.
    But if it matters, use int16_t  or uint16_t

    I can find no agreement as to what counts as a short, long, int, at all.
    If it matters, use the length-specific variable names.

    My gripe exactly ... the plethora of 'types', the
    evolution of chips, it's just TOO MUCH these days. More
    chances to screw up for no good reason.

    The 'programming community' needs to fix this, no
    external force can. AGREE on plain clean obvious
    type defs and USE them everywhere.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From jayjwa@21:1/5 to Lawrence D'Oliveiro on Mon Jun 30 22:18:16 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    Remember, Unix systems were fully 32-bit right from the 1980s onwards, and embraced 64-bit early on with the DEC Alpha in 1992. So “long” would have been 64 bits from at least that time, because why waste an occurrence of
    the “long” qualifier?

    $ purge
    $ cc /version
    Compaq C V6.4-005 on OpenVMS VAX V7.3
    $ cc sizeof.c
    $ link sizeof
    $ run sizeof
    Size of int is 4 bytes, 32 bits.
    Size of float is 4 bytes, 32 bits.
    Size of double is 8 bytes, 64 bits.
    Size of char is 1 byte, 8 bits.
    Size of long int is 4 bytes, 32 bits.
    $ type sizeof.c
    /* A program to show the size of various types in bytes and bits. */
    #include <stdio.h>
    #include <stdlib.h>

    /* sizeof yields a size_t, so cast to int before printing with %d;
       that is well-defined on both compilers shown here (the VMS one
       predates the C99 %zu specifier). */
    int main( void ) {
    printf( "Size of int is %d bytes, %d bits.\n", (int)sizeof( int ), (int)( sizeof( int ) * 8 ) );
    printf( "Size of float is %d bytes, %d bits.\n", (int)sizeof( float ), (int)( sizeof( float ) * 8 ) );
    printf( "Size of double is %d bytes, %d bits.\n", (int)sizeof( double ), (int)( sizeof( double ) * 8 ) );
    printf( "Size of char is %d byte, %d bits.\n", (int)sizeof( char ), (int)( sizeof( char ) * 8 ) );
    printf( "Size of long int is %d bytes, %d bits.\n", (int)sizeof( long int ), (int)( sizeof( long int ) * 8 ) );

    return EXIT_SUCCESS;
    }

    $

    gcc -v
    Reading specs from /usr/lib64/gcc/x86_64-slackware-linux/15.1.0/specs
    COLLECT_GCC=gcc
    COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-slackware-linux/15.1.0/lto-wrapper
    Target: x86_64-slackware-linux
    Configured with: ../configure --prefix=/usr --libdir=/usr/lib64 --mandir=/usr/man --infodir=/usr/info --enable-shared --enable-bootstrap --enable-languages=ada,c,c++,d,fortran,go,lto,m2,objc,obj-c++,rust,cobol --enable-threads=posix --enable-checking=release --with-system-zlib --enable-libstdcxx-dual-abi --with-default-libstdcxx-abi=new --disable-libstdcxx-pch --disable-libunwind-exceptions --enable-__cxa_atexit --disable-libssp --enable-gnu-indirect-function --enable-gnu-unique-object --enable-plugin --enable-lto --disable-install-libiberty --disable-werror --with-gnu-ld --with-isl --verbose --with-arch-directory=amd64 --disable-gtktest --enable-clocale=gnu --with-arch=x86-64 --enable-multilib --target=x86_64-slackware-linux --build=x86_64-slackware-linux --host=x86_64-slackware-linux
    Thread model: posix
    Supported LTO compression algorithms: zlib zstd
    gcc version 15.1.0 (GCC)

    gcc -o sizeof sizeof.c
    ./sizeof
    Size of int is 4 bytes, 32 bits.
    Size of float is 4 bytes, 32 bits.
    Size of double is 8 bytes, 64 bits.
    Size of char is 1 byte, 8 bits.
    Size of long int is 8 bytes, 64 bits.

    https://www.reddit.com/r/C_Programming/comments/15jtsv8/does_anyone_knows_a_compiler_where_sizeoflong/

    --
    PGP Key ID: 781C A3E2 C6ED 70A6 B356 7AF5 B510 542E D460 5CAE
    "The Internet should always be the Wild West!"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Richard Kettlewell on Mon Jun 30 23:26:22 2025
    On 6/30/25 3:51 AM, Richard Kettlewell wrote:
    The Natural Philosopher <tnp@invalid.invalid> writes:
    On 30/06/2025 00:09, c186282 wrote:
    These days you CAN 'usually' get away with assuming an
      int is 16 bits - but that won't always turn out well.

    I thought the default int was 32 bits or 64 bits these days.
    ISTR there is a definition of uint16_t somewhere if that is what you want


    A rapid google shows no one talking about a 16-bit int. Today it's
    reckoned to be 32-bit. But if it matters, use int16_t or uint16_t

    I can find no agreement as to what counts as a short, long, int, at all.
    If it matters, use the length-specific variable names.

    The language spec guarantees:
    char is at least 8 bits
    short and int are at least 16 bits
    long is at least 32 bits
    long long is at least 64 bits

    There are also some constraints on representation.

    Server/desktop platforms usually have int=32 bits; long is a bit more variable. It’d probably be 16 bits on a Z80 or similar where memory and computation are in short supply.

    Note that "at LEAST 8 bits", "at LEAST 16 bits" is
    just BAD. Making use of modulo, roll-over, can be
    very useful sometimes. If uchars are not 8 bits
    exactly you get unrealized mistakes.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Lawrence D'Oliveiro on Mon Jun 30 23:30:48 2025
    On 6/30/25 4:34 AM, Lawrence D'Oliveiro wrote:
    On Mon, 30 Jun 2025 09:24:35 +0100, The Natural Philosopher wrote:

    On 30/06/2025 09:00, Richard Kettlewell wrote:

    To clarify: int would probably be 16 bits on a Z80. long still has to
    be at least 32 bits.

    Yes.
    It was a pain because the natural manipulation on an 8 bit chip is 8
    bits. making it 16 bit led to a lot of extra code.

    I guess 8-bit ints were not considered useful on any system.

    You still had “unsigned char”, though.

    int8s ARE useful - especially on
    low-resource microcontrollers. If
    they fit your need you don't WANT to
    waste a whole other byte.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Tue Jul 1 04:02:55 2025
    On Mon, 30 Jun 2025 23:12:21 -0400, c186282 wrote:

    MOST compilers, I think ints are still almost always 16-bits as a
    holdover from the good old days. You can declare long and long long
    ints of course, but int alone, expect it to be 16-bit.

    Not any compiler I've worked with in the last few decades. That sort of
    went out with CP/M. Way back when bytes were worth their weight in gold, someone declared 'short ObjNum;' That's rather important since that is
    the number of objects that can be handled by the system including
    incidents, comments, persons, vehicles, alerts and so forth. Being signed
    and 16 bits the maximum value is 32767

    It got by for an amazingly long time but as larger, busier sites came
    along, the system ran out of object numbers. It wasn't pretty.

    Edit a number of files to make it an unsigned short and you get 65535. It
    was close a couple of times but with some sophisticated reuse strategies
    there never was a disaster.

    Why not make it an int? Even with a signed int 2147483647 wouldn't be a problem. Because an int is 32 bits. Every struct, every XDR encoding, database, and so forth would have to be modified so we crossed our
    fingers. In DB2 SMALLINT is 16 bits, INT is 32 bits. SQL Server is the
    same. Both of them use BIGINT for 64 bits.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Tue Jul 1 10:49:55 2025
    On 01/07/2025 04:26, c186282 wrote:
    On 6/30/25 3:51 AM, Richard Kettlewell wrote:

    The language spec guarantees:
       char is at least 8 bits
       short and int are at least 16 bits
       long is at least 32 bits
       long long is at least 64 bits

    There are also some constraints on representation.

    Server/desktop platforms usually have int=32 bits; long is a bit more
    variable. It’d probably be 16 bits on a Z80 or similar where memory and
    computation are in short supply.

      Note that "at LEAST 8 bits", "at LEAST 16 bits" is
      just BAD. Making use of modulo, roll-over, can be
      very useful sometimes. If uchars are not 8 bits
      exactly you get unrealized mistakes.

    As I said, if it's important, specify it exactly.
    I am not sure that Richard's 'char is *at least* 8 bits' is correct tho...



    --
    When plunder becomes a way of life for a group of men in a society, over
    the course of time they create for themselves a legal system that
    authorizes it and a moral code that glorifies it.

    Frédéric Bastiat

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lew Pitcher@21:1/5 to The Natural Philosopher on Tue Jul 1 12:44:20 2025
    On Tue, 01 Jul 2025 10:49:55 +0100, The Natural Philosopher wrote:

    On 01/07/2025 04:26, c186282 wrote:
    On 6/30/25 3:51 AM, Richard Kettlewell wrote:

    The language spec guarantees:
       char is at least 8 bits
       short and int are at least 16 bits
       long is at least 32 bits
       long long is at least 64 bits

    There are also some constraints on representation.

    Server/desktop platforms usually have int=32 bits; long is a bit more
    variable. It’d probably be 16 bits on a Z80 or similar where memory and >>> computation are in short supply.

      Note that "at LEAST 8 bits", "at LEAST 16 bits" is
      just BAD. Making use of modulo, roll-over, can be
      very useful sometimes. If uchars are not 8 bits
      exactly you get unrealized mistakes.

    As I said, if it's important, specify it exactly.
    I am not sure that Richard's 'char is *at least* 8 bits' is correct tho...

    In Kernighan & Ritchie's "The C Programming Language", the authors note that, on the four architectures that C had been implemented on (at the time of writing),
    a char occupied 8 bits on three of them. On the fourth, a char occupied 9 bits.

    So, the statement that "char is *at least* 8 bits" seems true for K&R C.

    In the ISO/IEC 9899:1999 draft of the ISO C standard, paragraph 5.2.4.2.1 "Sizes of integer types", the (draft) standard carries a table of required macros and their values, with the caveat that, for each macro, "Their implementation-defined values shall be equal or greater in magnitude
    (absolute value) to those shown, with the same sign". In this table, it
    lists
    "number of bits for smallest object that is not a bit-field (byte)"
    CHAR_BIT 8
    indicating that /the minimum/ value for CHAR_BIT (and, by implication,
    the minimum size for the number of bits in a byte) is 8, and that implementations may use larger values for CHAR_BIT.

    So, the statement that "char is *at least* 8 bits" seems true for ISO C'99.

    And, similar language exists in the draft of each subsequent ISO C standard,
    so the statement that "char is *at least* 8 bits" seems true for all standard implementations of C since K&R C was documented.
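
    So code that genuinely depends on 8-bit chars can state the assumption
    instead of leaving it silent; a minimal sketch:

    #include <limits.h>

    #if CHAR_BIT != 8
    #error "this code assumes 8-bit chars"
    #endif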

    --
    Lew Pitcher
    "In Skills We Trust"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Tue Jul 1 12:42:30 2025
    On 7/1/25 12:02 AM, rbowman wrote:
    On Mon, 30 Jun 2025 23:12:21 -0400, c186282 wrote:

    MOST compilers, I think ints are still almost always 16-bits as a
    holdover from the good old days. You can declare long and long long
    ints of course, but int alone, expect it to be 16-bit.

    Not any compiler I've worked with in the last few decades. That sort of
    went out with CP/M. Way back when bytes were worth their weight in gold, someone declared 'short ObjNum;' That's rather important since that is
    the number of objects that can be handled by the system including
    incidents, comments, persons, vehicles, alerts and so forth. Being signed
    and 16 bits the maximum value is 32767

    It got by for an amazingly long time but as larger, busier sites came
    along, the system ran out of object numbers. It wasn't pretty.

    Edit a number of files to make it an unsigned short and you get 65535. It
    was close a couple of times but with some sophisticated reuse strategies there never was a disaster.

    Why not make it an int? Even with a signed int 2147483647 wouldn't be a problem. Because an int is 32 bits. Every struct, every XDR encoding, database, and so forth would have to be modified so we crossed our
    fingers. In DB2 SMALLINT is 16 bits, INT is 32 bits. SQL Server is the
    same. Both of them use BIGINT for 64 bits.


    We keep trying to build skyscrapers - but the
    building blocks are too often non-standard.
    Maybe it's time for one last big re-think before
    'AI' takes over most programming tasks ?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to Lew Pitcher on Wed Jul 2 01:13:07 2025
    On 2025-07-01, Lew Pitcher <lew.pitcher@digitalfreehold.ca> wrote:

    In Kernighan & Ritchie's "The C Programming Language", the authors
    note that, on the four architectures that C had been implemented on
    (at the time of writing), a char occupied 8 bits on three of them.
    On the fourth, a char occupied 9 bits.

    Sounds like a Univac 1100-series mainframe in ASCII mode.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Charlie Gibbs on Tue Jul 1 21:46:21 2025
    On 7/1/25 9:13 PM, Charlie Gibbs wrote:
    On 2025-07-01, Lew Pitcher <lew.pitcher@digitalfreehold.ca> wrote:

    In Kernighan & Ritchie's "The C Programming Language", the authors
    note that, on the four architectures that C had been implemented on
    (at the time of writing), a char occupied 8 bits on three of them.
    On the fourth, a char occupied 9 bits.

    Sounds like a Univac 1100-series mainframe in ASCII mode.

    Yep. As said, today's norm isn't always the
    way it was - and may not be in the future.

    Dunno if any of our code will be useful in
    the future, however sometimes you still have
    to repair/update old boxes or port some
    old code over to new boxes. I remember having
    to port over a bunch of FORTRAN statistical
    code from a System-360 to GW-BASIC long back ...
    had to pgm the math over to 8087 using DATA
    statements and pushes. What a pain ! The
    360 used 18-bit words as I recall, made it
    even more fun :-)

    No, the shop didn't want to spring for a FORTRAN
    compiler of their own - cheap-asses.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lew Pitcher@21:1/5 to Charlie Gibbs on Wed Jul 2 16:03:15 2025
    On Wed, 02 Jul 2025 01:13:07 +0000, Charlie Gibbs wrote:

    On 2025-07-01, Lew Pitcher <lew.pitcher@digitalfreehold.ca> wrote:

    In Kernighan & Ritchie's "The C Programming Language", the authors
    note that, on the four architectures that C had been implemented on
    (at the time of writing), a char occupied 8 bits on three of them.
    On the fourth, a char occupied 9 bits.

    Sounds like a Univac 1100-series mainframe in ASCII mode.

    K&R document it as "Honeywell 6000" using ASCII.
    They note that int, short int, and long int were 36 bits
    on that machine, as was float. double was 72 bits long.


    --
    Lew Pitcher
    "In Skills We Trust"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to rbowman on Sun Jul 20 14:31:35 2025
    rbowman <bowman@montana.com> wrote:

    Some of the legacy code was stingy like they had to pay for every
    byte.

    Depending upon how 'legacy' is legacy, some of that stingy code might
    have been written when it had only 32KiB max core memory to use (and
    that 32KiB supported 24 simultaneous users), so, yes, it did need to be
    rather stingy with allocations of memory for storing things.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to c186282@nnada.net on Sun Jul 20 14:37:54 2025
    c186282 <c186282@nnada.net> wrote:
    On 6/25/25 12:44 PM, John Ames wrote:
    On Wed, 25 Jun 2025 09:32:13 -0700
    John Ames <commodorejohn@gmail.com> wrote:

    Regardless, easy fix - always allocate at least one byte/word/
    whatever more than you THINK you need. Minimal penalty - possibly
    BIG gains.

    That strikes me as a terrible strategy - allocating N elements extra
    won't save you from overstepping into N+1 if and when you finally do

    (Additionally, it won't save you from whatever weird undefined behavior
    may result from reading an element N which isn't even part of the
    "true" range and may have uninitialized/invalid data.)

    I've had good luck doing it that way since K&R.
    Doesn't hurt to null the entire space after
    allocating. Leave a speck of extra space and
    it covers a lot of potential little write
    issues. You still have to take care when reading.

    The problem, which John correctly pointed out, is that if the
    programmer is sloppy enough that this little extra "saves" them today,
    then it is still a ticking time bomb waiting to go off when someone
    futzes the code later and instead of the expected entry of a "serial
    number" of 8 digits and a buffer of 16 bytes, the futzer inserts 9, 10,
    11, 12, 13, 14, 15, 16, 17 (boom).

    Allocating "a little extra" is a feel good way to presume one is
    avoiding buffer overflow issues, but it does nothing to actually
    prevent them from going boom.

    And note, that 'futzing' might not be by an actual code futzer. It
    might just be a normal every day user who picked up the wrong invoice
    that had a 17 digit serial on it instead of the correct invoice that
    had the expected 8 digit serial.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich@21:1/5 to candycanearter07@candycanearter07.n on Sun Jul 20 14:42:47 2025
    candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:34 this Tuesday (GMT):
    On 2025-06-22, candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:43 this Saturday (GMT):
    On 2025-06-21, rbowman <bowman@montana.com> wrote:
    On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return codes
    or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not
    the Dennis Ritchies of the world.

    I'm paranoid enough that I check the return of malloc and try to log the
    problem even though I'm probably screwed at that point. It has pointed out
    errors for calloc if you've managed to come up with a negative size.

    I have worked with programmers that assumed nothing bad would ever happen.
    Sadly, some had years of experience.

    Some years ago, I heard of a bug related to use of malloc. The
    code had _intended_ to dynamically allocate storage for a string
    and the terminating null byte. It was _intended_ to do this:

    dest = malloc(strlen(src)+1);

    Instead, a paren was misplaced:

    dest = malloc(strlen(src))+1;

    IIUC, the next line copied the src string into the newly-
    allocated destination.
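
    To spell out the damage: the second form allocates strlen(src) bytes
    and then points one past the start of the block, so the copy overruns
    the allocation, and the adjusted pointer can't legally be passed to
    free() either. A corrected sketch (dup_string is a hypothetical
    helper name):

    #include <stdlib.h>
    #include <string.h>

    /* duplicate a string, or return NULL on allocation failure */
    char *dup_string( const char *src ) {
        char *dest = malloc( strlen( src ) + 1 ); /* the +1 belongs inside */
        if ( dest != NULL )
            memcpy( dest, src, strlen( src ) + 1 ); /* copies the NUL too */
        return dest;
    }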

    Aren't you supposed to multiply by sizeof as well?

    Multiply by sizeof what? sizeof(char)? This was in the
    pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
    still always 1.

    I still multiply by sizeof(char), half because of habit and half to make
    it clear to myself I'm making a char array, even if it's "redundant". I
    kinda thought that was the "canonical" way to do that, since you could
    have a weird edge case with a system defining char as something else?

    Per Wikipedia, 'char' in C is defined as "at least 8 bits" (https://en.wikipedia.org/wiki/C_data_types). And that 'at least'
    could have burned one in the past for "odd systems" that might have had
    9 bit characters or 16 bit characters.

    In today's world, for all but the most esoteric (embedded and/or FPGA)
    assuming char is exactly 8 bits is right often enough that no one
    notices. But multiplying by sizeof(char) does avoid it becoming an
    issue later on any unusual setups.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lew Pitcher@21:1/5 to Rich on Sun Jul 20 14:54:00 2025
    On Sun, 20 Jul 2025 14:42:47 +0000, Rich wrote:

    candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:34 this Tuesday (GMT):
    On 2025-06-22, candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Robert Riches <spamtrap42@jacob21819.net> wrote at 03:43 this Saturday (GMT):
    On 2025-06-21, rbowman <bowman@montana.com> wrote:
    On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:

    Very likely, but the idea was to protect the typical programmer from
    their own common mistakes (of not carefully checking error return codes
    or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not
    the Dennis Ritchies of the world.

    I'm paranoid enough that I check the return of malloc and try to log the
    problem even though I'm probably screwed at that point. It has pointed out
    errors for calloc if you've managed to come up with a negative size.
    I have worked with programmers that assumed nothing bad would ever happen.
    Sadly, some had years of experience.

    Some years ago, I heard of a bug related to use of malloc. The
    code had _intended_ to dynamically allocate storage for a string
    and the terminating null byte. It was _intended_ to do this:

    dest = malloc(strlen(src)+1);

    Instead, a paren was misplaced:

    dest = malloc(strlen(src))+1;

    IIUC, the next line copied the src string into the newly-
    allocated destination.

    Aren't you supposed to multiply by sizeof as well?

    Multiply by sizeof what? sizeof(char)? This was in the
    pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
    still always 1.

    I still multiply by sizeof(char), half because of habit and half to make
    it clear to myself I'm making a char array, even if it's "redundant". I
    kinda thought that was the "canonical" way to do that, since you could
    have a weird edge case with a system defining char as something else?

    Per Wikipedia, 'char' in C is defined as "at least 8 bits" (https://en.wikipedia.org/wiki/C_data_types).

    As per the ISO C standard, 'char' is defined as "at least 8 bits".

    And that 'at least'
    could have burned one in the past for "odd systems" that might have had
    9 bit characters or 16 bit characters.

    In today's world, for all but the most esoteric (embedded and/or FPGA) assuming char is exactly 8 bits is right often enough that no one
    notices. But multiplying by sizeof(char) does avoid it becoming an
    issue later on any unusual setups.

    sizeof(char) /is defined/ as a value of 1. A char is exactly 1 char big. Multiplying allocations by sizeof(char) is futile, and has no beneficial
    effect for "unusual setups".

    --
    Lew Pitcher
    "In Skills We Trust"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Rich on Sun Jul 20 16:51:57 2025
    On 20/07/2025 15:42, Rich wrote:
    In today's world, for all but the most esoteric (embedded and/or FPGA) assuming char is exactly 8 bits is right often enough that no one
    notices. But multiplying by sizeof(char) does avoid it becoming an
    issue later on any unusual setups.

    That's what I like. Absolutely emphasises the point to the next
    programmer even if the compiler doesn't need to know


    --
    Climate Change: Socialism wearing a lab coat.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lew Pitcher@21:1/5 to The Natural Philosopher on Sun Jul 20 16:15:54 2025
    On Sun, 20 Jul 2025 16:51:57 +0100, The Natural Philosopher wrote:

    On 20/07/2025 15:42, Rich wrote:
    In today's world, for all but the most esoteric (embedded and/or FPGA)
    assuming char is exactly 8 bits is right often enough that no one
    notices. But multiplying by sizeof(char) does avoid it becoming an
    issue later on any unusual setups.

    That's what I like. Absolutely emphasises the point to the next
    programmer even if the compiler doesn't need to know

    That's an awfully big leap for the next programmer to make, going
    from "I wonder why he multiplies this value by 1" to "Oho!! That
    MUST mean that CHAR_BIT is not 8!"

    --
    Lew Pitcher
    "In Skills We Trust"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to Rich on Sun Jul 20 21:18:54 2025
    Rich <rich@example.invalid> writes:
    In today's world, for all but the most esoteric (embedded and/or FPGA) assuming char is exactly 8 bits is right often enough that no one
    notices. But multiplying by sizeof(char) does avoid it becoming an
    issue later on any unusual setups.

    It avoids nothing. sizeof(char) remains equal to 1 even on platforms
    with CHAR_BIT > 8. This is a basic fact about the language.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to Rich on Mon Jul 21 08:42:04 2025
    Rich <rich@example.invalid> writes:
    The problem, which John correctly pointed out, is that if the
    programmer is sloppy enough that this little extra "saves" them today,
    then it is still a ticking time bomb waiting to go off when someone
    futzes the code later and instead of the expected entry of a "serial
    number" of 8 digits and a buffer of 16 bytes, the futzer inserts 9, 10,
    11, 12, 13, 14, 15, 16, 17 (boom).

    Allocating "a little extra" is a feel good way to presume one is
    avoiding buffer overflow issues, but it does nothing to actually
    prevent them from going boom.

    Conservative upper bounds of this kind address two issues:

    1) The possibility that you made a mistake in working out the upper
    bound. Off-by-one errors are such a common category that they get
    their own name; adding even 1 byte of headroom neutralizes them.

    If you think only “sloppy” programmers make this kind of mistake then
    you’re deluded. A more competent programmer may make fewer mistakes
    but no human is perfect.

    2) Approximation can make analysis easier. Why spend an hour proving
    that the maximum size something can be is 37 bytes if a few seconds
    mental arithmetic will prove it’s at most 64 bytes? (Unless you have
    1980s quantities of RAM, of course.)
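
    A concrete instance of point 2, as a sketch: the exact maximum length
    of a formatted 64-bit integer takes a moment's thought (20 digits,
    maybe a sign, a terminating NUL), while 32 is obviously enough and
    costs nothing:

    #include <stdio.h>

    void print_id( long long id ) {
        char buf[ 32 ];  /* 21 would do; 32 is an easy upper bound */
        snprintf( buf, sizeof buf, "%lld", id );
        puts( buf );
    }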

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Pancho@21:1/5 to John Ames on Mon Jul 21 18:44:05 2025
    On 7/21/25 17:12, John Ames wrote:
    On Mon, 21 Jul 2025 08:42:04 +0100
    Richard Kettlewell <invalid@invalid.invalid> wrote:

    Conservative upper bounds of this kind address two issues:

    1) The possibility that you made a mistake in working out the upper
    bound. Off-by-one errors are such a common category that they get
    their own name; adding even 1 byte of headroom neutralizes them.

    If you think only “sloppy” programmers make this kind of mistake
    then you’re deluded. A more competent programmer may make fewer
    mistakes but no human is perfect.

    2) Approximation can make analysis easier. Why spend an hour proving
    that the maximum size something can be is 37 bytes if a few seconds
    mental arithmetic will prove it’s at most 64 bytes? (Unless you
    have 1980s quantities of RAM, of course.)

    Sure, memory is cheap and we can often afford reasonably over-specced
    buffer sizes in Our Modern Age - but the fundamental problem remains. Treating "a little extra just to be on the safe side" as a ward against buffer overruns or other boundary errors is pretty much guaranteed to
    run into trouble down the line, and no amount of "nobody's perfect...!"
    will change that. If you're not working in a language that does bounds-checking for you, and your design is not one where you can say with
    *100% certainty* that boundary errors are literally impossible, CHECK
    YER DANG BOUNDS. Simple as that.


    An upper bound is certain. It is a fundamental concept in mathematical
    proofs. It is not just giving a bit extra and hoping for the best.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to John Ames on Mon Jul 21 20:47:23 2025
    John Ames <commodorejohn@gmail.com> writes:

    On Mon, 21 Jul 2025 08:42:04 +0100
    Richard Kettlewell <invalid@invalid.invalid> wrote:

    Conservative upper bounds of this kind address two issues:

    1) The possibility that you made a mistake in working out the upper
    bound. Off-by-one errors are such a common category that they get
    their own name; adding even 1 byte of headroom neutralizes them.

    If you think only “sloppy” programmers make this kind of mistake
    then you’re deluded. A more competent programmer may make fewer
    mistakes but no human is perfect.

    2) Approximation can make analysis easier. Why spend an hour proving
    that the maximum size something can be is 37 bytes if a few seconds
    mental arithmetic will prove it’s at most 64 bytes? (Unless you
    have 1980s quantities of RAM, of course.)

    Sure, memory is cheap and we can often afford reasonably over-specced
    buffer sizes in Our Modern Age - but the fundamental problem remains. Treating "a little extra just to be on the safe side" as a ward against buffer overruns or other boundary errors is pretty much guaranteed to
    run into trouble down the line, and no amount of "nobody's perfect...!"
    will change that. If you're not working in a language that does bounds-checking for you, and your design is not one where you can say with
    *100% certainty* that boundary errors are literally impossible, CHECK
    YER DANG BOUNDS. Simple as that.

    In real life a buffer overrun is not the only outcome to be avoided. If
    you need 20 bytes and you’ve only got 10, _something_ is going to go
    wrong. A bounds check will avoid the outcome being a buffer overrun, but you’re still going to have to report an error, or exit the program, or
    some other undesired behaviour, when what you actually wanted was the
    full 20-byte result. That’s what a conservative bound helps you with.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to Richard Kettlewell on Mon Jul 21 21:14:20 2025
    On 2025-07-21, Richard Kettlewell <invalid@invalid.invalid> wrote:

    John Ames <commodorejohn@gmail.com> writes:

    On Mon, 21 Jul 2025 08:42:04 +0100
    Richard Kettlewell <invalid@invalid.invalid> wrote:

    Conservative upper bounds of this kind address two issues:

    1) The possibility that you made a mistake in working out the upper
    bound. Off-by-one errors are such a common category that they get
    their own name; adding even 1 byte of headroom neutralizes them.

    If you think only “sloppy” programmers make this kind of mistake
    then you’re deluded. A more competent programmer may make fewer
    mistakes but no human is perfect.

    2) Approximation can make analysis easier. Why spend an hour proving
    that the maximum size something can be is 37 bytes if a few seconds
    mental arithmetic will prove it’s at most 64 bytes? (Unless you
    have 1980s quantities of RAM, of course.)

    Sure, memory is cheap and we can often afford reasonably over-specced
    buffer sizes in Our Modern Age - but the fundamental problem remains.
    Treating "a little extra just to be on the safe side" as a ward against
    buffer overruns or other boundary errors is pretty much guaranteed to
    run into trouble down the line, and no amount of "nobody's perfect...!"
    will change that. If you're not working in a language that does bounds-
    checking for you, and your design is not one where you can say with
    *100% certainty* that boundary errors are literally impossible, CHECK
    YER DANG BOUNDS. Simple as that.

    In real life a buffer overrun is not the only outcome to be avoided. If
    you need 20 bytes and you’ve only got 10, _something_ is going to go
    wrong. A bounds check will avoid the outcome being a buffer overrun, but you’re still going to have to report an error, or exit the program, or
    some other undesired behaviour, when what you actually wanted was the
    full 20-byte result. That’s what a conservative bound helps you with.

    The top entry in my list of Famous Last Words is "Oh, don't worry
    about that - it'll never happen." I had learned that "never" is
    usually about six months. At the very least, if your program issues
    an appropriate error message before aborting, you'll have a chance
    of finding and fixing the deficiency. These days, I've gotten into
    using realloc() to enlarge the area in question; if it works, I quietly continue, and if not I put out a nasty error message and quit.

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Charlie Gibbs on Mon Jul 21 22:19:20 2025
    On 21/07/2025 22:14, Charlie Gibbs wrote:
    The top entry in my list of Famous Last Words is "Oh, don't worry
    about that - it'll never happen." I had learned that "never" is
    usually about six months.

    Funny you should say that..My friend who was at Acorn a little before
    the ARM years says they had an issue that very occasionally the micro
    they were working on would freeze up. I can't remember why, but the
    solution was to add a wait state. "This reduced the frequency to about
    one in every thousand years. And we reckoned the user would shrug,
    reboot and it would never happen to him again"


    --
    I was brought up to believe that you should never give offence if you
    can avoid it; the new culture tells us you should always take offence if
    you can. There are now experts in the art of taking offence, indeed
    whole academic subjects, such as 'gender studies', devoted to it.

    Sir Roger Scruton

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Charlie Gibbs on Tue Jul 22 02:10:00 2025
    On Mon, 21 Jul 2025 21:14:20 GMT, Charlie Gibbs wrote:

    The top entry in my list of Famous Last Words is "Oh, don't worry about
    that - it'll never happen." I had learned that "never" is usually about
    six months. At the very least, if your program issues an appropriate
    error message before aborting, you'll have a chance of finding and
    fixing the deficiency. These days, I've gotten into using realloc() to enlarge the area in question; if it works, I quietly continue, and if
    not I put out a nasty error message and quit.

    I test the return of malloc(), calloc(), and realloc() and attempt to log
    the error. I have caught a calloc() error when the nmemb parameter was
    negative due to bad math but I'm not that optimistic about logging being successful when memory allocation is failing. Still, I tried...
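
    A sketch of that sort of checked wrapper; log_error here is a
    hypothetical stand-in for whatever logging is available:

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* hypothetical logging hook; here it just writes to stderr */
    static void log_error( const char *fmt, ... ) {
        va_list ap;
        va_start( ap, fmt );
        vfprintf( stderr, fmt, ap );
        va_end( ap );
        fputc( '\n', stderr );
    }

    static void *checked_calloc( size_t nmemb, size_t size ) {
        void *p = calloc( nmemb, size );
        if ( p == NULL )
            /* a "negative" count shows up here as a huge size_t value */
            log_error( "calloc(%zu, %zu) failed", nmemb, size );
        return p;
    }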

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Pancho@21:1/5 to John Ames on Wed Jul 23 07:22:21 2025
    On 7/21/25 21:31, John Ames wrote:
    On Mon, 21 Jul 2025 20:47:23 +0100
    Richard Kettlewell <invalid@invalid.invalid> wrote:

    In real life a buffer overrun is not the only outcome to be avoided.
    If you need 20 bytes and you’ve only got 10, _something_ is going to
    go wrong. A bounds check will avoid the outcome being a buffer
    overrun, but you’re still going to have to report an error, or exit
    the program, or some other undesired behaviour, when what you
    actually wanted was the full 20-byte result. That’s what a
    conservative bound helps you with.

    Sure - there's nothing wrong with "reserve a bit more than you think
    you'll need" in and of itself. But what's been at issue from the start
    of this branch discussion is specifically the practice (as was being advocated) of doing this *as a safeguard* against buffer overruns - a
    problem that it does not actually *solve,* just forestalls long enough
    for some buggy solution to get embedded and only discovered 20 yrs.
    later at some Godforsaken field installation deep in the Pottsylvanian hinterlands* rather than being caught during development/testing or in
    some early deployment.

    * (At which point, the field-service tech having finally arrived back
    at the office with a pack of hyenas and the curse of Baba Yaga on
    his/her heels, every other install in the world will abruptly start
    breaking.)


    You appear to be advocating for using an "assert" type paradigm. This
    doesn't need to be coupled to actual reservation size.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to John Ames on Wed Jul 23 20:04:14 2025
    On 23/07/2025 16:04, John Ames wrote:
    My point is simply that, unless you're using a language where bounds-checking is provided for "free" behind the scenes, boundary errors will *always* be a hazard, and working in conscious recognition of that is a
    far more responsible approach than relying on superstitious warding
    practices - even if the practices in question may be valid design
    choices for other reasons.

    I have to agree, and philosophically it is a criticism of our whole 'kindergarten' approach to life in Europe.

    If people expect all potential hazards to have been removed, they will
    neither recognise nor respond appropriately when they meet one.

    Darwin might have a theory about that.


    --
    “when things get difficult you just have to lie”

    ― Jean Claud Jüncker

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Pancho@21:1/5 to John Ames on Wed Jul 23 21:53:47 2025
    On 7/23/25 16:04, John Ames wrote:
    On Wed, 23 Jul 2025 07:22:21 +0100
    Pancho <Pancho.Jones@protonmail.com> wrote:

    You appear to be advocating for using an "assert" type paradigm. This
    doesn't need to be coupled to actual reservation size.

    I'm not so much advocating for any specific coding practice in any
    specific language - asserts work, but so does designing algorithms such
    that bounds violations can never happen (e.g. #define BUFFER_BOUND and
    then loop from 0 to BUFFER_BOUND - if there's no other indexing, it
    will never go off the end, unless the compiler is just broken) where possible.

    My point is simply that, unless you're using a language where bounds-checking is provided for "free" behind the scenes, boundary errors will *always* be a hazard, and working in conscious recognition of that is a
    far more responsible approach than relying on superstitious warding
    practices - even if the practices in question may be valid design
    choices for other reasons.


    This isn't about bounds checking. I haven't used a non-bounds checked
    language for decades. The difference between a bounds checked language
    and a non-bounds checked language is that the bounds check produces a
    clear deterministic error. They are still both errors.

    The discussion is about good coding practice. Perhaps it would be
    clearer if we discussed a specific case: how much memory is needed to
    store the elements of a symmetric n-square matrix? In practice, you
    might not wish to check that the matrix will always be symmetric, or
    you may want to implement a simple indexing scheme. If n is small, it
    probably isn't worth the time thinking about it, so you just allocate
    n^2 elements. There is nothing superstitious or dangerous about this. It
    just recognises that the extra coding time is not worth the memory cost.
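
    For reference, the exact answer is n(n+1)/2 elements with triangular
    indexing; allocating n^2 trades that little bit of arithmetic for
    memory. A sketch of the packed scheme:

    #include <stddef.h>

    /* packed storage for a symmetric n-by-n matrix: keep only the
       lower triangle, n*(n+1)/2 elements instead of n*n */
    size_t sym_index( size_t i, size_t j ) {
        if ( j > i ) {  /* mirror references from above the diagonal */
            size_t t = i;
            i = j;
            j = t;
        }
        return i * ( i + 1 ) / 2 + j;
    }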

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to The Natural Philosopher on Wed Jul 23 22:47:18 2025
    On Wed, 23 Jul 2025 20:04:14 +0100, The Natural Philosopher wrote:

    On 23/07/2025 16:04, John Ames wrote:
    My point is simply that, unless you're using a language where bounds-
    checking is provided for "free" behind the scenes, boundary errors will
    *always* be a hazard, and working in conscious recognition of that is
    a far more responsible approach than relying on superstitious warding
    practices - even if the practices in question may be valid design
    choices for other reasons.

    I have to agree, and philosophically it is a criticism of our whole 'kindergarten' approach to life in Europe.

    If people expect all potential hazards to have been removed, they will neither recognise nor respond appropriately when they meet one.

    Darwin might have a theory about that.

    Your favorite philosopher, Nietzsche, did. "Was mich nicht umbringt,
    macht mich stärker."

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Pancho@21:1/5 to John Ames on Thu Jul 24 00:29:55 2025
    On 7/23/25 22:28, John Ames wrote:
    On Wed, 23 Jul 2025 21:53:47 +0100
    Pancho <Pancho.Jones@protonmail.com> wrote:

    If n is small, it probably isn't worth the time thinking about it, so
    you just allocate n^2 elements. There is nothing superstitious or
    dangerous about this. It just recognises that the extra coding time
    is not worth the memory cost.

    That's fair enough - but it's also not what was being discussed. This
    branch of the discussion started off, specifically, with the suggestion
    that allocating extra was a helpful ward against running off the end of
    a buffer/array and stomping on the next allocation, which it really,
    really isn't.

    Yes, I understand that. That is what using n^2 for a symmetric matrix is
    doing. That is what using the maximum of fence posts and fence panels is
    doing. Allocating n+1 instead of n is the same trick as the matrix, just simpler.

    Programming is about good enough; we have no particular reason to think programmers can specify an equality more reliably than an inequality.
    Indeed, an awful lot of the time inequalities are much easier to specify reliably.

    How many prime numbers less than 2n? I'm very sure that it is less than
    n + 1. I can prove it. If I use an array bound of n+1, it is good, it
    won't go wrong with time. In 20 years time, it will still be good. I
    could give a smaller upper bound, but it would take more time and be
    less reliable.

    You appear to have a prejudice for equality. A prejudice that
    programmers should think hard about every problem they encounter. A
    prejudice that a simple, but good enough answer is lazy. Given
    programming time is limited, you are not explaining how this strategy
    improves code reliability, in general.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to rbowman on Thu Jul 24 09:56:25 2025
    On 23/07/2025 23:47, rbowman wrote:
    On Wed, 23 Jul 2025 20:04:14 +0100, The Natural Philosopher wrote:

    On 23/07/2025 16:04, John Ames wrote:
    My point is simply that, unless you're using a language where bounds-
    checking is provided for "free" behind the scenes, boundary errors will
    *always* be a hazard, and working in conscious recognition of that is
    a far more responsible approach than relying on superstitious warding
    practices - even if the practices in question may be valid design
    choices for other reasons.

    I have to agree, and philosophically it is a criticism of our whole
    'kindergarten' approach to life in Europe.

    If people expect all potential hazards to have been removed, they will
    neither recognise nor respond appropriately when they meet one.

    Darwin might have a theory about that.

    Your favorite philosopher, Nietzsche, did. "Was mich nicht umbringt,
    macht mich stärker."

    Nietzsche is hardly my favourite philosopher. By and large Nietzsche was
    a total cunt.

    A misogynist, a racist, a white supremacist, just the man to bring out
    your inner Nazi.
    Along with Karl Marx responsible for most of the problems of the 20th
    century.

    --
    “Those who can make you believe absurdities, can make you commit atrocities.”

    ― Voltaire, Questions sur les Miracles à M. Claparede, Professeur de Théologie à Genève, par un Proposant: Ou Extrait de Diverses Lettres de
    M. de Voltaire

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to John Ames on Thu Jul 24 14:42:56 2025
    On 2025-07-23, John Ames <commodorejohn@gmail.com> wrote:

    On Wed, 23 Jul 2025 21:53:47 +0100
    Pancho <Pancho.Jones@protonmail.com> wrote:

    If n is small, it probably isn't worth the time thinking about it, so
    you just allocate n^2 elements. There is nothing superstitious or
    dangerous about this. It just recognises that the extra coding time
    is not worth the memory cost.

    That's fair enough - but it's also not what was being discussed. This
    branch of the discussion started off, specifically, with the suggestion
    that allocating extra was a helpful ward against running off the end of
    a buffer/array and stomping on the next allocation, which it really,
    really isn't.

    https://en.wikipedia.org/wiki/Cargo_cult_programming

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Charlie Gibbs on Thu Jul 24 18:05:51 2025
    On Thu, 24 Jul 2025 14:42:56 GMT, Charlie Gibbs wrote:

    On 2025-07-23, John Ames <commodorejohn@gmail.com> wrote:

    On Wed, 23 Jul 2025 21:53:47 +0100 Pancho <Pancho.Jones@protonmail.com>
    wrote:

    If n is small, it probably isn't worth the time thinking about it, so
    you just allocate n^2 elements. There is nothing superstitious or
    dangerous about this. It just recognises that the extra coding time is
    not worth the memory cost.

    That's fair enough - but it's also not what was being discussed. This
    branch of the discussion started off, specifically, with the suggestion
    that allocating extra was a helpful ward against running off the end of
    a buffer/array and stomping on the next allocation, which it really,
    really isn't.

    https://en.wikipedia.org/wiki/Cargo_cult_programming

    The intersection of cargo cult and vibe programming should be able to
    generate a mass of unmaintainable crap that makes the sins of my
    generation look benign.

    https://en.wikipedia.org/wiki/Mondo_Cane

    That's a very strange '60s movie with footage of the mock runways and
    control tower the cultists built. During the war, planes with good stuff
    landed at the nearby airport, and the natives figured if they built one,
    planes with good stuff would come.

    The theme from the movie made the charts too.

    https://www.youtube.com/watch?v=yBj9KMQ2BNs

    If you can find it it's a fun excursion into the weird.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to John Ames on Thu Jul 24 21:51:24 2025
    John Ames <commodorejohn@gmail.com> writes:
    Pancho <Pancho.Jones@protonmail.com> wrote:
    You appear to have a prejudice for equality. A prejudice that
    programmers should think hard about every problem they encounter. A
    prejudice that a simple, but good enough answer is lazy.

    I'm not advocating for approaching every aspect of development with a monomania for absolute ideal design and optimal implementation, and I
    have no idea where you're getting that from.

    What I *am* saying is that dealing with a certain class of bug-hazards
    is inevitable when using tools that don't include built-in safeguards
    against them, and that you ignore that - or ward against it via magical thinking - at your peril. Over-speccing because it's simpler than
    working out the Most Optimum answer is one thing; over-speccing because
    you hope it'll save you from dealing with genuine bugs is superstitious folly.

    One person might have been arguing for that, though I’m not even
    confident of that since their posts have expired here; they’ve been out
    of this thread for that long. You don’t have to argue that point with everyone else.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Charlie Gibbs on Thu Jul 24 21:16:15 2025
    On Thu, 24 Jul 2025 14:42:56 GMT, Charlie Gibbs wrote:

    https://en.wikipedia.org/wiki/Cargo_cult_programming

    Seems an apt description of, for example, those who say you must never
    write actual SQL code in your programs, always use an ORM or templating
    system or something.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to John Ames on Thu Jul 24 23:10:26 2025
    On Thu, 24 Jul 2025 11:14:51 -0700, John Ames wrote:

    On 24 Jul 2025 18:05:51 GMT rbowman <bowman@montana.com> wrote:

    The intersection of cargo cult and vibe programming should be able to
    generate a mass of unmaintainable crap that makes the sins of my
    generation look benign.

    It's already happening. I wish the author here had left the original
    article up, but the comments on HN alone should give you some idea what
    kind of absolute fiascoes we're gonna see in the future:

    https://news.ycombinator.com/item?id=44512368

    https://arstechnica.com/information-technology/2025/07/ai-coding-assistants-chase-phantoms-destroy-real-user-data/


    According to that not only does the AI screw up royally but it lies about
    what it's doing.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Lawrence D'Oliveiro on Thu Jul 24 23:21:54 2025
    On Thu, 24 Jul 2025 21:16:15 -0000 (UTC), Lawrence D'Oliveiro wrote:

    On Thu, 24 Jul 2025 14:42:56 GMT, Charlie Gibbs wrote:

    https://en.wikipedia.org/wiki/Cargo_cult_programming

    Seems an apt description of, for example, those who say you must never
    write actual SQL code in your programs, always use an ORM or templating system or something.

    https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping#Comparison_with_traditional_data_access_techniques

    I've never used an ORM but I might learn to like it. I've done my share of database programming, both embedded and CLI, and it always seemed like shoveling shit against the tide with a lot of boilerplate to get where
    you're going particularly when a join won't do the trick.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Lew Pitcher on Fri Jul 25 00:31:01 2025
    On 7/20/25 12:15 PM, Lew Pitcher wrote:
    On Sun, 20 Jul 2025 16:51:57 +0100, The Natural Philosopher wrote:

    On 20/07/2025 15:42, Rich wrote:
    In today's world, for all but the most esoteric (embedded and/or FPGA)
    assuming char is exactly 8 bits is right often enough that no one
    notices. But multiplying by sizeof(char) does avoid it becoming an
    issue later on any unusual setups.

    That's what I like. Absolutely emphasises the point to the next
    programmer even if the compiler doesn't need to know

    That's an awfully big leap for the next programmer to make, going
    from "I wonder why he multiplies this value by 1" to "Oho!! That
    MUST mean that CHAR_BIT is not 8!"

    Try including a clear/concise COMMENT after most every
    line in your code - a sort of narration of what/why.

    Almost every function I write has a 10-20 line comment
    at the top explaining what/why/how as well.

    Do that and 'future programmers' should Get It.
    If they don't then they shouldn't be programmers.

    Bytes/words/etc are NOT always multiples of 8
    even now. DSP processors often use 24-bit words,
    which has to do with the common three 8-bit input
    channels. If you get a job maintaining 'legacy'
    systems then you should NEVER assume 4/8/16/32/64.

    And yes, there ARE still a LOT of legacy systems
    still out there, usually COBOL, doing their thing
    reliably since 1968 and nobody has the money/time
    or NERVE to re-do them in anything more modern.
    The IRS is probably still doing some of your tax
    stuff on some late 60s COBOL boxes. Your payroll
    calcs/checks may be the same. Municipal job-
    scheduling and some banking too. Those old
    narrow-tie crew-cut Dilberts were GOOD at
    writing dead-solid software.

    Actually dunno if there are any 4-bit processors
    left - Epson was making the last survivors. You
    CAN still find 'em - but they're mask-programmed,
    not soft-programmable, and intended for ultra-low
    power apps. Read the Epson docs sometime though,
    INCREDIBLY capable/versatile devices. You can do
    a LOT with just 4 bits ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Fri Jul 25 05:53:26 2025
    On Fri, 25 Jul 2025 00:31:01 -0400, c186282 wrote:

    Try including a clear/concise COMMENT after most every line in your
    code - a sort of narration of what/why.

    I only add a comment if I'm doing something not apparent to a competent programmer.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Pancho@21:1/5 to John Ames on Fri Jul 25 07:06:27 2025
    On 7/24/25 23:06, John Ames wrote:
    On Thu, 24 Jul 2025 21:51:24 +0100
    Richard Kettlewell <invalid@invalid.invalid> wrote:

    One person might have been arguing for that, though I’m not even
    confident of that since their posts have expired here; they’ve been
    out of this thread for that long. You don’t have to argue that point
    with everyone else.

    I wouldn't be, if people didn't keep responding to what they *think* I
    said, as opposed to what I actually *did* say.


    You keep making overly dogmatic comments about over-speccing in order to
    avoid errors. I have been using reliable over-speccing to counter this,
    because it is an easy argument to make. However, I could also defend
    unreliable over-speccing: taking a chance that something is good enough
    and not checking properly. The type of design decision that does lead
    to bugs.

    The fundamental metric for judging software is usefulness. That is why we
    have so much buggy code: people want code that does stuff rather than
    code that is perfectly bug-free but doesn't do as much. In my career, a
    lot of the time I have not been given the budget to develop quality code,
    so I cut corners. Fortunately I don't develop SSL, chip microcode or
    aircraft controllers. People accept that my code falls over occasionally.
    However, it is better if code falls over less often. Over-speccing and
    hoping for the best is an economical way of achieving code that falls
    over less often than code using only expected allocation requirements;
    that is why people do it.

    If we aren't honest with ourselves and instead lean on ivory-tower
    arguments, it is harder to tackle problems.

    This is the way structural engineering works. Bridge building etc.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to c186282@nnada.net on Fri Jul 25 08:43:19 2025
    c186282 <c186282@nnada.net> writes:
    On 7/20/25 12:15 PM, Lew Pitcher wrote:
    On Sun, 20 Jul 2025 16:51:57 +0100, The Natural Philosopher wrote:
    On 20/07/2025 15:42, Rich wrote:
    In today's world, for all but the most esoteric (embedded and/or
    FPGA) systems, assuming char is exactly 8 bits is right often enough
    that no one notices. But multiplying by sizeof(char) does avoid it
    becoming an issue later on any unusual setups.

    That's what I like. Absolutely emphasises the point to the next
    programmer even if the compiler doesn't need to know
    That's an awfully big leap for the next programmer to make, going
    from "I wonder why he multiplies this value by 1" to "Oho!! That
    MUST mean that CHAR_BIT is not 8!"

    Try including a clear/concise COMMENT after most every
    line in your code - a sort of narration of what/why.

    Almost every function I write has a 10-20 line comment
    at the top explaining what/why/how as well.

    Do that and 'future programmers' should Get It.
    If they don't then they shouldn't be programmers.

    Bytes/words/etc are NOT always multiples of 8 even now. DSP
    processors often use 24-bit words - has to do with the common three
    8-bit input channels. If you get a job maintaining 'legacy' systems
    then you should NEVER assume 4/8/16/32/64.

    Agreed. A concrete example is https://downloads.ti.com/docs/esd/SPRU514/data-types-stdz0555922.html
    where char is a 16-bit type. This links back to the nonsensical earlier
    claim that multiplying by sizeof(char) would somehow ‘avoid it becoming
    an issue’ because as that page notes, sizeof(char) remains equal to 1 on
    that platform (as it has to).
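
    A quick check that compiles on any hosted C implementation - on the TI
    part above it should report CHAR_BIT as 16, on mainstream hosts 8, while
    sizeof(char) prints 1 either way:

        #include <limits.h>
        #include <stdio.h>

        int main(void)
        {
            /* bits per char: 8 on most hosts, 16 on the TI C28x above */
            printf("CHAR_BIT     = %d\n", CHAR_BIT);

            /* 1 by definition, whatever CHAR_BIT is - which is why
               multiplying by it proves nothing */
            printf("sizeof(char) = %zu\n", sizeof(char));
            return 0;
        }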

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Richard Kettlewell on Fri Jul 25 04:39:25 2025
    On 7/25/25 3:43 AM, Richard Kettlewell wrote:
    c186282 <c186282@nnada.net> writes:
    On 7/20/25 12:15 PM, Lew Pitcher wrote:
    On Sun, 20 Jul 2025 16:51:57 +0100, The Natural Philosopher wrote:
    On 20/07/2025 15:42, Rich wrote:
    In today's world, for all but the most esoteric (embedded and/or
    FPGA) systems, assuming char is exactly 8 bits is right often enough
    that no one notices. But multiplying by sizeof(char) does avoid it
    becoming an issue later on any unusual setups.

    That's what I like. Absolutely emphasises the point to the next
    programmer even if the compiler doesn't need to know
    That's an awfully big leap for the next programmer to make, going
    from "I wonder why he multiplies this value by 1" to "Oho!! That
    MUST mean that CHAR_BIT is not 8!"

    Try including a clear/concise COMMENT after most every
    line in your code - a sort of narration of what/why.

    Almost every function I write has a 10-20 line comment
    at the top explaining what/why/how as well.

    Do that and 'future programmers' should Get It.
    If they don't then they shouldn't be programmers.

    Bytes/words/etc are NOT always multiples of 8 even now. DSP
    processors often use 24-bit words - has to do with the common three
    8-bit input channels. If you get a job maintaining 'legacy' systems
    then you should NEVER assume 4/8/16/32/64.

    Agreed. A concrete example is https://downloads.ti.com/docs/esd/SPRU514/data-types-stdz0555922.html
    where char is a 16-bit type. This links back to the nonsensical earlier
    claim that multiplying by sizeof(char) would somehow ‘avoid it becoming
    an issue’ because as that page notes, sizeof(char) remains equal to 1 on that platform (as it has to).

    I started on a PDP-11 ... with that new 'C' language and
    lots of punch-cards .......

    More fun than FORTRAN and COBOL ......

    However the PDP-11 was a relatively NEW computer. The
    old mainframes were still The Standard. They are STILL
    standard, embedded deep in corporate/govt systems.
    They DID tend to use odd word/char sizes. You must
    never ASSUME 4/8/16/32 ...

    The Future ... who knows ?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Fri Jul 25 05:05:27 2025
    On 7/25/25 1:53 AM, rbowman wrote:
    On Fri, 25 Jul 2025 00:31:01 -0400, c186282 wrote:

    Try including a clear/concise COMMENT after most every line in your
    code - a sort of narration of what/why.

    I only add a comment if I'm doing something not apparent to a competent programmer.

    Ummm ... I'd suggest doing it ALWAYS ... not only
    for "Them" but for YOU few years down the line.

    It's not hard.

    There's a 'psychic zone' involved in programming.
    You JUST GET IT at the time. However The Time
    tends to PASS. Then you wonder WHY you did
    that, what it accomplishes.

    Just sayin'

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to who on Fri Jul 25 10:59:22 2025
    On 25/07/2025 10:05, c186282 wrote:
    On 7/25/25 1:53 AM, rbowman wrote:
    On Fri, 25 Jul 2025 00:31:01 -0400, c186282 wrote:

        Try including a clear/concise COMMENT after most every line in your
        code - a sort of narration of what/why.

    I only add a comment if I'm doing something not apparent to a competent
    programmer.

      Ummm ... I'd suggest doing it ALWAYS ... not only
      for "Them" but for YOU few years down the line.

    I have looked at code I wrote years ago and beyond thinking 'this guy
    codes the way I would' I have simply not recognised anything in it at all.

    I get into 'the zone' and become essentially possessed by code demons
    who write it all for me.

    And afterwards its hard to remember doing it.


      It's not hard.

      There's a 'psychic zone' involved in programming.
      You JUST GET IT at the time. However The Time
      tends to PASS. Then you wonder WHY you did
      that, what it accomplishes.

      Just sayin'

    Exactly.




    --
    “It is hard to imagine a more stupid decision or more dangerous way of
    making decisions than by putting those decisions in the hands of people
    who pay no price for being wrong.”

    Thomas Sowell

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to The Natural Philosopher on Fri Jul 25 16:20:03 2025
    The Natural Philosopher <tnp@invalid.invalid> wrote at 09:59 this Friday (GMT):
    On 25/07/2025 10:05, c186282 wrote:
    On 7/25/25 1:53 AM, rbowman wrote:
    On Fri, 25 Jul 2025 00:31:01 -0400, c186282 wrote:

        Try including a clear/concise COMMENT after most every line in your
        code - a sort of narration of what/why.

    I only add a comment if I'm doing something not apparent to a competent
    programmer.

      Ummm ... I'd suggest doing it ALWAYS ... not only
      for "Them" but for YOU few years down the line.

    I have looked at code I wrote years ago and beyond thinking 'this guy
    codes the way I would' I have simply not recognised anything in it at all.

    I get into 'the zone' and become essentially possessed by code demons
    who write it all for me.

    And afterwards its hard to remember doing it.

    Agreed, but do be careful not to go too verbose! Also proper variable
    naming is important.

      It's not hard.

      There's a 'psychic zone' involved in programming.
      You JUST GET IT at the time. However The Time
      tends to PASS. Then you wonder WHY you did
      that, what it accomplishes.

      Just sayin'

    Exactly.


    Can confirm.
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Pancho on Sat Jul 26 18:02:48 2025
    On 26/07/2025 17:54, Pancho wrote:
    On 7/25/25 18:39, John Ames wrote:


    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they designed
    while the keystone was put in place and the supports removed.

    The Romans built bridges that stayed the #&@! up."


    AIUI, The reason Roman bridges stayed put, is that they massively over specced. I don't think they really understood enough to make appropriate structures for the required load.

    The same is true of much Victorian engineering.

    Lacking the detailed mathematical analyses it was easier to just make it
    bloody big, and thank god they did. London's main sewer is still able to
    cope with the load.

    On the other hand, many structures have failed. We only see the ones
    that didn't fall down.

    --
    In todays liberal progressive conflict-free education system, everyone
    gets full Marx.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Pancho@21:1/5 to John Ames on Sat Jul 26 17:54:15 2025
    On 7/25/25 18:39, John Ames wrote:


    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they designed
    while the keystone was put in place and the supports removed.

    The Romans built bridges that stayed the #&@! up."


    AIUI, the reason Roman bridges stayed put is that they massively
    over-specced. I don't think they really understood enough to make
    appropriate structures for the required load.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Robert Riches@21:1/5 to The Natural Philosopher on Sun Jul 27 04:04:20 2025
    On 2025-07-26, The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 26/07/2025 17:54, Pancho wrote:
    On 7/25/25 18:39, John Ames wrote:


    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they designed
    while the keystone was put in place and the supports removed.

    The Romans built bridges that stayed the #&@! up."


    AIUI, The reason Roman bridges stayed put, is that they massively over
    specced. I don't think they really understood enough to make appropriate
    structures for the required load.

    The same is true of much Victorian engineering.

    Lacking the detailed mathematical analyses it was easier to just make it bloody big, and thank god they did. London's main sewer is still able to
    cope with the load.

    On the other hand, many structures have failed. We only see the ones
    that didn't fall down.

    For the Romans, can you imagine doing finite element analysis by
    hand using ROMAN NUMERALS????? Even the thought of doing long
    division in Roman numerals scares me--despite being a math nerd
    since early childhood.

    --
    Robert Riches
    spamtrap42@jacob21819.net
    (Yes, that is one of my email addresses.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Robert Riches on Sun Jul 27 01:50:02 2025
    On 7/27/25 12:04 AM, Robert Riches wrote:
    On 2025-07-26, The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 26/07/2025 17:54, Pancho wrote:
    On 7/25/25 18:39, John Ames wrote:


    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they designed >>>> while the keystone was put in place and the supports removed.

    The Romans built bridges that stayed the #&@! up."


    AIUI, The reason Roman bridges stayed put, is that they massively over
    specced. I don't think they really understood enough to make appropriate
    structures for the required load.

    The same is true of much Victorian engineering.

    Lacking the detailed mathematical analyses it was easier to just make it
    bloody big, and thank god they did. London's main sewer is still able to
    cope with the load.

    On the other hand, many structures have failed. We only see the ones
    that didn't fall down.

    For the Romans, can you imagine doing finite element analysis by
    hand using ROMAN NUMERALS????? Even the thought of doing long
    division in Roman numerals scares me--despite being a math nerd
    since early childhood.

    Heh heh ... CAN be done - but you'd better get
    good damned PAY for it ! :-)

    The old Greeks had a number system with no decimals,
    no zero. The original science types had to convert
    everything to the Babylonian system, do the calx,
    then convert back. "Officially" the Greeks didn't
    BELIEVE in zero. There's a book about it ... got
    it in one of my stacks.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Pancho@21:1/5 to The Natural Philosopher on Sun Jul 27 10:23:33 2025
    On 7/26/25 18:02, The Natural Philosopher wrote:
    On 26/07/2025 17:54, Pancho wrote:
    On 7/25/25 18:39, John Ames wrote:


    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they designed
    while the keystone was put in place and the supports removed.

    The Romans built bridges that stayed the #&@! up."


    AIUI, The reason Roman bridges stayed put, is that they massively over
    specced. I don't think they really understood enough to make
    appropriate structures for the required load.

    The same is true of much Victorian engineering.

    Lacking the detailed mathematical analyses it was easier to just make it bloody big, and thank god they did. London's main sewer is still able to
    cope with the load.

    On the other hand, many structures have failed. We only see the ones
    that didn't fall down.


    A bit like the old software accounting systems. I don't know why they
    are reliable, but I doubt it is just down to good design.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Kettlewell@21:1/5 to Pancho on Sun Jul 27 10:55:52 2025
    Pancho <Pancho.Jones@protonmail.com> writes:
    On 7/26/25 18:02, The Natural Philosopher wrote:
    On 26/07/2025 17:54, Pancho wrote:
    On 7/25/25 18:39, John Ames wrote:

    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they
    designed while the keystone was put in place and the supports
    removed.

    The Romans built bridges that stayed the #&@! up."

    I think this is an urban myth. Naming a Roman-era work that describes
    the practice would settle the question.

    AIUI, The reason Roman bridges stayed put, is that they massively
    over specced. I don't think they really understood enough to make
    appropriate structures for the required load.

    The same is true of much Victorian engineering.
    Lacking the detailed mathematical analyses it was easier to just
    make it bloody big, and thank god they did. London's main sewer is
    still able to cope with the load.
    On the other hand, many structures have failed. We only see the ones
    that didn't fall down.

    Yes, I expect lots of Roman bridges collapsed...

    A bit like the old software accounting systems. I don't know why they
    are reliable, but I doubt it is just down to good design.

    The longer it’s been around, the longer it’s had for the bugs to be
    found and fixed. Old software also usually does fewer and simpler things
    than more recent software, so less room to contain bugs.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Robert Riches on Sun Jul 27 12:07:32 2025
    On 27/07/2025 05:04, Robert Riches wrote:
    For the Romans, can you imagine doing finite element analysis by
    hand using ROMAN NUMERALS????? Even the thought of doing long
    division in Roman numerals scares me--despite being a math nerd
    since early childhood.

    Abacus...

    --
    Gun Control: The law that ensures that only criminals have guns.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to Pancho on Sun Jul 27 12:11:40 2025
    On 27/07/2025 10:23, Pancho wrote:
    On 7/26/25 18:02, The Natural Philosopher wrote:
    On 26/07/2025 17:54, Pancho wrote:
    On 7/25/25 18:39, John Ames wrote:


    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they designed >>>> while the keystone was put in place and the supports removed.

    The Romans built bridges that stayed the #&@! up."


    AIUI, The reason Roman bridges stayed put, is that they massively
    over specced. I don't think they really understood enough to make
    appropriate structures for the required load.

    The same is true of much Victorian engineering.

    Lacking the detailed mathematical analyses it was easier to just make
    it bloody big, and thank god they did. London's main sewer is still
    able to cope with the load.

    On the other hand, many structures have failed. We only see the ones
    that didn't fall down.


    A bit like the old software accounting systems. I don't know why they
    are reliable, but I doubt it is just down to good design.

    An Ex GF of mine trained on IBM kit and COBOL in an IBM software house
    back in the day. (1982)

    The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.

    Back then there was a rigorous process of business analysis, code and
    data specification, coding and stress testing.

    And it was expensive. Damned expensive. But it damn well worked.
    --
    Gun Control: The law that ensures that only criminals have guns.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to The Natural Philosopher on Sun Jul 27 22:02:19 2025
    On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:

    The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.

    One easy way to achieve that is not to have a bug-reporting mechanism.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Pancho on Sun Jul 27 21:09:50 2025
    On 7/27/25 5:23 AM, Pancho wrote:
    On 7/26/25 18:02, The Natural Philosopher wrote:
    On 26/07/2025 17:54, Pancho wrote:
    On 7/25/25 18:39, John Ames wrote:


    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they designed >>>> while the keystone was put in place and the supports removed.

    The Romans built bridges that stayed the #&@! up."


    AIUI, The reason Roman bridges stayed put, is that they massively
    over specced. I don't think they really understood enough to make
    appropriate structures for the required load.

    The same is true of much Victorian engineering.

    Lacking the detailed mathematical analyses it was easier to just make
    it bloody big, and thank god they did. London's main sewer is still
    able to cope with the load.

    On the other hand, many structures have failed. We only see the ones
    that didn't fall down.


    A bit like the old software accounting systems. I don't know why they
    are reliable, but I doubt it is just down to good design.

    "Old" generally also means "simple" - and simple is
    a lot easier to debug/maintain. Throw in 900% more
    GUI/connectedness stuff and I don't think even the
    AIs can get it all straight.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to The Natural Philosopher on Sun Jul 27 21:31:57 2025
    On 7/27/25 7:11 AM, The Natural Philosopher wrote:
    On 27/07/2025 10:23, Pancho wrote:
    On 7/26/25 18:02, The Natural Philosopher wrote:
    On 26/07/2025 17:54, Pancho wrote:
    On 7/25/25 18:39, John Ames wrote:


    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they designed >>>>> while the keystone was put in place and the supports removed.

    The Romans built bridges that stayed the #&@! up."


    AIUI, The reason Roman bridges stayed put, is that they massively
    over specced. I don't think they really understood enough to make
    appropriate structures for the required load.

    The same is true of much Victorian engineering.

    Lacking the detailed mathematical analyses it was easier to just make
    it bloody big, and thank god they did. London's main sewer is still
    able to cope with the load.

    On the other hand, many structures have failed. We only see the ones
    that didn't fall down.


    A bit like the old software accounting systems. I don't know why they
    are reliable, but I doubt it is just down to good design.

    An Ex GF of mine trained on IBM kit and COBOL in an IBM software house
    back in the day. (1982)

    The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.

    Back then there was a rigorous process of business analysis, code and
    data specification, coding and stress testing.

    And it was expensive. Damned expensive. But it damn well worked.


    Yep, that's how they used to do it - and produced GOOD
    software. However as the PC-Gen rose, more and more
    little hacks got into things and kind of smothered the
    output of the old narrow-tie Dilberts. Some of the stuff
    like VisiCalc and WordStar ... the programmers were still
    good and went off in directions (and price-group) the
    big old corps wouldn't.

    Leave it to Big Blue and CDC and such and 'word-processing'
    workstations would cost $50k each and only work with their
    mini/mainframes.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Richard Kettlewell on Sun Jul 27 21:23:31 2025
    On 7/27/25 5:55 AM, Richard Kettlewell wrote:
    Pancho <Pancho.Jones@protonmail.com> writes:
    On 7/26/25 18:02, The Natural Philosopher wrote:
    On 26/07/2025 17:54, Pancho wrote:
    On 7/25/25 18:39, John Ames wrote:

    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they
    designed while the keystone was put in place and the supports
    removed.

    The Romans built bridges that stayed the #&@! up."

    I think this is an urban myth. Naming a Roman-era work that describes
    the practice would settle the question.

    Some of their aqueducts are still in use, some of
    their roads/bridges are still in use.

    One poster said they just heavily over-constructed,
    more brute-force than engineering acumen. Likely
    true to a point. However they clearly had the basics
    of how to engineer strong and functional structures
    as well.

    If the Empire had lasted just a BIT longer they'd
    have had steam power, electricity, hydraulics, TNT
    and eventually nukes a lot sooner than the post-empire
    did. This MIGHT have been bad because of "thinking" -
    imperial, enslaving, genocidal. In practice the Empire
    wasn't much diff from the NAZIs.

    AIUI, The reason Roman bridges stayed put, is that they massively
    over specced. I don't think they really understood enough to make
    appropriate structures for the required load.

    The same is true of much Victorian engineering.
    Lacking the detailed mathematical analyses it was easier to just
    make it bloody big, and thank god they did. London's main sewer is
    still able to cope with the load.
    On the other hand, many structures have failed. We only see the ones
    that didn't fall down.

    Yes, I expect lots of Roman bridges collapsed...

    A bit like the old software accounting systems. I don't know why they
    are reliable, but I doubt it is just down to good design.

    The longer it’s been around, the longer it’s had for the bugs to be
    found and fixed. Old software also usually does fewer and simpler things
    than more recent software, so less room to contain bugs.

    The longer it's been around the more times it has
    been "improved" - fatter and fatter and fatter code
    with more and more gimmicks and eye-candy and attempted
    connectedness features. Soon it just CAN'T be de-bugged.

    There's still MUCH to be said for the KISS principle.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Lawrence D'Oliveiro on Mon Jul 28 04:58:27 2025
    On Sun, 27 Jul 2025 22:02:19 -0000 (UTC), Lawrence D'Oliveiro wrote:

    On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:

    The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.

    One easy way to achieve that is not to have a bug-reporting mechanism.

    At one time we had a very diligent QA manager. I think a large part of it
    was that she thought she would be blamed for any bugs that made it to the
    wild, but she wouldn't sign off on releases. It was a great CYA move but
    it always left you wondering if there were bugs she knew about or if she
    thought there might be bugs.

    To her credit, years after she left, we had The Release That Shall Not Be
    Mentioned. We did a M$-style rollback after the canaries on the first few
    sites died and skipped to a new minor version number.

    The funny part is that after consigning 4.3 to the memory hole and going
    to 4.4 we never went past 4.4. Even though the version numbers were
    displayed on the title bar, no client ever asked why they had been on 4.4
    for years. The stuff worked and that was all they were concerned about.
    It made the builds a lot easier: just keep bumping the patch number and
    leave the major, minor, and revision numbers alone.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Mon Jul 28 04:45:01 2025
    On Sun, 27 Jul 2025 21:23:31 -0400, c186282 wrote:

    One poster said they just heavily over-constructed, more brute-force
    than engineering acumen. Likely true to a point. However they clearly
    had the basics of how to engineer strong and functional structures as
    well.

    If you really want to scratch your head...

    https://www.nps.gov/chcu/learn/historyculture/chacoan-roads.htm

    The punch line is there is no evidence the culture used wheels for transportation. The concept was not unknown; some finds included kids'
    pull toys.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Mon Jul 28 05:03:50 2025
    On Sun, 27 Jul 2025 21:31:57 -0400, c186282 wrote:

    Leave it to Big Blue and CDC and such and 'word-processing'
    workstations would cost $50k each and only work with their
    mini/mainframes.

    Y2K was the watershed for our clients. IBM only patched the latest OS and
    it wouldn't run on older systems. The sites looked at the cost of
    replacing their whole RS/6000 hardware and Windows started looking pretty
    damn good.

    Despite the importance of the 911 system not many government agencies
    throw wads of cash at the PSAPs.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Mon Jul 28 02:14:50 2025
    On 7/28/25 12:45 AM, rbowman wrote:
    On Sun, 27 Jul 2025 21:23:31 -0400, c186282 wrote:

    One poster said they just heavily over-constructed, more brute-force
    than engineering acumen. Likely true to a point. However they clearly
    had the basics of how to engineer strong and functional structures as
    well.

    If you really want to scratch your head...

    https://www.nps.gov/chcu/learn/historyculture/chacoan-roads.htm

    The punch line is there is no evidence the culture used wheels for transportation. The concept was not unknown; some finds included kids'
    pull toys.

    "History" is complicated - SO many
    revisionist versions.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to rbowman on Mon Jul 28 02:19:04 2025
    On 7/28/25 1:03 AM, rbowman wrote:
    On Sun, 27 Jul 2025 21:31:57 -0400, c186282 wrote:

    Leave it to Big Blue and CDC and such and 'word-processing'
    workstations would cost $50k each and only work with their
    mini/mainframes.

    Y2K was the watershed for our clients. IBM only patched the latest OS and
    it wouldn't run on older systems. The sites looked at the cost of
    replacing their whole RS/6000 hardware and Windows started looking pretty damn good.

    Despite the importance of the 911 system not many government agencies
    throw wads of cash at the PSAPs.

    Money and Politics - what's new ???

    Nothing really wrong with Big Blue and
    friends - they make Good Stuff.

    But, surprise, it's Good Stuff that
    profits THEM.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to rbowman on Mon Jul 28 13:44:53 2025
    On 28/07/2025 05:45, rbowman wrote:
    On Sun, 27 Jul 2025 21:23:31 -0400, c186282 wrote:

    One poster said they just heavily over-constructed, more
    brute-force than engineering acumen. Likely true to a point.
    However they clearly had the basics of how to engineer strong and
    functional structures as well.

    If you really want to scratch your head...

    https://www.nps.gov/chcu/learn/historyculture/chacoan-roads.htm

    The punch line is there is no evidence the culture used wheels for transportation. The concept was not unknown; some finds included
    kids' pull toys.


    "In a society that had neither pack animals nor wheeled vehicles it is
    unclear what, if any, practical need could have required roads thirty
    feet wide."

    Nobody is really sure who invented the wheel or the roller, but felled
    trees in abundance would seem to be a necessary precondition and flat
    ground and draught animals like oxen or horses would be a second - or
    access to plenty of slaves.

    But paved roads for marching by humans with packs is not improbable.


    --
    The theory of Communism may be summed up in one sentence: Abolish all
    private property.

    Karl Marx

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Mon Jul 28 13:39:34 2025
    On 28/07/2025 02:23, c186282 wrote:
    If the Empire had lasted just a BIT longer they'd
      have had steam power, electricity, hydraulics, TNT
      and eventually nukes a lot sooner than the post-empire
      did. This MIGHT have been bad because of "thinking" -
      imperial, enslaving, genocidal. In practice the Empire
      wasn't much diff from the NAZIs.

    The reason the empire collapsed is because they didn't have those things.
    Britain lost America because of the lack of the telegraph. You can't run
    a colony 3000 miles away on packet ships and sealing-wax-sealed letters.
    The Roman Empire collapsed because the point of having it was to exploit
    resources and use military muscle to do it.
    But that took taxes, and when Rome demanded 1000 tons of grain delivered
    from your farm in Gaul, the horses to draw the wagons would have eaten
    it all just to get there.

    Local barbarian chief says 'I will throw the Romans out for 10%' and
    naturally people said 'great'.

    The size of an Empire is crucially dependent on the cost and speed of communications and transport.

    Roman engineering helped reduce, but could not eliminate the transport
    costs.

    In the Bronze Age Britain had tin - a rare metal in those days - and it
    was near the sea. Tin trading by ship was very cost-effective, so Britain
    - or parts of it - got colonised.

    So maritime based empires were more viable than land based ones.

    Especially if you were the peoples that invented and developed 'steam
    power, electricity, hydraulics, TNT, and eventually nukes'



    --
    Ideas are more powerful than guns. We would not let our enemies have
    guns, why should we let them have ideas?

    Josef Stalin

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From The Natural Philosopher@21:1/5 to All on Mon Jul 28 13:48:56 2025
    On 28/07/2025 07:14, c186282 wrote:
    On 7/28/25 12:45 AM, rbowman wrote:
    On Sun, 27 Jul 2025 21:23:31 -0400, c186282 wrote:

    One poster said they just heavily over-constructed, more brute-force
    than engineering acumen. Likely true to a point. However they clearly
    had the basics of how to engineer strong and functional structures as
    well.

    If you really want to scratch your head...

    https://www.nps.gov/chcu/learn/historyculture/chacoan-roads.htm

    The punch line is there is no evidence the culture used wheels for
    transportation. The concept was not unknown; some finds included kids'
    pull toys.

      "History" is complicated - SO many
      revisionist versions.


    You can take beeswax, a reed and some bird feathers and make a passable
    model glider in a matter of minutes.

    With paper, it's even quicker.

    Kites have been around for thousands of years.

    But making a useful aircraft means a motor built from aluminium, running
    on gasoline with spark ignition.

    Once you have an industrial base that has those things, heavier-than-air
    aircraft are simply inevitable.




    --
    How fortunate for governments that the people they administer don't think.

    Adolf Hitler

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Charlie Gibbs@21:1/5 to John Ames on Mon Jul 28 16:34:30 2025
    On 2025-07-25, John Ames <commodorejohn@gmail.com> wrote:

    On Fri, 25 Jul 2025 07:06:27 +0100
    Pancho <Pancho.Jones@protonmail.com> wrote:

    Fortunately I don't develop SSL, chip microcode or aircraft
    controllers. People accept my code falls over occasionally.

    To be perfectly frank, it's *very* fortunate that you don't develop
    aircraft controllers.

    Pancho seems to have adopted Microsoft's quality criteria:
    "Sort of works, most of the time."

    Microsoft's crime against humanity is getting people to
    lower their standards enough to accept bad software.

    This is the way structural engineering works. Bridge building etc.

    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they designed
    while the keystone was put in place and the supports removed.

    The Romans built bridges that stayed the #&@! up."

    It is hard to imagine a more stupid decision or more
    dangerous way of making decisions than by putting those
    decisions in the hands of people who pay no price for
    being wrong. -- Thomas Sowell

    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to The Natural Philosopher on Mon Jul 28 20:38:49 2025
    On Mon, 28 Jul 2025 13:48:56 +0100, The Natural Philosopher wrote:

    But to make a useful aircraft means a motor built from aluminium running
    on gasoline and spark ignition.

    Once you have an industrial base that has those things, heavier than air aircraft are simply inevitable

    Or you build your own...

    https://wright-brothers.org/Information_Desk/Just_the_Facts/Engines_&_Props/1903_Engine.htm

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to John Ames on Mon Jul 28 20:46:53 2025
    On Mon, 28 Jul 2025 10:17:39 -0700, John Ames wrote:

    On Sat, 26 Jul 2025 17:54:15 +0100 Pancho <Pancho.Jones@protonmail.com> wrote:

    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they
    designed while the keystone was put in place and the supports
    removed.

    The Romans built bridges that stayed the #&@! up."

    AIUI, The reason Roman bridges stayed put, is that they massively over
    specced. I don't think they really understood enough to make
    appropriate structures for the required load.

    That may well be so - but I'd be willing to bet that they *didn't* make
    a habit of *not checking where they'd put the end of the bridge* and
    trusting that it'd work itself out as long as they built extra.

    I'm not sure how to parse the double negatives but it reminded me of a
    local project to build a railroad overpass over a new road. Short story --
    the crew building the rail bed missed the bridge. It was only by a few
    feet but I'm sure it was quite a cat fight while they figured out who
    screwed up.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Mon Jul 28 20:32:18 2025
    On Mon, 28 Jul 2025 02:14:50 -0400, c186282 wrote:

    On 7/28/25 12:45 AM, rbowman wrote:
    On Sun, 27 Jul 2025 21:23:31 -0400, c186282 wrote:

    One poster said they just heavily over-constructed, more
    brute-force than engineering acumen. Likely true to a point.
    However they clearly had the basics of how to engineer strong and
    functional structures as well.

    If you really want to scratch your head...

    https://www.nps.gov/chcu/learn/historyculture/chacoan-roads.htm

    The punch line is there is no evidence the culture used wheels for
    transportation. The concept was not unknown; some finds included kids'
    pull toys.

    "History" is complicated - SO many revisionist versions.

    I like visiting historic sites and I find it amusing when I revisit years
    later and find the narrative has changed. At one time there was a Custer Battlefield National Monument. Now it's the Little Bighorn Battlefield
    National Monument. fwiw that effete bastard George HW Bush signed that
    law. Monuments and markers have been added for the Indians. I suppose
    that's fitting since they won.

    https://en.wikipedia.org/wiki/Little_Bighorn_Battlefield_National_Monument#/media/File:CheyenneStone.JPG

    I'm not sure they sent the best and brightest to fight the Indian Wars.

    https://en.wikipedia.org/wiki/Fetterman_Fight

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Charlie Gibbs on Mon Jul 28 20:48:29 2025
    On Mon, 28 Jul 2025 16:34:30 GMT, Charlie Gibbs wrote:

    On 2025-07-25, John Ames <commodorejohn@gmail.com> wrote:

    On Fri, 25 Jul 2025 07:06:27 +0100 Pancho <Pancho.Jones@protonmail.com>
    wrote:

    Fortunately I don't develop SSL, chip microcode or aircraft
    controllers. People accept my code falls over occasionally.

    To be perfectly frank, it's *very* fortunate that you don't develop
    aircraft controllers.

    Pancho seems to have adopted Microsoft's quality criteria:
    "Sort of works, most of the time."

    Microsoft's crime against humanity is getting people to lower their
    standards enough to accept bad software.

    This is the way structural engineering works. Bridge building etc.

    Funny you should cite bridge-building. As a friend once observed:

    "The Romans made their architects stand under the arches they designed
    while the keystone was put in place and the supports removed.

    The Romans built bridges that stayed the #&@! up."

    It is hard to imagine a more stupid decision or more dangerous way
    of making decisions than by putting those decisions in the hands of
    people who pay no price for being wrong. -- Thomas Sowell

    That sums up the American Way nicely...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bobbie Sellers@21:1/5 to rbowman on Mon Jul 28 14:17:24 2025
    On 7/28/25 13:32, rbowman wrote:
    On Mon, 28 Jul 2025 02:14:50 -0400, c186282 wrote:

    On 7/28/25 12:45 AM, rbowman wrote:
    On Sun, 27 Jul 2025 21:23:31 -0400, c186282 wrote:

    One poster said they just heavily over-constructed, more
    brute-force than engineering acumen. Likely true to a point.
    However they clearly had the basics of how to engineer strong and
    functional structures as well.

    If you really want to scratch your head...

    https://www.nps.gov/chcu/learn/historyculture/chacoan-roads.htm

    The punch line is there is no evidence the culture used wheels for
    transportation. The concept was not unknown; some finds included kids'
    pull toys.

    "History" is complicated - SO many revisionist versions.

    I like visiting historic sites and I find it amusing when I revisit years later and find the narrative has changed. At one time there was a Custer Battlefield National Monument. Now it's the Little Bighorn Battlefield National Monument. fwiw that effete bastard George HW Bush signed that
    law. Monuments and markers have been added for the Indians. I suppose
    that's fitting since they won.

    https://en.wikipedia.org/wiki/Little_Bighorn_Battlefield_National_Monument#/media/File:CheyenneStone.JPG

    I'm not sure they sent the best and brightest to fight the Indian Wars.

    https://en.wikipedia.org/wiki/Fetterman_Fight


    Well the Native Americans won at the Little Bighorn, but that followed a
    massacre by Custer of a village that was peaceful. That enabled the war
    leaders to pull together a winning force. Custer was not the best and
    brightest perhaps, but he had distinguished himself in the Civil War, aka
    The War Between the States, fought to maintain the Union. The Confederacy
    only wanted to keep slavery, but Lincoln and the United States were
    interested in maintaining the Union (of the States).

    We won that time, but the Confederacy after defeat pursued the path of
    propaganda and won the peace. Even Northerners were convinced that
    there was some heroism in the fight to maintain slavery, because that
    cause of the war was not mentioned much anywhere.

    Lincoln had wanted to compensate the planter class, where he had
    in-laws by the way, for the freedom of their slaves, and hoped to ship
    them back to Africa. The USA was rife then with anti-black racism
    from North to South, but the last gasp of abolition was sealed with
    the split election of Rutherford B. Hayes, which was decided by the
    withdrawal of the Federal troops from the South, whereupon the
    Black Codes were put in place to keep the blacks who had stayed
    in the South subservient to the white citizens. Blacks were not
    permitted to move around except on the white employer's
    business. The franchise was denied them. Lynching became
    the terrorist aspect of this outrage against human liberty.

    bliss

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Charlie Gibbs on Tue Jul 29 01:00:36 2025
    On Mon, 28 Jul 2025 16:34:30 GMT, Charlie Gibbs wrote:

    It is hard to imagine a more stupid decision or more dangerous way
    of making decisions than by putting those decisions in the hands of
    people who pay no price for being wrong. -- Thomas Sowell

    I’m not sure how else you would do it, though. If a politician in charge
    of health denies life-saving vaccines to the populace, and a few thousand people die, should he be put on trial for their murder? If a judge
    sentences a person to death who later turns out to be innocent, should the judge be charged with murder as well?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to The Natural Philosopher on Tue Jul 29 01:03:57 2025
    On Mon, 28 Jul 2025 13:39:34 +0100, The Natural Philosopher wrote:

    The reason the empire collapsed is because they didn't have those
    things. Britain lost America because of the lack of the Telegraph. You
    cant run a colony 3000 miles away on packet ships and sealing wax sealed letters.

    They managed to hold on to places like Palestine, South Africa, Hong Kong, Malaya and Australia, all of which were even further away, for even
    longer.

    But then, of course, none of those places had slaves, at least not like
    the American Colonies did.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to Bobbie Sellers on Tue Jul 29 05:08:55 2025
    On Mon, 28 Jul 2025 14:17:24 -0700, Bobbie Sellers wrote:

    Well the Native Americans won at the Little Bighorn but that
    followed a
    massacre by Custer of a village that was peaceful. That enabled the war
    leaders to pull together a winning force.

    Custer was big on taking the women and children hostage. His Crow scouts
    told him the encampment across the river was really big but he may have
    thought the warriors were elsewhere. We'll never know. Reno had engaged
    and made a hasty fighting retreat but battlefield comms were nonexistent. Another thing we'll never know for sure is what Benteen did or didn't do.
    He had despised Custer since Washita and may have hung Custer out to dry.

    In the long term the Crow won, sort of. Their rez is a whole lot of
    nothing. They tried a carpet mill like Anadarko but that failed. They
    built a casino near the battlefield to lure in tourists but that
    failed.

    https://www.ypradio.org/tribal-affairs/2019-09-13/crow-consider-legalizing-alcohol-to-revitalize-economy

    They're sitting on one of the largest coal deposits in the country but
    Obama & Crew rained on that parade. Hardin is not on the rez but a private detention facility would have offered employment. That failed.

    https://en.wikipedia.org/wiki/Two_Rivers_Detention_Facility

    I don't know what the BIA is doing with it. Maybe they can turn it into Alligator Alcatraz North.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Pancho on Tue Jul 29 23:05:01 2025
    On Tue, 29 Jul 2025 10:07:13 +0100, Pancho wrote:

    The VMS software development process seems almost inconceivable to me
    now. No unit tests, no systematic logging, no QA, no source code
    control, no Google, crappy languages, slow builds, vt100 terminals,
    crappy editor (sorry Steve). It took ages to develop stuff.

    It had source-code control, but it was of the clunky, bureaucratic kind
    that dated from the era where it was assumed that letting two different
    people check out the same source file for modification would bring about
    the End Times or something.

    Their answer to Unix makefiles was similarly clunky. I never used either.

    (Also, remember in those days companies charged extra for development
    tools like these.)

    The symbolic debugger was pretty good. It benefited a lot from the
    commonality of different language runtimes on the VAX, down to even how exceptions were handled.

    As for editors -- I hated EDT (having become accustomed to a TECO-based
    editor before that), but TPU was quite tolerable. DEC were basically
    trying to invent their own version of Emacs, poorly, and introducing yet another proprietary language of their own for the purpose.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Lawrence D'Oliveiro on Wed Jul 30 02:43:38 2025
    On 7/29/25 7:05 PM, Lawrence D'Oliveiro wrote:
    On Tue, 29 Jul 2025 10:07:13 +0100, Pancho wrote:

    The VMS software development process seems almost inconceivable to me
    now. No unit tests, no systematic logging, no QA, no source code
    control, no Google, crappy languages, slow builds, vt100 terminals,
    crappy editor (sorry Steve). It took ages to develop stuff.

    It had source-code control, but it was of the clunky, bureaucratic kind
    that dated from the era where it was assumed that letting two different people check out the same source file for modification would bring about
    the End Times or something.

    Yea, that kinda sums it up ! :-)

    And, then, it was kind of TRUE.

    Their answer to Unix makefiles was similarly clunky. I never used either.

    (Also, remember in those days companies charged extra for development
    tools like these.)

    The symbolic debugger was pretty good. It benefited a lot from the commonality of different language runtimes on the VAX, down to even how exceptions were handled.

    As for editors -- I hated EDT (having become accustomed to a TECO-based editor before that), but TPU was quite tolerable. DEC were basically
    trying to invent their own version of Emacs, poorly, and introducing yet another proprietary language of their own for the purpose.

    VMS was very good - for its time and, maybe, even for now.
    I'd like to see a new Linus adapt it for the modern world.
    We need OPTIONS, folks. Linux/BSD are good, but I'd prefer to
    see at least one other open-source way to go that's good
    enough for Real Stuff. STILL think M$ is plotting legal ways
    to claim ownership of Linux. It has LOTS of lawyers. M$
    apps also seem to be infiltrating slowly but surely,
    becoming 'invaluable'.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stéphane CARPENTIER@21:1/5 to All on Fri Aug 1 19:13:32 2025
    On 27-07-2025, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:

    The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.

    One easy way to achieve that is not to have a bug-reporting mechanism.

    Another way is to have a program which does nothing: no functionality,
    zero bugs. When you want something which does a lot of things and you
    don't want to wait long years for it, you have to compromise.

    --
    If you have some time to waste:
    https://scarpet42.gitlab.io

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lew Pitcher@21:1/5 to All on Fri Aug 1 20:38:38 2025
    On Fri, 01 Aug 2025 19:13:32 +0000, Stéphane CARPENTIER wrote:

    On 27-07-2025, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:

    The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.

    One easy way to achieve that is not to have a bug-reporting mechanism.

    Another way is to have a program which does nothing: no functionality,
    zero bugs. When you want something which does a lot of things and you
    don't want to wait long years for it, you have to compromise.

    Counterpoint...
    Several decades ago, IBM wrote a simple utility program for the MVS
    operating system. Much like the Unix "true" command, IEFBR14 simply
    executed a return to the operating system, leaving any actual action
    to the scripting ("JCL" in IBM-speak) that invoked the utility. The
    IEFBR14 utility was a simple placeholder in a scripting stream that
    required the execution of a program in order to permit the script
    to proceed. For what it's worth, this version of IEFBR14 was a single
    instruction long: a "BR 14", which was the MVS way to terminate
    a program.

    After a while, though, IBM /changed/ IEFBR14. MVS required a returncode
    in register 15. Up until the time of this IEFBR14 change, the OS
    initialized register 15 with 0 prior to invoking the called program,
    and a return value of 0 in register 15 meant "all is well" (like the
    Unix "true" command).

    However, an unrelated MVS change had now left register 15 in an
    unknown state before MVS invoked the called program, and (because
    IEFBR14 did not manipulate register 15 in any way), IEFBR14 now
    died ("abended" in IBM-speak) in random ways.

    So, the solution was to "fix" IEFBR14 so that it initialized
    register 15 to 0 before returning. This doubled the linecount
    from one instruction to two, and IEFBR14 became
    XR 15,15
    BR 14

    So, even a program that "does nothing" can have bugs in it.
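
    A C analogue of the same trap, for what it's worth: under C89, falling
    off the end of main() returned an indeterminate status to the shell -
    the register-15 bug in miniature. C99 later made main() implicitly
    return 0.

        /* true.c - a do-nothing program with the IEFBR14 lesson applied */
        int main(void)
        {
            return 0;   /* the C spelling of "XR 15,15"; omit it under C89
                           and the exit status is whatever happened to be
                           in the return register */
        }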


    --
    Lew Pitcher
    "In Skills We Trust"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Lew Pitcher on Sat Aug 2 00:01:36 2025
    On Fri, 1 Aug 2025 20:38:38 -0000 (UTC), Lew Pitcher wrote:

    After a while, though, IBM /changed/ IEFBR14. MVS required a
    returncode in register 15. Up until the time of this IEFBR14 change,
    the OS initialized register 15 with 0 prior to invoking the called
    program, and a return value of 0 in register 15 meant "all is well"
    (like the Unix "true" command).

    However, an unrelated MVS change had now left register 15 in an
    unknown state before MVS invoked the called program, and (because
    IEFBR14 did not manipulate register 15 in any way), IEFBR14 now died ("abended" in IBM-speak) in random ways.

    So, the solution was to "fix" IEFBR14 so that it initialized
    register 15 to 0 before returning. This doubled the linecount from
    one instruction to two, and IEFBR14 became
        XR 15,15
        BR 14

    So, even a program that "does nothing" can have bugs in it.

    A couple of questions occur to me. What was the documented interface to
    program startup? Did it specify that registers were in an “undefined” state? Because otherwise it looks like you have one piece of IBM code
    making assumptions about the internals of another piece of IBM code,
    bypassing the public interfaces available to ordinary users.

    And what does it mean, in a protected OS, to enter user code with
    registers in an “undefined” state, anyway? Sounds like there is the
    potential to leak confidential data to userland, whether from some
    internal part of the OS or even from another user process. It seems
    obvious that registers should always be initialized to a known state
    to avoid such vulnerabilities.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to All on Sat Aug 2 02:24:48 2025
    On 8/1/25 3:13 PM, Stéphane CARPENTIER wrote:
    On 27-07-2025, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:

    The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.

    One easy way to achieve that is not to have a bug-reporting mechanism.

    Another way is to have a program which does nothing. Without
    functionality there are zero bugs. When you want something which does
    a lot of things and you don't want to wait for years to get it, you
    have to compromise.

    Yea ... but only to a POINT. It's better to
    put another week or month into "hardening" than
    to suffer the results of too many 'compromises'.

    I typically wrote code that WORKED ... but
    what if it DIDN'T, what if something screwed
    up somewhere for no obvious reason ? I'd go
    back over the code and add lots of TRY/EXCEPT
    around critical-seeming sections with
    appropriate "Oh SHIT !" responses. That'd
    take more time - but WORTH it.

    Oh, create a log-file writer you can use in
    the 'EXCEPT' condition ... can be very simple,
    just a reference code to where the fail was.
    All my pgms were full of that ... just an
    error string var, updated as you moved thru
    the code, a decimal number. It'd narrow
    down the point of error nicely with very
    minimal code bloat.
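
    A minimal sketch of that scheme, assuming Python (the TRY/EXCEPT
    naming suggests it); the log file name, checkpoint codes, and the
    "work" inside the TRY are all invented for illustration:

        import datetime

        LOGFILE = "app_errors.log"    # hypothetical log location

        def log_fail(code, exc):
            # One line per failure: timestamp, checkpoint code, error text.
            with open(LOGFILE, "a") as f:
                f.write(f"{datetime.datetime.now().isoformat()} "
                        f"code={code} err={exc}\n")

        where = 10                    # the "error string var", bumped as we go
        try:
            where = 20                # about to read the config file
            text = open("settings.cfg").read()
            where = 30                # about to parse the port number
            port = int(text.split("=")[1])
        except Exception as exc:
            log_fail(where, exc)      # the "Oh SHIT !" response

    If the log shows code=20, the failure was in the read; code=30, the
    parse. One number narrows down the point of error, as described.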

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Pancho@21:1/5 to All on Sat Aug 2 11:34:23 2025
    On 8/2/25 07:24, c186282 wrote:
    On 8/1/25 3:13 PM, Stéphane CARPENTIER wrote:
    On 27-07-2025, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:

    The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.

    One easy way to achieve that is not to have a bug-reporting mechanism.

    Another way is to have a program which does nothing. Without
    functionality there are zero bugs. When you want something which does
    a lot of things and you don't want to wait for years to get it, you
    have to compromise.

      Yea ... but only to a POINT. It's better to
      put another week or month into "hardening" than
      to suffer the results of too many 'compromises'.


    In my case, that was for the client to decide. You explain the risks,
    they decide how they want you to spend your time. They nearly always
    had something else they needed me to do.

    If I were developing my own product, for sale or for use in my own
    business, I would probably make much the same decision as the
    hypothetical client mentioned above.

      I typically wrote code that WORKED ... but
      what if it DIDN'T, what if something screwed
      up somewhere for no obvious reason ? I'd go
      back over the code and add lots of TRY/EXCEPT
      around critical-seeming sections with
      appropriate "Oh SHIT !" responses. That'd
      take more time - but WORTH it.


    Why would you add lots of try/catch blocks? Every language I have used
    has had the capability for exceptions to provide a stack trace. A
    single high-level try/catch block will catch everything.

    The only reason for nested local try/catch blocks is if you know how
    to handle a specific exception. Back when the world was young, and I
    still worked, it was common to see bad code, with exception blocks
    that did nothing apart from log and rethrow or, often far worse,
    silently ate the exception.
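
    A sketch of that division of labour, in Python; main() and the one
    "known" exception are placeholders:

        import logging, traceback

        logging.basicConfig(filename="app.log", level=logging.INFO)

        def main():
            return 1 / 0              # stand-in for the real work

        try:
            main()
        except ZeroDivisionError:
            # The specific case we know how to handle: deal with it locally.
            logging.warning("divide by zero; substituting a default")
        except Exception:
            # Everything else is logged once, with its stack trace, here
            # at the top -- not swallowed somewhere down in the call tree.
            logging.error("unhandled:\n%s", traceback.format_exc())
            raise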



      Oh, create a log-file writer you can use in
      the 'EXCEPT' condition ... can be very simple,
      just a reference code to where the fail was.
      All my pgms were full of that ... just an
      error string var, updated as you moved thru
      the code, a decimal number. It'd narrow
      down the point of error nicely with very
      minimal code bloat.

    Yes, I agree, standard logging frameworks were one of the great leaps
    forward. So simple and yet so profound. I look back and wonder why I
    didn't adopt a standard logging framework from day one of my career.
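
    For what it's worth, a standard logging setup in Python is only a few
    lines; the file name and logger name here are arbitrary:

        import logging

        logging.basicConfig(
            filename="myapp.log",     # hypothetical location
            level=logging.DEBUG,
            format="%(asctime)s %(levelname)s %(name)s: %(message)s",
        )
        log = logging.getLogger("myapp")

        log.info("startup")
        try:
            open("/no/such/file")
        except OSError:
            log.exception("config file missing")   # message plus traceback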

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Andreas Eder@21:1/5 to Pancho on Sat Aug 2 18:11:18 2025
    On Tue 29 Jul 2025 at 10:07, Pancho <Pancho.Jones@protonmail.com> wrote:

    On 7/28/25 17:34, Charlie Gibbs wrote:
    On 2025-07-25, John Ames <commodorejohn@gmail.com> wrote:

    On Fri, 25 Jul 2025 07:06:27 +0100
    Pancho <Pancho.Jones@protonmail.com> wrote:

    Fortunately I don't develop SSL, chip microcode or aircraft
    controllers. People accept my code falls over occasionally.

    To be perfectly frank, it's *very* fortunate that you don't develop
    aircraft controllers.
    Pancho seems to have adopted Microsoft's quality criteria:
    "Sort of works, most of the time."


    Pancho has adopted Microsoft's criteria of giving customers what they want.

    To continue the bridge analogy: when the US Army was trying to cross
    the Rhine in March 1945, they didn't commission solid Roman-style
    bridges capable of lasting 1000 years. No, they used pontoon bridges.

    Microsoft's crime against humanity is getting people to
    lower their standards enough to accept bad software.


    Professionally, I started on VMS, and I can assure you the most recent
    software I developed on Windows was hugely better and more reliable
    than the stuff I wrote for VMS.

    Well, that doesn't tell us much since that is a relative measure.

    The VMS software development process seems almost inconceivable to me
    now. No unit tests, no systematic logging, no QA, no source code
    control, no Google, crappy languages, slow builds, VT100 terminals,
    crappy editor (sorry Steve). It took ages to develop stuff.

    That hasn't much to do with VMS, since all of that could just as well
    have been done under VMS.

    'Andreas
    --
    ceterum censeo redmondinem esse delendam

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Pancho on Sat Aug 2 21:02:11 2025
    On 8/2/25 6:34 AM, Pancho wrote:
    On 8/2/25 07:24, c186282 wrote:
    On 8/1/25 3:13 PM, Stéphane CARPENTIER wrote:
    On 27-07-2025, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:

    The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.

    One easy way to achieve that is not to have a bug-reporting mechanism.

    Another way is to have a program which does nothing. Without
    functionality there are zero bugs. When you want something which does
    a lot of things and you don't want to wait for years to get it, you
    have to compromise.

       Yea ... but only to a POINT. It's better to
       put another week or month into "hardening" than
       to suffer the results of too many 'compromises'.


    In my case, that was for the client to decide. You explain the risks,
    they decide how they want you to spend your time. They nearly always
    had something else they needed me to do.

    If I were developing my own product, for sale or for use in my own
    business, I would probably make much the same decision as the
    hypothetical client mentioned above.

       I typically wrote code that WORKED ... but
       what if it DIDN'T, what if something screwed
       up somewhere for no obvious reason ? I'd go
       back over the code and add lots of TRY/EXCEPT
       around critical-seeming sections with
       appropriate "Oh SHIT !" responses. That'd
       take more time - but WORTH it.


    Why would you add lots of try/catch blocks? Every language I have used
    has had the capability for exceptions to provide a stack trace. A
    single high-level try/catch block will catch everything.

    The only reason for nested local try/catch blocks is if you know how
    to handle a specific exception. Back when the world was young, and I
    still worked, it was common to see bad code, with exception blocks
    that did nothing apart from log and rethrow or, often far worse,
    silently ate the exception.



       Oh, create a log-file writer you can use in
       the 'EXCEPT' condition ... can be very simple,
       just a reference code to where the fail was.
       All my pgms were full of that ... just an
       error string var, updated as you moved thru
       the code, a decimal number. It'd narrow
       down the point of error nicely with very
       minimal code bloat.

    Yes, I agree, standard logging frameworks were one of the great leaps
    forward. So simple and yet so profound. I look back and wonder why I
    didn't adopt a standard logging framework from day one of my career.

    Let's see ... you "explain" it to the client. Then
    it doesn't work. The client calls up yer boss,
    threatens to trash the company rep and sue, and
    insists you be fired ..........

    99.9% of the time, the client doesn't know DICK and
    really CANNOT give an 'informed' decision.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Pancho on Sun Aug 3 02:08:52 2025
    On Sat, 2 Aug 2025 11:34:23 +0100, Pancho wrote:

    The only reason for nested local try/catch blocks is if you know how
    to handle a specific exception. Back when the world was young, and I
    still worked, it was common to see bad code, with exception blocks
    that did nothing apart from log and rethrow, or often far worse,
    silently ate the exception.

    Yup. Catch the specific exceptions you care about, let the rest go to
    the default handler, and leave it to output a stack trace to stderr.
    You *are* capturing stderr to a log somewhere, aren’t you?
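
    A sketch of that policy in Python, run as something like
    "python prog.py 2>>prog.err" (the log path is made up); fetch() and
    its failure mode are placeholders:

        import sys

        def fetch(url):
            raise ConnectionError("host unreachable")   # stand-in failure

        try:
            fetch("http://example.invalid/")
        except ConnectionError:
            # The one exception we care about: note it and carry on.
            print("fetch failed, will retry later", file=sys.stderr)
        # Anything else (TypeError, MemoryError, ...) propagates to the
        # default handler, which prints a stack trace on stderr -- and
        # therefore into the captured log.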

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From c186282@21:1/5 to Lawrence D'Oliveiro on Sun Aug 3 01:00:08 2025
    On 8/2/25 10:08 PM, Lawrence D'Oliveiro wrote:
    On Sat, 2 Aug 2025 11:34:23 +0100, Pancho wrote:

    The only reason for nested local try/catch blocks is if you know how
    to handle a specific exception. Back when the world was young, and I
    still worked, it was common to see bad code, with exception blocks
    that did nothing apart from log and rethrow, or often far worse,
    silently ate the exception.

    Yup. Catch the specific exceptions you care about, let the rest go to
    the default handler, and leave it to output a stack trace to stderr.
    You *are* capturing stderr to a log somewhere, aren’t you?

    'Empty' exception blocks are usually a DEVELOPMENT
    dodge ... SHOULD help you understand what's wrong.
    Keeping logs can be very useful.

    On SOME code, it IS acceptable to just eat the
    exceptions. I have a security-cam pgm that rotates
    through a number of wifi cams. Sometimes they just
    can't be reached THIS round. The solution is to
    just IGNORE that and move on to the next cam
    in the loop. Next time around they'll probably
    show up.
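
    A sketch of that loop, assuming Python; the camera URLs, timeout,
    and pause between rounds are invented:

        import time
        import urllib.request

        CAMS = ["http://cam1.local/jpg", "http://cam2.local/jpg"]

        def grab(url):
            # Fetch one frame; raises an OSError subclass (URLError,
            # socket timeout) if the cam can't be reached.
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()

        while True:
            for cam in CAMS:
                try:
                    frame = grab(cam)
                    # ... display or record the frame here ...
                except OSError:
                    pass      # unreachable THIS round; next pass retries
            time.sleep(10)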

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?Q?St=C3=A9phane?= CARPENTIE@21:1/5 to All on Sat Aug 9 10:19:09 2025
    On 02-08-2025, c186282 <c186282@nnada.net> wrote:
    On 8/1/25 3:13 PM, Stéphane CARPENTIER wrote:
    On 27-07-2025, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:

    The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.

    One easy way to achieve that is not to have a bug-reporting mechanism.

    Another way is to have a program which does nothing. Without
    functionality there are zero bugs. When you want something which does
    a lot of things and you don't want to wait for years to get it, you
    have to compromise.

    Yea ... but only to a POINT. It's better to
    put another week or month into "hardening" than
    to suffer the results of too many 'compromises'.

    That's what I'm saying. You delay your product for one month to fix
    something, and then something new needs to be fixed. And ten years
    later, you are still fixing things with no product.

    --
    If you have some time to waste:
    https://scarpet42.gitlab.io

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From =?UTF-8?Q?St=C3=A9phane?= CARPENTIE@21:1/5 to All on Sat Aug 9 10:26:10 2025
    On 02-08-2025, Pancho <Pancho.Jones@protonmail.com> wrote:
    On 8/2/25 07:24, c186282 wrote:
    On 8/1/25 3:13 PM, Stéphane CARPENTIER wrote:
    On 27-07-2025, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:

    The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.

    One easy way to achieve that is not to have a bug-reporting mechanism.

    Another way is to have a program which does nothing. Without
    functionality there are zero bugs. When you want something which does
    a lot of things and you don't want to wait for years to get it, you
    have to compromise.

      Yea ... but only to a POINT. It's better to
      put another week or month into "hardening" than
      to suffer the results of too many 'compromises'.


    In my case, that was for the client to decide. You explain the risks,
    they decide how they want you to spend your time. They nearly always
    had something else they needed me to do.

    Fixing a bug is not always more important to the client than
    developing new functionality. It depends on the impact of the bug and
    the impact of the functionality.

    If I were developing my own product, for sale or for use in my own
    business, I would probably make much the same decision as the
    hypothetical client mentioned above.

    Yes. Because if you sell your product, you need money, so you can't
    wait for years before delivering something. And if you develop for
    yourself, you know your needs and whether a bug is easier to avoid
    than to fix.

    So, in either case, you know the cost of fixing a bug compared to the
    cost implied by the bug.

    --
    If you have some time to waste:
    https://scarpet42.gitlab.io

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From rbowman@21:1/5 to All on Sat Aug 9 20:00:34 2025
    On 09 Aug 2025 10:26:10 GMT, Stéphane CARPENTIER wrote:

    Fixing a bug is not always more important to the client than developing
    a new functionality. It depends on the impact of the bug and the impact
    of the functionality.

    I've fixed bugs that some users thought were neat functionality.
    'You can please some of the people all of the time etc.'

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)