I've still got my 3-inch (now painfully) small-type VMS
manual.
This was one of the genius systems - WAY beyond its time.
If you were, maybe, the Hilton hotel chain and wanted to
keep current with systems world-wide - over SLOW modems -
VMS was set up to do it, even late 70s.
This was a WELL thought-out operating system.
Now, alas, somebody BOUGHT all the code and BIOS
stuff. No longer 'free' for development. They will
hold it hostage for the last nickel until it's
utterly obsolete.
Tragic.
I still have HOPE there will be a New Linus - someone
who sees the value of the system/approach and writes
an updated work-alike.
Various corps DO seem to be scheming against Linux.
They somehow want to claim ownership and then
absorb/destroy the system. The increasing M$ content
is part of that scheme. A *FREE* OS - horrors !!!
Some OTHER capable system is Disaster-Proofing the future.
The other oddball is Plan-9 ... but it was never meant
for 'home/small-biz' computers. They DID get it to
run on the latest IBM mainframes though - there
are celebration videos.
Yea yea, there are a few other potentials, even
BeOS, but they're just not nearly as capable
as Linux or VMS. Amiga-OS ... sorry, no. Have
less experience with the Control Data systems.
MIGHT be useful.
Just saying - Linux/BSD is great, but there ARE
people legally conspiring against them. Really
good alts DO need to Be There, SOON.
Keep up with Distrowatch: They reported a month ago
that some group is writing a kernel in Rust to go with
a new OS. Sorry but I lost the name of this one.
You may be thinking of Redox OS https://www.redox-os.org/
The slight irony is that the name “Rust” does not come from the well-known redox reaction that iron undergoes with water in the presence
of oxygen (catalyzed by a little bit of polar contaminants such as
common salt), but from the name of a kind of fungus.
On 6/13/25 22:15, c186282 wrote:
I've still got my 3-inch (now painfully) small-type VMS
manual.
This was one of the genius systems - WAY beyond its time.
If you were, maybe, the Hilton hotel chain and wanted to
keep current with systems world-wide - over SLOW modems -
VMS was set up to do it, even late 70s.
This was a WELL thought-out operating system.
Now, alas, somebody BOUGHT all the code and BIOS
stuff. No longer 'free' for development. They will
hold it hostage for the last nickel until it's
utterly obsolete.
Tragic.
I still have HOPE there will be a New Linus - someone
who sees the value of the system/approach and writes
an updated work-alike.
Various corps DO seem to be scheming against Linux.
They somehow want to claim ownership and then
absorb/destroy the system. The increasing M$ content
is part of that scheme. A *FREE* OS - horrors !!!
Some OTHER capable system is Disaster-Proofing the future.
The other oddball is Plan-9 ... but it was never meant
for 'home/small-biz' computers. They DID get it to
run on the latest IBM mainframes though - there
are celebration videos.
Yea yea, there are a few other potentials, even
BeOS, but they're just not nearly as capable
as Linux or VMS. Amiga-OS ... sorry, no. Have
less experience with the Control Data systems.
MIGHT be useful.
Just saying - Linux/BSD is great, but there ARE
people legally conspiring against them. Really
good alts DO need to Be There, SOON.
Keep up with Distrowatch: They reported a month ago
that some group is writing a kernel in Rust to go with
a new OS. Sorry but I lost the name of this one.
On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:
You may be thinking of Redox OS https://www.redox-os.org/
That name is obviously meant to be a kind of word play on “Rust”. As I recall from my high-school chemistry lessons, a “redox reaction” is one where one reactant is “reduced” (gains electrons) while the other is “oxidized” (loses them). This may or may not involve actual oxygen atoms (which are notorious eaters of electrons), but the concept has been generalized from that.
The slight irony is that the name “Rust” does not come from the well-known
redox reaction that iron undergoes with water in the presence of oxygen (catalyzed by a little bit of polar contaminants such as common salt), but from the name of a kind of fungus.
On Sat, 14 Jun 2025 23:27:38 -0000 (UTC), Lawrence D'Oliveiro wrote:
The slight irony is that the name “Rust” does not come from the
well-known redox reaction that iron undergoes with water in the presence
of oxygen (catalyzed by a little bit of polar contaminants such as
common salt), but from the name of a kind of fungus.
Even more ironically, rust is a pathogen that the Romans sacrificed a dog in hopes of preventing,
https://penelope.uchicago.edu/encyclopaedia_romana/calendar/robigalia.html
I don't think the Old People were very aware of
zinc. Bronze came early, but brass didn't really
show up until much later.
Probably because they didn't have any VMS units
to help with analysis 🙂
I've nothing AGAINST Rust ... though frankly it seems
redundant, you could do it almost as easily in 'C'.
Too many 'new languages' just seem to be 'C' knock-offs
with crappier syntax.
On 6/14/25 8:57 PM, rbowman wrote:
On Sat, 14 Jun 2025 23:27:38 -0000 (UTC), Lawrence D'Oliveiro wrote:
The slight irony is that the name “Rust” does not come from the
well-known redox reaction that iron undergoes with water in the
presence of oxygen (catalyzed by a little bit of polar contaminants
such as common salt), but from the name of a kind of fungus.
Even more ironically, rust is a pathogen that the Romans sacrificed a dog
in hopes of preventing,
https://penelope.uchicago.edu/encyclopaedia_romana/calendar/robigalia.html
Hmmmmmmmm ... in THEORY a dose of iron-containing hemoglobin in the
vicinity COULD delay rusting ...
On 15/06/2025 04:32, c186282 wrote:
I don't think the Old People were very aware of
zinc. Bronze came early, but brass didn't really
show up until much later.
Mm. Iron came and hordes of bronze bars became worthless.
Talk about disruptive technology.
The history of technology is fascinating
Probably because they didn't have any VMS units
to help with analysis 🙂
Very likely true
On Sat, 14 Jun 2025 23:32:37 -0400, c186282 wrote:
On 6/14/25 8:57 PM, rbowman wrote:
On Sat, 14 Jun 2025 23:27:38 -0000 (UTC), Lawrence D'Oliveiro wrote:
The slight irony is that the name “Rust” does not come from the
well-known redox reaction that iron undergoes with water in the
presence of oxygen (catalyzed by a little bit of polar contaminants
such as common salt), but from the name of a kind of fungus.
Even more ironically, rust is a pathogen that the Romans sacrificed a dog
in hopes of preventing,
https://penelope.uchicago.edu/encyclopaedia_romana/calendar/robigalia.html
Hmmmmmmmm ... in THEORY a dose of iron-containing hemoglobin in the
vicinity COULD delay rusting ...
afaik wheat rust has nothing to do with iron. Odd to name your programming language after a fungus that has been destroying crops for millennia.
c186282 <c186282@nnada.net> wrote:
I've nothing AGAINST Rust ... though frankly it seems
redundant, you could do it almost as easily in 'C'.
Too many 'new languages' just seem to be 'C' knock-offs
with crappier syntax.
Rust's big claim to fame was/is memory safety -- that you can't have
buffer overflows or writes to unallocated memory. And that by making
such actions impossible, Rust programs can not suffer from the security breaches that occur when someone exploits a buffer overflow in an
existing C program.
In essence it does for you all the "checking error codes" and "checking buffer sizes for sufficient space before writing" that C programmers have to do manually, and sometimes forget to include.
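(To make that concrete, a rough C sketch of the kind of manual check being described; the function name and parameters are invented for illustration:)

    #include <string.h>

    /* Copy src into dst only if it fits, terminator included.
       This is the check C leaves to programmer discipline and
       Rust builds into the language. */
    int copy_checked(char *dst, size_t dst_size, const char *src)
    {
        if (dst == NULL || src == NULL)
            return -1;
        if (strlen(src) + 1 > dst_size)   /* the check people forget */
            return -1;
        strcpy(dst, src);                 /* safe only because of the check above */
        return 0;
    }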
Otherwise however, the RUST syntax in general just seems more
unpleasant than 'C'. It's like someone deliberately wanted to screw
with people.
As the subject seemed to be IRON I was commenting on adding 'free Fe'
into an environment where lots of iron was involved.
On Sun, 15 Jun 2025 22:26:33 -0400, c186282 wrote:
Otherwise however, the RUST syntax in general just seems more
unpleasant than 'C'. It's like someone deliberately wanted to screw
with people.
println!() would be a show stopper for me. Why everybody has to come up
with their special snowflake function to write to the console is beyond
me. Won't even go into cout << "hello world" << endl;
On Sun, 15 Jun 2025 22:45:18 -0400, c186282 wrote:
As the subject seemed to be IRON I was commenting on adding 'free Fe'
into an environment where lots of iron was involved.
I believe it all started with the rust programming language which was
named after a fungus. No iron involved.
Now if the discussion were about IronPython...
Babbage was making his computers using BRASS gears
and cogs - not bronze or steel. Lovelace didn't
live long enough to invent VMS alas.
Hmm, how WOULD you network Babbage AEs using the
tech of the time ? The telegraph was demonstrated
just a few years after he proposed the AE ... maybe
a two baud connection ? :-)
On 2025-06-16, c186282 <c186282@nnada.net> wrote:
Babbage was making his computers using BRASS gears
and cogs - not bronze or steel. Lovelace didn't
live long enough to invent VMS alas.
Hmm, how WOULD you network Babbage AEs using the
tech of the time ? The telegraph was demonstrated
just a few years after he proposed the AE ... maybe
a two baud connection ? :-)
Well, Teletypes managed 110 baud (even 150 on the model 37
but that was pushing it). I have a 35RO on which I did a
complete adjustment and lubrication schedule according to
the manual. In the process I got a good look at how it
decoded incoming data with nothing more than a honking big
solenoid and a bunch of very clever little cams and pawls.
Pretty awesome, actually.
Old telegraphs were interesting - because the data was essentially
'binary' - ones and zeros, contact or not. This made it possible to
use simple relays as repeater/amplifiers. Easy 1800s tech.
On 6/14/25 7:27 PM, Lawrence D'Oliveiro wrote:
On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:
You may be thinking of Redox OS https://www.redox-os.org/
That name is obviously meant to be a kind of word play on “Rust”. As I recall from my high-school chemistry lessons, a “redox reaction” is one where one reactant is “reduced” (gains electrons) while the other is “oxidized” (loses them). This may or may not involve actual oxygen atoms (which are notorious eaters of electrons), but the concept has been generalized from that.
The slight irony is that the name “Rust” does not come from the well-known redox reaction that iron undergoes with water in the presence of oxygen (catalyzed by a little bit of polar contaminants such as common salt), but from the name of a kind of fungus.
"Fungus" ??? TOO CRUEL !
Rust is perfectly OK ... but I don't see much advantage
over plain 'C'. Lots of 'new langs' are like that, just
'C' with nastier syntax.
c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
On 6/14/25 7:27 PM, Lawrence D'Oliveiro wrote:
On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:
You may be thinking of Redox OS https://www.redox-os.org/
That name is obviously meant to be a kind of word play on “Rust”. As I recall from my high-school chemistry lessons, a “redox reaction” is one where one reactant is “reduced” (gains electrons) while the other is “oxidized” (loses them). This may or may not involve actual oxygen atoms (which are notorious eaters of electrons), but the concept has been generalized from that.
The slight irony is that the name “Rust” does not come from the well-known redox reaction that iron undergoes with water in the presence of oxygen (catalyzed by a little bit of polar contaminants such as common salt), but from the name of a kind of fungus.
"Fungus" ??? TOO CRUEL !
Rust is perfectly OK ... but I don't see much advantage
over plain 'C'. Lots of 'new langs' are like that, just
'C' with nastier syntax.
Rust I personally dislike the syntax of, AND its development team is apparently pretty controversial.
On Tue, 17 Jun 2025 23:20:24 -0400, c186282 wrote:
Old telegraphs were interesting - because the data was essentially
'binary' - ones and zeros, contact or not. This made it possible to
use simple relays as repeater/amplifiers. Easy 1800s tech.
Sort of. The first attempts were complex.
https://en.wikipedia.org/wiki/Needle_telegraph
Morse and the refiners of his system introduced a time element, with a dash being three dits.
On 6/18/25 1:30 AM, candycanearter07 wrote:
c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
On 6/14/25 7:27 PM, Lawrence D'Oliveiro wrote:
On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:
You may be thinking of Redox OS https://www.redox-os.org/
That name is obviously meant to be a kind of word play on “Rust”. As I recall from my high-school chemistry lessons, a “redox reaction” is one where one reactant is “reduced” (gains electrons) while the other is “oxidized” (loses them). This may or may not involve actual oxygen atoms (which are notorious eaters of electrons), but the concept has been generalized from that.
The slight irony is that the name “Rust” does not come from the well-known redox reaction that iron undergoes with water in the presence of oxygen (catalyzed by a little bit of polar contaminants such as common salt), but from the name of a kind of fungus.
"Fungus" ??? TOO CRUEL !
Rust is perfectly OK ... but I don't see much advantage
over plain 'C'. Lots of 'new langs' are like that, just
'C' with nastier syntax.
Rust I personally dislike the syntax of, AND its development team is
apparently pretty controversial.
IMHO, stick to 'C' ... but use GOOD PRACTICES.
c186282 <c186282@nnada.net> wrote at 06:09 this Wednesday (GMT):
On 6/18/25 1:30 AM, candycanearter07 wrote:
c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
On 6/14/25 7:27 PM, Lawrence D'Oliveiro wrote:
On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:
You may be thinking of Redox OS https://www.redox-os.org/
That name is obviously meant to be a kind of word play on “Rust”. As I recall from my high-school chemistry lessons, a “redox reaction” is one where one reactant is “reduced” (gains electrons) while the other is “oxidized” (loses them). This may or may not involve actual oxygen atoms (which are notorious eaters of electrons), but the concept has been generalized from that.
The slight irony is that the name “Rust” does not come from the well-known redox reaction that iron undergoes with water in the presence of oxygen (catalyzed by a little bit of polar contaminants such as common salt), but from the name of a kind of fungus.
"Fungus" ??? TOO CRUEL !
Rust is perfectly OK ... but I don't see much advantage
over plain 'C'. Lots of 'new langs' are like that, just
'C' with nastier syntax.
Rust I personally dislike the syntax of, AND its development team is
apparently pretty controversial.
IMHO, stick to 'C' ... but use GOOD PRACTICES.
Makes sense to me.
candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
c186282 <c186282@nnada.net> wrote at 06:09 this Wednesday (GMT):
IMHO, stick to 'C' ... but use GOOD PRACTICES.
Makes sense to me.
Yes, assuming a perfectly infallible programmer, C can be "memory safe" as well.
Unfortunately, there is no such "perfectly infallible programmer", and trying to be one is much like trying to remain anonymous online when the FBI, CIA and NSA are all out to find you. You have to be *absolutely perfect* in your OPSEC, every single time. The FBI, CIA and NSA can just patiently wait for that one time you slightly slip up, and *gotcha*.
rbowman <bowman@montana.com> wrote:
On Sun, 15 Jun 2025 22:26:33 -0400, c186282 wrote:
Otherwise however, the RUST syntax in general just seems more
unpleasant than 'C'. It's like someone deliberately wanted to screw
with people.
println!() would be a show stopper for me. Why everybody has to come up
with their special snowflake function to write to the console is beyond
me. Won't even go into cout << "hello world" << endl;
"println" (without the !) makes me think someone was very much a Pascal disciple (with it's write/writeln) for output.
candycanearter07 <candycanearter07@candycanearter07.nomail.afraid>
wrote:
c186282 <c186282@nnada.net> wrote at 06:09 this Wednesday (GMT):
On 6/18/25 1:30 AM, candycanearter07 wrote:
c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
On 6/14/25 7:27 PM, Lawrence D'Oliveiro wrote:
On Sat, 14 Jun 2025 20:30:34 +0200, Andreas Eder wrote:
You may be thinking of Redox OS https://www.redox-os.org/
That name is obviously meant to be a kind of word play on “Rust”. As I recall from my high-school chemistry lessons, a “redox reaction” is one where one reactant is “reduced” (gains electrons) while the other is “oxidized” (loses them). This may or may not involve actual oxygen atoms (which are notorious eaters of electrons), but the concept has been generalized from that.
The slight irony is that the name “Rust” does not come from the well-known redox reaction that iron undergoes with water in the presence of oxygen (catalyzed by a little bit of polar contaminants such as common salt), but from the name of a kind of fungus.
"Fungus" ??? TOO CRUEL !
Rust is perfectly OK ... but I don't see much advantage over
plain 'C'. Lots of 'new langs' are like that, just 'C' with
nastier syntax.
Rust I personally dislike the syntax of, AND its development team is
apparently pretty controversial.
IMHO, stick to 'C' ... but use GOOD PRACTICES.
Makes sense to me.
Yes, assuming a perfectly infallible programmer, C can be "memory safe" as well.
On 6/18/25 1:40 PM, Rich wrote:
rbowman <bowman@montana.com> wrote:
On Sun, 15 Jun 2025 22:26:33 -0400, c186282 wrote:
Otherwise however, the RUST syntax in general just seems more
unpleasant than 'C'. It's like someone deliberately wanted to screw with people.
println!() would be a show stopper for me. Why everybody has to come up
with their special snowflake function to write to the console is beyond
me. Won't even go into cout << "hello world" << endl;
"println" (without the !) makes me think someone was very much a Pascal
disciple (with it's write/writeln) for output.
I still do Pascal ... writeln() simply tacks on a '\n' to each
line. It's a convenience.
c186282 <c186282@nnada.net> wrote:
On 6/18/25 1:40 PM, Rich wrote:
rbowman <bowman@montana.com> wrote:
On Sun, 15 Jun 2025 22:26:33 -0400, c186282 wrote:
Otherwise however, the RUST syntax in general just seems more
unpleasant than 'C'. It's like someone deliberately wanted to screw with people.
println!() would be a show stopper for me. Why everybody has to come up with their special snowflake function to write to the console is beyond me. Won't even go into cout << "hello world" << endl;
"println" (without the !) makes me think someone was very much a Pascal disciple (with its write/writeln) for output.
I still do Pascal ... writeln() simply tacks on a '\n' to each
line. It's a convenience.
My point was the chosen spelling makes it look like someone liked
Pascal's function name style, but preferred "print" to "write" for some reason.
Wrote my fair share of Pascal back in the day (Apple II UCSD, Turbo
Pascal 4 on a PC clone, some University's Pascal compiler (I've long
since forgotten the name) for the CDC Cyber 7600 during college). I
know what the 'ln' suffix on Pascal's write (and read) does.
Now, what would be surprising would be if a Pascal disciple decided on "println" (why the ! I don't know) but then made it not append a newline to the output.
Python sometimes annoys me for NOT having a println()
On 6/18/25 1:30 AM, candycanearter07 wrote:
c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
Rust is perfectly OK ... but I don't see much advantage
over plain 'C'. Lots of 'new langs' are like that, just
'C' with nastier syntax.
Rust I personally dislike the syntax of, AND its development team is
apparently pretty controversial.
IMHO, stick to 'C' ... but use GOOD PRACTICES.
(Counting C++ as a dialect of C for the purposes of this posting, which isn’t true in general, but doesn’t really affect the point.)
c186282 <c186282@nnada.net> writes:
On 6/18/25 1:30 AM, candycanearter07 wrote:
c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
Rust is perfectly OK ... but I don't see much advantage
over plain 'C'. Lots of 'new langs' are like that, just
'C' with nastier syntax.
It is absolutely not C with different syntax. Language designers have
learned a lot since C.
Rust I personally dislike the syntax of, AND its development team is
apparently pretty controversial.
IMHO, stick to 'C' ... but use GOOD PRACTICES.
The software industry has been trying this for decades now. It does not
work.
On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
c186282 <c186282@nnada.net> writes:
IMHO, stick to 'C' ... but use GOOD PRACTICES.
The software industry has been trying this for decades now. It does
not work.
At some point, soon, they need to start flagging the unsafe functions
as ERRORS, not just WARNINGS.
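(Part of that is already possible today; a minimal sketch, assuming GCC or Clang, both of which honor the poison pragma:)

    /* Turn a few notoriously unsafe calls into hard compile errors. */
    #include <stdio.h>
    #include <string.h>

    #pragma GCC poison gets strcpy sprintf strcat

    int main(void)
    {
        char buf[16];
        /* strcpy(buf, "hello");  <- uncommenting this no longer compiles */
        snprintf(buf, sizeof buf, "%s", "hello");   /* bounded alternative */
        puts(buf);
        return 0;
    }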
c186282 <c186282@nnada.net> writes:
On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
c186282 <c186282@nnada.net> writes:
IMHO, stick to 'C' ... but use GOOD PRACTICES.
The software industry has been trying this for decades now. It does
not work.
At some point, soon, they need to start flagging the unsafe functions
as ERRORS, not just WARNINGS.
The problem is not just a subset of unsafe functions. The whole language
is riddled with unsafe semantics.
There is some movement towards fixing the easy issues, e.g. [1]. But the wider issues are a lot harder to truly fix, so much so that one of the
more promising options is an architecture extension[2]; and there
remains considerable resistance[3] in the standards body to fixing other issues, despite their recurring role in defects and vulnerabilities.
[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
[2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
[3] https://www.youtube.com/watch?v=DRgoEKrTxXY
Most languages after C designed these issues out, one way or another.
The clever bit is figuring out how to combine performance and safety,
and that’s what language designers have been working out, increasingly successfully.
On 20/06/2025 09:00, Richard Kettlewell wrote:
c186282 <c186282@nnada.net> writes:
On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
c186282 <c186282@nnada.net> writes:
IMHO, stick to 'C' ... but use GOOD PRACTICES.
The software industry has been trying this for decades now. It does
not work.
At some point, soon, they need to start flagging the unsafe functions
as ERRORS, not just WARNINGS.
The problem is not just a subset of unsafe functions. The whole language
is riddled with unsafe semantics.
There is some movement towards fixing the easy issues, e.g. [1]. But the
wider issues are a lot harder to truly fix, so much so that one of the
more promising options is an architecture extension[2]; and there
remains considerable resistance[3] in the standards body to fixing other
issues, despite their recurring role in defects and vulnerabilities.
[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
[2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
[3] https://www.youtube.com/watch?v=DRgoEKrTxXY
Most languages after C designed these issues out, one way or another.
The clever bit is figuring out how to combine performance and safety,
and that’s what language designers have been working out, increasingly
successfully.
I don't really see how you can have a program that cannot write or read memory beyond the intentions of the original programmer.
On 20/06/2025 05:43, c186282 wrote:
The software industry has been trying this for decades now. It does not
work.
At some point, soon, they need to start flagging
the unsafe functions as ERRORS, not just WARNINGS.
The problem is that C was designed by two smart people to run on small hardware for use by other smart people.
Although they didn't *invent* stack based temporary variables I was
totally impressed when I discovered how they worked.
But the potential for overrunning *any* piece of memory allocated for a variable is always there unless you are using the equivalent of a whole
other CPU to manage memory and set hard limits.
You can get rid of using the program stack which helps, but the problem remains
On 6/19/25 3:40 AM, Richard Kettlewell wrote:
(Counting C++ as a dialect of C for the purposes of this posting, which
isn’t true in general, but doesn’t really affect the point.)
c186282 <c186282@nnada.net> writes:
On 6/18/25 1:30 AM, candycanearter07 wrote:
c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
Rust is perfectly OK ... but I don't see much advantage
over plain 'C'. Lots of 'new langs' are like that, just
'C' with nastier syntax.
It is absolutely not C with different syntax. Language designers have
learned a lot since C.
Ummmmmmmm ... nothing GOOD that I can tell :-)
Rust I personally dislike the syntax of, AND its development team is
apparently pretty controversial.
IMHO, stick to 'C' ... but use GOOD PRACTICES.
The software industry has been trying this for decades now. It does not
work.
At some point, soon, they need to start flagging
the unsafe functions as ERRORS, not just WARNINGS.
c186282 <c186282@nnada.net> wrote:
On 6/19/25 3:40 AM, Richard Kettlewell wrote:
(Counting C++ as a dialect of C for the purposes of this posting, which
isn’t true in general, but doesn’t really affect the point.)
c186282 <c186282@nnada.net> writes:
On 6/18/25 1:30 AM, candycanearter07 wrote:
c186282 <c186282@nnada.net> wrote at 03:03 this Sunday (GMT):
Rust is perfectly OK ... but I don't see much advantage
over plain 'C'. Lots of 'new langs' are like that, just
'C' with nastier syntax.
It is absolutely not C with different syntax. Language designers have
learned a lot since C.
Ummmmmmmm ... nothing GOOD that I can tell :-)
Rust I personally dislike the syntax of, AND its development team is apparently pretty controversial.
IMHO, stick to 'C' ... but use GOOD PRACTICES.
The software industry has been trying this for decades now. It does not
work.
At some point, soon, they need to start flagging
the unsafe functions as ERRORS, not just WARNINGS.
That's not enough. It is very easy in C to use a "safe" function unsafely. Writing "safe" C code requires a very knowledgeable (about C), very careful programmer. The vast majority of those writing C are neither.
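(A classic example of a "safe" function used unsafely; the buffer size here is invented:)

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char dst[8];
        const char *src = "rather longer than eight bytes";

        /* strncpy is the "safe" strcpy, yet when src doesn't fit it
           silently leaves dst with no terminating NUL ... */
        strncpy(dst, src, sizeof dst);

        /* ... so the careful programmer still has to terminate by hand. */
        dst[sizeof dst - 1] = '\0';
        printf("%s\n", dst);
        return 0;
    }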
The Natural Philosopher <tnp@invalid.invalid> wrote:
On 20/06/2025 09:00, Richard Kettlewell wrote:
c186282 <c186282@nnada.net> writes:
On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
c186282 <c186282@nnada.net> writes:
IMHO, stick to 'C' ... but use GOOD PRACTICES.
The software industry has been trying this for decades now. It does
not work.
At some point, soon, they need to start flagging the unsafe functions
as ERRORS, not just WARNINGS.
The problem is not just a subset of unsafe functions. The whole language is riddled with unsafe semantics.
There is some movement towards fixing the easy issues, e.g. [1]. But the wider issues are a lot harder to truly fix, so much so that one of the more promising options is an architecture extension[2]; and there remains considerable resistance[3] in the standards body to fixing other issues, despite their recurring role in defects and vulnerabilities.
[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
[2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
[3] https://www.youtube.com/watch?v=DRgoEKrTxXY
Most languages after C designed these issues out, one way or another.
The clever bit is figuring out how to combine performance and safety, and that’s what language designers have been working out, increasingly successfully.
I don't really see how you can have a program that cannot write or read memory beyond the intentions of the original programmer.
Ada accomplished it years ago (i.e., Rust is nothing new in that
regard). But.... it did so by inserting in the compiled output all
the checks for buffer sizes before use and checks of error return codes
that so often get omitted in C code. And the performance hit was
sufficient that Ada only found a niche in very safety critical
environments (aircraft avionics, etc.).
On 20/06/2025 14:36, Rich wrote:
The Natural Philosopher <tnp@invalid.invalid> wrote:
On 20/06/2025 09:00, Richard Kettlewell wrote:
c186282 <c186282@nnada.net> writes:
On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
c186282 <c186282@nnada.net> writes:
IMHO, stick to 'C' ... but use GOOD PRACTICES.
The software industry has been trying this for decades now. It does >>>>>> not work.
At some point, soon, they need to start flagging the unsafe functions >>>>> as ERRORS, not just WARNINGS.
The problem is not just a subset of unsafe functions. The whole language >>>> is riddled with unsafe semantics.
There is some movement towards fixing the easy issues, e.g. [1]. But the >>>> wider issues are a lot harder to truly fix, so much so that one of the >>>> more promising options is an architecture extension[2]; and there
remains considerable resistance[3] in the standards body to fixing other >>>> issues, despite their recurring role in defects and vulnerabilities.
[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
[2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
[3] https://www.youtube.com/watch?v=DRgoEKrTxXY
Most languages after C designed these issues out, one way or another.
The clever bit is figuring out how to combine performance and safety,
and that’s what language designers have been working out, increasingly successfully.
I don't really see how you can have a program that cannot write or read memory beyond the intentions of the original programmer.
Ada accomplished it years ago (i.e., Rust is nothing new in that
regard). But.... it did so by inserting in the compiled output all
the checks for buffer sizes before use and checks of error return codes
that so often get omitted in C code. And the performance hit was
sufficient that Ada only found a niche in very safety critical
environments (aircraft avionics, etc.).
I bet a bad (or extremely good) programmer could circumvent that.
Rich <rich@example.invalid> writes:
Ada accomplished it years ago (i.e., Rust is nothing new in that
regard). But.... it did so by inserting in the compiled output all
the checks for buffer sizes before use and checks of error return codes
that so often get omitted in C code. And the performance hit was
sufficient that Ada only found a niche in very safety critical
environments (aircraft avionics, etc.).
I don’t know what Ada’s approach was in detail, but I have a few points to make here.
First, just because an automated check isn’t reflected in comparable C
code doesn’t mean the check isn’t necessary; and as the stream of vulnerabilities over the last few decades shows, often omitted checks
_are_ necessary. Comparing buggy C code with correctly functioning Ada
code is not really an argument for using C.
Secondly, many checks can be optimized out. e.g. iterating over an array
(or a prefix of it) doesn’t need a check on every access, it just needs
a check that the loop bound doesn’t exceed the array bound[1]. This kind
of optimization is easy mode for compilers;
https://godbolt.org/z/Tz5KGq6vais shows an example in C++ (the at()
method is bounds-checked array indexing).
Finally, on all but the least powerful microprocessors, a correctly
predicted branch is almost free, and a passed bounds check is easy mode
for a branch predictor.
With that in mind, with compilers and microprocessors from this century,
the impact of this sort of thing is rather small. (Ada dates back to
1980, at which time a lot of these technologies were much less mature.)
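(In plain C, the hoisted check described above looks roughly like this; the function and parameter names are invented:)

    #include <stddef.h>

    /* Sum the first n elements of buf, which holds buf_len elements.
       One check up front stands in for a check on every access. */
    long sum_prefix(const int *buf, size_t buf_len, size_t n)
    {
        if (n > buf_len)              /* single hoisted bounds check */
            return -1;
        long total = 0;
        for (size_t i = 0; i < n; i++)
            total += buf[i];          /* each access is already known in range */
        return total;
    }

A bounds-checking compiler performs the same hoisting automatically once it can prove the loop bound never exceeds the array bound.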
The Natural Philosopher <tnp@invalid.invalid> wrote:
On 20/06/2025 14:36, Rich wrote:
The Natural Philosopher <tnp@invalid.invalid> wrote:
On 20/06/2025 09:00, Richard Kettlewell wrote:
c186282 <c186282@nnada.net> writes:
I don't really see how you can have a program that cannot write or read memory beyond the intentions of the original programmer.
On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
c186282 <c186282@nnada.net> writes:
IMHO, stick to 'C' ... but use GOOD PRACTICES.
The software industry has been trying this for decades now. It does >>>>>>> not work.
At some point, soon, they need to start flagging the unsafe functions >>>>>> as ERRORS, not just WARNINGS.
The problem is not just a subset of unsafe functions. The whole language >>>>> is riddled with unsafe semantics.
There is some movement towards fixing the easy issues, e.g. [1]. But the >>>>> wider issues are a lot harder to truly fix, so much so that one of the >>>>> more promising options is an architecture extension[2]; and there
remains considerable resistance[3] in the standards body to fixing other >>>>> issues, despite their recurring role in defects and vulnerabilities. >>>>>
[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
[2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
[3] https://www.youtube.com/watch?v=DRgoEKrTxXY
Most languages after C designed these issues out, one way or another. >>>>> The clever bit is figuring out how to combine performance and safety, >>>>> and that’s what language designers have been working out, increasingly >>>>> successfully.
Ada accomplished it years ago (i.e., Rust is nothing new in that
regard). But.... it did so by inserting in the compiled output all
the checks for buffer sizes before use and checks of error return codes
that so often get omitted in C code. And the performance hit was
sufficient that Ada only found a niche in very safety critical
environments (aircraft avionics, etc.).
I bet a bad (or extremely good) programmer could circumvent that.
Very likely, but the idea was to protect the typical programmer from
their own common mistakes (of not carefully checking error return codes
or buffer lengths, etc.). I.e. the typical 9-5 contract programmer,
not the Dennis Ritchies of the world.
On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:
Very likely, but the idea was to protect the typical programmer from
their own common mistakes (of not carefully checking error return codes
or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not
the Dennis Ritchie's of the world.
I'm paranoid enough that I check the return of malloc and try to log the problem even though I'm probably screwed at that point. It has pointed out errors for calloc if you've managed to come up with a negative size.
I have worked with programmers that assumed nothing bad would ever happen. Sadly, some had years of experience.
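(That kind of checked wrapper, roughly; the name and log message are invented for illustration:)

    #include <stdio.h>
    #include <stdlib.h>

    /* malloc that never fails silently: log the problem and bail out. */
    static void *xmalloc(size_t size)
    {
        void *p = malloc(size);
        if (p == NULL) {
            fprintf(stderr, "xmalloc: out of memory requesting %zu bytes\n", size);
            exit(EXIT_FAILURE);    /* probably screwed at this point anyway */
        }
        return p;
    }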
On Thu, 19 Jun 2025 08:40:31 +0100, Richard Kettlewell wrote:
The software industry has been trying this for decades now. It does not
work.
There is a safety-critical spec for writing C code. It has been in
production use in the automotive industry for decades now. How often do you hear about software bugs in safety-critical car systems?
On 20/06/2025 09:00, Richard Kettlewell wrote:
c186282 <c186282@nnada.net> writes:
On 6/19/25 3:40 AM, Richard Kettlewell wrote:>
c186282 <c186282@nnada.net> writes:
IMHO, stick to 'C' ... but use GOOD PRACTICES.
The software industry has been trying this for decades now. It does
not work.
At some point, soon, they need to start flagging the unsafe functions
as ERRORS, not just WARNINGS.
The problem is not just a subset of unsafe functions. The whole language
is riddled with unsafe semantics.
There is some movement towards fixing the easy issues, e.g. [1]. But the
wider issues are a lot harder to truly fix, so much so that one of the
more promising options is an architecture extension[2]; and there
remains considerable resistance[3] in the standards body to fixing other
issues, despite their recurring role in defects and vulnerabilities.
[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3322.pdf
[2] https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/
[3] https://www.youtube.com/watch?v=DRgoEKrTxXY
Most languages after C designed these issues out, one way or another.
The clever bit is figuring out how to combine performance and safety,
and that’s what language designers have been working out, increasingly
successfully.
I don't really see how you can have a program that cannot write or read memory beyond the intentions of the original programmer.
Sure, if it's a different process, but stopping a simple read of one byte beyond the end of a buffer is going to be hard.
And it would probably make the language very hard to use when you are dealing with multi-typed data.
I do like, at a cursory glance, the second link. Hardware that protects memory is a great leap forward.
On 2025-06-21, rbowman <bowman@montana.com> wrote:
On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:
Very likely, but the idea was to protect the typical programmer from
their own common mistakes (of not carefully checking error return codes
or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not >>> the Dennis Ritchie's of the world.
I'm paranoid enough that I check the return of malloc and try to log the
problem even though I'm probably screwed at that point. It has pointed out >> errors for calloc if you've manged to come up with a negative size.
I have worked with programmers that assumed nothing bad would ever happen. >> Sadly, some had years of experience.
Some years ago, I heard of a bug related to use of malloc. The
code had _intended_ to dynamically allocate storage for a string
and the terminating null byte. It was _intended_ to do this:
dest = malloc(strlen(src)+1);
Instead, a paren was misplaced:
dest = malloc(strlen(src))+1;
IIUC, the next line copied the src string into the newly-
allocated destination.
Those who had worked on that project longer said the bug had been
latent in the code for several years, most likely with alignment
padding masking the bug from being discovered. Curiously, the
bug made itself manifest immediately upon changing from a 32-bit
build environment to a 64-bit build environment.
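(For contrast, the intended pattern spelled out, with the NULL check the thread keeps coming back to; the function name is invented, dest/src are from the post above:)

    #include <stdlib.h>
    #include <string.h>

    char *dup_string(const char *src)
    {
        /* room for the characters plus the terminating NUL */
        char *dest = malloc(strlen(src) + 1);
        if (dest == NULL)
            return NULL;           /* caller decides how to handle failure */
        strcpy(dest, src);         /* safe: dest is exactly big enough */
        return dest;
    }

With the misplaced paren, malloc gets only strlen(src) bytes and dest points one byte past the start of the block, so the copy runs two bytes past the end of the allocation.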
On 2025-06-21, rbowman <bowman@montana.com> wrote:
On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:
Very likely, but the idea was to protect the typical programmer from
their own common mistakes (of not carefully checking error return
codes or buffer lengths, etc.). I.e. the typical 9-5 contract
programmer, not the Dennis Ritchie's of the world.
I'm paranoid enough that I check the return of malloc and try to log
the problem even though I'm probably screwed at that point. It has
pointed out errors for calloc if you've manged to come up with a
negative size.
I have worked with programmers that assumed nothing bad would ever
happen.
Sadly, some had years of experience.
Some years ago, I heard of a bug related to use of malloc. The code had _intended_ to dynamically allocate storage for a string and the
terminating null byte. It was _intended_ to do this:
dest = malloc(strlen(src)+1);
Instead, a paren was misplaced:
dest = malloc(strlen(src))+1;
IIUC, the next line copied the src string into the newly- allocated destination.
Those who had worked on that project longer said the bug had been latent
in the code for several years, most likely with alignment padding
masking the bug from being discovered. Curiously, the bug made itself manifest immediately upon changing from a 32-bit build environment to a 64-bit build environment.
Oh, just a question ... how do we KNOW they've successfully
designed-out all these problems ???
How many more obscure issues were created ???
On Sat, 21 Jun 2025 01:10:05 -0400, c186282 wrote:
Oh, just a question ... how do we KNOW they've successfully
designed-out all these problems ???
How many more obscure issues were created ???
That remains to be seen as Rust becomes more widely used and the new generation of programmers becomes complacent, thinking the language will clean up after them.
On Fri, 20 Jun 2025 10:12:28 +0100, The Natural Philosopher wrote:
Although they didn't *invent* stack based temporary variables I was
totally impressed when I discovered how they worked.
ALGOL 60 had that worked out years earlier. The implementors even figured
out how to declare one routine inside another, such that the inner one
could access the outer one’s locals.
C never had that, even to this day†. Even after Pascal showed how to implement the same idea.
†Not officially. But GNU C does. (Oddly, GNU C++ does not.)
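(For the curious, the GNU C extension mentioned here looks like this; GCC accepts it, standard C does not:)

    #include <stdio.h>

    int main(void)
    {
        int total = 0;
        /* nested function: the inner routine can see the outer one's locals */
        void add(int x) { total += x; }
        add(3);
        add(4);
        printf("%d\n", total);   /* prints 7 */
        return 0;
    }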
Rust's "memory safety" is nothing new. New maybe to "Today's 10,000" (https://xkcd.com/1053/) but not new to the world of programming.
Finally, on all but the least powerful microprocessors, a correctly
predicted branch is almost free, and a passed bounds check is easy mode
for a branch predictor.
With that in mind, with compilers and microprocessors from this century,
the impact of this sort of thing is rather small. (Ada dates back to
1980, at which time a lot of these technologies were much less mature.)
Indeed, yes, on a modern CPU much of the runtime checking is less of a performance event than it was on 1980s CPUs. It is not free by any measure either; some small number of cycles are consumed by that correctly predicted branch. For all but the most performance-critical code, the loss is well worth the gain in safety. And one could argue that "performance critical" code which in the end results in some significant security breach might not be as "performance critical" as it seems when the whole picture is taken into account.
On Fri, 20 Jun 2025 21:19:35 +0100, Richard Kettlewell wrote:
Secondly, many checks can be optimized out. e.g. iterating over an array
(or a prefix of it) doesn’t need a check on every access, it just needs
a check that the loop bound doesn’t exceed the array bound[1].
And remember, Ada has subrange types, just like Pascal before it. This
means, if you have something like (excuse any errors in Ada syntax)
nr_elements : constant integer := 10;
subtype index is integer range 1 .. nr_elements;
buffer : array (index) of elt_type;
buffer_index : index;
then an array access like
buffer(buffer_index)
doesn’t actually need to be range-checked at that point, because the
value of buffer_index is already known to be within the valid range.
The Natural Philosopher <tnp@invalid.invalid> writes:
On 20/06/2025 05:43, c186282 wrote:
The software industry has been trying this for decades now. It doesAt some point, soon, they need to start flagging
not work.
the unsafe functions as ERRORS, not just WARNINGS.
The problem is that C was designed by two smart people to run on small
hardware for use by other smart people.
Well, maybe, but the original Unix team still ended up with buffer overruns in their code. There’s a famous one in V7 mkdir, which ran with elevated privileges due to the inadequate kernel API. I’ve not tried to exploit it, but it’s a pretty straightforward array overrun, so almost certainly exploitable to escalate from a mortal user to root.
On 2025-06-21, rbowman <bowman@montana.com> wrote:
On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:
Very likely, but the idea was to protect the typical programmer from
their own common mistakes (of not carefully checking error return codes
or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not >>> the Dennis Ritchie's of the world.
I'm paranoid enough that I check the return of malloc and try to log the
problem even though I'm probably screwed at that point. It has pointed out >> errors for calloc if you've manged to come up with a negative size.
I have worked with programmers that assumed nothing bad would ever happen. >> Sadly, some had years of experience.
Some years ago, I heard of a bug related to use of malloc. The
code had _intended_ to dynamically allocate storage for a string
and the terminating null byte. It was _intended_ to do this:
dest = malloc(strlen(src)+1);
Instead, a paren was misplaced:
dest = malloc(strlen(src))+1;
IIUC, the next line copied the src string into the newly-
allocated destination.
Those who had worked on that project longer said the bug had been
latent in the code for several years, most likely with alignment
padding masking the bug from being discovered. Curiously, the
bug made itself manifest immediately upon changing from a 32-bit
build environment to a 64-bit build environment.
Segmentation faults don’t happen for all out of bounds accesses, they happen if you access a page which isn’t mapped at all or if you don’t have permission on that page for the operation you’re attempting. The example discussed here would only trigger a segmentation fault if the allocation finished at the end of a page, otherwise you’ll just read or write padding bytes, or the header of the next allocation.
Robert Riches <spamtrap42@jacob21819.net> wrote at 03:43 this Saturday (GMT):
Some years ago, I heard of a bug related to use of malloc. The code
had _intended_ to dynamically allocate storage for a string and the
terminating null byte. It was _intended_ to do this:
dest = malloc(strlen(src)+1);
Instead, a paren was misplaced:
dest = malloc(strlen(src))+1;
IIUC, the next line copied the src string into the newly-
allocated destination.
Aren't you supposed to multiply by sizeof as well?
Those who had worked on that project longer said the bug had been
latent in the code for several years, most likely with alignment
padding masking the bug from being discovered. Curiously, the
bug made itself manifest immediately upon changing from a 32-bit
build environment to a 64-bit build environment.
I'm more surprised it didn't segfault. Any idea what caused it to not?
I know strlen doesn't account for the terminating character, but it
seems like it should've been TWO bytes shorter...
On 22/06/2025 15:27, Richard Kettlewell wrote:
Segmentation faults don’t happen for all out of bounds accesses, they
happen if you access a page which isn’t mapped at all or if you don’t
have permission on that page for the operation you’re attempting. The
example discussed here would only trigger a segmentation fault if the
allocation finished at the end of a page, otherwise you’ll just read or
write padding bytes, or the header of the next allocation.
That, is a really useful factoid...
Thanks
On Sun, 22 Jun 2025 13:50:03 -0000 (UTC), candycanearter07 wrote:
Robert Riches <spamtrap42@jacob21819.net> wrote at 03:43 this Saturday
(GMT):
On 2025-06-21, rbowman <bowman@montana.com> wrote:
On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:
Very likely, but the idea was to protect the typical programmer from >>>>> their own common mistakes (of not carefully checking error return
codes or buffer lengths, etc.). I.e. the typical 9-5 contract
programmer, not the Dennis Ritchie's of the world.
I'm paranoid enough that I check the return of malloc and try to log
the problem even though I'm probably screwed at that point. It has
pointed out errors for calloc if you've manged to come up with a
negative size.
I have worked with programmers that assumed nothing bad would ever
happen.
Sadly, some had years of experience.
Some years ago, I heard of a bug related to use of malloc. The code
had _intended_ to dynamically allocate storage for a string and the
terminating null byte. It was _intended_ to do this:
dest = malloc(strlen(src)+1);
Instead, a paren was misplaced:
dest = malloc(strlen(src))+1;
IIUC, the next line copied the src string into the newly- allocated
destination.
Aren't you supposed to multiply by sizeof as well?
No, malloc is N bytes. calloc is N elements of sizeof(foo). Also
malloc() doesn't initialize the memory but calloc() zeroes it out. That
can be another pitfall if you're using something like memcpy() with
strings and don't copy in the terminating NUL. If you try something like printf("%s", my_string) and you're really lucky, there will have been a NUL in the garbage; if not, the string will be terminated somewhere, maybe.
calloc() is to be preferred imnsho. In many cases you're going to memset() the malloc'd memory to 0 so you might as well get it over with.
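(A small C illustration of the difference; names and sizes are invented:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const char src[] = "hello";
        size_t n = strlen(src);          /* 5: the terminating NUL is not counted */

        /* calloc zeroes the block, so a length-only memcpy still
           leaves the string NUL-terminated. */
        char *a = calloc(n + 1, 1);
        if (a == NULL)
            return 1;
        memcpy(a, src, n);               /* terminator already there */
        printf("%s\n", a);

        /* malloc leaves the block uninitialized; the terminator has to
           be copied or written explicitly. */
        char *b = malloc(n + 1);
        if (b == NULL) {
            free(a);
            return 1;
        }
        memcpy(b, src, n + 1);           /* n + 1 copies the NUL too */
        printf("%s\n", b);

        free(a);
        free(b);
        return 0;
    }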
calloc() is to be preferred imnsho. In many cases you're going to
memset()
the malloc'd memory to 0 so you might as well get it over with.
Fair, but some might find it redundant to set the memory to 0 and
immediately write data over those null bytes.
Robert Riches <spamtrap42@jacob21819.net> wrote at 03:43 this Saturday (GMT):
On 2025-06-21, rbowman <bowman@montana.com> wrote:
On Fri, 20 Jun 2025 23:07:20 -0000 (UTC), Rich wrote:
Very likely, but the idea was to protect the typical programmer from
their own common mistakes (of not carefully checking error return codes >>>> or buffer lengths, etc.). I.e. the typical 9-5 contract programmer, not >>>> the Dennis Ritchie's of the world.
I'm paranoid enough that I check the return of malloc and try to log the >>> problem even though I'm probably screwed at that point. It has pointed out >>> errors for calloc if you've manged to come up with a negative size.
I have worked with programmers that assumed nothing bad would ever happen. >>> Sadly, some had years of experience.
Some years ago, I heard of a bug related to use of malloc. The
code had _intended_ to dynamically allocate storage for a string
and the terminating null byte. It was _intended_ to do this:
dest = malloc(strlen(src)+1);
Instead, a paren was misplaced:
dest = malloc(strlen(src))+1;
IIUC, the next line copied the src string into the newly-
allocated destination.
Aren't you supposed to multiply by sizeof as well?
Those who had worked on that project longer said the bug had been
latent in the code for several years, most likely with alignment
padding masking the bug from being discovered. Curiously, the
bug made itself manifest immediately upon changing from a 32-bit
build environment to a 64-bit build environment.
I'm more surprised it didn't segfault. Any idea what caused it to not?
I know strlen doesn't account for the terminating character, but it
seems like it should've been TWO bytes shorter...
IIUC, heap-based malloc _usually_ returns a larger allocation
block than you really asked for. As long as malloc gave you at
least 2 extra bytes, you'd never see any misbehavior. Even if it
didn't give you 2 or more extra bytes, it's fairly likely you'd
just get lucky and never see the program crash or otherwise
misbehave in a significant way.
For example, if you stomped on
the header of the next allocation block, as long as nothing ever
read and acted upon the data in said header, you'd never see it.
On 2025-06-24, Robert Riches <spamtrap42@jacob21819.net> wrote:
IIUC, heap-based malloc _usually_ returns a larger allocation block
than you really asked for. As long as malloc gave you at least 2 extra
bytes, you'd never see any misbehavior. Even if it didn't give you 2
or more extra bytes, it's fairly likely you'd just get lucky and never
see the program crash or otherwise misbehavior in a significant way.
Or if malloc() rounds the size of the block up to, say, the next
multiple of 8, odds are good that you'll be clobbering an unused byte.
For example, if you stomped on
the header of the next allocation block, as long as nothing ever read
and acted upon the data in said header, you'd never see it.
If you wrote a NUL to a byte that's normally zero, you might still get
away with it even if that header is referenced.
But worst case, a program that ran flawlessly for years might suddenly
bomb because you happened to write a different number of bytes to the
area than you normally do. That one can be a nightmare to debug.
On Tue, 24 Jun 2025 04:52:21 GMT, Charlie Gibbs wrote:
On 2025-06-24, Robert Riches <spamtrap42@jacob21819.net> wrote:
<snip>
But worst case, a program that ran flawlessly for years might suddenly
bomb because you happened to write a different number of bytes to the
area than you normally do. That one can be a nightmare to debug.
"Do you feel lucky, punk?" isn't a great programming philosophy.
Don't worry, you're out of business soon. 'AI' will program
everything. The pointy-haired bosses reign supreme ... just roughly
describe what they think they want. Any probs, blame the 'AI' -
paycheck safe.
Do you have ANY issues with this vision of the Near Future ???
On 2025-06-22, candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
Aren't you supposed to multiply by sizeof as well?
Multiply by sizeof what? sizeof(char)? This was in the
pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
still always 1.
IIUC, heap-based malloc _usually_ returns a larger allocation
block than you really asked for.
As long as malloc gave you at least 2 extra bytes, you'd never see any
misbehavior. Even if it didn't give you 2 or more extra bytes, it's
fairly likely you'd just get lucky and never see the program crash or
otherwise misbehave in a significant way. For example, if you
stomped on the header of the next allocation block, as long as nothing
ever read and acted upon the data in said header, you'd never see it.
On Tue, 24 Jun 2025 01:36:28 -0400, c186282 wrote:
Don't worry, you're out of business soon. 'AI' will program
everything. The pointy-haired bosses reign supreme ... just roughly
describe what they think they want. Any probs, blame the 'AI' -
paycheck safe.
Do you have ANY issues with this vision of the Near Future ???
None at all.
“The leveling of the European man is the great process which cannot be obstructed; it should even be accelerated. The necessity of cleaving
gulfs, distance, order of rank, is therefore imperative —not the necessity of retarding this process. This homogenizing species requires
justification as soon as it is attained: its justification is that it lies
in serving a higher and sovereign race which stands upon the former and
can raise itself this task only by doing this. Not merely a race of
masters whose sole task is to rule, but a race with its own sphere of
life, with an overflow of energy for beauty, bravery, culture, and
manners, even for the most abstract thought; a yea-saying race that may
grant itself every great luxury —strong enough to have no need of the tyranny of the virtue-imperative, rich enough to have no need of economy
or pedantry; beyond good and evil; a hothouse for rare and exceptional plants.”
Friedrich Nietzsche
Robert Riches <spamtrap42@jacob21819.net> writes:
On 2025-06-22, candycanearter07
<candycanearter07@candycanearter07.nomail.afraid> wrote:
Aren't you supposed to multiply by sizeof as well?
Multiply by sizeof what? sizeof(char)? This was in the
pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
still always 1.
Yes. char is the unit in which sizeof measures things. Multiplying by ‘sizeof (char)’ is a completely incoherent thing to do.
And as noted elsewhere, doing the multiplication yourself is generally
the wrong approach.
IIUC, heap-based malloc _usually_ returns a larger allocation
block than you really asked for.
Yes.
As long as malloc gave you at least 2 extra bytes, you'd never see any
misbehavior. Even if it didn't give you 2 or more extra bytes, it's
fairly likely you'd just get lucky and never see the program crash or
otherwise misbehavior in a significant way. For example, if you
stomped on the header of the next allocation block, as long as nothing
ever read and acted upon the data in said header, you'd never see it.
This is wrong. Exceeding the space allocated by even 1 byte is undefined behavior, even if the allocation happens to have been sufficiently
padded. What this means in practice is very situational but
optimizations exploiting the freedom that undefined behavior provides to
the compiler routinely result in defects.
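One practical upshot: a one-byte overrun like the one in that story is
exactly what the address sanitizer and valgrind exist to catch, even when
allocator padding hides it at run time. A tiny sketch (the file name and
build lines are just examples for gcc/clang):

    /* build:  cc -g -fsanitize=address overrun.c && ./a.out      */
    /* or:     cc -g overrun.c && valgrind ./a.out                */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const char *src = "hello";
        char *dest = malloc(strlen(src));   /* one byte too small          */
        if (dest == NULL)
            return 1;
        strcpy(dest, src);                  /* the NUL lands out of bounds */
        free(dest);
        return 0;
    }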
On 24/06/2025 07:49, rbowman wrote:
On Tue, 24 Jun 2025 01:36:28 -0400, c186282 wrote:
Don't worry, you're out of business soon. 'AI' will program
everything. The pointy-haired bosses reign supreme ... just roughly
describe what they think they want. Any probs, blame the 'AI' -
paycheck safe.
Do you have ANY issues with this vision of the Near Future ???
None at all.
<snip>
What a wanker Nietzsche really was.
On 2025-06-24, Richard Kettlewell <invalid@invalid.invalid> wrote:
<snip>
This is wrong. Exceeding the space allocated by even 1 byte is undefined
behavior, even if the allocation happens to have been sufficiently
padded. What this means in practice is very situational but
optimizations exploiting the freedom that undefined behavior provides to
the compiler routinely result in defects.
Please remember that this was an unintended _BUG_ in some old
code, _NOT_ a deliberately chosen strategy. What I was
describing was one possible explanation for how the bug remained
undetected for some number of years.
On 6/24/25 5:31 AM, The Natural Philosopher wrote:
On 24/06/2025 07:49, rbowman wrote:
On Tue, 24 Jun 2025 01:36:28 -0400, c186282 wrote:
Don't worry, you're out of business soon. 'AI' will program
everything. The pointy-haired bosses reign supreme ... just roughly
describe what they think they want. Any probs, blame the 'AI' -
paycheck safe.
Do you have ANY issues with this vision of the Near Future ???
None at all.
<snip>
What a wanker Nietzsche really was.
He WAS a wanker ... but kinda too often CORRECT.
Even loons get SOME stuff right.
Regardless, easy fix - always allocate at least one
byte/word/whatever more than you THINK you need. Minimal penalty -
possibly BIG gains.
On 25/06/2025 06:36, c186282 wrote:
On 6/24/25 5:31 AM, The Natural Philosopher wrote:
<snip>
What a wanker Nietzsche really was.
He WAS a wanker ... but kinda too often CORRECT.
Even loons get SOME stuff right.
Like king Donald?
On Wed, 25 Jun 2025 09:32:13 -0700
John Ames <commodorejohn@gmail.com> wrote:
Regardless, easy fix - always allocate at least one byte/word/
whatever more than you THINK you need. Minimal penalty - possibly
BIG gains.
That strikes me as a terrible strategy - allocating N elements extra
won't save you from overstepping into N+1 if and when you finally do
(Additionally, it won't save you from whatever weird undefined behavior
may result from reading an element N which isn't even part of the
"true" range and may have uninitialized/invalid data.)
Robert Riches <spamtrap42@jacob21819.net> wrote at 03:34 this Tuesday (GMT):
<candycanearter07@candycanearter07.nomail.afraid> wrote:
Aren't you supposed to multiply by sizeof as well?
Multiply by sizeof what? sizeof(char)? This was in the
pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
still always 1.
I still multiply by sizeof(char), half because of habit and half to
make it clear to myself I'm making a char array, even if it's
"redundant". I kinda thought that was the "canonical" way to do that,
since you could have a weird edge case with a system defining char as
something else?
A programmer can adopt a personal style of redundantly multiplying by 1
if they like, it’ll be a useful hint to anyone else reading the code
that the author didn’t know the language very well.
But in no way is
anyone ‘supposed’ to do it.
On 27/06/2025 08:37, Richard Kettlewell wrote:
But in no way is anyone ‘supposed’ to do it.
Morality rarely enters into code writing, unless introduced by parties
with 'political' aims.
On Fri, 27 Jun 2025 08:45:43 +0100, The Natural Philosopher wrote:
On 27/06/2025 08:37, Richard Kettlewell wrote:
But in no way is anyone ‘supposed’ to do it.
Morality rarely enters into code writing, unless introduced by parties
with 'political' aims.
I must learn to use that as an excuse: the next time someone complains
about the way I write my code, I can tell them that their criticism is “political”.
Some of us are old enough to remember when CPUs were
not always 4/8/16/32/64 ... plus even now they've added a lot of new
types like 128-bit ints. Simply ASSUMING an int is 16 bits is
'usually safe' but not necessarily 'best practice' and limits future
(or past) compatibility. 'C' lets you fly free ...
but that CAN be straight into a window pane
candycanearter07 <candycanearter07@candycanearter07.nomail.afraid>
writes:
Robert Riches <spamtrap42@jacob21819.net> wrote at 03:34 this Tuesday (GMT):
<candycanearter07@candycanearter07.nomail.afraid> wrote:
Aren't you supposed to multiply by sizeof as well?
Multiply by sizeof what? sizeof(char)? This was in the
pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
still always 1.
I still multiply by sizeof(char), half because of habit and half to
make it clear to myself I'm making a char array, even if its
"redundant". I kinda thought that was the "cannonical" way to do that,
since you could have a weird edge case with a system defining char as
something else?
Whatever the representation of char, sizeof(char)=1. That’s what the definition of sizeof is - char is the unit it counts in.
From the language specification:
When sizeof is applied to an operand that has type char, unsigned
char, or signed char, (or a qualified version thereof) the result is
1. When applied to an operand that has array type, the result is the
total number of bytes in the array. When applied to an operand that
has structure or union type, the result is the total number of bytes
in such an object, including internal and trailing padding.
A programmer can adopt a personal style of redundantly multiplying by 1
if they like, it’ll be a useful hint to anyone else reading the code
that the author didn’t know the language very well. But in no way is
anyone ‘supposed’ to do it.
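To make that concrete, a small sketch (C99 or later for %zu; nothing here
is anyone's production code):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* sizeof(char) is 1 by definition, whatever CHAR_BIT happens to be */
        printf("sizeof(char) = %zu\n", sizeof(char));

        size_t n = 100;
        char *buf  = malloc(n);                   /* idiomatic              */
        char *buf2 = malloc(n * sizeof(char));    /* same thing, just noise */
        free(buf);
        free(buf2);

        /* for non-char elements the multiplication does matter, and
           sizeof *p keeps it right if the element type ever changes */
        int *v = malloc(n * sizeof *v);
        free(v);
        return 0;
    }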
On 6/27/25 4:14 AM, Lawrence D'Oliveiro wrote:
On Fri, 27 Jun 2025 08:45:43 +0100, The Natural Philosopher wrote:
On 27/06/2025 08:37, Richard Kettlewell wrote:
But in no way is anyone ‘supposed’ to do it.
Morality rarely enters into code writing, unless introduced by parties
with 'political' aims.
I must learn to use that as an excuse: the next time someone complains
about the way I write my code, I can tell them that their criticism is
“political”.
Hey ... it could WORK ! :-)
"Racist", "colonialist" and "gender-fascist" should work
in some locales as well
Alas, 'AI', that can and HAS been tampered with to
achieve PC results for political reasons already.
On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:
Some of us are old enough to remember when CPUs were
not always 4/8/16/32/64 ... plus even now they've added a lot of new
types like 128-bit ints. Simply ASSUMING an int is 16 bits is
'usually safe' but not necessarily 'best practice' and limits future
(or past) compatibility. 'C' lets you fly free ...
but that CAN be straight into a window pane
Assuming an int is 16 bits is not a good idea. I wouldn't even assume a
short is 16 bits
On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:
Some of us are old enough to remember when CPUs were
not always 4/8/16/32/64 ... plus even now they've added a lot of new
types like 128-bit ints. Simply ASSUMING an int is 16 bits is
'usually safe' but not necessarily 'best practice' and limits future
(or past) compatibility. 'C' lets you fly free ...
but that CAN be straight into a window pane
Assuming an int is 16 bits is not a good idea. I wouldn't even assume a
short is 16 bits
These last two imply that both unsigned short int and int are at least
16 bits wide. At least, according to the standard.
On Fri, 27 Jun 2025 18:20:31 -0000 (UTC), Lew Pitcher wrote:
These last two imply that both unsigned short int and int are at least
16 bits wide. At least, according to the standard.
Or, you know, just rely on the explicit definitions in stdint.h.
On 6/27/25 7:03 PM, Lawrence D'Oliveiro wrote:
On Fri, 27 Jun 2025 18:20:31 -0000 (UTC), Lew Pitcher wrote:
These last two imply that both unsigned short int and int are at least
16 bits wide. At least, according to the standard.
Or, you know, just rely on the explicit definitions in stdint.h.
sizeof() will give the right sizes.
Simple, easy to write, 'best practice'.
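A quick sketch of the two side by side, assuming a C99-or-later compiler
(the exact-width types are only guaranteed to exist where the hardware has
a matching type, which is everywhere most of us are likely to care about):

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        uint16_t flags = 0xFFFF;        /* exactly 16 bits                    */
        int32_t  count = -1;            /* exactly 32 bits, two's complement  */

        printf("flags: %" PRIu16 " (%zu bytes)\n", flags, sizeof flags);
        printf("count: %" PRId32 " (%zu bytes)\n", count, sizeof count);
        printf("short is %zu bytes on this platform\n", sizeof(short));
        printf("int   is %zu bytes on this platform\n", sizeof(int));
        printf("long  is %zu bytes on this platform\n", sizeof(long));
        return 0;
    }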
On 6/27/25 1:40 PM, rbowman wrote:
On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:
Some of us are old enough to remember when CPUs were not always
4/8/16/32/64 ... plus even now they've added a lot of new types like
128-bit ints. Simply ASSUMING an int is 16 bits is 'usually safe'
but not necessarily 'best practice' and limits future (or past)
compatibility. 'C' lets you fly free ... but that CAN be straight
into a window pane
Assuming an int is 16 bits is not a good idea. I wouldn't even assume
a short is 16 bits
Voice of experience for sure. Things have been represented/handled
just SO many ways over the years. Using sizeof() is 'best practice'
even if you're Just Sure how wide an int or whatever may be. 24 bits
are still found in some DSPs and you MAY be asked someday to patch or
port one of the old 12/18/24/36/48 programs.
On 27/06/2025 18:27, c186282 wrote:
<snip>
Alas, 'AI', that can and HAS been tampered with to
achieve PC results for political reasons already.
Indeed.
c186282 <c186282@nnada.net> writes:
On 6/27/25 1:40 PM, rbowman wrote:
On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:
Some of us are old enough to remember when CPUs were not always
4/8/16/32/64 ... plus even now they've added a lot of new types like
128-bit ints. Simply ASSUMING an int is 16 bits is 'usually safe'
but not necessarily 'best practice' and limits future (or past)
compatibility. 'C' lets you fly free ... but that CAN be straight
into a window pane
Assuming an int is 16 bits is not a good idea. I wouldn't even assume
a short is 16 bits
(Apart from c186282 who for some reason thinks it’s “usually safe”,
nobody here is making any such assumption about int.)
Voice of experience for sure. Things have been represented/handled
just SO many ways over the years. Using sizeof() is 'best practice'
even if you're Just Sure how wide an int or whatever may be. 24 bits
are still found in some DSPs and you MAY be asked someday to patch or
port one of the old 12/18/24/36/48 programs.
The thread is not about the size of int, etc. It’s about the specific
case of sizeof(char) in C, and that is always 1.
On 6/28/25 3:52 AM, Richard Kettlewell wrote:
c186282 <c186282@nnada.net> writes:
On 6/27/25 1:40 PM, rbowman wrote:
On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:
Some of us are old enough to remember when CPUs were not always
4/8/16/32/64 ... plus even now they've added a lot of new types like
128-bit ints. Simply ASSUMING an int is 16 bits is 'usually safe'
but not necessarily 'best practice' and limits future (or past)
compatibility. 'C' lets you fly free ... but that CAN be straight
into a window pane
Assuming an int is 16 bits is not a good idea. I wouldn't even assume
a short is 16 bits
(Apart from c186282 who for some reason thinks it’s “usually safe”,
nobody here is making any such assumption about int.)
Eh ? I've been saying no such thing - instead recommending
using sizeof() kind of religiously. I remember processors
with odd word sizes - and assume there may be more in
the future for whatever reasons.
The thread is not about the size of int, etc. It’s about the specific
case of sizeof(char) in C, and that is always 1.
CHAR is ONE BYTE of however many bits, but beyond that ..........
Use sizeof() ...
One flaw of sizeof() is that it reports in BYTES ... so,
for example, how many BITS is that ?
c186282 <c186282@nnada.net> writes:
On 6/28/25 3:52 AM, Richard Kettlewell wrote:
c186282 <c186282@nnada.net> writes:
On 6/27/25 1:40 PM, rbowman wrote:
On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:
Some of us are old enough to remember when CPUs were not always
4/8/16/32/64 ... plus even now they've added a lot of new types like
128-bit ints. Simply ASSUMING an int is 16 bits is 'usually safe'
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
but not necessarily 'best practice' and limits future (or past)
compatibility. 'C' lets you fly free ... but that CAN be straight
into a window pane
Assuming an int is 16 bits is not a good idea. I wouldn't even assume
a short is 16 bits
(Apart from c186282 who for some reason thinks it’s “usually safe”,
nobody here is making any such assumption about int.)
Eh ? I've been saying no such thing - instead recommending
using sizeof() kind of religiously. I remember processors
with odd word sizes - and assume there may be more in
the future for whatever reasons.
You wrote it above. Underlined to help you find it again.
The thread is not about the size of int, etc. It’s about the specific
case of sizeof(char) in C, and that is always 1.
CHAR is ONE BYTE of however many bits, but beyond that ..........
Use sizeof() ...
One flaw of sizeof() is that it reports in BYTES ... so,
for example, how many BITS is that ?
I don’t see a flaw there. If you want to know the number of bytes (in
the C sense) then that’s what sizeof does. If you want to know the
number of bits, multiply by CHAR_BIT. If you already have a number of
bytes, and you want a number of bytes, no need to multiply by anything
at all.
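e.g., a few lines along those lines (CHAR_BIT comes from <limits.h>, which
has been part of the standard library since C89):

    #include <stdio.h>
    #include <limits.h>    /* CHAR_BIT */

    int main(void)
    {
        printf("char : %zu byte(s), %zu bits\n",
               sizeof(char), sizeof(char) * CHAR_BIT);
        printf("int  : %zu byte(s), %zu bits\n",
               sizeof(int),  sizeof(int)  * CHAR_BIT);
        printf("long : %zu byte(s), %zu bits\n",
               sizeof(long), sizeof(long) * CHAR_BIT);
        return 0;
    }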
These days you CAN 'usually' get away with assuming an
int is 16 bits - but that won't always turn out well.
On 30/06/2025 00:09, c186282 wrote:
These days you CAN 'usually' get away with assuming an
int is 16 bits - but that won't always turn out well.
I thought the default int was 32 bits or 64 bits these days.
ISTR there is a definition of uint16_t somewhere if that is what you want
A rapid google shows no one talking about a 16 bit int. Today it's
reckoned to be 32 bit. But if it matters, use int16_t or uint16_t.
I can find no agreement as to what counts as a short, long, int, at all.
If it matters, use the length specific variable names.
The Natural Philosopher <tnp@invalid.invalid> writes:
On 30/06/2025 00:09, c186282 wrote:
These days you CAN 'usually' get away with assuming an
int is 16 bits - but that won't always turn out well.
I thought the default int was 32 bits or 64 bits these days.
ISTR there is a definition of uint16_t somewhere if that is what you want
A rapid google shows no one talking about a 16 bit int. Today its
reckoned to be 32 bit But if it matters, use int16_t or uint16_t
I can find no agreement was to what counts as a short, long, int, at all.
If it matters, use the length specific variable names.
The language spec guarantees:
char is at least 8 bits
short and int are at least 16 bits
long is at least 32 bits
long long is at least 64 bits
There are also some constraints on representation.
Server/desktop platforms usually have int=32 bits; long is a bit more variable. It’d probably be 16 bits on a Z80 or similar where memory and computation are in short supply.
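If code really depends on one of those minimums, or on something stronger,
that can be checked at compile time instead of assumed; a sketch using
C11's static_assert (the 32-bit-int line is deliberately a platform
assumption, which is the point):

    #include <limits.h>
    #include <assert.h>    /* static_assert macro, C11 */

    /* These are guaranteed by the standard, so they can never fire, but
       the same pattern catches stronger platform assumptions the moment
       the code is built somewhere they don't hold. */
    static_assert(CHAR_BIT >= 8, "char is at least 8 bits");
    static_assert(sizeof(int)  * CHAR_BIT >= 16, "int is at least 16 bits");
    static_assert(sizeof(long) * CHAR_BIT >= 32, "long is at least 32 bits");

    /* An assumption the standard does NOT make -- say it out loud: */
    static_assert(sizeof(int) == 4, "this code assumes a 32-bit int");

    int main(void) { return 0; }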
On 6/29/25 3:18 AM, Richard Kettlewell wrote:
c186282 <c186282@nnada.net> writes:
On 6/28/25 3:52 AM, Richard Kettlewell wrote:
c186282 <c186282@nnada.net> writes:
On 6/27/25 1:40 PM, rbowman wrote:
On Fri, 27 Jun 2025 13:24:06 -0400, c186282 wrote:
Some of us are old enough to remember when CPUs were not always
4/8/16/32/64 ... plus even now they've added a lot of new types like
128-bit ints. Simply ASSUMING an int is 16 bits is 'usually safe'
but not necessarily 'best practice' and limits future (or past)
compatibility. 'C' lets you fly free ... but that CAN be straight
into a window pane
Assuming an int is 16 bits is not a good idea. I wouldn't even assume
a short is 16 bits
(Apart from c186282 who for some reason thinks it’s “usually safe”,
nobody here is making any such assumption about int.)
Eh ? I've been saying no such thing - instead recommending
using sizeof() kind of religiously. I remember processors
with odd word sizes - and assume there may be more in
the future for whatever reasons.
You wrote it above. Underlined to help you find it again.
Mysteriously MISSING THE OTHER HALF OF THE LINE :-)
The thread is not about the size of int, etc. It’s about the specific
case of sizeof(char) in C, and that is always 1.
CHAR is ONE BYTE of however many bits, but beyond that ..........
Use sizeof() ...
One flaw of sizeof() is that it reports in BYTES ... so,
for example, how many BITS is that ?
I don’t see a flaw there. If you want to know the number of bytes (in
the C sense) then that’s what sizeof does. If you want to know the
number of bits, multiply by CHAR_BIT. If you already have a number of
bytes, and you want a number of bytes, no need to multiply by anything
at all.
That'll work fine.
Not sure EVERY compiler has CHAR_BIT however ...
The Natural Philosopher <tnp@invalid.invalid> writes:
I can find no agreement was to what counts as a short, long, int, at all.
If it matters, use the length specific variable names.
The language spec guarantees:
char is at least 8 bits
short and int are at least 16 bits
long is at least 32 bits
long long is at least 64 bits
There are also some constraints on representation.
Server/desktop platforms usually have int=32 bits; long is a bit more variable. It’d probably be 16 bits on a Z80 or similar where memory and computation are in short supply.
... long was 32 bits
Richard Kettlewell <invalid@invalid.invalid> writes:
The Natural Philosopher <tnp@invalid.invalid> writes:
I can find no agreement was to what counts as a short, long, int, at all.
If it matters, use the length specific variable names.
The language spec guarantees:
char is at least 8 bits
short and int are at least 16 bits
long is at least 32 bits
long long is at least 64 bits
There are also some constraints on representation.
Server/desktop platforms usually have int=32 bits; long is a bit more
variable. It’d probably be 16 bits on a Z80 or similar where memory and
computation are in short supply.
To clarify: int would probably be 16 bits on a Z80. long still has to be
at least 32 bits.
On 30/06/2025 09:00, Richard Kettlewell wrote:
Yes.
To clarify: int would probably be 16 bits on a Z80. long still has to
be at least 32 bits.
It was a pain because the natural manipulation on an 8 bit chip is 8
bits. Making it 16 bit led to a lot of extra code.
#include <stdint.h> gives you explicit names for the various sizes, in
both signed and unsigned alternatives.
On 30/06/2025 00:09, c186282 wrote:
These days you CAN 'usually' get away with assuming an
int is 16 bits - but that won't always turn out well.
I thought the default int was 32 bits or 64 bits these days.
ISTR there is a definition of uint16_t somewhere if that is what you want
A rapid google shows no one talking about a 16 bit int. Today its
reckoned to be 32 bit
But if it matters, use int16_t or uint16_t
I can find no agreement was to what counts as a short, long, int, at all.
If it matters, use the length specific variable names.
Remember, Unix systems were fully 32-bit right from the 1980s onwards, and embraced 64-bit early on with the DEC Alpha in 1992. So “long” would have been 64 bits from at least that time, because why waste an occurrence of
the “long” qualifier?
On Mon, 30 Jun 2025 09:24:35 +0100, The Natural Philosopher wrote:
On 30/06/2025 09:00, Richard Kettlewell wrote:
Yes.
To clarify: int would probably be 16 bits on a Z80. long still has to
be at least 32 bits.
It was a pain because the natural manipulation on an 8 bit chip is 8
bits. making it 16 bit led to a lot of extra code.
I guess 8-bit ints were not considered useful on any system.
You still had “unsigned char”, though.
MOST compilers, I think ints are still almost always 16-bits as a
holdover from the good old days. You can declare long and long long
ints of course, but int alone, expect it to be 16-bit.
On 6/30/25 3:51 AM, Richard Kettlewell wrote:
The language spec guarantees:
char is at least 8 bits
short and int are at least 16 bits
long is at least 32 bits
long long is at least 64 bits
There are also some constraints on representation.
Server/desktop platforms usually have int=32 bits; long is a bit more
variable. It’d probably be 16 bits on a Z80 or similar where memory and
computation are in short supply.
Note that "at LEAST 8 bits", "at LEAST 16 bits" is
just BAD. Making use of modulo, roll-over, can be
very useful sometimes. If uchars are not 8 bits
exactly you get unrealized mistakes.
On 01/07/2025 04:26, c186282 wrote:
On 6/30/25 3:51 AM, Richard Kettlewell wrote:
The language spec guarantees:
char is at least 8 bits
short and int are at least 16 bits
long is at least 32 bits
long long is at least 64 bits
There are also some constraints on representation.
Server/desktop platforms usually have int=32 bits; long is a bit more
variable. It’d probably be 16 bits on a Z80 or similar where memory and >>> computation are in short supply.
Note that "at LEAST 8 bits", "at LEAST 16 bits" is
just BAD. Making use of modulo, roll-over, can be
very useful sometimes. If uchars are not 8 bits
exactly you get unrealized mistakes.
As I said, if it's important, specify it exactly.
I am not sure that Richard's 'char is *at least* 8 bits' is correct tho...
On Mon, 30 Jun 2025 23:12:21 -0400, c186282 wrote:
MOST compilers, I think ints are still almost always 16-bits as a
holdover from the good old days. You can declare long and long long
ints of course, but int alone, expect it to be 16-bit.
Not any compiler I've worked with in the last few decades. That sort of
went out with CP/M. Way back when bytes were worth their weight in gold,
someone declared 'short ObjNum;'. That's rather important since that is
the number of objects that can be handled by the system including
incidents, comments, persons, vehicles, alerts and so forth. Being signed
and 16 bits, the maximum value is 32767.
It got by for an amazingly long time but as larger, busier sites came
along the system ran out of object number. It wasn't pretty.
Edit a number of files to make it an unsigned short and you get 65535. It
was close a couple of times but with some sophisticated reuse strategies there never was a disaster.
Why not make it an int? Even with a signed int 2147483647 wouldn't be a problem. Because an int is 32 bits. Every struct, every XDR encoding, database, and so forth would have to be modified so we crossed our
fingers. In DB2 SMALLINT is 16 bits, INT is 32 bits. SQL Server is the
same. Both of them use BIGINT for 64 bits.
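The arithmetic behind that story, as a sketch (the variable names are
stand-ins, not the real code):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        short          objnum  = SHRT_MAX;    /* 32767 with a 16-bit short */
        unsigned short objnum2 = USHRT_MAX;   /* 65535 with a 16-bit short */

        printf("signed short max   : %d\n", (int)objnum);
        printf("unsigned short max : %u\n", (unsigned)objnum2);

        /* unsigned wrap-around is well defined: 65535 + 1 becomes 0 */
        objnum2 = (unsigned short)(objnum2 + 1);
        printf("one past the top   : %u\n", (unsigned)objnum2);

        /* overflowing the signed version is undefined behavior, which is
           one more reason running out of object numbers "wasn't pretty" */
        return 0;
    }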
In Kernighan & Ritchie's "The C Programming Language", the authors
note that, on the four architectures that C had been implemented on
(at the time of writing), a char occupied 8 bits on three of them.
On the fourth, a char occupied 9 bits.
On 2025-07-01, Lew Pitcher <lew.pitcher@digitalfreehold.ca> wrote:
In Kernighan & Ritchie's "The C Programming Language", the authors
note that, on the four architectures that C had been implemented on
(at the time of writing), a char occupied 8 bits on three of them.
On the fourth, a char occupied 9 bits.
Sounds like a Univac 1100-series mainframe in ASCII mode.
On 2025-07-01, Lew Pitcher <lew.pitcher@digitalfreehold.ca> wrote:
In Kernighan & Ritchie's "The C Programming Language", the authors
note that, on the four architectures that C had been implemented on
(at the time of writing), a char occupied 8 bits on three of them.
On the fourth, a char occupied 9 bits.
Sounds like a Univac 1100-series mainframe in ASCII mode.
Some of the legacy code was stingy like they had to pay for every
byte.
On 6/25/25 12:44 PM, John Ames wrote:
On Wed, 25 Jun 2025 09:32:13 -0700
John Ames <commodorejohn@gmail.com> wrote:
Regardless, easy fix - always allocate at least one byte/word/
whatever more than you THINK you need. Minimal penalty - possibly
BIG gains.
That strikes me as a terrible strategy - allocating N elements extra
won't save you from overstepping into N+1 if and when you finally do
(Additionally, it won't save you from whatever weird undefined behavior
may result from reading an element N which isn't even part of the
"true" range and may have uninitialized/invalid data.)
I've had good luck doing it that way since K&R.
Doesn't hurt to null the entire space after
allocating. Leave a speck of extra space and
it covers a lot of potential little write
issues. You still have to take care when reading.
candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
Robert Riches <spamtrap42@jacob21819.net> wrote at 03:34 this Tuesday (GMT):
On 2025-06-22, candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
<snip>
Multiply by sizeof what? sizeof(char)? This was in the
pre-Unicode days. Even now with Unicode, IIUC sizeof(char) is
still always 1.
I still multiply by sizeof(char), half because of habit and half to make
it clear to myself I'm making a char array, even if its "redundant". I
kinda thought that was the "cannonical" way to do that, since you could
have a weird edge case with a system defining char as something else?
Per Wikipedia, 'char' in C is defined as "at least 8 bits" (https://en.wikipedia.org/wiki/C_data_types).
And that 'at least'
could have burned one in the past for "odd systems" that might have had
9 bit characters or 16 bit characters.
In today's world, for all but the most esoteric (embedded and/or FPGA)
assuming char is exactly 8 bits is right often enough that no one
notices. But multiplying by sizeof(char) does avoid it becoming an
issue later on any unusual setups.
On 20/07/2025 15:42, Rich wrote:
In todays world, for all but the most esoteric (embedded and/or FPGA)
assuming char is exactly 8 bits is right often enough that no one
notices. But multiplying by sizeof(char) does avoid it becoming an
issue later on any unusual setups.
That's what I like. Absolutely emphasises the point to the next
programmer even if the compiler doesn't need to know
In todays world, for all but the most esoteric (embedded and/or FPGA) assuming char is exactly 8 bits is right often enough that no one
notices. But multiplying by sizeof(char) does avoid it becoming an
issue later on any unusual setups.
The problem, which John correctly pointed out, is that if the
programmer is sloppy enough that this little extra "saves" them today,
then it is still a ticking time bomb waiting to go off when someone
futzes the code later and instead of the expected entry of a "serial
number" of 8 digits and a buffer of 16 bytes, the futzer inserts 9, 10,
11, 12, 13, 14, 15, 16, 17 (boom).
Allocating "a little extra" is a feel good way to presume one is
avoiding buffer overflow issues, but it does nothing to actually
prevent them from going boom.
On Mon, 21 Jul 2025 08:42:04 +0100
Richard Kettlewell <invalid@invalid.invalid> wrote:
Conservative upper bounds of this kind address two issues:
1) The possibility that you made a mistake in working out the upper
bound. Off-by-one errors are such a common category that they get
their own name; adding even 1 byte of headroom neutralizes them.
If you think only “sloppy” programmers make this kind of mistake
then you’re deluded. A more competent programmer may make fewer
mistakes but no human is perfect.
2) Approximation can make analysis easier. Why spend an hour proving
that the maximum size something can be is 37 bytes if a few seconds
mental arithmetic will prove it’s at most 64 bytes? (Unless you
have 1980s quantities of RAM, of course.)
Sure, memory is cheap and we can often afford reasonably over-specced
buffer sizes in Our Modern Age - but the fundamental problem remains.
Treating "a little extra just to be on the safe side" as a ward against
buffer overruns or other boundary errors is pretty much guaranteed to
run into trouble down the line, and no amount of "nobody's perfect...!"
will change that. If you're not working in a language that does
bounds-checking for you, and your design is not one where you can say with
*100% certainty* that boundary errors are literally impossible, CHECK
YER DANG BOUNDS. Simple as that.
John Ames <commodorejohn@gmail.com> writes:
On Mon, 21 Jul 2025 08:42:04 +0100
Richard Kettlewell <invalid@invalid.invalid> wrote:
Conservative upper bounds of this kind address two issues:
1) The possibility that you made a mistake in working out the upper
bound. Off-by-one errors are such a common category that they get
their own name; adding even 1 byte of headroom neutralizes them.
If you think only “sloppy” programmers make this kind of mistake
then you’re deluded. A more competent programmer may make fewer
mistakes but no human is perfect.
2) Approximation can make analysis easier. Why spend an hour proving
that the maximum size something can be is 37 bytes if a few seconds
mental arithmetic will prove it’s at most 64 bytes? (Unless you
have 1980s quantities of RAM, of course.)
Sure, memory is cheap and we can often afford reasonably over-specced
buffer sizes in Our Modern Age - but the fundamental problem remains.
Treating "a little extra just to be on the safe side" as a ward against
buffer overruns or other boundary errors is pretty much guaranteed to
run into trouble down the line, and no amount of "nobody's perfect...!"
will change that. If you're not working in a language that does bounds-
checking for you, and your design is not one where you can say with
*100% certainty* that boundary errors are literally impossible, CHECK
YER DANG BOUNDS. Simple as that.
In real life a buffer overrun is not the only outcome to be avoided. If
you need 20 bytes and you’ve only got 10, _something_ is going to go
wrong. A bounds check will avoid the outcome being a buffer overrun, but you’re still going to have to report an error, or exit the program, or
some other undesired behaviour, when what you actually wanted was the
full 20-byte result. That’s what a conservative bound helps you with.
The top entry in my list of Famous Last Words is "Oh, don't worry about
that - it'll never happen." I had learned that "never" is usually about
six months. At the very least, if your program issues an appropriate
error message before aborting, you'll have a chance of finding and
fixing the deficiency. These days, I've gotten into using realloc() to enlarge the area in question; if it works, I quietly continue, and if
not I put out a nasty error message and quit.
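That realloc() pattern, as a minimal sketch (the growth factor, the names
and the error message are just one way to do it):

    #include <stdio.h>
    #include <stdlib.h>

    /* Append one byte to a growable buffer, enlarging it as needed.
       Returns 0 on success, -1 after printing a nasty message. */
    static int append_byte(char **buf, size_t *len, size_t *cap, char c)
    {
        if (*len == *cap) {
            size_t newcap = (*cap == 0) ? 64 : *cap * 2;
            char *tmp = realloc(*buf, newcap);  /* old block survives on failure */
            if (tmp == NULL) {
                fprintf(stderr, "out of memory growing buffer to %zu bytes\n",
                        newcap);
                return -1;
            }
            *buf = tmp;
            *cap = newcap;
        }
        (*buf)[(*len)++] = c;
        return 0;
    }

    int main(void)
    {
        char *buf = NULL;
        size_t len = 0, cap = 0;
        for (int i = 0; i < 1000; i++) {
            if (append_byte(&buf, &len, &cap, 'x') != 0) {
                free(buf);
                return 1;
            }
        }
        printf("stored %zu bytes\n", len);
        free(buf);
        return 0;
    }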
On Mon, 21 Jul 2025 20:47:23 +0100
Richard Kettlewell <invalid@invalid.invalid> wrote:
In real life a buffer overrun is not the only outcome to be avoided.
If you need 20 bytes and you’ve only got 10, _something_ is going to
go wrong. A bounds check will avoid the outcome being a buffer
overrun, but you’re still going to have to report an error, or exit
the program, or some other undesired behaviour, when what you
actually wanted was the full 20-byte result. That’s what a
conservative bound helps you with.
Sure - there's nothing wrong with "reserve a bit more than you think
you'll need" in and of itself. But what's been at issue from the start
of this branch discussion is specifically the practice (as was being advocated) of doing this *as a safeguard* against buffer overruns - a
problem that it does not actually *solve,* just forestalls long enough
for some buggy solution to get embedded and only discovered 20 yrs.
later at some Godforsaken field installation deep in the Pottsylvanian hinterlands* rather than being caught during development/testing or in
some early deployment.
* (At which point, the field-service tech having finally arrived back
at the office with a pack of hyenas and the curse of Baba Yaga on
his/her heels, every other install in the world will abruptly start
breaking.)
My point is simply that, unless you're using a language where
bounds-checking is provided for "free" behind the scenes, boundary errors
will *always* be a hazard, and working in conscious recognition of that
is a far more responsible approach than relying on superstitious warding
practices - even if the practices in question may be valid design
choices for other reasons.
On Wed, 23 Jul 2025 07:22:21 +0100
Pancho <Pancho.Jones@protonmail.com> wrote:
You appear to be advocating for using an "assert" type paradigm. This
doesn't need to be coupled to actual reservation size.
I'm not so much advocating for any specific coding practice in any
specific language - asserts work, but so does designing algorithms such
that bounds violations can never happen (e.g. #define BUFFER_BOUND and
then loop from 0 to BUFFER_BOUND - if there's no other indexing, it
will never go off the end, unless the compiler is just broken) where possible.
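The shape of that, roughly (BUFFER_BOUND and the rest are invented for
the sketch):

    #include <stdio.h>

    #define BUFFER_BOUND 16

    int main(void)
    {
        int buffer[BUFFER_BOUND];

        /* every index comes from this loop, so it can never leave
           [0, BUFFER_BOUND) no matter what the data looks like */
        for (int i = 0; i < BUFFER_BOUND; i++)
            buffer[i] = i * i;

        for (int i = 0; i < BUFFER_BOUND; i++)
            printf("%d ", buffer[i]);
        printf("\n");
        return 0;
    }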
On 23/07/2025 16:04, John Ames wrote:
My point is simply that, unless you're using a language where bounds-
checking is provided for "free" behind the scenes, boundary errors will
*always* be a hazard, and working in conscious recognition of that is
a far more responsible approach than relying on superstitious warding
practices - even if the practices in question may be valid design
choices for other reasons.
I have to agree, and philosophically it is a criticism of our whole
'kindergarten' approach to life in Europe.
If people expect all potential hazards to have been removed, they will neither recognise nor respond appropriately when they meet one.
Darwin might have a theory about that.
On Wed, 23 Jul 2025 21:53:47 +0100
Pancho <Pancho.Jones@protonmail.com> wrote:
If n is small, it probably isn't worth the time thinking about it, so
you just allocate n^2 elements. There is nothing superstitious or
dangerous about this. It just recognises that the extra coding time
is not worth the memory cost.
That's fair enough - but it's also not what was being discussed. This
branch of the discussion started off, specifically, with the suggestion
that allocating extra was a helpful ward against running off the end of
a buffer/array and stomping on the next allocation, which it really,
really isn't.
On Wed, 23 Jul 2025 20:04:14 +0100, The Natural Philosopher wrote:
On 23/07/2025 16:04, John Ames wrote:
My point is simply that, unless you're using a language where bounds-
checking is provided for "free" behind the scenes, boundary errors will
*always* be a hazard, and working in conscious recognition of that is
a far more responsible approach than relying on superstitious warding
practices - even if the practices in question may be valid design
choices for other reasons.
I have to agree, and philosophically it is a criticism of our whole
'kindergarten' approach to life In Europe.
If people expect all potential hazards to have been removed, they will
neither recognise nor respond appropriately when they meet one.
Darwin might have a theory about that.
Your favorite philosopher, Nietzsche, did. "Was mich nicht umbringt,
macht mich stärker." ("What does not kill me makes me stronger.")
On 2025-07-23, John Ames <commodorejohn@gmail.com> wrote:
On Wed, 23 Jul 2025 21:53:47 +0100 Pancho <Pancho.Jones@protonmail.com>
wrote:
If n is small, it probably isn't worth the time thinking about it, so
you just allocate n^2 elements. There is nothing superstitious or
dangerous about this. It just recognises that the extra coding time is
not worth the memory cost.
That's fair enough - but it's also not what was being discussed. This
branch of the discussion started off, specifically, with the suggestion
that allocating extra was a helpful ward against running off the end of
a buffer/array and stomping on the next allocation, which it really,
really isn't.
https://en.wikipedia.org/wiki/Cargo_cult_programming
Pancho <Pancho.Jones@protonmail.com> wrote:
You appear to have a prejudice for equality. A prejudice that
programmers should think hard about every problem they encounter. A
prejudice that a simple, but good enough answer is lazy.
I'm not advocating for approaching every aspect of development with a monomania for absolute ideal design and optimal implementation, and I
have no idea where you're getting that from.
What I *am* saying is that dealing with a certain class of bug-hazards
is inevitable when using tools that don't include built-in safeguards
against them, and that you ignore that - or ward against it via magical thinking - at your peril. Over-speccing because it's simpler than
working out the Most Optimum answer is one thing; over-speccing because
you hope it'll save you from dealing with genuine bugs is superstitious folly.
https://en.wikipedia.org/wiki/Cargo_cult_programming
On 24 Jul 2025 18:05:51 GMT rbowman <bowman@montana.com> wrote:
The intersection of cargo cult and vibe programming should be able to
generate a mass of unmaintainable crap that makes the sins of my
generation look benign.
It's already happening. I wish the author here had left the original
article up, but the comments on HN alone should give you some idea what
kind of absolute fiascoes we're gonna see in the future:
https://news.ycombinator.com/item?id=44512368
On Thu, 24 Jul 2025 14:42:56 GMT, Charlie Gibbs wrote:
https://en.wikipedia.org/wiki/Cargo_cult_programming
Seems an apt description of, for example, those who say you must never
write actual SQL code in your programs, always use an ORM or templating system or something.
On Sun, 20 Jul 2025 16:51:57 +0100, The Natural Philosopher wrote:
On 20/07/2025 15:42, Rich wrote:
In todays world, for all but the most esoteric (embedded and/or FPGA)
assuming char is exactly 8 bits is right often enough that no one
notices. But multiplying by sizeof(char) does avoid it becoming an
issue later on any unusual setups.
That's what I like. Absolutely emphasises the point to the next
programmer even if the compiler doesn't need to know
That's an awfully big leap for the next programmer to make, going
from "I wonder why he multiplies this value by 1" to "Oho!! That
MUST mean that CHAR_BIT is not 8!"
Try including a clear/concise COMMENT after most every line in your
code - a sort of narration of what/why.
On Thu, 24 Jul 2025 21:51:24 +0100
Richard Kettlewell <invalid@invalid.invalid> wrote:
One person might have been arguing for that, though I’m not even
confident of that since their posts have expired here; they’ve been
out of this thread for that long. You don’t have to argue that point
with everyone else.
I wouldn't be, if people didn't keep responding to what they *think* I
said, as opposed to what I actually *did* say.
On 7/20/25 12:15 PM, Lew Pitcher wrote:
On Sun, 20 Jul 2025 16:51:57 +0100, The Natural Philosopher wrote:
On 20/07/2025 15:42, Rich wrote:
In todays world, for all but the most esoteric (embedded and/or
FPGA) assuming char is exactly 8 bits is right often enough that no
one notices.  But multiplying by sizeof(char) does avoid it
becoming an issue later on any unusual setups.
That's what I like. Absolutely emphasises the point to the next
programmer even if the compiler doesn't need to know
That's an awfully big leap for the next programmer to make, going
from "I wonder why he multiplies this value by 1" to "Oho!! That
MUST mean that CHAR_BIT is not 8!"
Try including a clear/concise COMMENT after most every
line in your code - a sort of narration of what/why.
Almost every function I write has a 10-20 line comment
at the top explaining what/why/how as well.
Do that and 'future programmers' should Get It.
If they don't then they shouldn't be programmers.
Bytes/words/etc are NOT always multiples of 8 even now. DSP
processors often use 24-bit words; it has to do with the common three
8-bit input channels. If you get a job maintaining 'legacy' systems
then you should NEVER assume 4/8/16/32/64.
c186282 <c186282@nnada.net> writes:
On 7/20/25 12:15 PM, Lew Pitcher wrote:
On Sun, 20 Jul 2025 16:51:57 +0100, The Natural Philosopher wrote:
On 20/07/2025 15:42, Rich wrote:
In todays world, for all but the most esoteric (embedded and/or
FPGA) assuming char is exactly 8 bits is right often enough that no
one notices.  But multiplying by sizeof(char) does avoid it
becoming an issue later on any unusual setups.
That's what I like. Absolutely emphasises the point to the next
programmer even if the compiler doesn't need to know
That's an awfully big leap for the next programmer to make, going
from "I wonder why he multiplies this value by 1" to "Oho!! That
MUST mean that CHAR_BIT is not 8!"
Try including a clear/concise COMMENT after most every
line in your code - a sort of narration of what/why.
Almost every function I write has a 10-20 line comment
at the top explaining what/why/how as well.
Do that and 'future programmers' should Get It.
If they don't then they shouldn't be programmers.
Bytes/words/etc are NOT always multiples of 8 even now. DSP
processors often use 24 bit words, has to do with, the common three
8-bit input channels. If you get a job maintaining 'legacy' systems
then you should NEVER assume 4/8/16/32/64.
Agreed. A concrete example is https://downloads.ti.com/docs/esd/SPRU514/data-types-stdz0555922.html
where char is a 16-bit type. This links back to the nonsensical earlier
claim that multiplying by sizeof(char) would somehow ‘avoid it becoming
an issue’ because as that page notes, sizeof(char) remains equal to 1 on that platform (as it has to).
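
A hedged illustration of Richard's point, in C++ (it reads the same in C):
sizeof(char) is 1 by definition on every conforming implementation, so
multiplying by it tells the compiler and the next programmer nothing;
CHAR_BIT is where the "how many bits" question actually lives.

#include <climits>   // CHAR_BIT
#include <cstdio>

int main()
{
    // sizeof(char) is 1 by definition, even where char is 16 or 24 bits
    // wide, so "n * sizeof(char)" never documents or changes anything.
    static_assert(sizeof(char) == 1, "guaranteed by the language standard");

    // CHAR_BIT is where the real answer lives: 8 on mainstream hosts,
    // 16 on the TI parts in the link above.
    std::printf("char is %d bits wide; sizeof(char) == %zu\n",
                CHAR_BIT, sizeof(char));
    return 0;
}
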
On Fri, 25 Jul 2025 00:31:01 -0400, c186282 wrote:
Try including a clear/concise COMMENT after most every line in your
code - a sort of narration of what/why.
I only add a comment if I'm doing something not apparent to a competent programmer.
On 7/25/25 1:53 AM, rbowman wrote:
On Fri, 25 Jul 2025 00:31:01 -0400, c186282 wrote:
Try including a clear/concise COMMENT after most every line in your
code - a sort of narration of what/why.
I only add a comment if I'm doing something not apparent to a competent
programmer.
Ummm ... I'd suggest doing it ALWAYS ... not only
for "Them" but for YOU few years down the line.
It's not hard.
There's a 'psychic zone' involved in programming.
You JUST GET IT at the time. However The Time
tends to PASS. Then you wonder WHY you did
that, what it accomplishes.
Just sayin'
On 25/07/2025 10:05, c186282 wrote:
On 7/25/25 1:53 AM, rbowman wrote:
On Fri, 25 Jul 2025 00:31:01 -0400, c186282 wrote:
Try including a clear/concise COMMENT after most every line in your
code - a sort of narration of what/why.
I only add a comment if I'm doing something not apparent to a competent
programmer.
Ummm ... I'd suggest doing it ALWAYS ... not only
for "Them" but for YOU few years down the line.
I have looked at code I wrote years ago and beyond thinking 'this guy
codes the way I would' I have simply not recognised anything in it at all.
I get into 'the zone' and become essentially possessed by code demons
who write it all for me.
And afterwards it's hard to remember doing it.
It's not hard.
Exactly.
There's a 'psychic zone' involved in programming.
You JUST GET IT at the time. However The Time
tends to PASS. Then you wonder WHY you did
that, what it accomplishes.
Just sayin'
On 7/25/25 18:39, John Ames wrote:
Funny you should cite bridge-building. As a friend once observed:
"The Romans made their architects stand under the arches they designed
while the keystone was put in place and the supports removed.
The Romans built bridges that stayed the #&@! up."
AIUI, The reason Roman bridges stayed put, is that they massively over specced. I don't think they really understood enough to make appropriate structures for the required load.
On 26/07/2025 17:54, Pancho wrote:
On 7/25/25 18:39, John Ames wrote:
Funny you should cite bridge-building. As a friend once observed:
"The Romans made their architects stand under the arches they designed
while the keystone was put in place and the supports removed.
The Romans built bridges that stayed the #&@! up."
AIUI, The reason Roman bridges stayed put, is that they massively over
specced. I don't think they really understood enough to make appropriate
structures for the required load.
The same is true of much Victorian engineering.
Lacking the detailed mathematical analyses it was easier to just make it bloody big, and thank god they did. London's main sewer is still able to
cope with the load.
On the other hand, many structures have failed. We only see the ones
that didn't fall down.
On 2025-07-26, The Natural Philosopher <tnp@invalid.invalid> wrote:
On 26/07/2025 17:54, Pancho wrote:
On 7/25/25 18:39, John Ames wrote:
Funny you should cite bridge-building. As a friend once observed:
"The Romans made their architects stand under the arches they designed >>>> while the keystone was put in place and the supports removed.
The Romans built bridges that stayed the #&@! up."
AIUI, The reason Roman bridges stayed put, is that they massively over
specced. I don't think they really understood enough to make appropriate >>> structures for the required load.
The same is true of much Victorian engineering.
Lacking the detailed mathematical analyses it was easier to just make it
bloody big, and thank god they did. London's main sewer is still able to
cope with the load.
On the other hand, many structures have failed. We only see the ones
that didn't fall down.
For the Romans, can you imagine doing finite element analysis by
hand using ROMAN NUMERALS????? Even the thought of doing long
division in Roman numerals scares me--despite being a math nerd
since early childhood.
On 7/26/25 18:02, The Natural Philosopher wrote:
On 26/07/2025 17:54, Pancho wrote:
On 7/25/25 18:39, John Ames wrote:
Funny you should cite bridge-building. As a friend once observed:
"The Romans made their architects stand under the arches they
designed while the keystone was put in place and the supports
removed.
The Romans built bridges that stayed the #&@! up."
AIUI, The reason Roman bridges stayed put, is that they massively
over specced. I don't think they really understood enough to make
appropriate structures for the required load.
The same is true of much Victorian engineering.
Lacking the detailed mathematical analyses it was easier to just
make it bloody big, and thank god they did. London's main sewer is
still able to cope with the load.
On the other hand, many structures have failed. We only see the ones
that didn't fall down.
A bit like the old software accounting systems. I don't know why they
are reliable, but I doubt it is just down to good design.
On 27/07/2025 10:23, Pancho wrote:
On 7/26/25 18:02, The Natural Philosopher wrote:
On 26/07/2025 17:54, Pancho wrote:
On 7/25/25 18:39, John Ames wrote:
Funny you should cite bridge-building. As a friend once observed:
"The Romans made their architects stand under the arches they designed >>>>> while the keystone was put in place and the supports removed.
The Romans built bridges that stayed the #&@! up."
AIUI, The reason Roman bridges stayed put, is that they massively
over specced. I don't think they really understood enough to make
appropriate structures for the required load.
The same is true of much Victorian engineering.
Lacking the detailed mathematical analyses it was easier to just make
it bloody big, and thank god they did. London's main sewer is still
able to cope with the load.
On the other hand, many structures have failed. We only see the ones
that didn't fall down.
A bit like the old software accounting systems. I don't know why they
are reliable, but I doubt it is just down to good design.
An Ex GF of mine trained on IBM kit and COBOL in an IBM software house
back in the day. (1982)
The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.
Back then there was a rigorous process of business analysis, code and
data specification, coding and stress testing.
And it was expensive. Damned expensive. But it damn well worked.
Pancho <Pancho.Jones@protonmail.com> writes:
On 7/26/25 18:02, The Natural Philosopher wrote:
On 26/07/2025 17:54, Pancho wrote:
On 7/25/25 18:39, John Ames wrote:
Funny you should cite bridge-building. As a friend once observed:
"The Romans made their architects stand under the arches they
designed while the keystone was put in place and the supports
removed.
The Romans built bridges that stayed the #&@! up."
I think this is an urban myth. Naming a Roman-era work that describes
the practice would settle the question.
AIUI, The reason Roman bridges stayed put, is that they massively
over specced. I don't think they really understood enough to make
appropriate structures for the required load.
The same is true of much Victorian engineering.
Lacking the detailed mathematical analyses it was easier to just
make it bloody big, and thank god they did. London's main sewer is
still able to cope with the load.
On the other hand, many structures have failed. We only see the ones
that didn't fall down.
Yes, I expect lots of Roman bridges collapsed...
A bit like the old software accounting systems. I don't know why they
are reliable, but I doubt it is just down to good design.
The longer it’s been around, the longer it’s had for the bugs to be
found and fixed. Old software also usually does fewer and simpler things
than more recent software, so less room to contain bugs.
On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:
The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.
One easy way to achieve that is not to have a bug-reporting mechanism.
One poster said they just heavily over-constructed, more brute-force
than engineering acumen. Likely true to a point. However they clearly
had the basics of how to engineer strong and functional structures as
well.
Leave it to Big Blue and CDC and such and 'word-processing'
workstations would cost $50k each and only work with their
mini/mainframes.
On Sun, 27 Jul 2025 21:23:31 -0400, c186282 wrote:
One poster said they just heavily over-constructed, more brute-force
than engineering acumen. Likely true to a point. However they clearly
had the basics of how to engineer strong and functional structures as
well.
If you really want to scratch your head...
https://www.nps.gov/chcu/learn/historyculture/chacoan-roads.htm
The punch line is there is no evidence the culture used wheels for transportation. The concept was not unknown; some finds included kids'
pull toys.
On Sun, 27 Jul 2025 21:31:57 -0400, c186282 wrote:
Leave it to Big Blue and CDC and such and 'word-processing'
workstations would cost $50k each and only work with their
mini/mainframes.
Y2K was the watershed for our clients. IBM only patched the latest OS and
it wouldn't run on older systems. The sites looked at the cost of
replacing their whole RS/6000 hardware and Windows started looking pretty damn good.
Despite the importance of the 911 system not many government agencies
throw wads of cash at the PSAPs.
On Sun, 27 Jul 2025 21:23:31 -0400, c186282 wrote:
One poster said they just heavily over-constructed, more
brute-force than engineering acumen. Likely true to a point.
However they clearly had the basics of how to engineer strong and
functional structures as well.
If you really want to scratch your head...
https://www.nps.gov/chcu/learn/historyculture/chacoan-roads.htm
The punch line is there is no evidence the culture used wheels for transportation. The concept was not unknown; some finds included
kids' pull toys.
If the Empire had lasted just a BIT longer they'd
have had steam power, electricity, hydraulics, TNT
and eventually nukes a lot sooner than the post-empire
did. This MIGHT have been bad because of "thinking" -
imperial, enslaving, genocidal. In practice the Empire
wasn't much diff from the NAZIs.
On Fri, 25 Jul 2025 07:06:27 +0100
Pancho <Pancho.Jones@protonmail.com> wrote:
Fortunately I don't develop SSL, chip microcode or aircraft
controllers. People accept my code falls over occasionally.
To be perfectly frank, it's *very* fortunate that you don't develop
aircraft controllers.
This is the way structural engineering works. Bridge building etc.
Funny you should cite bridge-building. As a friend once observed:
"The Romans made their architects stand under the arches they designed
while the keystone was put in place and the supports removed.
The Romans built bridges that stayed the #&@! up."
But to make a useful aircraft means a motor built from aluminium running
on gasoline and spark ignition.
Once you have an industrial base that has those things, heavier than air aircraft are simply inevitable
On Sat, 26 Jul 2025 17:54:15 +0100 Pancho <Pancho.Jones@protonmail.com> wrote:
Funny you should cite bridge-building. As a friend once observed:
"The Romans made their architects stand under the arches they
designed while the keystone was put in place and the supports
removed.
The Romans built bridges that stayed the #&@! up."
AIUI, The reason Roman bridges stayed put, is that they massively over
specced. I don't think they really understood enough to make
appropriate structures for the required load.
That may well be so - but I'd be willing to bet that they *didn't* make
a habit of *not checking where they'd put the end of the bridge* and
trusting that it'd work itself out as long as they built extra.
On 7/28/25 12:45 AM, rbowman wrote:
On Sun, 27 Jul 2025 21:23:31 -0400, c186282 wrote:
One poster said they just heavily over-constructed, more
brute-force than engineering acumen. Likely true to a point.
However they clearly had the basics of how to engineer strong and
functional structures as well.
If you really want to scratch your head...
https://www.nps.gov/chcu/learn/historyculture/chacoan-roads.htm
The punch line is there is no evidence the culture used wheels for
transportation. The concept was not unknown; some finds included kids'
pull toys.
"History" is complicated - SO many revisionist versions.
On 2025-07-25, John Ames <commodorejohn@gmail.com> wrote:
On Fri, 25 Jul 2025 07:06:27 +0100 Pancho <Pancho.Jones@protonmail.com>
wrote:
Fortunately I don't develop SSL, chip microcode or aircraft
controllers. People accept my code falls over occasionally.
To be perfectly frank, it's *very* fortunate that you don't develop
aircraft controllers.
Pancho seems to have adopted Microsoft's quality criteria:
"Sort of works, most of the time."
Microsoft's crime against humanity is getting people to lower their
standards enough to accept bad software.
This is the way structural engineering works. Bridge building etc.
Funny you should cite bridge-building. As a friend once observed:
"The Romans made their architects stand under the arches they designed
while the keystone was put in place and the supports removed.
The Romans built bridges that stayed the #&@! up."
It is hard to imagine a more stupid decision or more dangerous way
of making decisions than by putting those decisions in the hands of
people who pay no price for being wrong. -- Thomas Sowell
On Mon, 28 Jul 2025 02:14:50 -0400, c186282 wrote:
On 7/28/25 12:45 AM, rbowman wrote:
On Sun, 27 Jul 2025 21:23:31 -0400, c186282 wrote:
One poster said they just heavily over-constructed, more
brute-force than engineering acumen. Likely true to a point.
However they clearly had the basics of how to engineer strong and
functional structures as well.
If you really want to scratch your head...
https://www.nps.gov/chcu/learn/historyculture/chacoan-roads.htm
The punch line is there is no evidence the culture used wheels for
transportation. The concept was not unknown; some finds included kids'
pull toys.
"History" is complicated - SO many revisionist versions.
I like visiting historic sites and I find it amusing when I revisit years later and find the narrative has changed. At one time there was a Custer Battlefield National Monument. Now it's the Little Bighorn Battlefield National Monument. fwiw that effete bastard George HW Bush signed that
law. Monuments and markers have been added for the Indians. I suppose
that's fitting since they won.
https://en.wikipedia.org/wiki/Little_Bighorn_Battlefield_National_Monument#/media/File:CheyenneStone.JPG
I'm not sure they sent the best and brightest to fight the Indian Wars.
https://en.wikipedia.org/wiki/Fetterman_Fight
It is hard to imagine a more stupid decision or more dangerous way
of making decisions than by putting those decisions in the hands of
people who pay no price for being wrong. -- Thomas Sowell
The reason the empire collapsed is that they didn't have those
things. Britain lost America because of the lack of the telegraph. You
can't run a colony 3000 miles away on packet ships and sealing-wax-sealed letters.
Well, the Native Americans won at the Little Bighorn, but that followed a
massacre by Custer of a peaceful village. That enabled the war leaders to pull together a winning force.
On Tue, 29 Jul 2025 10:07:13 +0100, Pancho wrote:
The VMS software development process seems almost inconceivable to me
now. No unit tests, no systematic logging, no QA, no source code
control, no Google, crappy languages, slow builds, vt00 terminals,
crappy editor (sorry Steve). It took ages to develop stuff.
It had source-code control, but it was of the clunky, bureaucratic kind
that dated from the era where it was assumed that letting two different people check out the same source file for modification would bring about
the End Times or something.
Their answer to Unix makefiles was similarly clunky. I never used either.
(Also, remember in those days companies charged extra for development
tools like these.)
The symbolic debugger was pretty good. It benefited a lot from the commonality of different language runtimes on the VAX, down to even how exceptions were handled.
As for editors -- I hated EDT (having become accustomed to a TECO-based editor before that), but TPU was quite tolerable. DEC were basically
trying to invent their own version of Emacs, poorly, and introducing yet another proprietary language of their own for the purpose.
On 27-07-2025, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:
The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.
One easy way to achieve that is not to have a bug-reporting mechanism.
Another way is to have a program which does nothing. Without
functionalities come zero bugs. When you want something which does a lot
of things and you don't want to wait long years for it, you have to compromise.
After a while, though, IBM /changed/ IEFBR14 (their canonical do-nothing
utility, originally a single "BR 14" branch-to-return-address
instruction). MVS required a
returncode in register 15. Up until the time of this IEFBR14 change,
the OS initialized register 15 with 0 prior to invoking the called
program, and a return value of 0 in register 15 meant "all is well"
(like the Unix "true" command).
However, an unrelated MVS change had now left register 15 in an
unknown state before MVS invoked the called program, and (because
IEFBR14 did not manipulate register 15 in any way), IEFBR14 now died ("abended" in IBM-speak) in random ways.
So, the solution was to "fix" IEFBR14 so that it initialized
register 15 to 0 before returning. This doubled the linecount from
one instruction to two, and IEFBR14 became:
    XR   15,15
    BR   14
So, even a program that "does nothing" can have bugs in it.
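
For anyone who never met IEFBR14, here's a loose C++ analogue of the same
class of bug - a sketch, not IBM's code: a routine that promises a status
but never sets one only "works" while the environment happens to leave a
zero where the caller looks.

#include <cstdio>

// The pre-fix shape would be a non-void function with no return statement
// at all; that's undefined behaviour in C and C++ and only "works" while
// the caller happens to find a leftover zero -- which is exactly how
// IEFBR14 got burned when MVS stopped pre-zeroing register 15.

int do_nothing(void)
{
    return 0;   // the moral equivalent of adding XR 15,15 before BR 14
}

int main(void)
{
    return do_nothing();
}
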
On 8/1/25 3:13 PM, Stéphane CARPENTIER wrote:
On 27-07-2025, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:
The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.
One easy way to achieve that is not to have a bug-reporting mechanism.
Another way is to have a program which does nothing. Without
functionalities come zero bugs. When you want something which does a lot
of things and you don't want to wait long years for it, you have to
compromise.
Yea ... but only to a POINT. It's better to
put another week or month into "hardening" than
to suffer the results of too many 'compromises'.
I typically wrote code that WORKED ... but
what if it DIDN'T, what if something screwed
up somewhere for no obvious reason ? I'd go
back over the code and add lots of TRY/EXCEPT
around critical-seeming sections with
appropriate "Oh SHIT !" responses. That'd
take more time - but WORTH it.
Oh, create a log-file writer you can use in
the 'EXCEPT' condition ... can be very simple,
just a reference code to where the fail was.
All my pgms were full of that ... just an
error string var, updated as you moved thru
the code, a decimal number. It'd narrow
down the point of error nicely with very
minimal code bloat.
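
Not c186282's actual code (that's long gone), just a minimal C++ sketch of
the same breadcrumb idea under made-up names: a location marker updated as
the work progresses, plus a trivial log writer the exception handler calls.

#include <fstream>
#include <stdexcept>
#include <string>

// Breadcrumb updated as the program moves through its phases; on failure
// it narrows down where things went wrong with almost no code bloat.
static std::string g_where = "000";

// Minimal log writer for the failure path (log file name is made up here).
static void log_failure(const std::string& what)
{
    std::ofstream log("failures.log", std::ios::app);
    log << "failed at step " << g_where << ": " << what << '\n';
}

static void do_the_work()
{
    g_where = "010";                                 // loading input
    // ... load input ...
    g_where = "020";                                 // crunching numbers
    throw std::runtime_error("simulated failure");   // stand-in for the real surprise
}

int main()
{
    try {
        do_the_work();
    } catch (const std::exception& e) {
        log_failure(e.what());   // the "Oh SHIT !" response: record where and what
        return 1;
    }
    return 0;
}
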
On 7/28/25 17:34, Charlie Gibbs wrote:
On 2025-07-25, John Ames <commodorejohn@gmail.com> wrote:
On Fri, 25 Jul 2025 07:06:27 +0100
Pancho <Pancho.Jones@protonmail.com> wrote:
Fortunately I don't develop SSL, chip microcode or aircraft
controllers. People accept my code falls over occasionally.
To be perfectly frank, it's *very* fortunate that you don't develop
aircraft controllers.
Pancho seems to have adopted Microsoft's quality criteria:
"Sort of works, most of the time."
Pancho has adopted Microsoft's criteria of giving customers what they want.
To continue the bridge analogy: when the US Army was trying to cross the Rhine in March 1945, they didn't commission solid Roman-style bridges capable of lasting 1000 years. No, they used pontoon bridges.
Microsoft's crime against humanity is getting people to
lower their standards enough to accept bad software.
Professionally, I started on VMS; I can assure you the most recent software
I developed on Windows was hugely better and more reliable than the stuff I wrote for VMS.
The VMS software development process seems almost inconceivable to me now.
No unit tests, no systematic logging, no QA, no source code control, no Google, crappy languages, slow builds, vt00 terminals, crappy editor (sorry Steve). It took ages to develop stuff.
On 8/2/25 07:24, c186282 wrote:
On 8/1/25 3:13 PM, Stéphane CARPENTIER wrote:
On 27-07-2025, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:
The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.
One easy way to achieve that is not to have a bug-reporting mechanism.
Another way is to have a program which does nothing. Without
functionalities come zero bugs. When you want something which does a lot
of things and you don't want to wait long years for it, you have to
compromise.
Yea ... but only to a POINT. It's better to
put another week or month into "hardening" than
to suffer the results of too many 'compromises'.
In my case, that was for the client to decide. You explain the risks,
they decide how they want you to spend your time. They nearly always had something else they needed me to do.
If I were developing my own product, for sale or for use in my own
business, I would probably make much the same decision as the
hypothetical client mentioned above.
I typically wrote code that WORKED ... but
what if it DIDN'T, what if something screwed
up somewhere for no obvious reason ? I'd go
back over the code and add lots of TRY/EXCEPT
around critical-seeming sections with
appropriate "Oh SHIT !" responses. That'd
take more time - but WORTH it.
Why would you add lots of try/catch blocks? Every language I have used
had the capability for exceptions to provide a stack trace. A single
high level try/catch block will catch everything.
The only reason for nested local try/catch blocks is if you know how to handle a specific exception. Back when the world was young, and I still worked, it was common to see bad code, with exception blocks that did
nothing apart from log and rethrow, or often far worse, silently ate the exception.
Oh, create a log-file writer you can use in
the 'EXCEPT' condition ... can be very simple,
just a reference code to where the fail was.
All my pgms were full of that ... just an
error string var, updated as you moved thru
the code, a decimal number. It'd narrow
down the point of error nicely with very
minimal code bloat.
Yes, I agree, standard logger frameworks were one of the great leaps
forward. So simple and yet so profound. I look back and think, why
didn't I implement a standard logger framework from day one of my career?
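
To put Pancho's structure in code - a hedged C++ sketch with invented
names, not anyone's real program: one high-level catch that logs and fails
loudly, and a local catch only where a specific exception genuinely has a
fallback.

#include <exception>
#include <iostream>
#include <stdexcept>
#include <string>

// Local handling only where we genuinely know what to do: a missing
// config path (hypothetical example) has a sensible fallback.
static std::string load_config(const std::string& path)
{
    if (path.empty())
        throw std::invalid_argument("no config path given");
    return "config-from:" + path;
}

static void run(const std::string& path)
{
    std::string cfg;
    try {
        cfg = load_config(path);
    } catch (const std::invalid_argument&) {
        cfg = "built-in defaults";          // we can handle this one locally
    }
    std::cout << "running with " << cfg << '\n';
    // Anything else that throws below simply propagates upward...
}

int main()
{
    // ...to the single high-level handler, which logs and fails loudly
    // instead of silently eating the exception.
    try {
        run("");                            // exercises the local fallback
        run("/etc/example.conf");           // hypothetical path
    } catch (const std::exception& e) {
        std::cerr << "unhandled failure: " << e.what() << '\n';
        return 1;
    }
    return 0;
}
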
On Sat, 2 Aug 2025 11:34:23 +0100, Pancho wrote:
The only reason for nested local try/catch blocks is if you know how
to handle a specific exception. Back when the world was young, and I
still worked, it was common to see bad code, with exception blocks
that did nothing apart from log and rethrow, or often far worse,
silently ate the exception.
Yup. Catch the specific exceptions you care about, let the rest go to
the default handler, and leave it to output a stack trace to stderr.
You *are* capturing stderr to a log somewhere, aren’t you?
On 8/2/25 07:24, c186282 wrote:
On 8/1/25 3:13 PM, Stéphane CARPENTIER wrote:
On 27-07-2025, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 27 Jul 2025 12:11:40 +0100, The Natural Philosopher wrote:
The company had a NO BUGS ALLOWED policy. 'Zero Tolerance'.
One easy way to achieve that is not to have a bug-reporting mechanism.
Another way is to have a program which does nothing. Without
functionalities come zero bugs. When you want something which does a lot
of things and you don't want to wait long years for it, you have to
compromise.
Yea ... but only to a POINT. It's better to
put another week or month into "hardening" than
to suffer the results of too many 'compromises'.
In my case, that was for the client to decide. You explain the risks,
they decide how they want you to spend your time. They nearly always had something else they needed me to do.
If I were developing my own product, for sale, or for use in my own
business. I would probably make much the same decision as the
hypothetical client mentioned above.
Fixing a bug is not always more important to the client than developing
a new functionality. It depends on the impact of the bug and the impact
of the functionality.